hacker news with inline top comments    .. more ..    22 Sep 2014 Best
TXT Record XSS
984 points by ryanskidmore  3 days ago   227 comments top 46
mrb 3 days ago 12 replies      
I am half serious, but how about making HTML served in TXT records a standard trick for serving small web pages very quickly? There are way fewer network round trips:

  1. DNS query for TXT record for example.com
  2. DNS reply with HTML content
Compared with the traditional 7 steps:

  1. DNS query for A record for example.com
  2. DNS reply with x.x.x.x
  3. TCP SYN to port 80
  4. TCP SYN/ACK
  5. TCP ACK
  6. HTTP GET
  7. HTTP reply with HTML content
It would also make the content super-distributed, super-reliable, as DNS servers cache it worldwide (and for free so it would reduce hosting costs :D). Also TXT records can contain more than 255 bytes as long as they are split on multiple strings of 255 bytes in a DNS reply.
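That multi-string encoding is easy to sketch: a TXT record's RDATA is a sequence of character-strings of at most 255 bytes each, which a client simply concatenates. A Python sketch of the split and reassembly (ignoring the wire format's length prefixes):

```python
def to_txt_strings(payload: bytes, chunk: int = 255):
    """Split a payload into <=255-byte character-strings,
    the way long content is carried in a single TXT record."""
    return [payload[i:i + chunk] for i in range(0, len(payload), chunk)]

def from_txt_strings(strings):
    """A client reassembles the record by simple concatenation."""
    return b"".join(strings)

html = b"<html><body>" + b"x" * 600 + b"</body></html>"
parts = to_txt_strings(html)
assert all(len(p) <= 255 for p in parts)
assert from_txt_strings(parts) == html
```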

Again, I am only half serious, but this is an interesting thought experiment...

Edit: oddtarball: DNSSEC would solve spoofing. And updates should take no longer than the DNS TTL to propagate: the TTL is under your control; you could set it to 60 seconds if you wanted. It is a common misconception that many DNS resolvers ignore the TTL. Some large web provider (was it Amazon? I forget) ran an experiment and demonstrated that across tens or hundreds of thousands of clients worldwide, 99% of them saw DNS updates propagated within X seconds if the TTL was set to X seconds. Only <1% of DNS resolvers were ignoring it.

ryan-c 3 days ago 4 replies      
I enumerated all IPv4 PTR records a few years back, and I saw a couple XSS things there as well. If anyone wants to host that data set somewhere, let me know, would be interesting to see what others do with it.

Edit: I found my data and have a grep running on it, will share what turns up.

Edit2: Somewhat less exciting than I remember:

$ fgrep -- '>' *










philip1209 3 days ago 4 replies      
I added FartScroll.js from the Onion to my text records:


SEJeff 3 days ago 1 reply      
From any Linux (or probably OS X) workstation / server, you can run the command "host -t TXT jamiehankins.co.uk", i.e.:

$ host -t TXT jamiehankins.co.uk

;; Truncated, retrying in TCP mode.

jamiehankins.co.uk descriptive text "<iframe width='420' height='315' src='//www.youtube.com/embed/dQw4w9WgXcQ?autoplay=0' frameborder='0' allowfullscreen></iframe>"

jamiehankins.co.uk descriptive text "v=spf1 include:spf.mandrillapp.com ?all"

jamiehankins.co.uk descriptive text "<script src='//peniscorp.com/topkek.js'></script>"

jamiehankins.co.uk descriptive text "google-site-verification=nZUP4BagJAjQZO6AImXyzJZBXBf9s1FbDZr8pzNLTCI"

kehrlann 3 days ago 4 replies      
This is hilarious, but could this potentially be a real threat to anything?
AsakiIssa 3 days ago 2 replies      
Wasn't expecting that at all! Had several tabs opened and was really confused for a few seconds while I tried to find the tab with 'youtube on autoplay'.

Firefox needs to show the 'play' icon for the audio tag.

ryanskidmore 2 days ago 1 reply      
Who.is have fixed it now, but you can still see it in action over at archive.org


garazy 3 days ago 0 replies      
I've found about 80 TXT records with <script tags in them - most of them look like someone not understanding where to paste a JavaScript snippet rather than actual XSS attempts. Here's all of them -


There are a few containing "13h.be/x.js" that look like someone trying this out before.

jedberg 3 days ago 1 reply      
Come on people, this is so basic. If you didn't generate the data, don't display it on your web page without filtering it. It blows my mind that this isn't just everyone's default.
rbinv 3 days ago 3 replies      
Clever. I didn't get it at first.

Never trust user input.

Edit: See http://www.dnswatch.info/dns/dnslookup?la=en&host=jamiehanki... for the actual code.

colinbartlett 3 days ago 0 replies      
Bravo, I just embarrassed myself in a very quiet meeting.
toddgardner 3 days ago 0 replies      
The most clever exploit of XSS I've ever seen. Beautiful. Bravo.
JamieH 2 days ago 0 replies      
Still working here if anyone is yet to see it.


Sanddancer 3 days ago 0 replies      
Given how many whois sites cache results, I wonder how many of them are also vulnerable to SQL injections...
kazinator 3 days ago 1 reply      
Since there is very little discussion in the link, pardon me for stating what may be obvious to some, but not necessarily everyone.

The point here is that:

1. DNS TXT records can contain HTML, including scripts and whatever.

2. Domain registrants can publish arbitrary TXT records.

3. TXT records can appear in pages generated by web sites which serve, for instance, as portals for viewing domain registration information, including DNS records such as TXT records.

4. Thus, such sites are vulnerable to perpetrating cross-site-script attacks (XSS) on their visitors if they naively paste the TXT record contents into the surrounding HTML.

5. The victim is the user who executes a query which finds the malicious domain which serves up the malicious TXT record that is interpolated into the displayed results. The user's browser executes the malicious code.

Thus, when you are generating UI markup from pieces, do not trust any data that is pulled from any third-party untrusted sources, including seemingly harmless TXT records.
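The fix for point 4 is a single escaping step before interpolation. A minimal Python sketch (the record value here is a made-up example, not one of the real records from the thread):

```python
import html

# Hypothetical TXT record value fetched from a third party:
txt_record = "<script src='//evil.example/x.js'></script>"

# Naive interpolation -- the browser would execute the script:
unsafe = f"<td>{txt_record}</td>"

# Escaped interpolation -- the record renders as inert text,
# because "<" becomes "&lt;" and quotes are encoded too:
safe = f"<td>{html.escape(txt_record, quote=True)}</td>"

assert "<script" in unsafe
assert "<script" not in safe
```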

mike-cardwell 3 days ago 0 replies      
A while ago I experimented with adding stuff to the version.bind field in bind. Just updated it:

mike@glue:~$ dig +short chaos txt version.bind @

"<iframe width='420' height='315' src='//www.youtube.com/embed/dQw4w9WgXcQ?autoplay=1' frameborder='0' allowfullscreen></iframe>"

I put this in my named.conf:

version "<iframe width='420' height='315' src='//www.youtube.com/embed/dQw4w9WgXcQ?autoplay=1' frameborder='0' allowfullscreen></iframe>";

This site is vulnerable:


Although it takes a minute before it kicks in. I did report it to them at the time, but never got a response.

elwell 3 days ago 1 reply      
In playing around with this hack, I discovered that Dreamhost doesn't properly escape TXT records in their admin interface when modifying DNS records. I put an iframe in and it shows the box but the src is removed; it also killed the page at that point so I'm unable to remove it...
bwy 3 days ago 3 replies      
Wish there was a warning, because I accidentally clicked this link in class just now.
0x0 3 days ago 0 replies      
Can it be done with CNAME and SRV records too?
Thaxll 3 days ago 2 replies      
It has nothing to do with TXT records specifically; it's just the website rendering HTML. It could be any source.
gsharma 3 days ago 0 replies      
Not sure how Trulia handles input for its usernames, but at one point I was able to do this http://www.trulia.com/profile/-iframe--home-buyer-loleta-ca-...
sidcool 2 days ago 1 reply      
I opened this link on my Android's Chrome browser. The top search text input started wildly convulsing. First I thought the post was about that. But I didn't really get what this is about.
js2 3 days ago 0 replies      
All editors should, upon save, put up the following prompt:

"I acknowledge the code just written does not trust its input, under penalty of being whipped by a wet noodle."

But I guess folks would just click through.


sanqui 3 days ago 1 reply      
Looks like the who.is site patched the exploit a few minutes ago.
gcr 3 days ago 0 replies      
Warning: this page links to (loud!) automatic playing audio.
homakov 2 days ago 0 replies      
XSS on a shitty website not doing trivial sanitization gets 900 points on HN, oh guys you are disappointing me so much.
tekknolagi 3 days ago 0 replies      
This is hysterical.
indielol 3 days ago 0 replies      
Wouldn't this make it super easy for Google to ban (show the security warnings in Chrome) the domains?
nerdy 3 days ago 0 replies      
Best POC ever.
_RPM 3 days ago 1 reply      
When I went to the page, it started playing music. I find that very frustrating and annoying.
bdpuk 3 days ago 0 replies      
I've seen similar examples with HTTP headers and sites that display those, nice angle.
general_failure 3 days ago 0 replies      
Well played sir, very well played
wqfeng 3 days ago 1 reply      
Could anyone tell me what this is about? I just see a DNS page.
thomasfl 3 days ago 0 replies      
Finally somebody found a way to put html injection on to good use.
tedchs 2 days ago 0 replies      
FYI it looks like who.is fixed the XSS bug.
ginvok 3 days ago 0 replies      
Aaaand now I'm deaf :) Gotta learn sign language
iamwil 3 days ago 3 replies      
How does this work?
ing33k 3 days ago 0 replies      
good hack but really stupid of me to click it directly :\
PaulSec 3 days ago 0 replies      
I wonder how this got so many points... Reflected XSS in 2014, yeah..
himanshuy 3 days ago 1 reply      
What's up with the search box?
zobzu 3 days ago 0 replies      
That made me laugh, good one :)
notastartup 3 days ago 0 replies      
man...I woke up and got a dose of surprise....love this song.
r0m4n0 3 days ago 3 replies      
isn't this technically illegal to demonstrate haha?
st3fan 3 days ago 0 replies      
sprkyco 3 days ago 0 replies      
Luckily it does not work on my normal browser: https://www.whitehatsec.com/aviator/
I was asked to crack a program in a job interview
995 points by m00dy  4 days ago   300 comments top 39
ckaygusu 4 days ago 1 reply      
I also tried to crack exactly this program a while ago. The company (I believe it is MilSoft, one of the most reputable software companies in Turkey) sent this challenge to university students to hire a part-time CS student. This was the first time I'd ever attempted to crack something, and while I had little to no idea what was going on, it was a very thrilling experience. I think I went on for 14 hours without taking a break.

I began by trying to run the program in GDB and got SIGSEGV'd. Afterwards I inspected the faulty address and tried to avoid it by changing its value; instead it crashed somewhere else. After trying this hopeless catch-and-run for several hours, I decided I needed a better disassembly tool and moved on to IDA Pro.

This particular program contains a trick that intrigued me very much, and it is the reason why I was getting SIGSEGV'd at different locations when altering the program code.

The main payload of this program is simply XOR-encrypted by some key. The whole thing begins by decrypting the payload and then begins its execution as normal. The gist is, the particular key that encrypted the main payload is the decryption code itself (for the unacquainted, assembly code is also just a byte stream). Here, this exact part:

   0x804762d:   mov    $0xaa,%dl
   0x804762f:   mov    $0x8048480,%edi
   0x8047634:   mov    $0x8048cbc,%ecx
   0x8047639:   mov    %edi,0x80476f3
   0x804763f:   mov    %ecx,0x80476f7
   0x8047645:   sub    %edi,%ecx
   0x8047647:   mov    $0x804762f,%esi
   0x804764c:   push   $0x80476c1
   0x8047651:   pusha
   0x8047652:   mov    $0x55,%al
   0x8047654:   xor    $0x99,%al
   0x8047656:   mov    $0x8047656,%edi
   0x804765b:   mov    $0x80476e5,%ecx
   0x8047660:   sub    $0x8047656,%ecx
   0x8047666:   repnz scas %es:(%edi),%al
   0x8047668:   je     0x804770a
   0x804766e:   mov    %edi,0x80476eb
   0x8047674:   popa
   0x8047675:   add    0x80476eb,%edx
   0x804767b:   ret
As far as I can remember, the key was a bit more than that, but I'm sure it included this part.
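The self-keying scheme described above -- the payload XOR-encrypted with the bytes of the decryption stub itself -- can be sketched in Python with toy byte strings (none of these bytes are from the real binary):

```python
from itertools import cycle

def xor(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Stand-ins: "stub" plays the role of the decryptor whose own bytes
# are the key. Patch it the way a debugger writing a 0xCC breakpoint
# would, and the payload no longer decrypts to the same code.
stub = bytes.fromhex("b2aabf80840408b9bc8c0408")  # toy decryptor bytes
payload = bytes.fromhex("5589e583ec10c745fc000000")  # toy encrypted code

plain = xor(payload, stub)
assert xor(plain, stub) == payload        # XOR is its own inverse

patched = bytearray(stub)
patched[0] = 0xCC                         # int3 breakpoint byte
assert xor(payload, bytes(patched)) != plain  # key changed => wrong code
```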

At the end of every iteration (of something involving this loop, which I can't precisely recall now) the program checks whether it is running under debug mode (essentially it makes a PTRACE call and reads its output; the OP also talks about this). If so, it jumps to a random address, so even if you were just neatly watching the program run under debug mode, you weren't going to achieve anything.

The next thing that occurred to me was to manipulate how PTRACE returns its value, but I thought it would involve some kernel code fiddling and running the program under the modified kernel, which is WAY beyond my ability for now. I didn't know how to do it, but later by some very stupid trick I managed to pass this decryption part and the program made a jump to something like "__glibc_start". I needed to save the altered program and run it under gdb again (I don't remember why), but I was using the trial version of IDA Pro, which prohibits such a thing. After making a few more desperate attempts I gave up.

But this "using the code as the key".. I think spending 14 hours to see this done was well worth it.

davidgerard 4 days ago 9 replies      
Real-life tests are THE best thing to send job candidates. It scales well (you don't have to spend personal hours on them) and you get real information.

This applies even to sysadmins. We have a favourite: set up a VM with a slightly-broken application in a slightly-broken Apache and Tomcat, and get them to ssh in and document the process of fixing it. Even people who aren't a full bottle on Tomcat will give useful information, because we get an insight into their thought processes. I recommend this to all.

(I note we've just done a round of interviews where we get a nice-looking CV and conduct a technical grilling. Hideous waste of time for everyone involved. All CVs should be regarded, on the balance of probabilities, as works of fiction. Do a remote self-paced test like this. You won't regret it.)

meepmorp 4 days ago 5 replies      
> Here is the first thing i typed in the terminal

root@lisa:~# ./CrackTheDoor

Um. I see at least one security issue already.

superuser2 4 days ago 4 replies      
This is Intro to Systems homework at UChicago (the course is heavily based on CMU's equivalent.) You're given a personalized binary that asks for a series of passwords to complete each level. If you get a password wrong, it phones home to a server run by the professor and decrements your grade.

The point is to teach you to reason about assembly using GDB. You can pretty trivially set a breakpoint at the phoning-home routine so that you never actually lose any points; then it's just a question of thinking and reading hard enough before the deadline arrives.

Levels range from very simple string comparison, to arithmetic, to pretty weird tricks.

It was about the most memorable homework assignment I've ever done.

pkaye 4 days ago 9 replies      
I don't know where you find candidates that can even approach this level of skill or desire to solve puzzles. Most people I interview struggle with a few lines C program coding.
freehunter 4 days ago 9 replies      
Really nice overview of the process. I was hoping to get into debugging and breaking code, but my career took a wild turn away from that part of the job. It's still something I would like to learn, so I'm reading as much about it as I can.

I'm going to take this way off topic here, but it's a curiosity of mine. Please don't take this as an insult; it seems to be very common and as a language learner myself I'm just wondering where it comes from.

At first , it looks...


debugger.Therefore , there...

mode.In my opinion , Intel...

So , those lines will basically scan the memory , if there is a 0xCC , it will crash your program and such ...

Specifically in these examples, I'm seeing a missing space between a period and the next word, as well as a space before a comma. As English is one of my native languages, I'm not sure how people go about learning English or what resources are available to anyone learning English.

I've noticed this with a lot of English as a second language speakers, and it doesn't seem to matter what their original language is. In this case, Spanish, but I've seen native Russian and Japanese speakers with the same thing. Can anyone tell me why this is?

AlyssaRowan 4 days ago 2 replies      
Crackmes (as they're known) can be kind of fun.

The late Katja Kladnik once sent me a diskful of 'crackme' virii. I tried to deadlist one of them; it infected me when I did, and dared me to try a less obvious approach.

Mangled symbol table => buffer overflow in debugger => arbitrary code. Sneaky.

joezydeco 4 days ago 2 replies      
That's some impressive work. But then...

"The company send me another crack me for round 2 :) That's also interesting.."

That wasn't enough to get the job?

wyc 4 days ago 6 replies      
This reminds me of the popular binary bomb lab offered in some computer architecture courses:http://csapp.cs.cmu.edu/public/labs.html
jsaxton86 4 days ago 1 reply      
Does this guy have any idea how hard it is to come up with good interview questions? By posting the question and solution online (complete with an md5sum and everything!), he has ruined the question, and his employer will now need to spend a significant amount of time coming up with another way to evaluate candidates.
enjoy-your-stay 4 days ago 1 reply      
Looks like he was doing this on Linux.

A quick experiment shows me that you can call ptrace(PTRACE_TRACEME, ...) on OSX multiple times without it failing (the constant is actually PT_TRACE_ME on darwin). I wonder if that's the same for all BSDs?

Interesting and educational writeup though, and just the thing to get me tinkering myself!

professorwimpy 4 days ago 0 replies      
"Now, I have been told that the best crackers in the world can do this in 60 minutes. Unfortunately, I need someone who can do it in 60 seconds."
zellyn 4 days ago 1 reply      
If this sounds fun, give microcorruption.com a try :-)
sayginbican 3 days ago 1 reply      
Dude?? Did you wait until you got to Spain to post this? Still, it is very fun to read this post and the comments here. Actually, I prepared these two crackmes in order to arrange a small competition among universities in Turkey. But they turned out to be very good interview questions too.

It's really good to read these responses. Cracking ability is really rare in the CS student community in Turkey. Our intention was to increase awareness. Reading these comments showed me it was a really good step.

imaginenore 4 days ago 1 reply      
If we asked questions like that at our interviews, it would take us 10 years to hire one candidate. Most people fail at basic basic stuff.
sbisker 4 days ago 2 replies      
Is this company ok with this being posted?

If so, they should say what company they are, because being associated with a clever puzzle like this is great for recruiting (even if it's not being used anymore). Unless they have their own reasons for remaining quiet (government? :)).

If not, they should probably take it down, as having the solutions posted would ruin the evaluative value of what must have taken a very long time to make.

userbinator 4 days ago 0 replies      
If you want to try cracking one yourself, there are plenty of crackmes at http://crackmes.de/
acjohnson55 3 days ago 0 replies      
Back when I was in high school, I had a Palm IIIxe. This was the days before app markets and nearly everybody who made PalmOS apps tried to sell them as shareware with a price of $20-50 -- well beyond what I could afford as a broke high school student.

Fortunately, I had learned Z80 assembly programming on my TI-83, which had led me to dabble in 68k assembly when I bought a TI-89. I never mastered 68k the way I did Z80, but I knew enough to find the routines that ran the registration key check when the OK button was pressed, and by trial and error I'd invert conditional jumps until I found the one that would turn a failed registration attempt into a success. Then I'd hex edit the binary to make the switch. Worked like a charm about 80% of the time!
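The "invert the conditional jump" trick is typically a one-byte patch: on the 68k the first byte of a short BEQ is 0x67 and of a short BNE is 0x66 (an assumption about the exact opcodes patched here; the idea is the same on any ISA). A toy sketch of the hex edit, with a made-up instruction stream rather than real Palm code:

```python
BEQ, BNE = 0x67, 0x66   # 68k short-branch opcode bytes (Bcc.S)

# Made-up "binary": a compare followed by "BEQ.S +6" that skips the
# registration-failure branch only when the key check succeeds.
original = bytes.fromhex("b07c00016706303cffff4e75")
binary = bytearray(original)

offset = binary.index(BEQ)   # located by trial and error in practice
binary[offset] = BNE         # the actual one-byte hex edit

assert binary[offset] == BNE
assert bytes(binary) != original   # failed check now falls through to success
```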

estefan 4 days ago 0 replies      
I remember fravia and +orc back in the day... I think he passed away, but there are still archives online: http://www.woodmann.com/fravia/

I spent hours staring at SoftICE & WinICE, and learning x86 asm

diminoten 4 days ago 0 replies      
How the hell do I have a job? I can't even follow most of this...
harshil93 3 days ago 1 reply      
This reminds me of a Quora post. A nice one for beginners like me. The guy reverse engineered Sublime Text to remove the registration nagware.


PS- You should buy ST, it is one of the best code editors out there in the market.

jonahx 4 days ago 4 replies      
What's a good, simple intro to the basics of this kind of cracking for someone who is an experienced programmer, knows some C, etc., but has little system-level or assembly experience?
aabajian 4 days ago 1 reply      
This brings back memories of the "binary bomb" assignment in CS 107 @ Stanford. You have to run the program from Stanford's network. There are 6 levels and each level has a password you have to enter. If you enter the wrong password, the course server is notified, and a point is deducted from your grade. The correct way to solve each level is to disassemble the program and figure out what it's doing.

Here's Google's cache of this page:


...there's even a secret level in the binary.

skizm 3 days ago 1 reply      
Should have sent them a password-locked program called "DoorHasBeenCracked". The only thing it does is post passwords to an HTTP server that you control. There is a good chance they try their own password on it. New school phishing attack. /s
joeblau 3 days ago 0 replies      
I love the way m00dy dissected the problem. About 2 months ago, I was watching some advanced LLDB videos from Apple, and they went into a lot of the tricks detailed in this post for setting breakpoints and debugging a program. That being said, some of the knowledge about halting commands and configuring gdb to ignore debug mode is just something only a pro would know.

Great job and thanks for the great read.

fsniper 4 days ago 3 replies      
The post started very well, but with the first screenshot my mind started tingling: what the heck is a security engineer doing in a root shell? An unknown binary sent via email is run in a root shell. There is also no mention of email source tracking.

Hey you are a security engineer you know about weakness of smtp right?

Even if this is a virtual machine, I would really reconsider employing him, or sit down and have a serious talk about this blog post, if I were the employer.

I could not continue reading the post before ranting about it.

raverbashing 4 days ago 0 replies      
Very nice

My approach would be to disassemble, then try to find the strings in the program and see where they're being used and processed.

And kill the CC thing by hexediting the file

turtles 4 days ago 0 replies      
Similarly, I had to debug and write an exploit for a vulnerability in Adobe Reader for a job interview. :)
mahmoudimus 4 days ago 2 replies      
I have a pretty cool crackme that I programmed and I wanted to offer it as a puzzle to some candidates, but without the proper reverse engineering tools, I think most candidates would really struggle -- especially if you're looking for just general developers.

Haven't given it much thought past this.

terminado 3 days ago 0 replies      
Password variables stored as constants? In MY binary? It's more likely than you think!
mariuolo 3 days ago 1 reply      
You know that by publishing this now YOU will have to write the challenge programme for the next candidate, right? ;)
Ben-G 4 days ago 0 replies      
Are there any good resources to learn what is necessary to solve this puzzle?
ohshout 3 days ago 0 replies      
Why doesn't the author use objdump so there is no need to bypass ptrace()?
aceperry 4 days ago 0 replies      
This reminds me of a scene from the movie "Swordfish", starring Hugh Jackman, John Travolta, and Halle Berry. :-)
tomrod 4 days ago 0 replies      
That, my friends, was a powerful blogpost. Raw, exuberant, and purposeful. I learned much.
fastball 4 days ago 0 replies      
Agh wHy is thE capitalization & puncuation. so inconsistent?
ck2 4 days ago 0 replies      
Then gets fired for revealing the answer to the only test they have.

Just kidding, congrats!

marincounty 4 days ago 0 replies      
I've always believed a test is a fair way of hiring. It takes "the good ole boy" and the whole "my friend is brilliant" out of the equation. Personally, I've never liked, actually despised, the whole networking thing.
javajosh 4 days ago 1 reply      
So the only way for programs to get data from the outside world is to poll with system calls? I always thought that programs defined a "holding area" that the kernel would write into when it had data - the program still might poll, but it's polling (potentially very small, perhaps a single register) local data rather than making a system call.
Announcing Keyless SSL
499 points by jgrahamc  3 days ago   184 comments top 27
lucb1e 3 days ago 3 replies      
For those who want to understand how it works (it took me a minute, so I'll try to explain it simpler):

In simplified terms, the server usually stores a public and private key, and sends the public key to the client. The client generates a random password, encrypts it with the server's public key, and sends it to the server. Only someone with the private key can decrypt the message, and that should only be the server.

Now you don't want to hand over this private key to Cloudflare if you don't need to, because then they can read all traffic. Up until now, you needed to.

What they did was take the private key and move it to a keyserver, owned by your bank or whomever. Every time the Cloudflare server receives a random password (which is encrypted with the public key) it just asks the keyserver "what does this encrypted message say?" After that it has the password to the connection and can read what the client (the browser) is sending, and write data back over the same encrypted connection. Without ever knowing what the private key was.

The connection from Cloudflare to your bank's webserver and keyserver can be encrypted in whatever way. It could be a fixed key for AES, it could be another long-lasting TLS connection (the overhead is mostly in the connection setup)... this isn't the interesting part and can be solved in a hundred fine ways.

Edit: Removed my opinion from this post. Any downvotes for my opinion would also push the explanation down (which I hope is useful to some). I mostly agree with the other comments anyway.
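The split lucb1e describes maps onto a tiny toy oracle: the edge holds only the public half (n, e); the keyserver holds d and answers "what does this ciphertext say?" Textbook-RSA toy numbers here, wildly insecure and nothing like real TLS key exchange, purely to show where the private key lives:

```python
# Standard textbook RSA example: p=61, q=53 -> n=3233, e=17, d=2753.
n, e, d = 3233, 17, 2753

def edge_encrypt(premaster: int) -> int:
    """CDN edge / client side: only the public key is needed."""
    return pow(premaster, e, n)

def keyserver_decrypt(ciphertext: int) -> int:
    """Bank's keyserver: the only party holding d."""
    return pow(ciphertext, d, n)

premaster = 1234
c = edge_encrypt(premaster)
# The edge forwards c to the keyserver and gets the secret back,
# without ever learning d itself.
assert keyserver_decrypt(c) == 1234
```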

indutny 3 days ago 2 replies      
And my patch for OpenSSL that does the same thing: https://gist.github.com/indutny/1bda1561254f2d133b18 , ping me on email if you want to find out how to use it in your setup.
delinka 3 days ago 5 replies      
Instead of keeping the key in a potentially vulnerable place, they're putting it in an oracle: pass ciphertext to the oracle, get plaintext back. I'm interested in the authentication between CloudFlare and the oracle. Cryptographic examples involving an oracle tend to refer to the oracle as a black box that just blindly accepts data, transforms it, and replies. Of course, then the oracle's content (a key, an algorithm) risks exposure through deduction if an attacker can submit limitless requests. See http://en.wikipedia.org/wiki/Chosen-plaintext_attack

I'm not at all suggesting that CF hasn't thought of this; rather I want to see their mitigation of the risk.

mhandley 3 days ago 3 replies      
This seems to only slightly reduce the threat to the banks.

Currently, if someone compromises the CloudFlare servers, they gain the bank's private key and can impersonate the bank until the bank revokes their keys.

With this solution, if someone compromises the CloudFlare servers, they can impersonate the bank by relaying the decryption of the premaster secret through CloudFlare's compromised servers back to the bank. They can do this until CloudFlare notices and closes the security hole.

It's not clear that the difference is all that great in reality, as most of the damage will be done in the first 24 hours of either compromise.

personZ 3 days ago 4 replies      
After reading the beginning of the piece, I was expecting something more... profound. Some deep mathematical breakthrough or something.

Instead they separate the actual key signing, delegating it to the customer's device. That's nice and useful, but isn't quite what I was expecting.

teddyh 3 days ago 4 replies      
So the communication between Cloudflare and the actual SSL key holder is secured by what? Another key? In that case, any compromise of Cloudflares key is the same as a compromise of the original SSL key (at least in the short term).
otterley 3 days ago 4 replies      
Keyless SSL is basically an analogue of ssh-agent(1) for OpenSSL. It's a nice feature that you no longer have to trust CloudFlare with your private key, but there's a huge tradeoff: if your keyserver is unavailable (ironically, due to any of the things CloudFlare is supposed to protect you from or buffer you against -- DDoS, network/server issues, etc.), they can no longer authenticate requests served on your behalf and properly serve traffic.
windexh8er 3 days ago 2 replies      
All other technicalities aside it's rather interesting. From an HSM perspective it either makes that hardware now very useful or very useless.

Think of a large organization - you've been there (or not), there are 30 internal applications with self-signed certificates. Fail. The organization had purchased an HSM, but never really got it deployed because - well, that was too complex and it didn't integrate well with 3rd party network hardware and failed miserably in your *nix web stack.

This could be interesting - and I'm not commenting with regard to the efficacy or security concerns around this, but mainly the workflow simplicity it provides to large organizations who end up in self-signed-cert-hell because HSMs don't interoperate easily in a lot of use cases.

But to my original statement - this is a very good thing or a very bad thing for Thales and the like. The only requirement for an actually certified HSM, really, is certification against some hardware and software standard you have a checkbox to fulfill. Beyond that this would be a killer in the middleground for those who want an HSM like functionality but don't have any requirements to meet other than housing a secure segment where key management can be done in a more controlled manner.

vader1 3 days ago 1 reply      
While this is a cool feature, I wouldn't say the improvement is more than marginal: all potentially sensitive customer data is still available to Cloudflare in plain text. And after all, with a Business plan you can already use your own ("custom") SSL certificate which you can then revoke at any time.

Why not offer a "pass through" mode where the proxying is done at the network layer rather than the application layer? Of course in such a mode all the CDN-like functionality could no longer be offered, but it could still do a fair amount of DDoS protection, no?

mback2k 3 days ago 0 replies      
So, this is not actually keyless SSL but SSL using something like a Hardware Security Module over networked PKCS#11. Did I miss something?
zaroth 3 days ago 1 reply      
See: Secure session capability using public-key cryptography without access to the private key.


praseodym 3 days ago 4 replies      
So CloudFlare won't get your private key, but will still get to see unencrypted plaintext for all traffic? Sounds like a huge improvement...
xorcist 3 days ago 1 reply      
The article is somewhat light on content. There are standard protocols for HSM use. What is the reason you didn't use these? There are clear risks involved with inventing your own security related protocols.
_pmf_ 3 days ago 0 replies      
Are we reinventing Kerberos again?
blibble 3 days ago 3 replies      
isn't this completely missing the point, i.e. banks being able to say 'no third parties can see our clients' identifying information/balances/etc'?

yes, the SSL key doesn't leave the bank, but everything it is protecting is..

bjornsing 3 days ago 0 replies      
> World-renowned security experts Jon Callas and Phil Zimmermann support CloudFlare's latest announcement, sharing: "One of the core principles of computer security is to limit access to cryptographic keys to as few parties as possible, ideally only the endpoints. Applications such as PGP, Silent Circle, and now Keyless SSL implement this principle and are correspondingly more secure."

Ehh... I'd say Keyless SSL implements the opposite of that principle: encryption terminates with CloudFlare but authentication terminates in some bank.

yk 3 days ago 0 replies      
So the problem is, how to get a cloud in the middle while keeping the green lock in the browser? Just yesterday I read Douglas Adams's phrase "technology's biggest success over itself."
kcbanner 3 days ago 1 reply      
Interesting, but what about the latency issues of having to always contact the key server?
sarciszewski 3 days ago 0 replies      
That is amazing. I can't wait to play with this code :D
yusyusyus 3 days ago 1 reply      
How does this architecture address PFS? I'm guessing a future version would require the exchange of DH private key to make it work...
ambrop7 3 days ago 1 reply      
I don't like to sound hateful, but this is an obvious solution that any competent person knowing how TLS works would find. If someone tried to patent it, I suppose every smart card would be considered prior art. The only "novelty" is that the connection to the "smart card" is the network.

Not to say that it's not useful, but the article describes it as some grand invention.

general_failure 3 days ago 0 replies      
Well, CloudFlare can still read all the traffic. I thought that problem had been solved somehow.
diafygi 3 days ago 1 reply      
Is this the free SSL announcement that CloudFlare said it was going to announce in October?
liricooli 2 days ago 0 replies      
It seems that the correct title should have been "all your keys are belong to us".
EGreg 3 days ago 0 replies      
Wow, what a great read!
ilaksh 3 days ago 1 reply      
This is a discussion about cyberwarfare in a literal sense. The technical discussion shouldn't really be separated from the economic, political, social and human health concerns because all of those parts of the system interact deeply and directly.

A goal of total political cooperation or submission leads to economic sanctions leading to serious human health effects leading to defensive denial of service attacks. This accelerates the need to decentralize the financial network systems to make them more robust.

How can we imagine though that even after a complete transition to next generation systems that are ground-up distributed designs (not just stop-gap tweaks like this) that we won't have new types of attacks to deal with.

The starting point is the belief system that provides such fertile ground for conflict. We have to promote the idea that human lives have value and that lethal force is not an acceptable way to resolve conflict.

As long as decision makers are living in a sort of 1960s James Bond fantasy world we will all be subject to the insecurity of that type of world. It's largely built upon a type of primitive Social Darwinism that is still much more prevalent than most will acknowledge.

It's much easier to accept a compartmentalization of these problems and focus on a narrow technical aspect, but that does not integrate nearly enough information.

zameericle 3 days ago 1 reply      
Sounds like Elliptic Curve Diffie-Hellman is used between client and server to establish a shared session key. Not sure how this is new.
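Strictly, (EC)DH establishes a shared session key; each party's private value never crosses the wire. A minimal finite-field Diffie-Hellman sketch with toy parameters (not elliptic-curve, far too small to be secure, purely to show the exchange):

```python
# Toy finite-field Diffie-Hellman: only public values are exchanged, yet
# both sides derive the same shared secret. Illustration only.
import secrets

P = 4294967291   # 2**32 - 5, a small prime modulus
G = 5            # public base

def keypair():
    priv = secrets.randbelow(P - 2) + 1    # private exponent, kept local
    return priv, pow(G, priv, P)           # (private, public)

a_priv, a_pub = keypair()   # client
b_priv, b_pub = keypair()   # server

# Each side combines its own private value with the peer's public value.
client_secret = pow(b_pub, a_priv, P)
server_secret = pow(a_pub, b_priv, P)
assert client_secret == server_secret
```

The same exchange over an elliptic-curve group (with ephemeral keys per session) is what gives TLS its forward secrecy.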
Chromeos-apk – Run Android APKs on Chrome OS, OS X, Linux and Windows
460 points by ProfDreamer  2 days ago   90 comments top 18
cryptoz 2 days ago 3 replies      
This is amazing. There's a long reddit thread and some additional instructions here: http://www.reddit.com/r/Android/comments/2gv035/you_can_now_...

From the README:

> Soundcloud - Works, crashes when playing sound

Funny definition of 'works'.

byuu 2 days ago 5 replies      
Can anyone explain how this differs from using an Android emulator? (http://developer.android.com/tools/help/emulator.html)

Is it a matter of features, speed, or convenience? Obviously, all of those can be overcome, be it as a fork of the official emulator or as a third-party emulator. For instance, this new Chrome extension must be the same thing under the hood: a Dalvik runtime, possibly an ARM->Intel recompiler for any NDK applications, etc.

I figured the only reason this wasn't done to mass effect already was because it wasn't in demand. But if it's so desirable, surely creating an actual emulator would be superior to hacking up web browser extensions and ostensibly playing cat-and-mouse with Google over this?

AdmiralAsshat 2 days ago 3 replies      
Neat proof of concept.

I hope Google gets us something official sooner rather than later. It's a little disheartening that I own a Chromebook Pixel and yet I can't use Google's own hardware to design or test Android apps without installing Eclipse on a sideloaded Linux chroot via Crouton.

wzsddtc 19 hours ago 0 replies      
We worked with the ARC team at Vine as a launch partner; there were 0 modifications that we had to do to get it working on ARC. The only difference was that the "bugs" we had to fix were all reproducible on Nexus devices as well, BUT the threading model had to be more strict on ARC in terms of accessing system resources.
kasabali 2 days ago 2 replies      
I will absolutely go nuts if this thing manages to run OneNote on my Debian desktop.
oldgun 2 days ago 0 replies      
This is amazing.

I hope Google can really carry this project as far as possible. The next several major issues would be polishing up the platform, eliminating the bugs, and unifying the Android and Chromebook development interfaces. Think of the day when Android developers could actually design apps for the desktop. How cool would that be?

That's when Microsoft should really get worried.

niutech 1 day ago 1 reply      
Running Android apps in Chrome on desktop is huge! I'm glad that the ARC runtime I provided in https://github.com/vladikoff/chromeos-apk/issues/5 helped to achieve this.
bla2 1 day ago 0 replies      
Interesting; Google announced working on this at this year's I/O and posted the first apps just one week ago ( http://chrome.blogspot.com/2014/09/first-set-of-android-apps... ).
tracker1 2 days ago 1 reply      
Hope this means good Netflix support on Linux.
bmelton 2 days ago 1 reply      
So, now we can write apps in Angular that run on the web and compile to Java so that we can install them to Android, running on ChromeOS, running on OSX.


Edit: Perhaps the punny nature of this is deserving of downvotes, but the statement above is the actual use case I presented to a co-developer, discussing how this project could be of use to our app, which was built with Ionic.

FWIW, there's value in it (the app, not necessarily this post) even if it means having to unplug fewer devices to swap them out with different devices to test.

asadotzler 2 days ago 0 replies      
Java makes a triumphant comeback in the browser?
kyrrewk 1 day ago 0 replies      
I have had some success running Android x86 (http://www.android-x86.org/) in VirtualBox.
mattfrommars 1 day ago 1 reply      
How is this really useful? Android apps are really good, but they are designed for a touch interface on mobile devices, not the desktop.
bussiere 1 day ago 0 replies      
Fuuuuu. Out There, a good game only available on mobile, crashes with this solution ...

Damn, but it looks full of promise. I hope one day it will work well ...

em3rgent0rdr 1 day ago 0 replies      
Awesome! Works for me on Arch Linux running the latest Chromium. Much faster than running the Android emulator!
chj 2 days ago 0 replies      
Google needs to do this.
mjcohen 21 hours ago 0 replies      
Want Open Office!
stuaxo 2 days ago 0 replies      
It's about bloody time!
Apple's warrant canary disappears
404 points by panarky  3 days ago   93 comments top 15
kwhite 3 days ago 4 replies      
Is there any reason why a company could not apply the same concept of a warrant canary on a user-by-user basis?

Imagine seeing a message every time you log into your Gmail account informing you that Google has never been compelled to surrender your private data to a law enforcement agency.

panarky 3 days ago 1 reply      
Possible explanations:

1) It wasn't a canary to begin with, so its removal means nothing.

2) There's no legal precedent for disclosing a Section 215 order by killing the canary, so Apple removed it before they received a Section 215 order. That way it doesn't disclose anything and Apple avoids legal liability.

3) Apple really did receive a Section 215 order.

rrggrr 3 days ago 0 replies      
As explained by Apple:

In the first six months of 2014, we received 250 or fewer of these requests. Though we would like to be more specific, by law this is the most precise information we are currently allowed to disclose.


nl 3 days ago 1 reply      
Interesting, and somewhat disappointing, that it took a year for anyone to notice that it had disappeared. Its appearance generated quite a lot of interest.

(Of course, I'm as responsible as anyone else for not noticing. I wonder if it would be possible to build a service to proactively check for their disappearance?)
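Such a watcher is mostly string matching on a schedule; a minimal sketch (the phrase and URL below are placeholders, not Apple's actual report wording or location):

```python
# Minimal canary-watcher sketch: the signal is the *absence* of a phrase
# that a transparency page used to contain. URL and phrase are invented.
import urllib.request

REPORT_URL = "https://example.com/transparency-report"   # hypothetical
CANARY_PHRASE = "never received an order under Section 215"

def canary_present(page_text, phrase=CANARY_PHRASE):
    """True while the statement is still published."""
    return phrase.lower() in page_text.lower()

def check(url=REPORT_URL):
    """The piece you would run on a timer (cron etc.)."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return canary_present(resp.read().decode("utf-8", errors="replace"))

sample = "Apple has never received an order under Section 215 of the USA PATRIOT Act."
print(canary_present(sample))              # True: the statement is still there
print(canary_present("Report for 2014"))   # False: gone, raise the alarm
```

A real service would also diff wording between fetches, since canaries tend to be quietly removed or reworded rather than flipped like a boolean.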

UVB-76 3 days ago 2 replies      
Gee, thanks for the hat tip...


johnhess 3 days ago 4 replies      
Could a lawyer or someone with familiarity with warrants like these explain how a "warrant canary" is legal?

I understand the concept, but it discloses something you can't disclose. They can compel you to lie or not comment if asked, "Hey, Apple, did you get any of those National Security Letters?"

Is there a clear cut loophole or is this something yet to be challenged?

tkinom 3 days ago 1 reply      
I wonder what would happen if Russia, China, India, Japan, and the EU all demanded the same level of access to Apple's data.

Apple might not care about Iran or other smaller countries, but how is it going to deal with big markets like China, India, and the EU?

chiph 3 days ago 0 replies      
Under what conditions would the warrant canary statement reappear? I'm thinking of those workplace safety signs: "This corporation has operated for [ 179 ] days without a Section 215 warrant being served"
crazypyro 3 days ago 0 replies      
Have any of the other major tech companies had similar canary disappearances? I only ask because this is the first time I've heard of one actually being used by a tech company as a warning flare.

I'd expect a governmental legal challenge...

staunch 3 days ago 1 reply      
Apple should just declare that they have been subject to Section 215. Given how many users Apple has it can't reasonably be argued that such a disclosure would be a danger to national security.

Hopefully they would end up before SCOTUS and help defang the USA PATRIOT Act.

MrJagil 3 days ago 11 replies      
I've asked this before to no avail, but what can the NSA possibly do if Apple refuses?

Fine them? Sure, they have billions.

They can't arrest the company... Is Cook going to jail? What is the actual threat here? You could argue that Apple has more power than many governments.

stevewepay 3 days ago 0 replies      
So now what? Now that the canary has disappeared, is there no other information that can be transmitted to us? It feels like it's a binary signal that just got set permanently, so there's no more information we can glean from it.
ForHackernews 3 days ago 1 reply      
Very interesting in light of this: https://news.ycombinator.com/item?id=8333258
maresca 3 days ago 0 replies      
Perhaps this is the reason for all of the security updates in iOS 8.
Artificial sweeteners linked to glucose intolerance
411 points by bensedat  4 days ago   180 comments top 33
biot 4 days ago 3 replies      
I like how all the people who benefit from artificial sweeteners are refuting something which the study doesn't claim. For example:

  "The International Sweeteners Association (ISA) says it strongly   refutes the claims made in the study: 'There is a broad body of   scientific evidence which clearly demonstrates that low-calorie   sweeteners are not associated with an increased risk of obesity   and diabetes as they do not have an effect on appetite, blood   glucose levels or weight gain.'"
It's true that artificial sweeteners have no immediate effect on appetite, blood glucose levels, or weight gain. None of these are claims made by the study. Everyone is refuting the immediate effects of artificial sweeteners. The study claims that after consuming artificial sweeteners, if you then consume something naturally sweet, the prior consumption of an artificial sweetener alters your glucose tolerance levels.

It's the equivalent of saying that removing all the trees from around rivers has no effect on fish population because clearly fish don't live in trees. But it's the secondary effects of this which such a statement ignores: the increase in soil erosion impacting water quality, change in water temperature due to having more direct sunlight, and so on.


  "'Decades of clinical research shows that low-calorie sweeteners   have been found to aid weight-control when part of an overall   healthy diet, and assist with diabetes management,' says Gavin   Partington of the British Soft Drinks Association."
This has little meaning without having a reference point to compare the results to. If the study is correct, take one group of people who use diet soft drinks with an overall healthy diet and compare it to another group of people who consume the same overall healthy diet but drink water instead of diet soft drinks, and the group that drinks water should have a better glucose tolerance response than the diet soft drink group.

nostromo 4 days ago 4 replies      
Here's a nice write up about the results: http://www.newscientist.com/article/mg22329872.600-artificia...

Note that the mice were given the human equivalent of 18 to 19 cans of diet soda a day.

skue 4 days ago 3 replies      
For those not aware, other studies have shown that consuming diet soda may actually increase the chance of obesity. So that is not necessarily news. If you are curious, here is a pretty good study (full text):


More recently, studies have tried to determine whether there is a satiety or protein mechanism that can explain this, whereas this new study demonstrates that gut flora may play a role.

This needs to be confirmed, and there may still be other mechanisms at play as well, but it is interesting.

(Disclaimer: I do have a healthcare background, but am not a researcher in this field. Would be happy to hear more from anyone who is.)

jimrandomh 4 days ago 1 reply      
The headline is suspicious, but unfortunately, this article is paywalled, so I can't tell what's really going on. The main problem with the headline is that it lumps together "artificial sweeteners" as a category, when that is in fact a pretty widely varied class of molecules.
mratzloff 4 days ago 1 reply      
FDA acceptable daily intake (ADI) for aspartame is 50 mg per kg of body mass.[0] For an individual weighing 180 pounds, that's about 82 kg, which means his ADI is about 4100 mg. Aspartame content in popular diet sodas is between 50 and 125 mg per can.[1]

You'd have to drink A LOT of diet soda to reach these levels.

[0] http://www.cancer.org/cancer/cancercauses/othercarcinogens/a...

[1] http://static.diabetesselfmanagement.com/pdfs/DSM0310_012.pd...
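The arithmetic above can be checked directly; a quick sketch using the figures the comment cites (ADI and per-can content taken from its sources):

```python
# Back-of-the-envelope check of the aspartame numbers above.
LB_PER_KG = 2.20462

adi_mg_per_kg = 50            # FDA acceptable daily intake for aspartame
weight_kg = 180 / LB_PER_KG   # ~81.6 kg, "about 82 kg"

adi_mg = adi_mg_per_kg * weight_kg   # ~4082 mg/day, i.e. "about 4100 mg"
print(f"ADI: {adi_mg:.0f} mg/day")

# Aspartame per can varies by brand; 50-125 mg spans the cited range.
for per_can in (50, 125):
    print(f"at {per_can} mg/can: {adi_mg / per_can:.0f} cans/day to reach the ADI")
```

That works out to roughly 33 to 82 cans a day, which supports the comment's conclusion.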

sadfaceunread 4 days ago 0 replies      
This is an impressive piece of work, but I worry that a larger amount of work is needed on the relationship between glucose intolerance, diabetes, and metabolic syndromes in general. The fact that glucose intolerance is induced by a high-sugar diet and leads towards a path of clinical outcomes ending in diabetes doesn't necessarily indicate that glucose intolerance developed via artificial sweetener consumption puts you on the same clinical pathway towards metabolic syndrome and diabetes.
themgt 4 days ago 4 replies      
I'd be curious if they tried this study with xylitol. I chew xylitol gum for dental health and from my understanding it's not thought to contribute to metabolic problems in reasonable quantities:


sp332 4 days ago 3 replies      
Does this mean diabetes could (in some cases) be caused by gut bacteria? Can we reduce diabetes risk factors with targeted antibiotics that attack certain glucose-intolerance-causing bacteria?
kens 4 days ago 2 replies      
This result seems pretty strange to me - why would artificial sweeteners affect bacteria's metabolism in this way?

It seems like a bizarre coincidence that bacteria would react in the same way to three different sweeteners, unless they have receptors that happen to match human taste receptors (which also seems unlikely). In other words, to bacteria these sweeteners should just seem like unrelated random chemicals.

(I read the Nature paper - most of it looks at saccharin since that had the strongest response, but all three artificial sweeteners caused marked glucose intolerance.)

blackbagboys 4 days ago 3 replies      
The New Scientist article notes that four of the seven human subjects who consumed three to four sachets of sweetener a day saw a significant change in their gut bacteria.

As someone who has consumed significantly more than that for a very long time, my question would be, did their gut flora reconstitute itself after they stopped using the sweetener? And if not, how could you go about repopulating your microbiome short of a stool sample?

oomkiller 4 days ago 0 replies      
This seems very misleading. The abstract (available without paywall) mentions a group of sweeteners, whereas the findings seem to show that only saccharin has these negative effects. I feel like NAS are probably bad, but without evidence to support it they should not claim that in the abstract.
rcthompson 3 days ago 0 replies      
For some biological context, we have taste receptors in our digestive tracts identical or nearly identical to those on our tongue, only the ones in our digestive tract are not hooked directly to sensory neurons, but instead trigger endocrine signals and such. Since the receptors are identical, then anything that tastes sweet on your tongue will activate these receptors as well. If I recall my metabolism course correctly, studies have found that artificial sweeteners can trigger insulin release through these receptors in the same way as real sugar (leading to possible hypoglycemia as your body compensates for a rush of sugar that never comes).

So basically, I have no trouble believing that artificial sweeteners can have many of the same long-term health effects as excessive consumption of real sugar, since they're already known to have many of the same short-term effects, including effects on insulin regulation.

lee 4 days ago 6 replies      
So given a choice, between Diet Soda vs. Normal Soda, what would be worse for your overall health?

I imagine even with increased glucose intolerance, you're still better off choosing Diet?

voidlogic 3 days ago 0 replies      
Personally, I'm not really surprised saccharin isn't great for you, but this isn't so damning: there are many other artificial sweeteners to choose from.


I'd of course like to see them all studied in this manner.

mcmancini 4 days ago 1 reply      
Overall, this was a nicely done study. The microbiome is fascinating and an exciting area of research.

One criticism however would be that the dose of artificial sweetener tested was atypically high.

It'll be neat to see further research into the cause of variable responses of the subjects to the artificial sweeteners.

mladenkovacevic 4 days ago 2 replies      
I hope this doesn't hold true for stevia as well :/
raverbashing 4 days ago 0 replies      
This raises more questions than it answers I think (which is a good thing)

1 - Is there such a thing as a "sweet base"? Our tongues perceive sweeteners as sweet (duh), but it seems they mimic sugar in a way for bacteria as well.

2 - From the article "Wiping out the rodents' gut bacteria using antibiotics abolished all the effects of glucose intolerance in the mice. In other words, no bacteria, no problem regulating glucose levels."

Soooo... bacteria affect absorption of glucose? They consume it? They change the intestinal pH? Or something else?

driverdan 4 days ago 0 replies      
Can someone post the full paper? The charts shown at the bottom seem to contradict some of their conclusions and implications. For example, some of the sweeteners seemed to result in lower chow consumption and increased energy expenditure. That would be a positive effect that isn't mentioned in the abstract.
kazinator 4 days ago 0 replies      
This seems misleading.

If you go through the graphs and results, what emerges is that only the sweetener saccharin has that altering effect on the gut bacteria. I cannot find among the results any claim that the other NAS that were studied (sucralose and aspartame) have the effect.

The thing is that saccharin is not widely used any more. If saccharin is found to be harmful, that is nice to know, but not highly relevant.

SCHiM 4 days ago 2 replies      
Can anybody explain or guess what the consequences of this intolerance are or could be?
FranOntanaya 4 days ago 0 replies      
Do they compare a pure-sweetener diet with a pure-sugar diet calorie for calorie, and sweetness unit for sweetness unit? Sweeteners are still caloric; the point is that they provide the same sweetness for fewer calories.
devindotcom 4 days ago 1 reply      
Doesn't seem so strange - if you create a sugar deficit in your body by significantly reducing your intake, wouldn't you expect the body to be more responsive to sugars when it encounters them?
WesleyRourke 4 days ago 1 reply      
Raised insulin levels are much more complicated than we once thought. You can get an insulin response from artificial sweeteners just swished in your mouth and spat out. Solution: eat real food when you can.
dbbolton 4 days ago 0 replies      
Now let's see a human study where people who consume those sweeteners, eat carbohydrates in moderation, and exercise regularly are still at increased risk for diabetes.
Dirlewanger 4 days ago 3 replies      
"metabolic abnormalities"

Anyone know if they go into these in the paper? I really want to know what else is in the paper; I chew way too much sugar-free gum.

coldcode 4 days ago 2 replies      
People rarely consider how what you eat affects your microflora which then affects various other systems and may even affect your desire to consume.
louwrentius 4 days ago 0 replies      
The maximum dose is quite high, but does the effect also occur when consuming a more sane daily dosage?
pistle 4 days ago 0 replies      
The industry takeaway should be to try to isolate the bacteria that play the secondary part in the glucose resistance, then put ANOTHER additive in the drinks to kill that bacteria, then sell a more expensive NEW zero calorie drink?

People like sweet. Let's make sweet safe.

tokenadult 4 days ago 3 replies      
I'm paywalled out of seeing the whole article until I try a workaround (after which I may expand this comment), but I think we can all see the abstract of the article if we follow the link kindly submitted here. Yet some questions in other comments raise issues that are already responded to by the article abstract. Here is the full text of the article abstract available in the free view at the link:

"Non-caloric artificial sweeteners (NAS) are among the most widely used food additives worldwide, regularly consumed by lean and obese individuals alike. NAS consumption is considered safe and beneficial owing to their low caloric content, yet supporting scientific data remain sparse and controversial. Here we demonstrate that consumption of commonly used NAS formulations drives the development of glucose intolerance through induction of compositional and functional alterations to the intestinal microbiota. These NAS-mediated deleterious metabolic effects are abrogated by antibiotic treatment, and are fully transferrable to germ-free mice upon faecal transplantation of microbiota configurations from NAS-consuming mice, or of microbiota anaerobically incubated in the presence of NAS. We identify NAS-altered microbial metabolic pathways that are linked to host susceptibility to metabolic disease, and demonstrate similar NAS-induced dysbiosis and glucose intolerance in healthy human subjects. Collectively, our results link NAS consumption, dysbiosis and metabolic abnormalities, thereby calling for a reassessment of massive NAS usage."

AFTER EDIT: After reading all the comments in this thread to the time of this edit, I see that some participants here disagree entirely with how I commented at first (as above). I note their opinion with interest and say here for the record simply that I saw previous comments that raised questions about information that is available in the article abstract for all of us to read. I meanwhile did find my workaround to get the full text of the article (I have library access with journal subscriptions for one aspect of my work, which is rather slow and buggy) and from the full article text I see that the experimental approach the researchers tried--feeding mice with the artificial sweetener to see if that changed gut microbiota in the mice, and then transferring the gut microbiota to other mice--did indeed bring about clinical signs consistent with the idea that the sweetener itself might cause related clinical signs in human beings.

"To test whether the microbiota role is causal, we performed faecal transplantation experiments, by transferring the microbiota configuration from mice on normal-chow diet drinking commercial saccharin or glucose (control) into normal-chow-consuming germ-free mice (Extended Data Fig. 1e). Notably, recipients of microbiota from mice consuming commercial saccharin exhibited impaired glucose tolerance as compared to control (glucose) microbiota recipients, determined 6 days following transfer (P < 0.03, Fig. 1e and Extended Data Fig. 2e). Transferring the microbiota composition of HFD-consuming mice drinking water or pure saccharin replicated the glucose intolerance phenotype (P < 0.004, Fig. 1f and Extended Data Fig. 2f). Together, these results establish that the metabolic derangements induced by NAS consumption are mediated by the intestinal microbiota."

This preliminary finding, which of course needs to be replicated, has caused alarm in the industry, according to the link participant nostromo kindly shared in this thread.[1] There is epidemiological signal that human beings who consume a lot of artificial sweeteners are not especially healthy people compared to people who consume few. Teasing out the mechanism that may underlie that observational finding will take more research, but this is important research to get right.

"To study the functional consequences of NAS consumption, we performed shotgun metagenomic sequencing of faecal samples from before and after 11 weeks of commercial saccharin consumption, compared to control mice consuming either glucose or water. To compare relative species abundance, we mapped sequencing reads to the human microbiome project reference genome database [16]. In agreement with the 16S rRNA analysis, saccharin treatment induced the largest changes in microbial relative species abundance (Fig. 2a, Supplementary Table 2; F-test P value < 10^-10). These changes are unlikely to be an artefact of horizontal gene transfer or poorly covered genomes, because changes in relative abundance were observed across much of the length of the bacterial genomes, as exemplified by one overrepresented (Bacteroides vulgatus, Extended Data Fig. 7a) and one underrepresented species (Akkermansia muciniphila, Extended Data Fig. 7b)."

The authors sum up their experimental findings by writing

"In summary, our results suggest that NAS consumption in both mice and humans enhances the risk of glucose intolerance and that these adverse metabolic effects are mediated by modulation of the composition and function of the microbiota. Notably, several of the bacterial taxa that changed following NAS consumption were previously associated with type 2 diabetes in humans [13, 20], including over-representation of Bacteroides and under-representation of Clostridiales. Both Gram-positive and Gram-negative taxa contributed to the NAS-induced phenotype (Fig. 1a, b) and were enriched for glycan degradation pathways (Extended Data Fig. 6), previously linked to enhanced energy harvest (Fig. 2c, d) [11, 24]. This suggests that elaborate inter-species microbial cooperation may functionally orchestrate the gut ecosystem and contribute to vital community activities in diverging environmental conditions (for example, normal-chow versus high-fat dietary conditions). In addition, we show that metagenomes of saccharin-consuming mice are enriched with multiple additional pathways previously shown to associate with diabetes mellitus [23] or obesity [11] in mice and humans, including sphingolipid metabolism and lipopolysaccharide biosynthesis [25]."

There have been a lot of questions raised in this thread, and indeed the article itself raises plenty of interesting questions to follow up with further research. When discussing a new preliminary research finding like this, we can work outward from the article abstract to news reports about the article findings to the article text itself to focus on the known issues and define clearly the unknown issues. I appreciate comments from anyone here about how I can help contribute to more informed and thoughtful, in the Hacker News sense of "thoughtful" [2], discussion of research on human nutrition.

Other comments here asked why we should respect journal paywalls at all, and the basic answer to that question is a basic principle of economics, that people respond to incentives. (That's the same reason you don't found a startup that you expect will always lose money for all time.) Nature is one of the most-cited scientific journals in the world, so it's a big coup to be published there, and that means Nature gets a lot of submissions. To slog through all the submissions with adequate editorial work does cost money. (I used to be a junior editor of an academic journal.) The article gets more attention (it has received a lot of attention in this thread) if it is in a better rather than worse journal. Some journals are lousy enough to publish anything, and those journals beg for submissions, but Nature can charge for subscriptions and impose paywalls (which expire for government-funded research, with author sharing of author manuscripts on free sites usually being mandatory after a year embargo) because what it publishes is often worth reading (as here).


I see that while I was reading the fine article from Nature submitted to open a thread, my comment is now part of a thread that is about the New Scientist popular article on the same research finding. This will be confusing to readers newly visiting this thread. The title of the Nature article is "Artificial sweeteners induce glucose intolerance by altering the gut microbiota" (the Hacker News thread title I saw, per the usual rule of using the article headline as the submission headline) and the article DOI is


for the full article published online (behind a paywall) on 17 September 2014.

[1] http://www.newscientist.com/article/mg22329872.600-artificia...

[2] "The most important principle on HN, though, is to make thoughtful comments. Thoughtful in both senses: both civil and substantial."


Someone1234 4 days ago 3 replies      
So the public funds studies, which researchers give to journals for free, and the journals then sell access for $3.99/view. I'm really not sure this is the "free exchange of ideas" on which science is based.

Even the New York Times only charges $3.75/week (the Nature price is per article/view, NOT per week; it would work out to $4.14/week on their $199 plan), and the NYT has to actually pay journalists to create the content. Nature gets all its content for free.

So what are Nature's expenses anyway? They no longer have to typeset, as it is just an identical PDF which is sent to them. Is hosting and management of the website really so costly that it comes to $3.99/article?
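For concreteness, the per-week comparison above can be sketched in a few lines. The assumption that Nature's $199 plan is annual (and a 52-week year) is mine, not stated in the comment, so the numbers here are illustrative only:

```python
# Rough per-week cost comparison (illustrative figures only).
NATURE_PER_ARTICLE = 3.99   # price per single article view, from the comment
NATURE_ANNUAL_PLAN = 199.0  # assumed to be an annual subscription price
NYT_PER_WEEK = 3.75         # NYT digital price cited in the comment

nature_per_week = NATURE_ANNUAL_PLAN / 52  # assuming the plan is annual

print(f"Nature, one article view: ${NATURE_PER_ARTICLE:.2f}")
print(f"Nature plan, per week:    ${nature_per_week:.2f}")
print(f"NYT, per week:            ${NYT_PER_WEEK:.2f}")
# A single Nature article view costs more than a full week of the NYT.
```

Under these assumptions a week of the subscription is cheaper than one article view, which is the commenter's point about per-view pricing.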

kolev 4 days ago 1 reply      
Sweeteners are suspected to have helped bring down the Roman empire (via lead poisoning), so learn from history: just change your taste norm and you'll live longer and happier. I had a sweet tooth once, and it took about a couple of years before I couldn't even tolerate it anymore. The weak find excuses; the strong adapt and improve. Just reject anything with refined sugar or fancy new "healthier" sweeteners - do you like sugar more than tomorrow? Cane sugar is not healthier than HFCS (it might be just slightly less harmful). Agave "nectar" actually has significantly more fructose than HFCS... and it's not unprocessed as claimed, and so on. Stevia is slightly different - the plant has other benefits - but I wouldn't ever use the adulterated version (Reb A or whatever). Hack your taste buds, hackers!
jimhefferon 4 days ago 0 replies      
Diet soda makes you fat.
What Coke Contains (2013)
396 points by fmela  2 days ago   173 comments top 33
d0mdo0ss 2 days ago 4 replies      
> coca-leaf which comes from South America and is processed in a unique US government authorized factory in New Jersey to remove its addictive stimulant cocaine

According to Wikipedia "The Stepan Company is the only manufacturing plant authorized by the Federal Government to import and process the coca plant, which it obtains mainly from Peru and, to a lesser extent, Bolivia. Besides producing the coca flavoring agent for Coca-Cola, the Stepan Company extracts cocaine from the coca leaves, which it sells to Mallinckrodt, a St. Louis, Missouri, pharmaceutical manufacturer that is the only company in the United States licensed to purify cocaine for medicinal use."

Someone1234 2 days ago 11 replies      
I wish Coca-Cola would make an acid-free version of Coke. The phosphoric acid adds a slight tang to the drink, but in exchange absolutely destroys your teeth over years of consumption.

For regular drinkers like myself I'd happily pay a small premium to buy the "acid free" version of the drink. The sugar still does damage but with both the acid AND sugar it is like a double whammy of "badness" (acid which destroys your teeth's natural protective coating, and sugar to feed the bacteria which actually eat away at your teeth).

No amount of brushing can really undo the amount of damage acidic soda does to your teeth - trust me, I know! Even with prescription toothpaste with fluoride 5x stronger than normal (5000 ppm vs. 1100 ppm), you're only slowing down the progression.

jstalin 2 days ago 4 replies      
The same type of story as the classic "I, pencil," published in 1958:


srean 2 days ago 1 reply      
The article waxes so eloquent about this beloved product that I would have mistaken it for a paid PR piece. It's a great read nonetheless.

Those who are also interested in the other, darker, grimier side of the same coin might want to check out its use of mercenaries for union busting in South America (by murder, of course; in the hands of the right spinners that would be 'terrorism'). Similar stuff happened in India as well.



gokhan 2 days ago 9 replies      
> The number of individuals who know how to make a can of Coke is zero.

This reminds me of a fact I recall from time to time. If civilization collapses after, say, a world war, I most probably can't make a pot, can't grow plants, can't tell whether one is edible or not, can't dig for petrol, can't make plastic (or even glass), can't reinvent concrete, can't make gunpowder, etc. - you get the point.

I can only write software and maybe drill with tools and nail with a hammer but that's all.

klinquist 2 days ago 3 replies      
You can make your own almost-Coke... OpenCola, the open-source cola.


bjornsing 2 days ago 3 replies      
> The top of the can is then added. This is carefully engineered: it is made from aluminum, but it has to be thicker and stronger to withstand the pressure of the carbon dioxide gas, and so it uses an alloy with more magnesium than the rest of the can.

Nope, the pressure from the carbon dioxide pushes equally against all sides of the can. If anything the pressure at the top is slightly lower than at the bottom, at least if the can is standing, because of the weight of the coke pushing against the bottom.

vesche 1 day ago 0 replies      
> ... the inside of the can is painted toowith a complex chemical called a comestible polymeric coating that prevents any of the aluminum getting into the soda.

I thought this was very interesting, so I did a little digging... There is remarkably little information on these 'comestible polymeric coatings', but I was able to find (see below) a reason why that is. Apparently these coatings are proprietary to the manufacturer, and there are competing companies who are constantly in a race to find the best coating.

It's supremely interesting that drinking a can of Coke is almost a magic trick right in front of your eyes. It'd be like someone holding a lighter straight to a piece of paper and everyone being baffled as to why it isn't catching fire. Yet when someone drinks a Coke, no one bats an eye at how it isn't mixing with the metal salts and eating straight through the aluminum can.

"Interior can coatings designed to prevent migration of metal salts into the contained product are called "comestible polymeric coatings". The coatings are polymers typically used in coil coating. The exact nature of the coatings isn't available since most are proprietary to manufacturers who continuously look for better coatings."

source: http://www.eng-tips.com/viewthread.cfm?qid=258261

JacobAldridge 1 day ago 1 reply      
Actually, the Pinjarra process creates Aluminium. The process of shipping it to Long Beach CA converts it into aluminum.
neya 1 day ago 2 replies      
I'm surprised that the author hasn't mentioned the use of toxins (pesticides)[1], to the extent that Coke is even used as a real pesticide in various parts of India.

I know some may find this offensive, but sorry, I think I have a moral responsibility to let the people around me know of the harms caused by this carcinogen[1].



Tloewald 20 hours ago 0 replies      
This article reminds me strongly of a pivotal passage in the novel Gain, by Richard Powers (which I can't recommend highly enough, although it's a downer). In that passage he describes how a disposable film camera is made.
makmanalp 2 days ago 0 replies      
My favourite version of this is a picture of a boeing 787 and where all the parts are manufactured: http://seattletimes.com/art/news/business/boeing/787/partsen...

Of course if you could break it down further into smaller parts and tools to manufacture those parts, you'd get an even greater variety of countries and companies.

The center where I work actually does work slightly related to this, https://www.youtube.com/watch?v=0JC24CBVsdo

gburt 2 days ago 0 replies      
I am reminded of I, Pencil. [1]

[1] http://www.econlib.org/library/Essays/rdPncl1.html

Theodores 2 days ago 1 reply      
You could say this about any product. I think the essay would be considerably longer if it concerned a typical PC or phone, not to mention a car.

I also think the essay can be written with cynicism instead of wonder, e.g. with an anti-capitalist slant. With one innocuous affordable purchase you can deforest and pollute four continents whilst giving yourself diabetes and dental caries!!!

jeffbarr 2 days ago 2 replies      
This is my favorite sentence of the article:

> Modern tool chains are so long and complex that they bind us into one people and one planet.

When we think about colonizing the Moon or Mars with small groups of people with the intention of making the colonies self-sustaining over time, deep, long-evolved tool chains like the one described in the article could be very difficult to scale down and to replicate in other environments.

raverbashing 2 days ago 1 reply      
"The top of the can is then added. This is carefully engineered: it is made from aluminum, but it has to be thicker and stronger to withstand the pressure of the carbon dioxide gas, and so it uses an alloy with more magnesium than the rest of the can"

Yes, but the pressure is the same on all parts of the can. Ok, almost the same, still.

Maybe it's because of the parts that have been scored to make the can easy to open?

AlyssaRowan 1 day ago 0 replies      
Not that I want to waste any time on an HPLC-MS machine on this, but I was distinctly under the impression that Coca-Cola 7X does not actually contain kola nut?

I've had Red Bull Cola, and actually found it quite different, but delicious. No accounting for taste, though.

exacube 1 day ago 2 replies      
How can 0 people know what's in Coke while still getting it FDA approved? Surely this can't be true... How does the company know how to make a can of Coke if they don't know how it's put together?
lpolovets 1 day ago 0 replies      
There's a book with a similar theme about Twinkies. It's called "Twinkie, Deconstructed" (http://www.amazon.com/gp/product/B000OZ0NZS)
NotOscarWilde 2 days ago 2 replies      
Speaking as somebody who's never even smoked a cigarette or a joint: are there people who tried to recreate the "original" coke recipe? The one with "unprocessed" coca leaves? Is it available on say the latest instance of Silk Road? What is it like?
TazeTSchnitzel 2 days ago 1 reply      
> The number of individual nations that could produce a can of Coke is zero.

While this is true in that no individual nation could produce Coke with the exact same formula, an individual nation could surely produce a soft drink.

Istof 2 days ago 1 reply      
"[...] and the edges of the can are folded over it and welded shut."

I never thought there was any weld in a soda can... (and I still don't think there is any)

cbhl 2 days ago 1 reply      
Article title should probably contain (2013).
justintocci 2 days ago 0 replies      
I wonder what the failure rate on the interior coating is? How often are people ingesting dissolved aluminum?
swartkrans 2 days ago 1 reply      
Is the ammonia dangerous? Or can it be? How much ammonia can a person consume before it becomes dangerous?
alecco 2 days ago 1 reply      
To keep you drinking, they add plenty of sodium (50mg+) masked with sugar, HFCS, or sweeteners. They also add caffeine as a diuretic to keep consumers drinking. And then they market it to children - lovely people.

Check out Dr. Robert Lustig's videos. Also the book Salt Sugar Fat, about food industry engineering.

yarou 2 days ago 0 replies      
He forgot to mention the Colombian paramilitaries that break up Coke bottling plant unions by kidnapping their children. Funny how "globalization" is presented in a saran-wrapped, sanitized version.
InclinedPlane 1 day ago 0 replies      
This is good, although I think it reaches just a little too far when it says that the number of nations that could produce a can of coke is zero. If the US so desired it could grow coca leaves, and kola nuts, and use locally produced aluminum, etc.
Smachine 2 days ago 0 replies      
Think of all of the jobs the making of Coke provides. Oh here we go......lol
WiggleYourIndex 1 day ago 0 replies      
Clean water tastes better.
argumentum 2 days ago 0 replies      
A brilliant paean to the free market and the invisible hand. Milton Friedman once described the manufacture of a humble pencil in this way.

(edit: just saw a link to an essay entitled "I, Pencil" at the bottom .. this might have pre-dated Friedman).

joshfraser 2 days ago 1 reply      
1 can of Coke contains 160% of your recommended daily intake of sugar. But you won't see that on the label, because money.
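The percentage claim above is easy to sanity-check. The figures used below (roughly 39 g of sugar in a 12 oz can, and a 25 g "free sugars" daily guideline) are assumptions I'm supplying for illustration, not numbers from the comment:

```python
# Sanity check on the "% of recommended daily sugar" claim.
SUGAR_PER_CAN_G = 39.0    # assumed sugar content of a 12 oz can of Coke
DAILY_LIMIT_G = 25.0      # assumed recommended daily "free sugars" limit

pct_of_limit = SUGAR_PER_CAN_G / DAILY_LIMIT_G * 100
print(f"One can = {pct_of_limit:.0f}% of the daily limit")  # 156%
```

That lands in the same ballpark as the 160% figure, with the exact number depending on which daily guideline you assume.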
I Had a Stroke at 33
318 points by Thevet  10 hours ago   79 comments top 19
ohquu 5 hours ago 2 replies      
What a beautiful article.

My girlfriend had three strokes, in succession, two years ago (when she was 22). The night before these strokes occurred, she had a transient ischemic attack (TIA). She began speaking gibberish to her friends. She texted me later that night explaining what happened. Her friends had laughed about it because they thought she was just acting like a goofball. I had no idea these were signs of a TIA, but I told her that if it happened again she needed to go to the doctor immediately.

The next day, the right side of her body went numb. This time, she was around people who noticed something was wrong, and she was immediately rushed to the emergency room. By the next day, I had flown a thousand miles (from the location of my new job) to be with her. She couldn't remember many words. She couldn't read a clock. She did not know the answer to 3 + 0.

It turned out that, similar to the author of this article, clots had traveled through the hole in her heart and up to her brain. Luckily, she recovered fully and was back to her old self within about a month. She had surgery to fix the PFO a couple months later. The neurologist told her that nine times out of ten, the clot travels a different path, and the victim is left dead or braindead. I am so lucky. Writing about this has me in big tears.

I am going to stop writing and go hug her now.

weddpros 2 hours ago 0 replies      
I was 32 when I had a stroke (March 4th, 2003). It was a different kind of stroke, affecting a different part of my brain, essentially related to vision. I was half blind, but I only realized something was "strange" when I saw myself in the mirror: I had only one eye. My brain knew I should have two: I was half blind.

The first diagnosis was migraine with aura (blindness in my case). But the aura should have lasted no more than an hour. Two days later, the aura (blindness) was still there (a sign of infarct but my doctor didn't know it).

I spent 2 days alone in the dark. I forgot to eat, but I knew I had to call a taxi to take me to the hospital. I wasn't scared; I thought it was just a migraine. It really looked and felt like my usual migraines. So my doctor had me take anti-migraine pills, which are vasoconstrictors. That might have caused the actual stroke: extreme vasoconstriction. Never take anti-migraine treatment during the aura. Never.

It took 2 days before I was diagnosed at the hospital, but they just told me "I see a shadow on the CT scan"... so I spent the next 2 days wondering what kind of shadow? stroke or cancer? And no, I didn't think about asking.

It took one week to be hospitalized for 10 days (my mother called the hospital, harassed them until she could talk to a doctor, who said it was an emergency... one week after the stroke).

It took 15 days before I woke up in the morning and thought "Wow! WOW! I'm back now!". Before that, I spent most of my time sleeping, reading half a page between two naps. I was sleeping more than I was awake.

It took 3 months before I could look at everything I wanted. Before that, looking at trees (and other complex objects) was "painful", and watching movies was too exhausting (especially action movies). During these 3 months, I recovered from blindness, but not completely. I still have a blind spot in my field of view today.

It took 6 months before my mood was really restored. Before that, I needed a daily nap, lots of soothing music, and no pressure at all.

I took aspirin daily for 3 years, after which my neurologist told me I could stop.

I had a few migraines after that, and even ended under oxygen at the hospital once, but I always recovered within 15 days.

It was 10 years ago, and it changed my life. I quit my job as a developer, spent 2 years wondering what to do next, then became a wedding photographer. In February this year, almost 10 years later, I got a new job as a developer.

I'm back on rails (node.js to be precise :-)

ZeroCoin 2 hours ago 1 reply      
>I wandered outside the boundaries of telemetry. They lost my heartbeat. When I returned, they scolded me.

The audacity of health care industry workers (those who should know what a certain disease entails) who place blame on their patients for acting normally is infuriating.

I had kidney stones once at a young age. I remember barely walking into the emergency room one night after they became too painful.

As soon as I arrived, white as a sheet of paper, they asked me a few questions... doped me up on morphine... and managed to "lose" me on a gurney in a hallway somewhere for a few hours until my girlfriend at the time came and found me.

They took xrays I believe and I was free to go with some more painkillers in hand.

Apparently the hospital told me that I was supposed to call them by X date if I wanted any more painkillers.

I called them back about a week after that date had passed, asked for a refill, and was scolded like I was some drug addict just looking for a fix. I think they even hung up on me. How could I be so stupid as to have forgotten a date they told me when I was high as a kite by their own doing? Right.

I ended up passing them without any painkillers, which, as many of you have probably heard, is unbelievably painful.

I understand that it can get monotonous working in a hospital, but with the amount of money they're paid to work there, you would hope that they would be required to operate with a little compassion, considering that many people in a hospital are leaving this world.

What if the author's last memory was that of a person she didn't know berating her for something she wasn't sure she even did?

tucaz 1 hour ago 0 replies      
About 15 years ago my father had a stroke at our house. I was about 12 years old and at home at the time, along with my grandmother. We didn't know what was happening. One second he was okay, and the next he was on the floor. It was almost impossible to get him back into bed, even with the help of one of our tenants.

We called my mother at work and the funny thing is that before she came home to take him to the ER he was able to ask for coffee (and drink it) and also to smoke a cigarette.

Moving 15 years forward he's still with us (62 years old) with no movement at all on the left side of his body. Had a heart attack with major surgery, is on more than 15 different medications, has diabetes and a bunch of other "minor problems".

My mother gave up her life to take care of him and everyday is a struggle because of the existing problems prior to the stroke and the ones that came after he became bitter and really mean to those who love and take care of him.

I'm not sure why I wrote about this but I felt like sharing. It's not easy when people don't recover, but for some reason I believe we have to take care of them and do our part.

patio11 1 hour ago 0 replies      
My mother had a stroke. The fallout is very, very hard for the patient and their family.

Diet and exercise are, apparently, the easiest levers you have to control for stroke risk. Trust me: this is the best of all possible reasons to care about those. You do not want to go through it and you do not want your family to go through it. Specifics elided for privacy but suffice it to say that it combined elements of a heart attack, advanced Alzheimer's, and a profound war injury in a compact package that arrived on a normal sunny Tuesday.

huhtenberg 6 hours ago 7 replies      
Remember this -

  You have FOUR HOURS to get a person with a stroke to the emergency room.
  If you do, their chances of survival are dramatically higher.

pragone 4 hours ago 0 replies      
Strokes can present in truly any number of ways. The Cincinnati Stroke Scale, often seen in public health campaigns as "FAST", provides three simple, quick assessments that can reliably delineate a majority of strokes. It is the standard for basic EMTs as well. More advanced providers should perform a more comprehensive exam, testing all the cranial nerves (actually usually just II through XII). A more formalized, advanced stroke scale is the NIH stroke scale: http://stroke.nih.gov/documents/NIH_Stroke_Scale.pdf

While there is often some kind of neurologic deficit associated with a stroke, the gold standard is, of course, a CT or CTA, which should be administered immediately upon arrival in the ED for a suspected stroke (depending on the presentation of symptoms, an exam by a neurologist may occur first).

The symptoms described in this story would absolutely make me think this person was having a stroke if she had verbalized them to someone with my training.

It's also worthwhile to point out that the person having a stroke may not realize they are having a stroke. People may have the obvious symptoms - slurred speech and hemiparesis - and refuse to acknowledge that these problems exist, because, in their mind, they don't.

If you think someone is having a stroke, record the time you first noticed symptoms and call 911 immediately.

treehau5 2 hours ago 0 replies      
I am not sure if you are the OP or know her, but this story touched my heart. It is beautiful. I can only imagine how strong she has to be, and how strong the people around her must be, to get through this. My sister and her husband are going through the very same thing: he was progressing very well in his career, and they had just had their first child, when he suffered his stroke for the same reason, a hole in the heart. All the best. You and all the stroke victims have my prayers tonight.
day_ 4 hours ago 0 replies      
Great article.

I had a stroke one night in my 20s. When I woke up, my right side was numb (I thought I had just slept on my arm), I spoke gibberish, and I was unable to write, but I felt fine and I thought I spoke perfectly fine. I finally figured out that something was not right when I tried to write a message to my mom on the back of an envelope to tell her that I was fine, and I just drew a straight line instead of letters. That's when she called an ambulance.

Luckily I was back to normal within a month, but I struggled for some time to find the right words when talking.

alexitosrv 3 hours ago 0 replies      
Four weeks ago my girlfriend, 32 yo, had a stroke caused by a deep venous thrombosis on the left side of her head. It was intense to see how much she deteriorated in the course of just a few hours, starting with a seizure and some very acute headaches she had, together with vomiting, the previous days. We were in intensive care for around 10 days, and then 3 more days in regular hospitalization. The investigation of her tendency to hypercoagulate pointed to a sedentary lifestyle and the previous uninterrupted use of oral contraceptives (Mercilon) for almost ten years as the main culprits. We were fortunate in some sense, as the cause was easy to point out and we ruled out autoimmune diseases (my biggest concern), and now she is on low-molecular-weight heparin, hoping that the clot is reabsorbed in two or three months.

As part of the recovery, I'm reading to her My Stroke of Insight, by Jill Bolte Taylor, and her symptoms and the description of the acute phase match closely: speech loss, paralysis of the right side of her body, and a rational disconnect from external stimuli.

This article also highlights how sensitive we are to changes in what we are, in the end: physicochemical interactions. I was worried my girlfriend would lose her essence, but thank God her recovery has been amazing so far.

skizm 5 hours ago 2 replies      
Remember FAST: http://en.wikipedia.org/wiki/FAST_(stroke)

The first 3 minutes of the House episode "Fetal Position" (S3 E17) demonstrate it.

pimentel 3 hours ago 1 reply      
All the stories I know and have heard of stroke victims in their 30s or 40s make me think and ask: is there really a way to prevent or predict a stroke?

Would the "controversial" routine full-body scan help? Especially for people who have a parent who was an early stroke victim?

These things are scary as hell...

cell303 2 hours ago 0 replies      
I was terrified after reading this. It reminded me that I should live a bit healthier: not drink more coffee than water, go to sleep earlier, wake up earlier, maybe even exercise. But more importantly, it got me thinking. The non-routine kind of thinking. I read some old diary entries. Wrote a new one, after almost a year.
camperman 3 hours ago 1 reply      
Her memory experience was already reminding me of Leonard in Memento and then she writes, "it's time for my shot." That hit me unreasonably hard.
GuiA 4 hours ago 2 replies      
Will smartwatches with heart rate/other health sensors be able to detect strokes right when they happen? Or maybe even slightly before they do?
spindritf 5 hours ago 2 replies      
Well, I just popped an Aspirin for no reason.
bshimmin 5 hours ago 3 replies      
I wish Buzzfeed only had articles like this.
diestl 5 hours ago 3 replies      
Not sure what this has got to do with programming?
A Long, Ugly Year of Depression Thats Finally Fading
309 points by squiggy22  2 days ago   128 comments top 27
karmajunkie 2 days ago 0 replies      
Man, there are a lot of diagnoses getting thrown around this thread. As a caregiver to someone with a serious illness, as well as someone who periodically suffers from many of the same mental and emotional issues raised here: how about refraining from doing that unless you are A) a mental health or otherwise trained medical professional, and B) someone who has actually seen and assessed the patient? I'm not calling out anyone in particular because, let's face it, this is HN and we're probably all know-it-alls at one time or another, but this can have a particularly pronounced effect on the posters who are receiving these comments.

If you are dealing with any of these issues, my heart goes out to you. Please reach out to a professional, at the very least a counselor or therapist who specializes in the things you're dealing with. If you need help finding one, my email is in my profile; I'm glad to help.

tst 2 days ago 4 replies      
I'm also recovering from a depression which lasted for quite a while. It absolutely sucks because you think you're worthless, nobody loves you, you can't get anything right, and it would be best if you just didn't exist anymore.

And on top of that you isolate yourself. I know how hard it was to ask for help therefore I want to show you some things which helped me:

- Realize that your depression is lying to you. It doesn't tell the truth. It makes you believe that something is logical even if it isn't.

- Read 'Feeling Good' [0] - terrible title, great book. It will probably work better than average on the average HN reader because it takes a 'rational' approach to depression (cognitive-behavioral therapy). It helps you to recognize destructive thought patterns and how to deal with them.

- Garbage in, garbage out. What works for computers also works for your body. Yeah, you're a geek but you can eat some vegs instead of the 500th pizza. Also working out (or other sports) are pretty great.

- Long term: Therapy which tries to work on the root cause and not just at symptoms.

Finally, here's a rather extensive list of lectures, books, exercises, etc. which help with dealing with depression [1]. Back when I was fed up with feeling like crap, I created a spreadsheet with the 8 activities and tracked those every day.
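The daily-tracking idea above is simple enough to sketch in code. The eight activity names below are placeholders of my own invention, not taken from the linked list:

```python
import datetime

# Minimal daily activity tracker: one row per day, one entry per activity.
ACTIVITIES = ["exercise", "sleep 8h", "ate well", "socialized",
              "sunlight", "gratitude", "meditation", "no alcohol"]

log = {}  # date -> set of activities completed that day

def record(day, done):
    """Record which tracked activities were completed on a given day."""
    log[day] = set(done) & set(ACTIVITIES)

def score(day):
    """Fraction of tracked activities completed on a given day."""
    return len(log.get(day, set())) / len(ACTIVITIES)

record(datetime.date(2014, 9, 22), ["exercise", "sunlight", "ate well"])
print(score(datetime.date(2014, 9, 22)))  # 0.375
```

A spreadsheet does the same job; the point is just making the daily score visible so streaks and slumps stand out.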

Note: Every person seems to react differently. I read about people who improved a lot by meditating; on the other hand, it didn't work for me.

So, try some things out and don't give up. You can beat that liar in your head.

[0]: http://www.amazon.com/Feeling-Good-The-Mood-Therapy/dp/03808...

[1]: http://www.reddit.com/r/getting_over_it/comments/1nd14u/the_...

PS: If you have any questions feel free to ask - if you want to send me a private one write at <username> @ panictank.net

dchuk 2 days ago 3 replies      
I guess I'll be the only person to comment on the actual Moz business struggles rather than the depression side of this post. Moz raised their money at a really tricky time because it was right before Google essentially bent over the SEO industry. When Rand mentions the Content tool that hasn't even started being developed, that was something that was supposed to take your Google Analytics keyword referrer data and match it to your content and your rankings and your links and your competitors and basically help you spot keywords and content you can easily rank better for.

The timeline seems to match up: they had the plan for this tool before any of the Google SSL stuff started, so as they were working on its design and UX, Google started rolling out the SSL changes and basically ruined their idea. Moz ended up adding tools to try and guess what keywords made up your "(not provided)" data, but that's a far cry from what they were originally planning.

I'm basing this entirely on being heavily involved in the SEO industry around the times mentioned in Rand's article and having even run a successful SEO SaaS product (which is still going even though I've moved on to other projects). I just remember seeing screenshots of what they wanted to build and thinking "wow, if they can nail this, it will be great". I wanted to build a similar app. But when Google started hiding all organic keyword data in analytics, I distinctly remember saying "Well there goes Moz's whole new product".

Google really fucked the SEO world up with their (not provided) move. Think what you will about SEO but it's still a legitimate marketing channel and I really have never been able to understand why Google thinks it's ok to not share your organic keyword data but your paid keyword data is totally fine to share with site owners.

But not much anyone can do about that now I suppose.

jtbigwoo 2 days ago 0 replies      
>> ...layoffs is a Pandoras Box-type word at a startup. Dont use it unless youre really being transparent (and not just fearful and overly panicked as I was).

I made a similar mistake once as a manager and experienced this kind of thing more than once as an employee. Certain words like "layoffs" or "merger" are so loaded because employees know that you know more than they do. Even if you think you're being totally transparent, employees are correct to assume that you're holding some things back because you are. It's your job to understand the state and direction of the company and give your employees the information they need to do their jobs. Employees, especially the smart ones, are going to try to infer additional information from what you tell them even when you think you've told them everything they need to know. Leaders need to be aware that a certain amount of "Kremlinology" happens in every company.

He made things worse by being vague about the company's real situation and contradicting himself a couple sentences later when he said, "...we'll survive (though not with much headroom..." If he's talking about layoffs, who is this "we"? Everybody? Rand and Sarah? If you're going to be transparent, you also need to be specific and direct. A better approach might have been, "Sarah and I modeled out some worst-case scenarios last week and this stretches our break-even point an extra six months, which will constrain our growth."

astockwell 2 days ago 0 replies      
Speaking purely to the experiences of building a new software product, I've seen this exact story play out countless times. Everyone (except maybe the engineers themselves) seems to think that designing a software product is part of the "planning phase", and thus should happen before any time is "wasted" on development:

> "That product planning led to an immense series of wireframes and comps (visual designs of what the product would look like and how it would function) that numbered into the hundreds of screens..."

The biggest contributor to this I've seen is the dozens (hundreds? thousands?) of small ways that a design (done in a vacuum, without simultaneous prototyping) will differ from established development patterns, frameworks, and other pre-packaged solutions that engineers use daily to avoid reinventing every wheel. And engineers respond with timelines that expect to be able to leverage those frameworks. Thus the dissonance begins.

One example: a design calls for a form to be broken across 4 pages. There may be great aesthetic rationale or even user testing to support this, but that means that in all likelihood any framework (e.g. Rails/Flask/Play/etc, not to mention native apps) will need additional modification to support sessions, changes to validation, changes to the auth domain, persistence changes, etc. And it's not necessary for an MVP. And many times these differences are much more subtle and deeply entrenched, and would require rethinking much of the wireframes/designs to align with development patterns. /rant

I'm not sure what the answer is here, except maybe that this is one more point in favor of having a "technical founder" or in general a technical person with decision-making authority, to avoid going down a road without proofing out your ideas or timelines.

Alex3917 2 days ago 0 replies      
> "the funny thing is, Marijuana doesn't have any pain-killing properties. It just lessens tension, anxiety, and stress for some people."

Marijuana is an analgesic. But in this case the effects are stemming from the fact that it's an anti-inflammatory, so that the fluid in your disc is no longer compressing the spinal nerves. And the fact that it reduces anxiety also reduces inflammation even further, since anxiety is probably largely what was causing the inflammation.

johnyzee 2 days ago 0 replies      
I love it when CEOs own up like this; it's probably one of the most appealing traits in a leader I personally can think of. As long as they don't become too insecure to actually lead, introspection and self-criticism are strengths, not weaknesses. Besides, being aware of these traits and their negative repercussions puts you in a pretty good place; the ones who really suffer are the guys who repress and deny the down slopes, always happy and bubbly on the outside but in reality inches from a mental breakdown.

The last part about how stress causes physical health problems is very important, and very overlooked. Besides the muscle and nervous tension the OP mentioned, stress seriously reduces immunity which can manifest itself in a myriad of unexpected ways (whichever subsystem fails first), from infections to cysts and all kinds of nastiness.

gadders 2 days ago 0 replies      
One last comment - this post from Rand reminds me of the following from Ben Horowitz:

"By far the most difficult skill for me to learn as CEO was the ability to manage my own psychology. Organizational design, process design, metrics, hiring and firing were all relatively straightforward skills to master compared to keeping my mind in check. Over the years, I've spoken to hundreds of CEOs all with the same experience. Nonetheless, very few people talk about it and I have never read anything on the topic. It's like the fight club of management: The first rule of the CEO psychological meltdown is don't talk about the psychological meltdown."


mikeleeorg 2 days ago 0 replies      
This is an incredibly brave, and hopefully cathartic post by someone I greatly admire. I really hope he is able to find the support and peace he needs.

As a bit of an aside, I wonder how much of this has led to similar troubles for other founders:

When the Foundry investment closed, we redoubled our efforts to build Moz Analytics. We hired more aggressively (and briefly had a $12,000 referral bonus for engineers that ended up bringing in mostly wrong kinds of candidates along with creating some internal culture issues), and spent months planning the fine details of the product.

I've heard from friends & colleagues about the massive amount of pressure they've felt after closing an investment round. While fundraising is already an incredibly trying process, the next stage is sometimes even more difficult.

In contrast, other friends & colleagues who've opted for the bootstrapped route (either by choice or circumstance) haven't seemed to face a similar massive amount of pressure. Yes, they faced incredible stress too, but not to the level of those that have raised capital.

This is merely an anecdotal observation made in my peer group. I don't mean to imply that this is some kind of phenomenon. And clinical depression is something that can cut through any kind of circumstance.

I just can't help but notice the stark difference in stress level of founders who are growing organically & carefully vs founders who are in a mad recruiting rush and sometimes hire the wrong kind of people. I wonder how much of a relationship there is between having the right kind of people in your company vs the wrong kind of people, and the stress level of a founder. I would imagine a lot.

bocalogic 2 days ago 1 reply      
I respect Rand and give him a lot of credit for vocalizing his challenges. Depression is a challenge and it can be overcome.

I am not a doctor, but I can tell you that a lot of my peers are suffering from depression brought on by business, marriage, or life in general.

One thing I do know is that the world has changed a lot in the past decade. The price of everything just keeps going up and we are constantly bombarded by information. Humans are not built for that. There is no badge of honor for being under stress 24/7. It will catch up to you one way or the other.

Humans suffer from the fight or flight responses that we encounter during high stress situations. The challenge is to digest it and make decisions not based on fight or flight emotions.

The body produces cortisol when we are under duress and it is horrible for you. It screws up everything with your body and your mind. One way to counteract this is by working out, getting sunlight, eating the right foods and staying off caffeine. Try some black or green tea instead.

30 minutes of working out will combat cortisol production for about six hours. Even going for a walk helps a lot.

Most of the world's brightest minds and most successful people suffer from depression, and knowing that you ARE NOT ALONE is a huge step forward.

You can beat depression and your life will turn around!

Talking about it and seeking help is definitely a step in the right direction. Keep your chins up.

raheemm 2 days ago 0 replies      
So few people and places can allow for this level of vulnerability and authenticity. This post is going to help a lot of people.

I have even more respect for Rand and Moz. We can say Fail Fast, Fail this, fail that ... but this kind of writing is the true embrace of failure, learning, wisdom, humanity.

gadders 2 days ago 4 replies      
I admire what Moz has done and it was an interesting read.

My comment is more of a meta one about HN. Are we really that interested in these stories of depression? We seem to get at least one a week. I realise it's an issue that may affect people here, but I'm not sure if we need the volume we are seeing now.

jroseattle 1 day ago 0 replies      
I read through this and the Can't Sleep/Loop post, which had me wiping my eyes. I feel I'm there, right now.

We're in the middle of raising money, while I also keep the engineering ship moving forward with product releases. We're about to run out of initial seed money, as we were supposed to have brought in the balance of the round and been on to Series A at this point. It's challenging, but I feel like I'm handling it.

Or so I thought. It turns out, I'm getting little sleep right now -- maybe 4-5 hours a night, on average. I've gained back so much weight and I abhor seeing myself in photos. I watch colleagues take absurd plans to investors and get way overfunded, more than they were ever asking to take on, while our little operation that's actually generating revenue (we will likely be break-even in 6 months) gets passed over. I know it's not a rational reaction, but still the mental headwinds it creates really sap my soul.

It sucks when you're a (very) logical being, and something in your head no longer fits into place. I'm short with my kids at home, and I literally dread downtime. I find that cocktails go down easy, really easy.

It's a loop, alright.

akrymski 1 day ago 0 replies      
I've been through this at every startup I founded, but managed to pull through in the end - and I'm still hoping this startup won't be any different. I struggle to imagine any CEO who has not had a tough time like this and felt utterly depressed at least once when things weren't working out. Rather than focus on the depression aspect however, why not discuss what COULD have been done better, and what Rand and other CEOs can learn from this - because ultimately there's an important lesson there besides "depression sucks":

- Don't bet your whole business on one product. Products come and go, businesses pivot. Remember how Steve Jobs launched the Mac? He created a separate, small division for the Macintosh to directly compete with the rest of the company (working on the Lisa - which wasn't going well, actually). That's genius. He knew the Mac was a risky project that could well take much longer than anticipated. He didn't bet the whole house.

- Start as small as possible. Moz Analytics was meant to be this giant swiss army knife right? Wrong. MVP lessons still apply. Couldn't you have launched the new brand with a tiny set of core features? Broken it into a modular setup where consumers could pay for features/modules in the future as you develop them?

- Iterate. Real artists ship, remember? Agile software development and all that? Doesn't sound like you had clearly defined iterative goals that you were hitting as you went, because then you'd really have an idea for where you are in the software development process. You seemed to have to go on someone's word on this. Instead you should have been producing A product every month with an increasing set of features. That way you could have still launched on time, but with less features.

- Review your progress often, and don't lose sight of the grand mission. Being smart doesn't help here - it often makes you stubborn, and I've got the same issue. But sometimes you need to have that thc-truffle, take a step back and think how else you could allocate your resources. Are there some other opportunities that the business can simultaneously pursue with a small set of resources as a backup plan? Are there some major M&A deals that can be done to shuffle things around? Do we need to hire more staff, or let people go who aren't hitting the deadlines? Drastic times call for drastic measures. The biggest issue with depression is that deep inside you still expect things to just get better on their own. And as they don't, you feel worse. Well the bad news is they won't get better on their own. You have to do something about it.

- Don't fail to communicate. The value of your business is in its passionate community, not one product. Seems like there are lots of people passionate about SeoMoz. Instead of shutting yourself out due to what appeared to you as a product failure, perhaps you could have engaged the community in the process, letting it help establish the product roadmap for the features you should be rolling out first, and trying to understand why 90k sign-ups failed to try out the product.

danielweber 2 days ago 1 reply      
Slightly OT, but I read the whole thing thinking Moz was a nickname for Mozilla, or, at the least, that Moz was related to Mozilla.

It's still good to get these stories.

karl24 2 days ago 0 replies      
Mental illness impacts more people than cancer, diabetes, or heart disease. Unfortunately only 1/3 of people who have the illness get treatment due to cost, access, stigma, etc.

We're working on an app that uses technology to help bring clinically proven treatments to market at a price point that dramatically improves access. We are pairing this with product design that's common on the consumer web but uncommon in mental health apps to help with adherence and engagement with treatment.

I hope this isn't perceived as attempting to capitalize on a serious thread. We (the founders) have incredibly personal reasons for pursuing this problem. Many in this thread are likely ideal early adopters for the product. The general awareness that this discussion is raising is a good opportunity to reach out and ask for help as helping us will ultimately help many others.

Two ways to help:

(1) 7 question survey, < 1 min to complete: http://bit.ly/1plE2Rg

(2) contact us directly via cbtmobileapp@gmail.com if you'd like to provide insight via a more in-depth interview.

swombat 2 days ago 9 replies      
Forgive my ignorance and bluntness, but reading the above, it sounds more like an anxiety disorder than like depression. Both are serious, but I'm not sure if it helps to confuse the two?

I've not experienced either seriously, but I know people who have. Depression seems to be more about things not mattering anymore, everything being pointless, the world seeming drab and just not fun anymore, rather than feeling that everything is going to go to shit. Anxiety, though, (and I'm speaking from experience here, having had some light anxiety attacks caused by too much regular caffeine usage) seems to be characterised by a feeling of impending doom, that everything is wrong, it can't be fixed, it's all hopeless, etc. But in my (mild) anxiety attacks, like Rand, I still cared about the outcome. I just felt like there were too many problems to solve, overwhelmed, ready to say "fuck this", give up the entire thing, and start again from scratch with something completely different.

PS: Otherwise, props for the very honest and open article. Running a business is a lot of responsibility and very stressful, and it can be comforting to know you're not the only one who seems surrounded by world-ending scenarios.

marklittlewood 2 days ago 0 replies      
Depression in technology is a very common condition. If you suffer from it, please know you ARE NOT ALONE. This talk is very honest, open and has some really helpful and practical advice.


ryanobjc 2 days ago 2 replies      
We talk a lot about successes.

It's also good to talk about failures, both partial and more complete.

And redemption.

The road to victory is long, and I wouldn't bet against Rand because I know this struggle has made him better.

autism_hurts 2 days ago 2 replies      
I cannot stress enough how much exercising to exhaustion daily (read: Crossfit) and eating healthy (Slow Carb / Paleo) impacted my depression.

Please try them before you medicate.

austengary 2 days ago 0 replies      
Not an overnight fix. But with sustained effort, meditation changed my life. Eventually other things fell into place. Diet, exercise, relationships, mental health. Buddhist teachings really helped too.

I started here. http://headspace.com

DanBC 2 days ago 0 replies      
Here's what the English "National Institute for Health and Care Excellence" say: https://www.nice.org.uk/guidance/CG90
l33tbro 2 days ago 1 reply      
As somebody who is not depressed, it is always confronting to see just how hard depressed people are on themselves.
x0x0 2 days ago 0 replies      
Wow, props to Rand for sharing this.

Rand, if you're reading this, two things occur:

1 - you're far from the first person to go for big-bang software releases (though listening to your CTO is probably a good idea)

2 - in _Fooled By Randomness_ by Taleb (I believe, I could be misremembering) he describes the incredible level of stress that monitoring his investments daily created. I seem to recall the author writing that he simply was unable to monitor them every day and instead had to only look at some periodic summaries. Perhaps this may help people who get too mentally exhausted looking at numbers daily? I mean, it's good to notice immediately if they crater, though that can be scripted. Beyond that, there's probably not much value looking at them 7 days a week that you don't get looking at them once every seven days. I use the same technique on the elliptical machine; time crawls if I look at the timer, so it's an exercise of will to go as long as possible before looking.

Hope he's in a better place now.

andreash 2 days ago 0 replies      
One of the most honest blog posts I've ever read.
Siecje 2 days ago 0 replies      
thinknothing 2 days ago 0 replies      
I started writing poetry when i got depressed - www.thinknothing.co
Keynote by John Carmack at Oculus Connect 2014 [video]
275 points by ivank  1 day ago   45 comments top 15
iamshs 21 hours ago 2 replies      
I am only 10 minutes into this talk, but John is one awesome speaker. No PR talk at all; he is speaking his mind freely and in fact started with the shortcomings of the product. The segue between different sections is so smooth. I do not have a background in VR, but he explains things so clearly. He is just freely talking about supply chain, and what the product constitutes. And he has been standing in the same spot. What a genuine speaker. Also, looks like Facebook's influence has been minimal. There is not one iota of bullshit in him. I like him already. My first John Carmack video, and I am already hooked. Now onto watching the full video.
Laremere 23 hours ago 1 reply      
I love it when John Carmack talks, because he doesn't do marketing speak, and he doesn't dumb down his content. It's just a brain dump of technical info until they (almost literally) kick him off the stage.
gnarbarian 1 day ago 0 replies      
Carmack has been a hero of mine since the mid 90s. He was also the inspiration for me to go into computer science. Always a pleasure to listen to such a technically dense talk on the cutting edge of a subject dear to me. I highly recommend his quake-con keynotes as well for those of you who like this video.
webwielder 1 day ago 3 replies      
Perhaps even more impressive than Carmack's technical chops is his ability to stand in a single spot for hours on end.
justifier 9 hours ago 3 replies      
it becomes its own form of marketing speech,carmack was the reason i got involved: financially, temporally, and mentally; and i think the organisation understands this as common for a number of people.. especially 'developers'

the oculus is digital stereoscopy

which is hard with simple stationary fixed objects(i),but combine it with inferred spherical screen encapsulation and it becomes a real challenge, probably a fun one too

you let carmack wax poetic on his interesting ideas to fix this tech and he will talk about latency and hertz and i'll listen with bated breath because i like hearing people talk about solutions to problems

but then i put the headset on and i realise these are hardly the problems befallen the proposed goal

i want someone to address that piece of a person that is lost when they put the headset on for the first time,it almost appears physical when you see it waft out of them

i lost it, my gamer friend who already preemptively developed a defensive cynicism to the tech lost it, the eleven year old i introduce hacking to lost it,and that last one was probably the most significant for me to see

i had been using the object sitting on top of my bookshelf as an incentivising mechanism:'finish your project and i'll let you use the oculus'; last week he pushed his finished project but i had other obligations the following week so he had to wait 'two! whole! weeks!' to get to use the oculus

when i picked him up the following week, uncharacteristically early this time.. we both are lax in our punctuality but he refused to let me be late today so he came directly to me fifteen minutes early.. he went on and on about how he has been 'scared' all day:'scared, but like happy scared'; i tried to explain to him the concept of anxiety but his mind was hurling itself around all of what he was about to become witness to

i put the headset on him and he had fun with it, but when he took it off he became suddenly very pragmatic in his demeanor,he told me he thinks it hurt him,his head, his eyes, something.. he needed a glass of water,i explained that that was because instead of being a virtual reality in which he was transposed to the thing exploits an optical illusion which means your brain is doing a lot more work than it usually does trying to rectify the inconsistencies,if you've ever been frustrated by trying to see a sailboat in a magic eye you know what it feels like to use the oculus

i asked him his opinion:'honestly? ..well, unfortunately a little disappointed';

i see my position as creating a safe environment for him to develop his ideas so naturally i challenged him to explain himself by defending the technological feat that he was holding in his hand,but the only thing we could talk about quickly became anything other than what we wanted to talk about

so we talked about the tech,i started going all carmack on him and we had fun talking tech but the conversation was clearly avoiding talking about the 'experience' one develops when wearing the headset

i wanted to know what he lost, and asked him to describe the thing he thought it was going to be,he was unable:'i don't know, just different, like? more 3d`ish'; in fashion i told him to explain himself explicitly stead superficially:'but what does that mean? what did you think it was going to be? describe that to me';

'i don't know anymore'

this i understood, but my experience was different,after wearing the headset i started to dream up better ways to do what i thought they were trying to do before i put it on,ways to do what i wanted from virtual reality,they are dreams and some built on the sort of technological feats of dreams but this was and still is my reaction each time i wear it

so yes john, tell me all about your brilliant ideas for fixing latency issues because this stuff is fun,but please acknowledge the baseline of this research is fundamentally flawed as it pertains to the proposed goals

i've stopped calling the oculus virtual reality,the oculus is digital stereoscopy




.(i) the first thing i did with the oculus was pull up two terminals, cat out some of my writing,align vertically,then slowly move one terminal into the field of view of the other eye until the text seemed to stop wonking my brain and really pop out at me

the experience was profound

so, i threw together a little browser playground with two 117px squares,one blue and one pink,i aligned them vertically then again slowly moved one into the field of view of the other eye,and i waited until those two distinct colors overlaid in my mind as a single purple

herein lies the problem:there was a multi pixel range where my brain would close the gap manually, out of my control and rather forcibly;it was impossible for me to find the perfect distance between the two divs,340pxs worked but so did plus or minus 4px from 344px,the perfect'exact`preferred`innate distance was undiscoverable because of the exception handling in my brain's interpretation of my visual input

.. edit:: gramm`err
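A rough reconstruction of the two-square playground described above (a sketch, not the original code; the 117px squares and the ~340px separation come from the comment, while the colors, positions, and file name are assumptions):

```python
# Generate a bare-bones cross-fusion test page: two 117px squares whose
# horizontal separation can be tweaked until the eyes fuse them into one.
SEPARATION_PX = 340  # the comment reports fusion working anywhere in ~340-348px

html = f"""<!doctype html>
<html><body style="background:#000">
  <div style="position:absolute; top:200px; left:200px;
              width:117px; height:117px; background:blue"></div>
  <div style="position:absolute; top:200px; left:{200 + SEPARATION_PX}px;
              width:117px; height:117px; background:pink"></div>
</body></html>"""

with open("stereo_test.html", "w") as f:
    f.write(html)
print("wrote stereo_test.html")
```

Open the file, free-fuse the squares, and vary SEPARATION_PX to probe the range over which the brain "closes the gap" on its own.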

jayavanth 21 hours ago 1 reply      
Michael Abrash's keynote is worth checking out! https://www.youtube.com/watch?v=KS4yEyt5Qes
asciimo 7 hours ago 1 reply      
While listening to all of the mitigation strategies that Carmack proposed for the technological challenges, I wondered if you could hack the user. What about drugs? Is there something that can reduce our sensitivity to low-frequency displays and yaw lag? At the very least, motion sickness drugs?
asadlionpk 19 hours ago 0 replies      
Just finished watching, I am impressed at how low-level/technical he can get without boring or confusing the audience.

I have some experience with technical speaking and it's very hard to make a technical point without dumbing it down for the audience.

riffraff 13 hours ago 1 reply      
Sorry for the somewhat lame question, but is he always that still?

I'm 10 minutes in and I don't think he moved his feet once, and his right hand just a couple times.

It feels very weird for me to watch and I just noticed it now, is there something wrong with me?

lucasgw 19 hours ago 2 replies      
I was in the room - he is a truly dynamic speaker and obviously a super-intelligent guy. I think he went off the rails a bit with the suggestion of interlacing as a potential solution. That makes little sense to me. It's, at best, a short-term solution once you get fast enough displays and rendering. (And as an old-time video guy... just... god, please... no...)
Kenji 11 hours ago 0 replies      
Nothing Carmack does is ever boring. This man is a huge inspiration for me.
walterbell 18 hours ago 0 replies      
Nice use of keynote to directly present requirements to engineers throughout the display supply chain, especially in large companies like Samsung.
vertis 18 hours ago 0 replies      
This keynote was by far the highlight of the entire conference for me

Second were the amazing demos on the Crescent Bay prototype

Vanayad 20 hours ago 1 reply      
Can anyone tl;dr the new stuff in this version of the oculus prototype?
bsaul 17 hours ago 1 reply      
Anyone's got a link to the slidedeck ?
Apple Privacy Government Information Requests
276 points by declan  3 days ago   201 comments top 20
downandout 3 days ago 9 replies      
"On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data"

This is key. The way we engineer software and services can have a major impact on the war against overly invasive government requests. We know that these requests will come; it's our responsibility to design things so that, to the greatest extent possible, customers are protected from our legal obligations when we are confronted with them.

While this certainly serves their own interests, kudos to Apple for baking this type of consideration into the basic iOS design. They should and will be financially rewarded for it.

mkal_tsr 3 days ago 7 replies      
Yet no comment from them about what being a "provider" under PRISM entails.

* "In addition, Apple has never worked with any government agency from any country to create a back door in any of our products or services."

If Apple provides law enforcement / the NSA an interface for requesting user data, that's not a back door in the product or the service.

* "We have also never allowed any government access to our servers. And we never will."

If they provide user-data after being served with a warrant (possibly through email or to their legal department), their servers were never accessed, yet the data was provided.

It's always interesting to read what is and isn't said. Word games, I swear.

crishoj 3 days ago 1 reply      
Here's an observation, and an idea for testing Apple's claims on iMessage privacy:

China seems quite determined to block IM systems which do not cooperate with the authorities and permit monitoring of communications. Most recently, both Line and the Korean KakaoTalk were blocked [1].

Skype remains useable in China, presumably because Skype permits efficient monitoring [2].

It seems unlikely that China would tolerate such a prominent opaque communications channel as iMessage in the hands of a significant proportion of their citizens.

Thus, if China refrains from blocking iMessage for a prolonged period of time, wouldn't it be reasonable to assume that China is in fact able to snoop on iMessage?

[1] http://www.ibtimes.com/china-restricts-messaging-apps-confir...

[2] http://www.reuters.com/article/2012/01/31/us-china-dissident...

clamprecht 3 days ago 2 replies      
So the US is now a country where mainstream companies market it as a competitive advantage that they will try to minimize what they will release to the government. I'm glad companies are doing this, but I'm sad that they even have to.
DigitalSea 3 days ago 1 reply      
The honest truth about all of this is, even if Apple were handing over information because of back doors or custom database interface applications for the NSA, they wouldn't tell us and would probably be gagged from doing so anyway. Have we all forgotten about Lavabit? I hope not.

I think we are all intelligent enough to know that even if Apple were handing over information, it wouldn't exactly be good for business to admit you've been complicit in handing over personal details to the Government, would it? "Yes, we have been giving away your information, but we promise not to do it any more. Hey, we just released a couple of new iPhones, want to buy one?"

Anyone else notice the page is cleverly worded and any mention of security seems to be limited to iOS 8 context? "In iOS 8 your data is secure", "In iOS 8 we can't give law enforcement access to your phone" - maybe I am just overanalysing things here, but I have learned not to be so trusting of companies as big as Apple considering the amount of information that they hold.

You know we're living in a new kind of world when privacy is being used for marketing purposes...

jpmattia 3 days ago 2 replies      
> less than 0.00385% of customers had data disclosed due to government information requests.

According to [1], there are about 600 million apple users, so this translates to 23,000 customers exposed due to government information requests.

Seems like a large number. Is 600M correct?

[1] http://www.cnet.com/news/apple-to-reach-600-million-users-by...
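The back-of-the-envelope math checks out (a quick sketch; the 600M user count is the estimate from the linked article, and Apple only says "less than" 0.00385%, so this is an upper bound):

```python
# Upper bound on affected customers implied by Apple's disclosed rate.
users = 600_000_000       # ~600M Apple users, per the CNET estimate above
rate_pct = 0.00385        # "less than 0.00385%"

affected = users * rate_pct / 100   # convert percentage to a fraction
print(f"{affected:,.0f} customers")  # 23,100
```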

fpgeek 3 days ago 0 replies      
My fundamental issue with Apple's privacy claims is that they are pretending to have a technological solution to what is, ultimately, a political problem. As the laws in the US (and, I imagine, some other countries) stand, Apple can be compelled to provide your data to appropriate governmental authorities, install back doors, not tell you, and even lie to you and the world about it. As long as that's true, no assurance from any third-party service provider is worth a damn.

I can understand the marketing benefits Apple sees in making these disingenuous privacy claims. I'd be willing to call that "just business" except for one thing: Trying to persuade people they have a technological solution will necessarily get in the way of the absolutely vital political project of destroying the political and legal foundations of the surveillance state.

ckuehl 3 days ago 1 reply      
I'm very skeptical that traditional screen-lock passcodes offer useful protection for the average person. Most people still choose to use 4-digit passcodes for convenience, leaving exhaustive key search [1] well within the reach of even very small attackers.

Are these four-digit passcodes being used to derive encryption keys? If so, I'd like to hear where the additional entropy comes from. There's no use encrypting things with a 128-bit key when the effective entropy of the key is really only ~13.3 bits.

I'm sure the engineers at Apple would not have overlooked this; it would be great to hear more about the specifics.

[1] especially if the attacker can download encrypted data and try an infinite number of times (instead of e.g. typing the passcode on the phone or hitting the iCloud servers)
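For scale, the key-space arithmetic can be checked directly (an illustrative sketch; the offline guess rate below is an assumption for the example, not a measured figure for any real device):

```python
import math

def passcode_entropy_bits(digits: int) -> float:
    # An n-digit numeric passcode has 10**n equally likely values.
    return math.log2(10 ** digits)

print(f"4 digits: {passcode_entropy_bits(4):.1f} bits")   # ~13.3
print(f"6 digits: {passcode_entropy_bits(6):.1f} bits")   # ~19.9

# At an assumed 10,000 offline guesses per second, exhaustive search:
for digits in (4, 6):
    worst_case_s = 10 ** digits / 10_000
    print(f"{digits} digits: {worst_case_s:g} s worst case")
```

This is why rate limiting on the device (or deriving keys from hardware-bound secrets) matters far more than the raw key length.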

declan 3 days ago 3 replies      
If you're an iOS user who becomes the target of an investigation by a law enforcement or intelligence agency, remember your data is likely unencrypted in the cloud. So if your device is inaccessible, your email, your location history, your text messages, your phone call history will probably remain accessible. Apple acknowledges, for example, that "iCloud does not encrypt data stored on IMAP mail servers":http://support.apple.com/kb/HT4865

[Edited because it now seems unclear which Apple policies have changed.]

zobzu 3 days ago 3 replies      
"Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data."

Oh really? Privacy is marketing now.
sidcool 3 days ago 0 replies      
Apple has taken a shot at Google and Facebook. It has mentioned that, unlike its competitors, its business model does not depend on selling user data. Which is kind of true, but Google and Facebook's business model is itself built on using user data for marketing.

Sometimes I feel it's not unethical to use users' data for marketing in the way Facebook and Google describe: they don't directly share details with marketers, but they let marketers target the audience.

danford 3 days ago 2 replies      
Except it's not open source. If it's not open source then you have no idea what's going on beyond what Apple tells you.

Ask yourself:

Would Snowden use this phone? Your answer to this question is the same as the answer to the question "Is this phone secure?"

I guess I'll get downvoted for this since it goes against the Apple circlejerk, but this issue is more important to me than magic internet points.

xkiwi 3 days ago 10 replies      
Finally those iPhone activation and Mac sales numbers are useful.

#1 Mac unit sales

2010 @ 13,662k

2011 @ 16,735k

2012 @ 18,158k

2013 @ 16,341k

Total = 64,896,000

#2 iPhone unit sales

I only take the numbers from 2013 & 2014 because Apple users tend to upgrade fast.

2013 @ 53.6 million

2014 @ 63.2 million

Total = 116,800,000

Now, quoting from "Government Information Requests":

"less than 0.00385% of customers had data disclosed due to government information requests."

That works out to at most 0.0000385 x 181,696,000, so roughly 6,995 customers had data disclosed.
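Redoing that arithmetic as a check (note that 0.00385% as a fraction is 0.0000385, i.e. the figure must also be divided by 100):

```python
# Unit sales from the figures above.
macs = 13_662_000 + 16_735_000 + 18_158_000 + 16_341_000  # 2010-2013
iphones = 53_600_000 + 63_200_000                          # 2013-2014
total = macs + iphones                                     # 181,696,000

# "less than 0.00385% of customers had data disclosed"
disclosed = total * 0.00385 / 100
print(total, round(disclosed))  # 181696000 6995
```

So the figure is closer to 7,000 customers, not ~700,000; multiplying by 0.00385 without the extra division by 100 overstates it a hundredfold.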

dubcanada 3 days ago 0 replies      
These threads should come with a tin foil hat requirement. There are so many different views on this. But if you wear a thick enough tin foil hat, it really doesn't matter what anyone says. You will think the gov is spying on you regardless...
BillFranklin 3 days ago 0 replies      
19250 people have their Apple accounts accessed by #NSA every year.
krisgenre 3 days ago 0 replies      
Don't Android phones also have an 'Encrypt phone' feature?
baby 3 days ago 0 replies      
It almost seems like it's a feature of iOS8.
adventured 3 days ago 1 reply      
I understand this does nothing to stop the NSA from snooping on me. However, with the rise of the police state, the local and state police are a much more imminent threat to the average person than the NSA and FBI are. The local police are becoming ever more aggressive when it comes to your privacy and devices like your phone.

If this turns out to be as good of a move as it seems like it is, Apple has acquired my attention in a way they weren't able to previously (I've been an Android user from day one). Plus I like the new larger iPhone 6.

pikachu_is_cool 3 days ago 2 replies      
I don't need to read this. Everything on the iPhone is proprietary software. As has been proven countless times, there is a 100% probability that there are backdoors everywhere on this device. This entire blog post is a lie.
wyager 3 days ago 4 replies      

Can someone confirm or deny the following? I think this is the current state of affairs.

A) Apple will unlock PIN-locked devices by government request, but the best they can do is brute-force. This is very slow, as it can only be done using the phone's on-board crypto hardware (which has a unique burned-in crypto key), and the PIN is stretched with PBKDF2. It has been this way for a while. Apple has no "backdoor" on the PIN or any form of cryptographic advantage here that we know of.

B) The new thing mentioned in the OP's link is that things stored on Apple's servers are now encrypted as well, with your iCloud password.

Is this correct?
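A minimal sketch of the kind of scheme described in (A), assuming PBKDF2 plus a per-device secret. This is illustrative only, not Apple's actual key-derivation code; the device UID value, salt, and iteration count here are all hypothetical stand-ins.

```python
import hashlib
import hmac

# Hypothetical stand-in for the burned-in, per-device key (readable
# only by the phone's crypto hardware in the real design).
DEVICE_UID = bytes.fromhex("00" * 32)

def derive_key(pin: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # Stretch the PIN with PBKDF2-HMAC-SHA256 so each guess is slow...
    stretched = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations)
    # ...then entangle it with the device-unique key, so brute force
    # can only run at the speed of this one device's hardware.
    return hmac.new(DEVICE_UID, stretched, hashlib.sha256).digest()

key = derive_key("1234", b"per-file-salt")
assert len(key) == 32
```

The point of the entanglement step is exactly what (A) describes: without the device key, an attacker can't take the encrypted data elsewhere and brute-force the PIN on faster hardware.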

Larry Ellison Will Step Down as CEO of Oracle, Will Remain as CTO
270 points by jhonovich  3 days ago   87 comments top 13
chollida1 3 days ago 5 replies      
Interesting that they named co-CEOs in Catz and Hurd. I wonder how that will work, especially given Hurd's "tough to work with" reputation.

Interestingly Ellison will be the CTO. This could be a shit show with 3 people trying to run the show!

I mean, does anyone really expect Larry Ellison to start taking marching orders? It will be interesting to watch the short interest on this company!

I think the two-headed CEO is what the street expected all along, as Catz has been around forever and a lot of people thought that Hurd, the former HP CEO, was promised the CEO title when Ellison resigned.

It looks like they, Catz and Hurd, will split the running of day to day operations as Hurd gets sales, marketing and strategy reporting to him, while Catz will continue to have finance, legal and manufacturing.

It's down about a dollar after the close, on about a third higher trading volume than normal. So it doesn't look like anyone is "spooked" by the news.

dm8 3 days ago 1 reply      
If you want to read about Larry Ellison's personality and his management style, you should read - "The Difference Between God and Larry Ellison: Inside Oracle Corporation; God Doesn't Think He's Larry Ellison". (http://www.goodreads.com/book/show/181369.The_Difference_Bet...)

It's one of the best books written about him and the way he managed Oracle right from its beginnings. He was damn good at selling things.

mindcrime 3 days ago 1 reply      
Not really sure what to say about this. I don't know Ellison, nor do I own Oracle stock or have any particular interest in Oracle per se. But nonetheless, I've always seen Ellison as an important character in our industry, and after reading a biography about him, I felt a sort of kinship with him based on some shared interests.

At any rate, it definitely feels like the "end of an era" in a sense. I got my start in this industry in the mid-to-late 90's when Oracle, IBM, Novell, Microsoft, Borland, etc. were duking it out for supremacy, and - for better or worse - you've never really been able to escape Oracle's shadow. And Ellison was Oracle, in so many ways.

Edit: It's been a while, but I think this[1] was the biography I read. I'll just say this: regardless of what you think of Ellison, he's an interesting character and reading about the history of Ellison / Oracle is quite fascinating.

[1]: http://www.amazon.com/Softwar-Intimate-Portrait-Ellison-Orac...

smacktoward 3 days ago 0 replies      
I'm guessing he wants to spend more time wringing extortionate license fees out of his family?
bsimpson 3 days ago 0 replies      
Someone in The Verge's comment section noted that this Forbes list will now need to be updated:


spindritf 3 days ago 1 reply      
The final Larry Ellison scorecard: Oracle stock is up 89,640% since he took the company public in March 1986.
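Taking that figure at face value, the implied annualized return is a quick calculation. The +89,640% gain and the March 1986 IPO date are taken from the comment above (about 28.5 years to September 2014), not independently verified.

```python
# Annualized return implied by a +89,640% total gain over ~28.5 years.
total_multiple = 1 + 89_640 / 100       # 897.4x
years = 28.5
cagr = total_multiple ** (1 / years) - 1
print(f"{cagr:.1%} per year")           # roughly 27% per year
```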


ChuckMcM 3 days ago 1 reply      
Demonstrating once again that tech companies really don't "get" succession planning :-) I'm kind of half joking, if you look at a bunch of 'old school' BigCorps, the progression is (CEO->Chairman, SVPx -> CEO, VPx -> SVPx) and then the Chairman of the board retires and the CEO takes on both roles Chairman and CEO, priming the pump for the next cycle.

Co-CEOs have so far been an experiment in disaster; something about not having an ultimate authority seems to really crimp organizations. I wish Oracle well, but they have a lot of challenges to overcome. If I were a shareholder I wouldn't be all that pleased with this arrangement, as it seems to basically leave all the same people in place with all the same problems (Amazon/Google EC2/GCE, MySQL vs NoSQL vs expensive Oracle, cheap clusters with high reliability vs expensive servers, etc.)

turar 3 days ago 9 replies      
Co-CEOs? I only know one company that had co-CEOs, and that didn't work out well for them.
sebst 3 days ago 0 replies      
joelrunyon 3 days ago 4 replies      
Are there any more details into why he's doing this?
azifali 3 days ago 0 replies      
The end of an era for Oracle as a software (licensing) company. I think Ellison stepping in as CTO is probably more important than him stepping down as CEO.

This move will perhaps lay the groundwork for the next tens of billions in revenue for Oracle, in cloud-based software and infrastructure.

sebst 3 days ago 1 reply      
Will Oracle then become better? Maybe as good as Sun used to be?

just dreamin'...

justinph 3 days ago 3 replies      
What is with the capitalization on the headline on Recode? I read it and thought, who is "Will Remain"?

It should be: Larry Ellison will step down as CEO of Oracle, will remain as CTO

Headline capitalization is pretty easy: Capitalize the first word, then any proper nouns. That's it.

FBI and Secret Service Files: Aaron Swartz
264 points by signa11  1 day ago   125 comments top 7
pocketheyman 1 day ago 2 replies      
Kind of interesting: according to the case file, the PACER records were being pulled en masse during normal court hours (typically when courts are also accessing the PACER database). A user noticed that PACER was going slow and notified PACER of the apparent slowness. It looks like they investigated, shut the PACER system down, and were able to determine that the requests were coming from an Amazon web hosting account linked to Swartz.

I find this interesting because it wasn't some flag on the PACER system screaming "HEY SOMEONE IS DOWNLOADING THESE EVERY TWO SECONDS" but instead was noticed because some law clerk was irritated at how slow the server was at responding.

manifesto 1 day ago 7 replies      
A reminder: the petition https://petitions.whitehouse.gov/petition/remove-united-stat... has not been responded to yet, after more than a year and a half.
nutate 1 day ago 4 replies      
Was there ever an argument for this beyond 'information wants to be free'? Let's say PACER docs were being pulled and hosted elsewhere. What if case information were updated as part of the legal process, i.e. person X is now innocent? How does this change to past case documents get propagated to the 'illegal' mirror?

This is interesting because I think we do want an authoritative document store, and that means, yes, we need to pay for its upkeep. So if he had mirrored and hosted all of these cases, they would've been merely snapshots of past history, not the curated corpus that PACER maintains.

The same could be said of scientific papers where large retractions are handled by the journals, but may be lost by some mirrors.

Information quality, provenance, and current validity are more important than the trope of 'wanting to be free.' Once information passes into the 'historical' realm, perhaps it should/must be free, but while it is in the malleable phase it's irresponsible to 'mirror once' without knowing how to get pushed (or pull) updates.

Look at how the Linux kernel mirror system works, push mirroring, etc. The scrape method doesn't pass the smell test if you really want to provide a service beyond point in time archiving (aka archive.org).

Regarding depression, suicide and unfair persecution I'll withhold comment.

vajorie 1 day ago 2 replies      
How come no one even bothered to remove his full address and SSN from the records? On the other hand, even the very names of the people who approved and drafted the documents are removed.
herge 1 day ago 1 reply      
Wait, were the case files for Aaron Swartz classified or just never made public? What would be the reasoning for classifying his case? How was he a threat to national security?
yuhong 1 day ago 1 reply      
On PACER fees, IMO a good compromise is to only charge for the actual court documents retrieved. No charging for search results, docket listings etc, and there is already a $3 cap on documents.
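A sketch of that compromise fee schedule in code. It assumes PACER's published $0.10/page rate alongside the $3 per-document cap mentioned above, with searches and docket listings made free; the function and its names are illustrative, not any real PACER API.

```python
# Illustrative model of the proposal: documents keep the existing
# $0.10/page rate and $3/document cap; searches and docket listings
# become free. Amounts are in cents to avoid float rounding.
PER_PAGE_CENTS = 10
DOC_CAP_CENTS = 300

def fee_cents(item_type: str, pages: int = 0) -> int:
    if item_type in ("search", "docket"):
        return 0                                # free under the proposal
    return min(pages * PER_PAGE_CENTS, DOC_CAP_CENTS)

print(fee_cents("search"))          # 0
print(fee_cents("document", 12))    # 120 (i.e. $1.20)
print(fee_cents("document", 45))    # 300 (capped at $3.00)
```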
jdong 1 day ago 3 replies      
What makes this case such a big deal? Swartz did something that was obviously illegal and got caught.
OpenGL in 2014
251 points by ingve  17 hours ago   103 comments top 15
c3d 8 hours ago 1 reply      
The multiplicity of APIs demonstrates that the problem is hard. The needs of game developers pull the APIs in a specific direction. And these requirements must be addressed, because the games market is huge and pushes the envelope.

But other users may have different needs. OpenGL is used by games, but not just games. For example, at Taodyne, we use OpenGL for real-time 3D rendering of business information on glasses-free 3D screens. I can tell you that my pet peeves with OpenGL have nothing to do with what's being described in any of the articles.

Some of the top issues I face include 3D font rendering (way too many polygons), multi-view videos (e.g. tiled videos, which push texture size limits, or multi-stream videos, which bring a whole bag of threading issues), and large numbers of large textures without the ability to manually optimise them (e.g. 12 GB of textures in one use case).

Heck, even the basic shader that combines 5, 8 or 9 views into one multiscopic layout for a lenticular display makes a laptop run hot for a mere HD display, and requires a rather beefy card if you want to have any bandwidth left for something else while driving a 4K display.

Many of these scenarios have to do with limitations on texture sizes, efficient ways to deal with complex shapes and huge polygon counts that you can't easily reduce, very specific problems with aliasing and smoothing when you deal with individual RGB subpixels, etc.

Of course, multiscopic displays are not exactly core business right now, so nobody cares that targeting them efficiently is next to impossible with current APIs.

fizixer 8 hours ago 0 replies      
It seems no one has mentioned the Longs Peak fiasco yet, which is an important part of understanding OpenGL's history and the committee(s) in charge of the standard:


TL;DR: This is not the first time people have been pissed at OpenGL. The last time industry and developers were sick and tired, around 2006-2007, it was decided to do something about the API and an effort was initiated. Once the work was close to finishing, those who had seen a glimpse of this yet-to-be-released API were excited and eagerly awaiting the release. Then the OpenGL committee vanished from the scene for a year or so, and when it reappeared, it released the same old shitty API with a handful of function calls on top.

zerebubuth 15 hours ago 2 replies      
OpenGL might well be the "only truly cross-platform option", but it seems to me that, for games or mobile app development, getting stuff drawn on screen is only part of the problem. The rest is about doing so with the minimum use of cycles - either for better frame rates or better battery life. I can easily imagine that this is a classic 80/20 problem, with the 20% that takes 80% of the time being adequate ("butter smooth") performance.

So, given that the capabilities of the graphics hardware can vary a lot, how closely can a single, unified API like glnext approach optimal use of the hardware? And without the kinds of platform-specific code paths which are necessary under current OpenGL?

pjmlp 15 hours ago 1 reply      
Now they just have to create ONE single API, instead of forcing everyone to write multiple code paths to target the various flavours, extensions and driver workarounds.

Specific graphics APIs only matter when graphics middleware is not an option.

Which OpenGL always requires, since the standard leaves out how images/shaders/textures/fonts/GUI/math are handled.

I think the commoditization of engines will bring a second coming of the OpenGL 2.0 - 3.0 stagnation if they don't improve in these areas.

maaaats 16 hours ago 1 reply      
We need OpenGL as an alternative. What would Direct3D have been today without competition? But at the same time, GL is such a PITA to use directly that I don't bother without some middleware abstracting it away.
sheng 13 hours ago 4 replies      
All the whining and complaining makes me wonder how anyone was able to write anything with OpenGL at all. This is fascinating, because a great number of people were actually able to write awesome games and applications with this API.

Look at the whole lot of mobile devices. I have no numbers to base this statement on, but I would be bold enough to claim that OpenGL is, thanks to its multiplatform ability, by far the most successful graphics API out there. The set of devices that bring some form or another of OpenGL support outnumbers the other graphics platforms. This alone is a huge accomplishment. Heck, even Minecraft was able to run on PowerPC systems until they raised the supported Java version [1].

But now I look at the link and have to admit that the criticism is still correct. The API is still pretty rough and could see some improvements. I know this myself; I also played around with OpenGL at some point. There is a lot of boilerplate code that needs to be written before you can start on the real game. This was always the case. This is why we always had an engine, a framework to build on.

But to say that it all is a huge pile of shit is a little bit harsh

[1] https://help.mojang.com/customer/portal/articles/884921-mine...

shurcooL 5 hours ago 1 reply      
My current approach is to use Go and target WebGL as the lowest common denominator, but with OpenGL (and/or OpenGL ES) backends as well.

That way, graphics code written once can run on OS X, Linux, Windows, and the browser (including on iOS).

bhouston 15 hours ago 1 reply      
Great article, thank you! Any news as to when we will get a WebGLNext?
frozenport 13 hours ago 0 replies      
We all got messed up by the transition to OpenGL 4 and now we're gonna have another OpenGL? I don't see OpenGL getting out of this funk until the language you learn today will still be useful tomorrow. Perhaps a new API is a step in the right direction, but things are gonna hurt bad for years to come, especially when OEMs don't support the API.
fulafel 15 hours ago 0 replies      
On Linux you could in principle use the lower level hardware specific command issuing APIs as well. Mesa is not a privileged library.
BadassFractal 16 hours ago 6 replies      
The saying is that total rewrites are always a bad idea. It'll be interesting to see if this one turns out to be an exception to the rule.
shmerl 11 hours ago 0 replies      
Is there any ETA for OpenGL-next?
Stolpe 15 hours ago 0 replies      
So basically, "OpenGL in 2015" will be great!
_random_ 15 hours ago 0 replies      
Whoever doesn't force me to use C/C++ or JavaScript.
Amazon releases new Kindle products
234 points by tgcordell  4 days ago   191 comments top 47
swanson 3 days ago 12 replies      
I've owned every model of the Kindle (minus the comically large DX) - it's been fun to watch them iterate and refine this device. It really is a great product and the price point is always within my "insta-splurge" budget. I read roughly 10-20 books a year on the Kindle.

The Kindle Voyage fixes the biggest complaint I have with the Paperwhite: page turning via touching the screen is worse than the physical buttons on older-gen Kindles. And the auto-brightness sensor means there is one less thing for me to fiddle with. Higher DPI and a thinner body (the flush bezel looks sexy!) are just icing on the cake.

It's kind of hard to explain why I love the Kindle so much - and why I've owned every model - but something just feels right to me about reading with it. It's modern but familiar and so much more convenient for me (click Buy Now on amazon.com and the book is loaded by the time I walk over to pick it up from the shelf).

FYI: I always buy the models "With special offers" (ads shown on the lock screen - but usually Amazon does a free giftcard offer during the first few weeks so free $$) and "WiFi" (I've rarely used the 3G - and you can always just tether to most phones nowadays anyway).

soapdog 3 days ago 3 replies      
I come from a different country (Brazil) but I am addicted to eReaders. I had a couple Kindles, they were always rock solid. Page sync was the most useful feature for me because I kept my Kindle in my home and read on the go with a phone.

The main pain point for me was the lack of Epub support in it. I wanted to buy the paperwhite but in an effort to not support DRM based solutions I started buying my technical books directly on the publishers website with non-DRM formats.

Then Kobo released the Kobo Mini and that was the perfect pocketable size for me. I jumped in. All my Kindle notions and impressions went out the door. The Kobo was a much better device in my opinion. The "Reading Life" feature was awesome, and the UX and font selection were great. I stopped using the Kindle.

Then I missed having a light. I tend to read during the dark hours, and something like the Paperwhite became a need. eReaders are not cheap here in Brazil: a Kindle Paperwhite will cost you USD 200+. Since I was a fan of Kobo, I decided to check out the Kobo Aura HD. Heck, the thing was the price of a laptop.

In the end, a major book retailer here in Brazil decided to ship their own eReader, called Lev. It had a version with a light, it could read Epubs and other formats, and it fit my budget. Also, it had a killer feature that both the Kindle and Kobo lacked: PDF reflow. This small, simple eReader can reflow text in a PDF to fit the screen, and it works pretty well. I was sold. I am pretty happy with my Lev eReader now; I have all the features I could want from the competitors, plus the ability to read old LISP book PDFs as if they were meant for that screen.

Moral of the story: Instead of jumping in and buying the new thing from gigantic retailer, shop around and see what the small guys are doing in your region. There might be an eReader there that fits your needs much better than the Kindle. (Still miss page sync though)

amerkhalid 3 days ago 3 replies      
The Kindle Voyage looks like an almost perfect reader.

I have been a Kindle user since the Kindle 3. I love the K3 but I wish it had a backlight. I bought the Kindle Touch but gave it away, because the touch screen was clumsy for page turning. There were so many times I tapped a link accidentally while turning a page. However, I liked touch for quickly tapping a word to look it up in the dictionary.

This Kindle Voyage, with dedicated page-turn buttons, backlight, and touch screen, might just be perfect.

If only it had text-to-speech. I guess not many people like to hear books in monotone, but I use it to listen to old classics, blogs, or other fiction while walking on the treadmill, driving, or when I'm just too tired.

krschultz 3 days ago 0 replies      
I write mobile apps for a living. My desk is littered with iPads, Android phablets, etc. I've tried them all.

You can pry the e-ink Kindle from my cold dead hands. Nothing is better for serious book reading. It's the only screen my wife and I allow in our bedroom, and it's the only electronic device I'm bringing to the beach. I'm very happy to see Amazon continuing to refine them.

otoburb 3 days ago 0 replies      
The Kindle Voyage[1] doesn't state whether it supports Amazon's etextbook format. As an example, the Kindle Probabilistic Graphical Models[2] textbook is only available on PC or Kindle Fire, which is a bummer if still true, since the Voyage looks like it should be able to handle this now; but there is no actual mention of it in the product description, nor in the pull-down hover label "Available only on these devices".

[1] http://www.amazon.com/dp/B00IOY8XWQ/ref=fs_kv

[2] http://www.amazon.com/Probabilistic-Graphical-Models-Princip...

scw 3 days ago 7 replies      
Here's hoping that the jump from 167 dpi to 300 dpi (about 1.8x linear, or roughly 3.2x the pixels per area) will make reading journal PDFs tolerable compared to the current generation of Kindles, where it's obnoxiously cumbersome (it requires rotating the screen and viewing 1/4 of a page at a time). The DX had a 1200x824 screen, but only at 150 ppi. This is better, and has a lower launch price.
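For reference, the density jump in pixel terms, worked out from the dpi figures quoted above:

```python
# Pixel-density comparison from the quoted dpi figures. Linear
# resolution goes up ~1.8x; pixels per square inch go up ~3.2x.
old_dpi, new_dpi = 167, 300
linear = new_dpi / old_dpi   # ~1.80x per axis
area = linear ** 2           # ~3.23x pixels per area
print(f"{linear:.2f}x linear, {area:.2f}x pixels per area")
```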
unlingua 3 hours ago 0 replies      
Turns out it DOESN'T change the worst thing about the Paperwhite, because it has its new buttons PLUS the touchscreen page-turning 'zones'. You can't disable touch to turn pages.

Accidental page turns will be as big a disaster as ever.

Big fail at Lab126

tomw1808 3 days ago 1 reply      
I think Amazon's more-than-competitive offer is not really surprising, considering that they always reduce their own margin significantly to offer such awesome products. Well done, well done!

What I am wondering, though, is why they post a picture instead of text on the website - and not only for SEO reasons. It causes everyday problems: e.g. I can't copy and paste the text into Skype to tell my dad about it. Yeah, sure, I could post the link, but this is what I consider really bad practice. Compare Apple's beautifully crafted privacy statement, where even the text in the charts is "text". Just my 2 cents...

tzs 3 days ago 1 reply      
I have a Kindle Paperwhite, and one thing that has puzzled me is the light. The light setting has a slider to set the brightness, and the labels recommend a low setting for dark rooms and a high setting for bright rooms.

What puzzles me is that in bright rooms (e.g., all my daytime reading if the window blinds are open) I turn the light all the way off [1]. One of the points of eInk is that you don't need any extra light when you are in a bright room.

Why does Amazon want me to turn the light up, which eats up the battery? I realize it does make the screen look whiter to have the light on in a bright room, but as far as I can see it does not make a noticeable difference in readability.

The new high end eInk Kindle features automatic light adjustment based on a light sensor. If that means that in bright rooms it is going to crank up the light, I would not be happy. Does it do that? If so, can it be overridden?

[1] well, not quite...the light cannot be turned all the way off on the Paperwhite while reading. It only goes all the way off when you put it to sleep.

necubi 3 days ago 0 replies      
The Verge has a first look at the new high-end Kindle Voyage [0] and it sounds like the increased resolution (300 dpi) really makes a difference. I'm also excited that the screen is now glass. The old plastic ones were very easy to scratch.

[0] http://www.theverge.com/2014/9/17/6353785/amazon-kindle-voya...

martco 3 days ago 1 reply      
"For the first time, you and your family can access and easily share not only your own Kindle books, but also books from the Amazon account of a spouse or partner"

This seems like a really nice feature that was somewhat buried in the Kindle Voyage description.

reedlaw 3 days ago 2 replies      
$80 more for the higher-DPI Kindle Voyage seems a bit steep. Usually each generation of e-ink Kindles was roughly the same price but with incremental improvements. I don't see much improvement in the Voyage to justify a big price increase. Plus I wish they'd bring back the physical page-turn buttons. Touch screens and "haptic" responses can't beat old-fashioned buttons.
ComputerGuru 3 days ago 3 replies      
Wait, what? It's $200? I spent a good five minutes trying to find the mistake that was showing me that price. Did Amazon learn nothing from the Fire Phone fiasco?

Kindle had always been a no-brainer purchase at a "don't think too hard about it" price point that made upgrading to each new model actually feasible. But two hundred for a higher-resolution screen and a $2 photo sensor module? Color me confused.

I clicked the link with every intention of buying after reading the comments here, but I can't believe no one mentioned the (hefty) price tag!

malloreon 3 days ago 1 reply      
I have a paperwhite and an iphone.

I am an iOS engineer by trade.

If I had to give up my paperwhite or my iphone, I'd give up the iphone in a heartbeat.

If you read, you need a paperwhite.

dilap 3 days ago 0 replies      
I'm a little bit sad to see the touchless base kindle go away.

I have both a Kindle Paperwhite and the previous base, touchless model, and the text is noticeably more recessed beneath the physical display on the Paperwhite -- presumably to accommodate the touch sensor and the light.

So even though it's higher DPI, the actual reading experience feels more analog/physical-paper-like on the classic, touchless kindle.

Eleopteryx 3 days ago 0 replies      
I initially loved my Paperwhite, but I was turned off by the lack of epub support. Whatever their rationale is, I object to not supporting an open format in principle. I have no trouble buying books from their store; 4 out of 5 times they have the lowest price anyway, and the purchasing process is smooth. But 1 out of 5 times they don't have the lowest price, or the formatting of a book I want to purchase is for some reason fubared (the last couple of books I got from Google Play specifically because of this), so I should be able to upload an epub painlessly. So the device starts to feel more like a vehicle to get you to buy into their Kindle ecosystem than anything else, even though the reading aspect of it is really nice.

This doesn't seem to bother most people, though.

I ended up getting a tablet, but I can't say it's an improvement in every regard. Tablets cost more (actually, with the Voyage at $200, not that much more), they weigh more, the batteries don't last nearly as long (although I get a good 1-2 days of use), and good luck reading in the sun. But they also do more.

The Kindle app on Android is in some ways more feature-ful and easier to use than the Paperwhite's software. Taking notes (I read a lot of non-fiction), for example, is a cinch when I can use SwiftKey, whereas the Kindle's native keyboard was a pain in terms of responsiveness, predicting words, and making corrections.

To each his or her own, though. But I'm definitely not in the "I don't need a tablet" crowd.

That said, I had no idea that I could jailbreak the Paperwhite, or that there was such a huge scene around it. Gonna check that out.

YBibo 3 days ago 3 replies      
I seriously don't get why people want buttons (including these side "buttons" with haptic feedback) to turn pages. It's so much more satisfying to turn a page on a Paperwhite with a flick of the finger sliding across the "page" (the screen), like you would with a book. You can already tap the screen with your thumb (using the same hand that holds the device) to turn a page. Why on earth would you want these ugly lines and dots on the side of the Voyage so you can turn your pages that way?
andor 2 days ago 0 replies      
I'm a bit disappointed that they haven't upgraded to a proper line-breaking algorithm. The high resolution display is probably great, but look at that inter-word spacing! How can they state this is "passionately crafted for readers" when the typesetting is worse than in Microsoft Word?
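For the curious, the gap between naive and "proper" line breaking is easy to sketch. The toy dynamic program below is in the spirit of Knuth-Plass (a simplified cost model, not the algorithm any Kindle or word processor actually runs): instead of greedily filling each line, it minimizes total squared slack, which spreads out the ugly inter-word gaps described above.

```python
# Toy Knuth-Plass-style line breaker: choose break points minimizing
# the sum of squared leftover space on each line (last line free).
def dp_breaks(words, width):
    n = len(words)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i] = min cost to typeset words[:i]
    best[0] = 0.0
    prev = [0] * (n + 1)     # prev[i] = start index of the last line
    for i in range(1, n + 1):
        line_len = -1
        for j in range(i - 1, -1, -1):
            line_len += len(words[j]) + 1   # word plus one space
            if line_len > width:
                break
            slack = 0.0 if i == n else (width - line_len) ** 2
            if best[j] + slack < best[i]:
                best[i], prev[i] = best[j] + slack, j
    # Recover the chosen break points.
    lines, i = [], n
    while i > 0:
        lines.append(" ".join(words[prev[i]:i]))
        i = prev[i]
    return lines[::-1]

print(dp_breaks("the quick brown fox jumped over the lazy dog".split(), 12))
# ['the quick', 'brown fox', 'jumped over', 'the lazy dog']
```

A greedy breaker would instead pack "the quick brown" style lines as full as possible and dump all the slack onto later lines, which is exactly the uneven spacing being complained about.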
dombili 3 days ago 2 replies      
The Voyage looks nice, but as a Paperwhite 2 owner, I'm not going to upgrade. I don't care much about the physical buttons on the side for page turning, but the screen and the bezel being flush, at the same height, sounds great.

Also, $79 regular Kindle doesn't have any kind of backlight (or whatever that's called in Kindles), which could be a deal breaker with some people. Not to mention its DPI is much lower than Paperwhite 2, probably even 1.

acabal 3 days ago 3 replies      
Looks like a great upgrade to the Kindle Paperwhite, in particular the return of physical page turn buttons. But my biggest wish--native epub support--still isn't there :(
polskibus 3 days ago 0 replies      
I really wish there were a new DX model, with a screen size between the old DX and the standard version.
wsc981 3 days ago 4 replies      
I understand that many people like to read fiction and the like on the Kindle.

How useful would the Kindle be for reading technical books?

swartkrans 3 days ago 3 replies      
I had the original Kindle Fire which worked nicely, but had a pretty poor screen. Now I have the 7" Kindle Fire HDX which has been fantastic with a beautiful screen, except a few days ago I noticed a yellow blur irregularity in the top right corner, like a little streak stain that turns things a little bit yellow. I don't know, if Amazon Video were available on a real Android tablet I'd get that, but until then I'm using these Fire tablets.

Also, my opinion on the kids version of the Kindle Fire is don't get it. Get an iPad. The iPad ecosystem is so much better for kids. There are so many great learning apps, there is no question unless the price is really a deal killer that you should get an iPad for a child. Very few of these children apps are available on Android tablets, much less the Amazon app store. I say this as someone who owns a Galaxy S5 and loves Android. These products are great, but sometimes even frustrating for adults so for kids they are not so usable and have a poor choice of apps.

bithush 3 days ago 2 replies      
UK prices are (with special offers/without special offers):

Fire HD 6"

£79/£89 - 8GB

£99/£109 - 16GB

Fire HDX 8.9

£329/£339 - 16GB

£369/£379 - 32GB

£409/£419 - 64GB

Kindle Voyage

£169 WiFi only

£229 WiFi + free 3G

mchart 3 days ago 1 reply      
It feels somewhat sad to see them take away the physical keyboard. I find myself regularly using the keyboard on my current Kindle to annotate books as I read them. Even if it is more efficient to have an on-screen keyboard, physical keyboards symbolically imply that one could potentially be an active critic and participant rather than only a passive consumer of media. For the same reason I have always preferred computer to console video games, and I have never completely warmed up to the iPad even though I own one. Then again, my physical paper books always seemed to become filled with sticky notes and (erasable, penciled-in) annotations, too.
thomasfl 3 days ago 0 replies      
Why is there no web browser on the tablets without backlight?

I have wanted a laptop or tablet without backlight for years. The backlight not only makes it harder to get to sleep at night, but it also makes it harder to concentrate. A laptop where you didn't have to stare into a backlight would make it easier to get shit done. I can't just dim the screen on my MBP. There is even some research suggesting a hypnotic effect from staring into a brightly lit TV or computer screen.

Sami_Lehtinen 3 days ago 0 replies      
I love my Kindle because it has the ultimate display compared to any other media device. It's just a delightful experience to read in full sunshine. If you try your iDevice(s), you can't see a thing in those conditions. Other great features are of course the battery life and the weight. Many manufacturers advertise light tablets, but most of those aren't light. I'll always have my Kindle with me, and it allows me to read tons of stuff during the year. I'm using it more than one hour a day.
blaabjerg 3 days ago 3 replies      
I've never owned a Kindle before, but I want one and I can't decide if I should go for the standard Kindle or the Voyage. What are your thoughts, is the upgrade worth it on a tight student budget?

I'd be upgrading from paper books, so I'm not entirely sold on the idea that I particularly need a front light. Is the readability significantly better even in a well-lit room?

theon144 3 days ago 0 replies      
What's Amazon's gripe with physical buttons? Out of the whole range, I still like Kindle 3's page-turning buttons the most.
fourstar 3 days ago 1 reply      
The "Worry free" warranty for kids is great.
petercooper 3 days ago 0 replies      
The new HD tablets might finally be the first tablets to make it into my in-laws' house. The first tablet I've seen that's basically guaranteed to be of a certain quality, that isn't expensive, and is by a company they've heard of. I think this could be bigger than it looks from a tech point of view.
grinnick 3 days ago 2 replies      
I have never owned a kindle before. Should I wait for the Kindle Voyage (wait about 6 weeks and pay $100 extra) or buy a Kindle Paperwhite now?

It appears the main improvement is the resolution but it's difficult for me to get a sense of how important this is without having used a Paperwhite.

jwr 3 days ago 2 replies      
I stopped upgrading my Kindle at Kindle 4. There were three things wrong with the Paperwhite:

1) no physical page turn buttons, 2) weight, 3) worse typography.

They seem to have fixed (1) in the latest model, but I still need to check if it's as heavy as the Paperwhite and whether they improved the way text is displayed.

collyw 3 days ago 2 replies      
I never understood the appeal of touchscreens on Kindles. They have a really nice screen, why get greasy finger marks all over it? These ones don't appear to have the side buttons that my basic Kindle has, which implies that I would need two hands to switch pages.
zak_mc_kracken 3 days ago 1 reply      
The Kindle for Kids is very interesting, does anyone know if it's still possible to install arbitrary .apk on it? I know it doesn't have Google Play (only access to Amazon's store) but if I can install .apk files from the Play store manually, then it's an instant buy.
prezjordan 3 days ago 0 replies      
I left my Kindle on the Caltrain about 2 weeks ago (side note: if you found a Kindle on the Caltrain about 2 weeks ago please let me know!). I guess I picked a pretty good time to need to buy a new one - not totally sure if I can justify the $200 price tag for the voyager just yet.
a3176082 3 days ago 0 replies      
Fire HD 6 - quad core, but they seem kinda silent about the memory. And for a good reason: it has 1GB of RAM. That is way too little for Android. Wouldn't buy, no matter the price. Note how they also call "storage" "memory".
gfunk911 4 days ago 4 replies      
$99 seems insanely cheap, assuming it's not completely gimped.

The kids edition is also a cute idea

jscheel 3 days ago 0 replies      
I got a Paperwhite about 3 weeks ago. I've kept it in the packaging, anticipating a new release was coming. I was not anticipating such a huge bump in price though. Guess I'm keeping the Paperwhite.
taeric 3 days ago 1 reply      
I'm impressed that they aren't hyping up the profiles stuff more, especially the feature where you can link multiple accounts to it. Not that it is that big of a deal, but it is rather comical what they think I want to read/watch nowadays.
hdra 3 days ago 2 replies      
Another Kindle release that I can't have. All these cool products that aren't available where I live.

Seriously though, does anyone know what's stopping Amazon from making their offerings available in more countries?

jongalloway2 3 days ago 0 replies      
It's too bad that none of the e-ink Kindles has audio anymore. I really liked being able to read on a device, then seamlessly (on the same device) switch to listening while driving / flying / working out.
rtcoms 3 days ago 0 replies      
We really need front-light displays for laptops/monitors. Is any company making those ?
paulornothing 3 days ago 0 replies      
Isn't it called the Kindle Voyage? Everyone says Voyager.
bithush 3 days ago 0 replies      
The Kindle Voyage looks really nice, but at £169 (a £60 increase over the Paperwhite) there is no way I will be getting one. That is a lot of money for just an ereader to me.
riffraff 3 days ago 0 replies      
Interestingly, Kindle Voyager and Kindle Kids don't seem to be available on all national Amazon stores. I wonder why.
hnriot 3 days ago 1 reply      
$20 more to avoid ads on the lock screen and another $20 for the power adapter!!! WTF
Faced with change, an all-female indie dev team evolves to a higher form (2013)
223 points by hnal943  2 days ago   191 comments top 15
dgreensp 2 days ago 6 replies      
I'm certain that if I had children, I would be failing at my job.

I've hit my 30s, a period when it seems as if all of my friends suddenly have kids. That's a priority shift completely incompatible with my goals. Startups require that you give it all or go home, routinely requiring long nights, longer weekends, and blood and toil. If you aren't willing to put in the hours, eager replacements are standing behind you. If I fail, the women I work with will be out of their jobs.

It's this fearful attitude, lurking in the minds of bosses and employees, that is the problem facing women in the workplace who want to have children, more than anything else. (For example, I put it at the root of poor leave policies.) It's called sexism when it comes from a man, but here (from a female boss) it's clear it's just culture (American culture?).

I just had my first kid, and my wife had to go back to work at six weeks. I'm a software engineer, and she's a medical device rep in trauma. Unlike me, she can't work from home, she carries a pager, and she can't choose her work hours or reduce them. She wasn't itching to go back to work either; she loved being at home with the new baby. However, you do what you have to do. Some new moms do quit their jobs, especially if they weren't making much more than they'd save on childcare by staying at home, or if it was a crappy work environment or an unfulfilling role anyway. However, for many, it's not an option not to work, and being a software developer is actually a pretty cushy gig that I would wish on moms everywhere.

If you're afraid for yourself or someone else of having kids, go out and talk to some power moms.

mikeleeorg 2 days ago 0 replies      
I'm pleasantly surprised the comments here aren't overly caustic.

And I really liked this article. As an entrepreneur who has structured my life around my family (i.e. work from home, flexible hours), I can empathize with Brianna's and Amanda's points of view. The entrepreneur in me is obsessed with development and deadlines and shipping. The father in me is obsessed with spending time with my daughter. There are times when both are at odds, and while I like to say I always make the right decision, I don't. It's a tough struggle. And it's a struggle I am very conscious of, because I have competitors who don't have or want to deal with similar constraints.

But honestly, I often think these constraints make me a better entrepreneur than I used to be, because I am forced to be strict about my priorities and time. If something is a waste of time, I don't give it a second glance and move on to something else (HN notwithstanding, ahem).

up_and_up 2 days ago 1 reply      
> I'm certain that if I had children, I would be failing at my job.

Quality not quantity.

I work as an engineer for an NYC startup and have 3 kids. No, it's not easy, but yes, you need to reset your priorities. Life becomes more focused on fewer activities. Once the kids get a bit bigger it's not as time intensive.

I work roughly 6:30am - 8:30am and then 10am - 5pm M-F.

I have many other friends who are engineers at fast-moving companies with 2, 3, 4 or more kids. It's definitely doable.

If your company is asking you to work hours and hours maybe there is something wrong with their product development process or business plan.

Stop worrying and start procreating!

mutagen 2 days ago 1 reply      
I'm glad I read this despite the link title, which is appropriately based on the article's sub-title (The title, "Choose Your Character", is even less descriptive). The article hits on some of the startup and indie gamedev work-life balance issues that affect everyone and some unique to women.
melling 2 days ago 0 replies      
I believe this team was interviewed on Debug.


hrktb 2 days ago 0 replies      
A bit OT, but I think it's refreshing to have a character like her in the tech scene, vocal and taking the spotlight in a lot of places.

At first I was thrown off by the very douchy looking attitude, it felt too much like overcompensating. And I'd hate to work in her company for so many reasons, the burning startup mindset being the main one.

But this article, like her Debug interview or the Isometric show podcast, also shows other facets that are pretty fair, balanced and well thought out. The podcast in particular alternates between hilarious and soul-crushing moments; I'd recommend it to anyone wanting to hear something a bit different.

incision 2 days ago 0 replies      
I'd liken worries about staying productive while raising children to worries about being able to run a marathon.

You're probably safely certain you couldn't do it tomorrow, but that says little about your ability to do it 9 months from now and nothing about what the next person is capable of.

Ask around and you'll find supremely productive people who do both.

jbrooksuk 2 days ago 1 reply      
Wow! It's lovely to see Brianna doing well, I interviewed her back in 2012 - http://james-brooks.uk/interview-with-brianna-wu/


robertfw 2 days ago 3 replies      
Here is the game in question: http://www.revolution60.com/

The feedback in the article was spot on. The characters look decidedly anorexic.

wmeredith 2 days ago 1 reply      
Regardless of subject matter, hot damn this person can write. I hope she's putting some of that spark into her games. That was riveting.
Paul-ish 2 days ago 2 replies      
How has the game and her indie studio fared today?
spopejoy 2 days ago 2 replies      
It's completely hilarious that this article would bring the anti-PC haters out of their cave. There is absolutely nothing in this article about PC, it breaks the script in numerous ways:

- referring to her employees as "girls" instead of women

- her conflicts about her employee's pregnancy

- fretting over the attention to female-image issues in games, wondering if "the only way to win this game is not to have women at all"

I guess as long as a tech writer dares to use the female first person, HN will be deluged with comments from "gahh I HATE politics" know-nothings plus their more anti-social brethren. It's even curious there would be such focus on the boss being childless, this is so not the point of the article. I would probably criticize her cheezy i'm-so-rad-on-my-red-motorbike aesthetic before even thinking about gender stuff.

If there's a bright side to all the defensiveness, it suggests that the recent focus on gender is working. Much like the Anita Hill hearings brought out all sorts of ugliness out on the way to sensible anti-harassment policies, we're witnessing the next evolution.

metafex 2 days ago 5 replies      
It's silly how much goes into political correctness in games nowadays. You'd have to make an Asteroids clone just not to offend anybody (except sentient asteroids...).

Just make your game fun, challenging or whatever your goal is and have fun making it. And of course you can put in interesting looking characters, it's called art :)

Also, to the politics topic: Oh I hate that so much, it only takes one person to mess up whole teams and the worst thing is if it's one of your superiors. It's horrible when you can't do anything but change your job (been there, done that).

edit: to the downvoters, please read the whole thing and my response down there, if you still disagree, no hard feelings :)

edit 2: From the article, one of the points I was referring to

"Why are they all white?" sneered a liberal friend of mine before launching into a 20-minute screed about how offended he was by the naked shower scene in Heavy Rain.

wmt 1 day ago 0 replies      
Are you sure you're on the right forum? The kind of hateful comments that every comment of yours appears to be are not needed here.
foobarbecue 1 day ago 0 replies      
By "aspirational" I suppose she meant "inspirational"?

(As for the article itself, I only made it through a few paragraphs. I assume it was going to be about sexism and reproductive discrimination in the workplace, which I think is a serious problem. Part of this problem is solved in Sweden, where a couple can split maternity / paternity leave any way they like.)

Tim Berners-Lee slams Internet fast lanes: "It's bribery."
234 points by esolyt  1 day ago   85 comments top 15
josteink 1 day ago 1 reply      
So the man who decided that DRM in HTML5 was a good thing has an opinion on the well-being of the world wide web, eh?

Sorry if this sounds bitter. I'm just posting from a browser unable to access HTML5 content at regular intervals. It's an open-source browser, and the suggested "fix" is always using a closed-source browser, OS or both.

I thought this web-thing was supposed to be open and cross platform?

jbza 1 day ago 1 reply      
I can't help but wonder if the fate of Internet infrastructure will follow that of transport infrastructure (Build-operate-transfer : http://en.wikipedia.org/wiki/Build-operate-transfer)

Based off history, it seems that private ownership of common property is doomed to fail.

7952 1 day ago 3 replies      
I am in favour of net neutrality but worry that everyone is defending a status quo that is still bad. For most consumers and businesses, access to the net is hardly free in a financial sense. Entry-level bandwidth on AWS/Azure/App Engine is still very expensive and seems completely overpriced compared to storage.

For the consumer, lack of last-mile competition and monthly contracts make competition almost impossible on a day-to-day basis. An entrepreneur could set up a WiFi hotspot in an area with poor coverage, but no one would use it because we are all trying to do everything on a 3G data plan.

I want competition for last-mile access that allows the consumer to connect based on the best connection available regardless of who has provided it. Companies should be paid for providing bandwidth, and it makes sense to ask large players like Facebook and Netflix to pay the bill ($0.20 per GB to guarantee a fast connection to the user should be reasonable). The only way to break the telcos is to fund open competition.

twirlip 1 day ago 0 replies      
See, telling Congress that something is "bribery" and expecting Congress to think that's something bad is just wishful thinking on TBL's part, bless his heart.
linguafranca 1 day ago 3 replies      
How does this ISP issue affect the internet outside the US? I'm guessing it's not really going to change "The Internet" as a whole, just us in the US.
teddyh 1 day ago 0 replies      
More like a protection racket, really.
rayiner 1 day ago 7 replies      
M-W defines "bribery" in relevant part as: "money or favor given or promised in order to influence the judgment or conduct of a person in a position of trust."

It's only "bribery" if the recipient is in a position of trust. The ISP's are not. They are just businesses operating their wholly private networks. Paying someone to use their private property for your benefit isn't "bribery." It's a basic commercial transaction. E.g. it's not "bribery" for me to pay an Uber Black driver more than an Uber X driver to take me somewhere.

McCoy_Pauley 1 day ago 0 replies      
What ISPs say to companies who want to use the internet: "Either you pay us to access your customers, or we break your kneecaps." This is about both monopolies and mafia business tactics.
vrama 1 day ago 1 reply      
The internet has long provided the opportunity for a newcomer to challenge the status quo because of net neutrality. Now we are creating a barrier to entry, and it is going to be hard for new startups. The American dream is correlated with equal opportunity for everybody irrespective of background. That's what is at stake here.
yarrel 1 day ago 0 replies      
Simply use DRM. Some media can be marked as "slow download" and the web browser has to respect this.

Write it into the HTML spec et voila! One happy Tim Berners-Lee.

3327 1 day ago 0 replies      
Not to be feared. Let it pass. Please let it pass. This is all that is needed for a company like google or someone else to disrupt the system. This is the final chess piece for Verizon and similar scum to finally get run over.
RA_Fisher 1 day ago 4 replies      
If we don't allow ISPs to throttle things like Netflix, isn't the impact that all other packets might suffer latency? In that case, is NN a subsidy to large content producers?
zwieback 1 day ago 3 replies      
Did Tim envision internet bandwidth consumed by Netflix? Is the current mix of content we consume something he approves of?

Instead of staking out absolutist positions, it would be helpful to come up with practical solutions now. Some form of fast lanes is inevitable over time, and just maybe it could improve the situation of people like me who would likely remain in the "normal speed" lanes.

nickik 1 day ago 1 reply      
I think that offering a range of different models of throughput and latency can only be good for the internet as a whole.

What I am concerned with is this: I want to tell the ISP what packets should run with what characteristics. I am completely against the ISP doing deep packet inspection and deciding themselves which packets to drop or delay.

I think, as with everything else, when you have a finite resource you need a market. Having every packet be the same and then just randomly dropping them is bad for the internet as a whole. We need QoS; we need some services to be able to run with priority.

When I play video games or Skype I don't want to wait; if I torrent every episode of a podcast, the latency doesn't interest me so much.

So I am PRO net neutrality in this sense: the ISP is not allowed to look into my packets and change their priority.
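[The per-packet priority signaling described above already exists at the IP layer as the DSCP (DiffServ) field, which an application can set on its own sockets; whether any ISP honors the mark is a separate question. A minimal sketch in Python, assuming a Linux host (the `IP_TOS` socket option; 0xB8 is the TOS byte encoding the EF "expedited forwarding" class):]

```python
import socket

# DSCP "Expedited Forwarding" (EF, code point 46) expressed as a TOS byte:
# DSCP occupies the top 6 bits of the byte, so 46 << 2 == 0xB8.
EF_TOS = 46 << 2

# A UDP socket for latency-sensitive traffic (e.g. game or VoIP packets).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every datagram sent on this socket now carries the EF mark in its IP
# header; routers that implement DiffServ can queue it ahead of bulk
# traffic (a torrent client could instead mark CS1, the "lower effort"
# class). Verify the option took effect:
assert sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == EF_TOS
sock.close()
```

[The point is that the sender, not the ISP's deep packet inspection, declares the desired treatment; DPI-based prioritization is what the comment objects to.]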

Can the World Really Set Aside Half of the Planet for Wildlife?
236 points by benbreen  2 days ago   191 comments top 26
grondilu 2 days ago 5 replies      
> the sixth mass extinction event, the only one caused not by some cataclysm but by a single species: us.

Not so sure. IIRC there are reasons to believe that the big one, the Permian-Triassic extinction, was due to methanosarcina, an archaean genus. OK, that's not a species but a genus, but still.

It's a bit naive to think that all extinction events happen because of some geologic or celestial event. Sometimes, evolution goes terribly wrong and sh.t hits the fan, whether by releasing nefarious gases into the atmosphere or by creating a primate intelligent enough to rule and consume most of the biosphere.

spodek 2 days ago 3 replies      
The question is not can we.

The question is what standard of living do we want for species in the long term.

We can always live more comfortably today by consuming non-renewable resources that make our world sustainably enjoyable, but at the loss of the benefit that resource would later give. Slash-and-burn farming does this. As do putting up a mall over untouched land, burning fossil fuels, and overpopulation, for example, all of which do the opposite of setting aside part of the planet.

Business people know the concept better than anyone. They know a company is in trouble if it sells an asset whose operation produces profit to pay for current operations.

We can set aside as much of the planet as we like and live in as much abundance per person as the planet can sustain indefinitely, though not as much abundance per person as we can today by consuming non-renewable resources. Using up those resources today only impoverishes future generations.

We can do either. What do we choose?

drzaiusapelord 2 days ago 2 replies      
I can't imagine us doing something like this for animals when we can't even do it for Ukrainians. From a political perspective, aggressive nations will always be seeking out annexations/territorial control and limiting the amount of land for human use would only encourage this. I mean, we're already discussing oil territorial disputes in multiple locales as well as upcoming "water wars" as unavoidable.

I don't think humanity is up to the task. This proposal sounds like something out of a sci-fi novel where everyone is a Marty or Mary Sue or some benevolent engineer-dictator is running the show. In real life, guys like Putin don't give two shits about life and will march troops on a whim to obtain resources.

junto 2 days ago 3 replies      
The joker in me wants to say that they already have more than half, but most of them can't swim.

The serious me thinks that trying to prevent a mass extinction is noble, and should be widely supported.

burtonator 2 days ago 6 replies      
It needs to be a HARD set-aside... perhaps only trails and fire roads, too.

NOT what the US does with "national forest", i.e. pseudo-wilderness. They let ranchers use it to raise their cattle and let companies harvest trees and mine it.

DodgyEggplant 2 days ago 2 replies      
We should consider a single plastic toy, bought from Amazon, used for a few months and thrown away: the materials and minerals are taken from the ground, factories produce it, it's shipped to the harbour, overseas, to stores, to the consumer, and then: disposal. For what? Animals are living creatures that have inspired (and still inspire: so many movies, stories, sports teams, logos, metaphors) humanity for ages. Many, many daily things we can really live without. Think shoedazzle. Do we really need new shoes monthly, or to "get obsessed"[1] with shoes? Can we at least buy something with better quality that lasts for years? This shopping and these comforts have a cruel, irreversible price tag for animals and wildlife. Add to this wars and conflicts all around the world, and the results are devastating.

[1] Home page of http://www.shoedazzle.com/

sxp 2 days ago 2 replies      
Humans only live on a small fraction of the planet even when you limit the area to dry land.


monsterix 2 days ago 2 replies      
It is possible.

One of the discussions that sprang up here in my cubicle was: How? How can we set aside half of the planet for anything other than ourselves? It's impossible! With almost every nation, state or person out there worrying about their piece of land, it surely must be impossible.

But not quite.

Use Nuclear Leakage & Irradiation. Like the one that led to the Red Forest in Chernobyl [1].

'Radiological Reserves' are probably the only way to set aside a large area for animals/plants with a guarantee that humans will not come by. Not in the next 10,000 years!

Eat that! :)

[1] http://en.wikipedia.org/wiki/Red_Forest

dfc 2 days ago 2 replies      
It is strange that modern agriculture employs crop rotation in order to increase yields but we do not do the same thing for harvesting food from the ocean. CBC's The Nature of Things recently had a series about the state of the oceans. During one of the episodes they showed the success of marine reserves in New Zealand. I am having trouble finding a good link but the turn around was amazing.

Currently less than 1% of the ocean is protected. Greenpeace has been campaigning to set aside a large amount of the ocean as a "marine reserve." http://www.greenpeace.org/international/en/campaigns/oceans/...

Htsthbjig 2 days ago 1 reply      
In the future, yes I believe so.

Today I believe there is no way.

The world is approaching population stabilization. Japan's population is going to go down. So are China's, Germany's, Spain's...

As we reduce illness in Africa and increase automation, people need fewer children.

Population will get a peak and then not grow anymore.

If we solve fusion energy we will be able to grow vegetables or plankton underground, stacked in floors, in a much more efficient way, as we will be able to have a stable temperature all day long, with pests controlled without chemical products, just by physically controlling access, and very near the places where they are consumed.

tatterdemalion 2 days ago 1 reply      
I think the only ultimately sustainable solution is not to "set aside" any percentage of the planet for wildlife, but to develop ways of living that are not based on a differentiation between spaces of civilization and spaces of wilderness. Our species is naturally a node in a complex set of ecological systems, and instead of trying to detach ourselves from that system we should find a way to achieve our goals while living within it.
bmh100 2 days ago 0 replies      
There are fundamental cultural issues that will need to be addressed for civilization to reach sustainable, large populations. Yet even if we do undergo a mass extinction, it may be slow enough that we can actively intervene in the ecology to prevent the collapse of civilization. With the rise of synthetic biology, advanced genetic engineering, realistic ecological simulations, and perhaps AI-engineered organisms, it may be exciting. We could be on the cusp of an unprecedented explosion in new genes, phenotypes, biochemistry, and general biodiversity.
zaroth 1 day ago 0 replies      
"Without any human intervention, here is a forest with tall, straight trees that are rather widely spaced, plenty of sunlight and lots of open, grassy meadows. Longleaf branches out only after it's high overhead, where glistening needles up to two-and-a-half feet long are arrayed in pompon-like sprays. Below the branches is empty space a hawk can glide through."

Sounds beautiful!

jccooper 2 days ago 0 replies      
See also: http://www.americanprairie.org/ -- an effort to link public and private lands to create a 3 million acre preserve of the Great Plains ecosystem. That's pretty big, but not even close to the scale this article considers.
chiph 2 days ago 0 replies      
He needs to talk with the Florida DOT about building wildlife bridges so the animals can safely cross roads and rail lines.
nroets 1 day ago 0 replies      
A good starting point would be scrapping agricultural subsidies. Since South Africa scrapped it, a lot of farm land was turned into wildlife farms.
futbol4 2 days ago 1 reply      
According to NOAA (http://www.noaa.gov/ocean.html), 71% of the earth is ocean, leaving only 29% land. So the goal is for 14.5% of the earth to be set aside for wildlife? That is a terrible title for an article. 14.5% != 50%
naringas 2 days ago 0 replies      
>Can the World Really Set Aside Half of the Planet for Wildlife?


rasz_pl 2 days ago 0 replies      
Just nuke some land; it worked great for wildlife in Chernobyl.
thisjepisje 2 days ago 0 replies      
The majority of the planet is ocean, so maybe they could.
lotsofmangos 2 days ago 0 replies      
We could set aside 90%, if we turned agriculture over to nuclear powered subterranean farms.

It isn't so much what can we do, as what can groups of people be bothered to do collectively and whether anyone else is going to complain.

kolev 2 days ago 0 replies      
Our society is very irresponsible. Every holiday is a nightmare for the planet. The tons of junk, wrapping, and throwaway stuff we consume will be ridiculed by future generations. Not to mention the time and energy (literally, too) wasted on shopping. I stopped buying birthday decorations and try to educate my kids to stop having these merchant-inspired "festivities". All the junk from Easter, Thanksgiving, Christmas, and the endless kids' birthdays piles up to a ton per year. Be responsible, as we're leaving a huge liability to the future generations, our children and grandchildren, whom we care the most about! I'm really disappointed in the Waste Management recycling centers, who just recently started refusing to take anything but CRV. I invest time and pile up tons of non-CRV recyclables, and they do not take them anymore.
squozzer 2 days ago 1 reply      
I guess it depends upon how many humans one must move or exterminate.
ilaksh 2 days ago 2 replies      
Not sure why I bother trying to talk to you people anymore, but I will go ahead and throw out an idea that I assume you will simply reject because it goes against your belief system.

We should not worship nature to such a high degree. Yes, we should try to conserve wild areas as a buffer against mistakes and for basic enjoyment. And we are not doing a good enough job of that.

But the assumption is that basically the wild areas have some sacred process or system going on that we cannot possibly ever aspire to understanding or surpassing.

First of all, there is absolutely no separation between the "wild" world and the "human" world. The idea of a natural world that is separate from a human world is an oversimplification that has become misleading.

Everything in the world, including people and the things that we make, from human feces, to plastic trash bags to rocket ships and computers, is the result of the same natural physical laws and processes involved in the universe.

The planet sees itself with billions of eyes. The planet thinks with billions of tiny minds.

The cities, roadways, and agricultural fields that cover increasingly large areas of earth are part of the natural evolution of the planet.

It's hard to really convey, especially since we are so far down the line of nature worship, but part of what I am trying to get across is that humans have already surpassed nature in some ways, and if we haven't already, then we can create environments that do surpass it.

I think it will be easier to appreciate this type of thing once we become a multi-planet species. Or at least get a colony on the moon or something.

Because part of the nature worship is the reality that we only have one biosphere to support us. We need to fix that.

But another thing -- this does tie into Malthusian population control, eugenics, classism, etc. There is an inherent disgust for the dirty masses that is hidden behind the earth worship. We have to remember the value of human life.

transfire 2 days ago 4 replies      
This will only become possible when a couple of important milestones are reached -- and they're biggies. First, we have to get rid of a lot of the roads, and that can't happen until personal flight becomes commonplace. Second, the human population has to stop growing; in fact it needs to shrink even now. We are already reaching upper limits on agricultural and water availability. Unfortunately, while the former is difficult enough, the latter is near impossible due to the dominance of infantile religions.
sshrc make your ssh sessions feel like home
231 points by Russell91  1 day ago   43 comments top 20
DrewRWx 1 day ago 1 reply      
Ouch, this is a better version of what I submitted a few days ago [1]. I guess that's what I get for trying to emulate the unwieldiness of ssh_config.

[1] https://news.ycombinator.com/item?id=8324538

gburt 1 day ago 0 replies      
I feel like this should be out of the box ssh functionality. This is exactly what I've needed my whole sshing career.
c3RlcGhlbnI_ 1 day ago 0 replies      
I made myself a simpler version of this a while back. I would simply put the rcfile in a folder and then host that folder with:

  python2 -m SimpleHTTPServer 12345
Then you can just use something like the following function to get a remote session:

  sshrc() {
    ssh -R 12345:localhost:12345 -t ${*:1} 'bash -c "bash --rcfile <(curl -s http://localhost:12345/rcfile)"'
  }
This has some nice side effects in that you can then just host your vimrc the same way. I think you can get vim to load plugins from a http runtime path too, but I have never looked into it.

Doing it this way has its own quirks though.

possibilistic 21 hours ago 0 replies      
I was thinking of putting together a crude hack to implement just a subset of what this offers. I cannot thank the authors enough. I'm trying this out the moment I get home.

I feel as though we should have had first-class support for portable environments all along, and it shocks me that we haven't considered building this kind of facility before. This could be a game changer. Like vim package managers and other sensibly modern things, I totally expect to see this evolve to become the new norm.

I see this kind of configuration:

* global personal configs (lightweight, truly global settings for work and home)

* local overrides (work, home, per machine, non-SSH)

* remote overrides (SSH machines)

Global configs should be portable everywhere. For everything else, I see a system capable of merging several such configs and matching the current environmental and capability context.

derekp7 1 day ago 0 replies      
Something similar, which allows you to extend shell scripts to a remote system transparently, is "rpcsh" -- take a look at: https://gist.github.com/derekp7/9978986

This function allows you to push out local functions, variables, and arrays to a remote system. There is an updated version in the client script for my backup utility (snebu), which also includes getting remote variables returned to the local shell script, and also bouncing through sudo.

Should I move this to a regular project on Github, or keep it in the gist?

TravisLS 1 day ago 0 replies      
Well, this post read my mind. Literally my first thought waking up this morning was "why don't I have some way to use my .vimrc over ssh?" I open up Hacker News and voila!

Very nice - clean and simple.

jamiesonbecker 9 hours ago 0 replies      
Love the elegant simplicity. Should we add something like this to Userify (https://userify.com) or perhaps offer to pull your dotfiles (.bashrc, .vimrc, etc) from Github?
alexbel 10 hours ago 0 replies      
Is it possible to source my dotfiles?

Something like this:

# .sshrc

source ~/.bashrc

source ~/.alias

laumars 1 day ago 1 reply      
I do something like this manually. I set up an alias on my home machine that copies a locally stored environment file onto any server I ssh onto.

Pretty simple stuff to set up, but allows for some pretty powerful configuration

donw 1 day ago 1 reply      
Fantastic. Looks like a perfect complement to my Workspace[1] script, and way easier than having to git push/pull my dotfiles directory.



larrybolt 1 day ago 1 reply      
I wonder if it would be able to make it work with mosh (http://mosh.mit.edu/), which would be awesome!
jeroenjanssens 1 day ago 0 replies      
As a complement to this nice tool, I can recommend sshfs [1], which enables you to mount a remote machine as a local folder.

[1] https://www.digitalocean.com/community/tutorials/how-to-use-...

jph 1 day ago 0 replies      
Eureka! This so simple to use and so useful -- I'll use sshrc every day now. And the embedded xxd -ps is a clever copy. Great idea Russell. Thank you.
nha 1 day ago 2 replies      
What would it take to adapt this to also source other config files ? I'm thinking about gitconfig and vimrc, but I'm sure there could be more.
userbinator 1 day ago 1 reply      
"make your ssh sessions feel like"
bsg75 1 day ago 1 reply      
Are there any security implications using this? It sounds very handy if not.
1945 1 day ago 1 reply      
I wanted this 10 years ago, thank you!
dobrescu 1 day ago 0 replies      
this is pretty awesome! any hint on how to load the vimrc config by default without the actual :so?
spc476 1 day ago 2 replies      
How is this any different than .bashrc or .bash_profile?
linguafranca 1 day ago 1 reply      
I've seen tool after tool that tries to make SSH'ing into a remote machine more "comfortable", but I honestly don't do enough remote work over SSH to make it worth the time required to set it up. I probably spend an hour or two over SSH per week, and Ubuntu's minimalist shell defaults are perfectly fine for the little work I do.
No, You Can't Manufacture That Like Apple Does
220 points by brk  3 days ago   101 comments top 18
mgkimsal 3 days ago 5 replies      
It's pretty much the same in software too. I often get requests for functionality and when I say "no, we can't do that", I get "Why not? Amazon does it".

Hrmm... Amazon has dozens of people working and supporting just that one feature. You're trying to engage me to do an entire project. On a fixed budget. With a fixed time frame. And you've changed your mind 3 times in the last 3 weeks on key points.

Of course, yes, there are amazing things you can do with software that weren't remotely feasible even just 5 years ago. But there's always a moving target - the market leaders (Apple, Google, Amazon, etc) are constantly pushing the boundaries of what's considered 'normal', and most people have 0 idea of the real cost and effort involved in having the functionality come across as polished and error free as the big boys.

josefresco 3 days ago 3 replies      
Forget fancy techniques, just scale alone is enough to shock you when looking into manufacturing. We got quotes from various providers at $4-$12/part. Meanwhile, in grocery stores, department stores, even dollar stores we would see similar products (using the same materials) being sold for $1-2.

The difference is mostly related to the number of parts being ordered. For a startup, ordering 100,000+ parts just to get pricing reasonable is a no-go unless you (or your backers) take a major risk.

Makes you feel like getting off the ground is almost impossible, when you can't even get your wholesale cost below the retail cost of similar products.

foofoo55 3 days ago 1 reply      
The main point is that manufacturing, especially of high-volume, consumer, Apple-quality products, is very hard and requires serious expertise. Such startups should bring in mechanical & production engineering expertise, because the hardware becomes as important as the software & electronics.

The irony is that many software startup wizards brush off mechanical design the same way that naive managers treat software development. "It's just a [box/case/app/website], how hard can it be?"

taylodl 3 days ago 1 reply      
Thanks to follow-up research after reading this article I now know what an ejector pin mark is, and now that I know I see them everywhere! Gah!
serve_yay 3 days ago 3 replies      
The more you know this sort of thing the harder it becomes to believe that Apple makes the same things everyone else does but with fancy marketing.
noir_lord 3 days ago 2 replies      
$12 for a box at scale, well damn!

Use 100% recycled cardboard and print the box in a single color water based bio-degradable ink.

Then claim you do this to save the environment, win-win ;).

zwieback 3 days ago 0 replies      
Not mentioned is the challenge of managing a good CM in China or Malaysia. If you're small you get what everyone else does, at higher prices. If you're huge you can groom your CM and make sure you get what nobody else does, at a lower cost. Of course sooner or later the manufacturing knowledge leaks out so it's a rat race even for someone like Apple.
psychometry 3 days ago 1 reply      
There are so many acronyms in this article that aren't defined. Maybe I'm not the intended audience for this article, but it's pretty hard to understand when you don't know what CM, CNC, or BOM mean. Is it really so hard to use the <abbr> tag?
eitally 3 days ago 0 replies      
This is why -- for example -- Flextronics opened their Lab IX [1] in SV last year. In cases where hardware ideas are good ones, it makes a lot of sense for the guys with the manufacturing expertise to get in the loop early on. They can incubate, invest, and indulge in some of the wild stuff innovators want but can only be executed with $mm of equipment.

[1] http://www.labix.io/

qwerta 3 days ago 3 replies      
The article does not mention capacitive touch screens. Apple basically built its own factories (and subsequently an entire industry) to manufacture them for the iPhone 1.
lazylizard 3 days ago 3 replies      
but apple is not the only large luxury goods manufacturer, right? how does louis vuitton do their thing? or leica? or rolex? if anything, aren't lv and rolex the originals at mass producing/marketing luxury goods?

separately, i imagine not just leica, but the entire optics industry has answered the question of precision/quality at scale before?

and finally, there're more , right? like mercedes-benz, bmw, lexus, porsche, medical instruments, the aerospace industry... don't all of them have to solve 'quality at scale' problems?

and at smaller scales..parker pens? zippo lighters? swiss army knives?

and then..something like http://www.muji.us/store/ could be good enough as far as the perception of quality goes?

ashish01 3 days ago 5 replies      
Then how did Nest do it?
niels_olson 3 days ago 0 replies      
Aren't turbine blades also grown from crystal before machining? Even Apple doesn't do that...
whizzkid 3 days ago 2 replies      
This article misses some key points even though what it says is true.

Most of the things mentioned in the article are correct, BUT,

If you are going to make a product and you think that your product is going to be as revolutionary as Apple was at the beginning, then don't worry. You will be good to go.

If you can provide a unique, mind-blowing product just like the Apple Lisa in 1983, you can sell it for really unrealistic prices.

The Apple Lisa sold for US$9,995 when it was released. You could buy a new house for around $86,000.

So the question is not how expensive it is going to be; the real question is,

Is your product mind-blowing?

DarkIye 2 days ago 0 replies      
Seriously? Most founders' ideas hinge on their hardware product looking sleek? This is progress?
at-fates-hands 3 days ago 1 reply      
When did Apple go from manufacturing ordinary hardware to making that leap where they had the financial resources to truly make something incredibly unique and beautiful? Was it one product one year, or did it happen over a span of years with smaller increments of change?

I'm curious to know how a company can get to a point and say, "Ok, we can do something really cool, on a massive scale, and make it successful." Is it a slower transition, or more of an abrupt change that takes place?

snowwrestler 3 days ago 2 replies      
Surprising statement that the box might be the most expensive single element of the iPhone.
ww520 3 days ago 0 replies      
Build a Facebook lite for such and such...
Bad Notes on Venture Capital
220 points by brentm  3 days ago   84 comments top 21
grellas 3 days ago 4 replies      
With seed funding, founders need to accomplish several goals: (1) raise early money as a bridge to Series A to avoid having to take large money right up front when valuation is at its lowest and the dilutive hit will be at its maximum; (2) keep equity incentives priced relatively low so that a quality team can be assembled without giving its members tax grief; (3) keep transaction costs in raising the money reasonable; (4) move to close relatively quickly to avoid getting sucked into the fundraising time sink.

Convertible notes and convertible equity (SAFEs) both allow founders to meet these goals in ways that equity rounds simply do not. In this sense, they have incredible value as tools to be used by an early-stage startup.

Are they perfect tools? Well, no. Each has its own limitations and problems and each can be abused on either the company side or the investor side. Founders routinely used to take uncapped convertible notes, build value, stretch out the process, and leave the investors getting ever-diminishing rewards all the while that their money was being used to build that value. Investors wised up and began insisting on caps to avoid such abuses. They wanted to protect the idea that they would get significant extra rewards for taking the earliest-stage risk. It was not enough, e.g., to get a 20% price discount at Series A if you convert at $20M pre-money valuation when the company was probably worth no more than, e.g., $5M at the time you invested the seed money. The investor insisted on locking in that ceiling on valuation as a means of self-protection. Does this result in occasional windfalls to bridge investors who take capped notes (or SAFEs)? Yes, it does. Does this arrangement have attributes of something that resembles a liquidation preference in its functioning? Well, yes, it does. Ditto for the full-ratchet anti-dilution attributes. These things are very real attributes that do affect the value of using these tools for founders and their companies. They add to the negative side of the ledger.

But let us say that your startup had to give four times the value in Series A preferred stock relative to other investors to the hypothetical investor noted above ($20M valuation, $5M cap). What does this mean? If that early investor put in $300K and the Series A round was for $7.5M, the early investor might have gotten a windfall but the impact on the round is small because the dollars involved are not large. And if the venture did not do well and the Series A round priced at $3M, then that same investor would still get something like a 20% discount off the lower valuation instead of having had to peg his fortunes to the $5M value used for the cap as he would have had to do had he taken equity. But so what? The dollars are still small and it doesn't matter relative to the value and utility offered by using the convertible note (or SAFE) tool. And, for every "windfall" gained by such investors, you have all sorts of cases where the failure rate is particularly high because of the extreme risks existing at the earliest stages before it is even determined that a company is truly "fundable."

The value of notes and SAFEs is flexibility. Your occasional investor will get special advantages but these are not unduly costly to you as a company. And you have a good measure of control over how you do things. Your first note can be capped but, as you gain traction, later notes can be uncapped. You can raise money at any time without having to worry about creating tax risks and without having to mess up your equity pricing. You can do the number of notes and in the amounts you immediately need. You don't have to go through endless negotiations over the sorts of things that can accompany an equity round (especially protective provisions and similar restrictions on what a company can do). You avoid giving your new investors a veto right or other significant say on what you can do in future rounds, as you would normally have to do if you did an early-stage equity round using preferred stock.

Basically, there is a whole different dynamic in using notes/SAFEs versus doing an equity round. And, in most cases, it is a useful and valuable dynamic for founders and their companies notwithstanding the trade-offs. In this sense, the main idea of this piece that I would strongly disagree with is its suggestion that using convertible notes is somehow a sucker deal. It can be if done wrong but it most often is just the opposite.

Another metric for measuring the relative value of these tools is to see who is using them. Convertible notes have had massive and widely dispersed use now for many years. The most sophisticated investors have had no problem with them in general, and that includes the VCs who lead Series A rounds in which converting noteholders get the benefits of their caps and take more relative value in the round than the VCs themselves do.

As with anything else in the startup world, founders need to use good judgment. The points made in this piece are informed and technically accurate. And they do underscore some clear disadvantages in using notes/SAFEs. My point is that, notwithstanding these defects, notes/SAFEs retain great utility for founders in the context of the broader issues they need to address (minimizing tax risks, keeping stock price low, keeping transaction costs down, etc.). And so this is a good piece to add to your knowledge base but it should not deter you from using non-equity forms of seed funding as long as use of these tools meets your bigger goals. You have control at that stage. Use that control wisely.

patio11 3 days ago 7 replies      
Is there an entrepreneur willing to say "Yep, tried them, got bitten in the hindquarters", even anonymously? I don't necessarily disbelieve here but it strikes me that VCs have the curious property of always giving fundraising advice which is in the interest of VCs. I acknowledge that somewhere in this wide world there may actually be a VC motivated primarily by friends-and-family getting appropriate compensation for risky investments but that is not the way I will bet.

My understanding -- limited, in that I've been a party to two but from the other side of the table -- is that a major reason they became popular is because they take the fangs out of collusive behavior by investors. The Valley had evolved a "Well, I'll invest if you get a lead" "Well, I'll lead if you close $X" system which froze out a lot of companies if the founders didn't have deep, pre-existing social networks. "You want a lead!" sounds a lot like wistful nostalgia for this gatekeeping behavior. I understand why a gatekeeper would see it that way. I don't understand why the gated would.

AVTizzle 3 days ago 2 replies      
Posts like this make me realize how amateur and inexperienced I am. Seriously.

I've taken it for granted for the past 3 years now that convertible notes were universally regarded as the best and smartest form of fundraising for seed rounds, just based on what I've found and read in various places online (lots of it here on HN, for that matter...)

And here we have Suster laying out clearly the opposite side of the argument in a way that humbles me. This is clearly an area that I have a lot to learn from people much smarter than I.

Most YC companies go on to raise rounds using YC's SAFE, which is an adaptation of convertible notes, right? If so, I'd love to hear a YC partner (or partners) address these points.

abalone 3 days ago 1 reply      
Here's the thing that's left out of this investor-friendly analysis, and why Suster is wrong about the "irrationality" of no-cap deals: For smaller angels, access to the deal itself is valuable.

Small investors don't have access to Series A. That's why the analogy with the stock market is broken:

"Can you imagine investing in the stock market where your price was determined at a future date and the better that company performed the HIGHER the price you paid for that investment."

The reality is angels don't have the option of purchasing the stock after the seed round. So, it can make perfect economic sense for an angel to pay a premium to get access to a deal. And if that premium comes in the form of pre-paying for a chunk of the Series A (one way to look at a no-cap deal), that can make sense -- perhaps even more sense than an enhanced seed valuation would. A discount would be sweetener on top of that. But to be clear: there is a rational case for smaller investors taking no-cap deals in order to get a chunk of the next Facebook, which they otherwise wouldn't be able to get.

Seed rounds present a unique intersection point for founders & smaller angels where their interests overlap. I think the bigger-sized investor community are somewhat threatened by that, and that's why we're seeing such a sophisticated campaign against no-cap convertible notes. But the climate is now competitive enough and small-angel platforms are getting enough traction that you can sense their anxiety that no-caps may be coming back, to the great benefit of founders.

joshu 3 days ago 0 replies      
So here's how the preference multiplier happens: when the company raises an equity round, you calculate the new share price, and the noteholders are assigned the investment's worth of shares. However, because of the change in value, they would get more preferred shares per dollar invested than the new investors.

So generally you get just the original price's worth of preferred and the rest is common.

jsun 3 days ago 0 replies      
Wow. This is misleading. Convertible notes are the best thing to happen to startup founders in the history of fundraising.

Take a modern convertible note to an angel investor from the pre-bubble 90's and they'd laugh you out of whatever coffee shop you happen to be sitting in.

All of the "examples" shown in the blog post make irrational arguments. Show me one scenario (in numbers) where using a convertible note for a seed round was suboptimal compared to an obtainable equity deal.

If I didn't know better I'd think this was an example of a VC trying to smear an awesome instrument so hopefully they won't have to compete with investors willing to write them.

Brushfire 3 days ago 1 reply      
The benefit of convertible notes / SAFE's is that you get the invested cash IMMEDIATELY. This is a non-trivial benefit for most startups.

When raising a full round of capital, say $1M, you don't get any of it until you 'close', after everyone is committed.

Convertible notes allow entrepreneurs to build progressively towards a close, without waiting for all the cash to come in -- which might come too late. So, sure, if you're flush with cash, then convertible notes are a worse idea relative to a priced round. But when is that ever the case when fundraising?

randylubin 3 days ago 1 reply      
It's my impression that everything he says, if true, can also be applied to YC safe. Is that correct?


tptacek 3 days ago 2 replies      
The second vignette in this article has Suster suggesting that convertible notes carry liquidation preferences and anti-dilution. Does he mean that some subtle property of convertible notes create those terms in practice, or does he literally mean that if you read the note paperwork you'll find a 2x liquidation preference and a full ratchet?
tsunamifury 2 days ago 0 replies      
All of this is a distraction game played by VCs to avoid the real point of entrepreneurship.

Make a product worth something to a target market. Sell it or get enough traction. Find a company who would like to buy you or go public based on your growth.

All this other stuff is a distraction created to keep you from focusing on what you are here for. To build great things and sell it for profit.

Otherwise... what's the point? The best negotiating term is having a great product. You won't raise your way to product/market fit.

lightedstar 3 days ago 0 replies      
Why don't convertible notes in the friends and family scenario just set a multiple that will be suitable in case equity is diluted? In this case it seems the main interest is supporting an entrepreneur they believe in and getting their money back -- ideally at a better return than they'd get elsewhere. For example, would a 2x return on a convertible note pose any real problem to investors in subsequent rounds?

Glad to see so much discussion about it. After taking Venture Deals with Brad Feld and Jason Mendelson through Kauffman Fellows Academy / Novoed, I'm quite curious about trends on this topic.

sandGorgon 2 days ago 0 replies      
Does anyone have a valuation model spreadsheet that accounts for multiple rounds of convertible note financing, and purposed towards a Series A conversation ?

As a founder, I have tried searching for these but never managed to find one.

I'm not sure if it is an India thing, where investors actively discourage convertible notes because of the availability of several pounds of equity flesh in exchange for angel stage funding.

vasilipupkin 3 days ago 0 replies      
How often do companies that end up doing a round below the original cap ( effectively a down round ) actually recover and do well? Does this really matter? Seems like it is such a big risk to investors to invest below the cap that they should be appropriately compensated for it, which is what the discount accomplishes
ForHackernews 3 days ago 3 replies      
I don't really understand any of this. What are "convertible notes", what's a "cap"? Is there a beginners glossary somewhere?
jacquesm 3 days ago 0 replies      
Super good piece, required reading for anybody currently founder or co-founder of a start-up and considering convertible notes.
rahooligan 3 days ago 2 replies      
according to this the main problem with notes is a down round can get you into a lot of trouble because of the liquidation preference multiplier. but what if you never have a down round? if you can avoid the down round, it seems like convertible notes are still preferable. down rounds are bad even with priced seed rounds.
AndriusWSR 3 days ago 1 reply      
Any more good pieces regarding this topic? Super interesting!
taytus 3 days ago 0 replies      
Only morons start a business on a loan - Mark Cuban https://www.youtube.com/watch?v=KYneLGRTgy8
api 3 days ago 0 replies      
Check my understanding?

So convertible debt isn't really debt-- it's an equity investment with deferred closing and terms. Right?

I can see then how this can lead to bad outcomes for the entrepreneur (behaves like a full ratchet if there's a cap) or bad outcomes for the investor (you've deferred the conversation and agreement on expectations). But at the same time it is in some ways simpler, and you get the cash immediately. Right?

Hmm... this is interesting. He suggests setting a price. So let's say I want to raise convertible debt... he's suggesting that one instead says "I'm raising one million at X" and sell convertible notes with a fixed price instead of a cap? What would that look like?

notastartup 3 days ago 0 replies      
does this mean that the founders in that story accept money based on the agreeement that whatever the VC thinks their stake is is what will be the valuation? I'm a little fuzzy on how this works.
mmphosis 3 days ago 2 replies      
VC wants to make money.

VC wants to make money off of your work/startup/company.

VC makes money using the given terms.

When it comes right down to it. The money they "give" is not given. It is a loan with terms and conditions. You need to be aware of this. You need to know what a loan is. It is money you are borrowing that you must pay back WITH INTEREST. Find out how much interest that is.

“It's like we need a finance 101 course for entrepreneurs”

Debt money is bad. I realize a lot of people will tell you that this is how the world runs, or this how they run their business, or I got successful in business taking out a big loan, etc and so on. Like the gambler, they don't tell you about all the losses.

Save your money. Keep your work/startup/company.


I've read that Microsoft has always had enough money in the bank to operate for a full year without making any money.

Rust lifetimes: Getting away with things that would be reckless in C++
224 points by dbaupp  1 day ago   89 comments top 11
missblit 1 day ago 2 replies      
In C++ if the string is an rvalue reference you could std::move it into part of the return value. Think a signature like

    template<typename T>
    std::pair<std::string, std::vector<std::string_view>>
    tokenize_string(T &&str);
This would be efficient when the user passes a temporary, and it would be safe.

Which isn't to say the Rust solution isn't totally cool. Being able to easily check this class of errors at compile time is probably a lot nicer than needing to learn all the relatively complicated parts that would go into an easy to use / safe / efficient / still slightly weird C++ solution.

svalorzen 1 day ago 5 replies      
Or, you know, instead of returning two C pointers which in modern C++ makes no sense, return a vector of `std::pair<size_t,size_t>` with position and length of each substring, and if needed use `std::string::substr` to extract the parts you need.
enjoy-your-stay 7 hours ago 0 replies      
In C++, the best way to hand out pointers to anything where the creator may not necessarily be the last one referencing that object or chunk of RAM is to use reference counting, which would have solved the posters' problem.

It would mean that you would have to wrap the incoming string in a class, and probably add the tokenize_string method to that class. Then you would also have to wrap the results vector in a class that then addrefs the original string wrapper class.

But after that, handing out pointers to the contents of the string would be no problem as the results class would addref the string class and then release it when done ensuring that the string wrapper class remains alive as long as the results object has not gone out of scope.

Of course Rust's approach of alerting you when your code path causes dangling pointers is also interesting, but I wonder how that would work if you were to link against a static library that handed out references to internal objects like that - could the compiler see the scoping problem?

bsaul 1 day ago 2 replies      
Which makes me wonder:

1/ Could you build the same unsafe behavior in Rust if you wanted to, by not specifying lifetime constraints?

2/ If yes, shouldn't lifetime constraints be mandatory?

shmerl 20 hours ago 0 replies      
> The function get_input_string returns a temporary string, and tokenize_string2 builds an array of pointers into that string. Unfortunately, the temporary string only lives until the end of the current expression, and then the underlying memory is released. And so all our pointers in v now point into oblivion

So what stops you from returning a shared pointer in the case of get_input_string? Then take over that ownership and use it. It's still a potential problem that v is logically disconnected from the lifetime of that pointer, but at least you could avoid the problem you described.
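A sketch of that idea, assuming get_input_string can be changed to hand out shared ownership; bundling the owner next to v re-ties the pointers to the buffer's lifetime (all names and the input text are illustrative):

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Hypothetical variant of the post's get_input_string: shared ownership
// instead of a temporary that dies at the end of the expression.
std::shared_ptr<const std::string> get_input_string() {
    return std::make_shared<const std::string>("some input text");
}

// Keep the owning pointer alongside the raw token pointers, so v cannot
// outlive the buffer it points into.
struct Tokens {
    std::shared_ptr<const std::string> owner;
    std::vector<const char*> v;  // point into *owner
};

Tokens tokenize_string2() {
    Tokens t;
    t.owner = get_input_string();
    const std::string& s = *t.owner;
    for (std::size_t i = 0; i < s.size(); ++i)
        if (i == 0 || s[i - 1] == ' ')  // token starts after each space
            t.v.push_back(s.data() + i);
    return t;
}
```

Because the heap buffer never moves once allocated, the char pointers stay valid even as the Tokens struct itself is moved around.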

asuffield 1 day ago 4 replies      
There's an obvious extension here for lifetime inference - the example given doesn't need to be an error; it could compile correctly by extending the object lifetime to the outer block. I don't know offhand whether there is a universally correct inference algorithm for that (if every other language feature were static then unification would solve it easily, but the other language features are not static and I don't know how it would interact with Rust type inference).
keeperofdakeys 1 day ago 0 replies      
Just as an aside, the &str is not stored as two pointers, but as a pointer and a length.
overgard 1 day ago 2 replies      
This seems like the kind of place where std::shared_ptr would really shine. The author's point on the danger of pointers is well taken, but some of the new pointer types get around a lot of these issues. You couldn't use it to point into the middle of the string, but if you paired it with some offsets you wouldn't have to worry about the ownership of the pointer anymore.
GoGolli 1 day ago 0 replies      
Rust is the best complicated language I have seen!!!!!!
Yardlink 1 day ago 4 replies      
Is there a reason this language exists? They're solving a problem that's been solved many times over for at least 2 decades in the form of managed languages.
linguafranca 1 day ago 6 replies      
I'm hearing an awful lot about Rust on HN, even though afaict it still doesn't have a basic HTTP package yet, limiting the main types of apps I would build with it. Maybe I'm in the minority, but perhaps we can slow down on Rust news until it's a little closer to usable?
One Thing Well - A weblog about simple, useful software
210 points by tete  9 hours ago   36 comments top 16
Argorak 8 hours ago 1 reply      
This tumblr doesn't quite live up to its name: http://onethingwell.org/post/97725615916/busybox

BusyBox is great and everything, but it's definitely not subscribing to the "One Thing Well" philosophy, quite the contrary: everything in one.

doctorpangloss 52 minutes ago 0 replies      
> Simple, useful software

I came expecting examples of to-do lists, mail clients, clever messaging apps, etc. There are a handful of those.

Instead, the majority of apps are described by sentences where literally every word would be unfamiliar to a typical computer user. For example, "Cram is a functional testing framework for command line applications based on Mercurial's unified test format."

Simple is in the eye of the beholder.

state 8 hours ago 3 replies      
Sorry, but there is nothing I find more annoying than the "Never miss a post!" spam that Tumblr now inserts into every page post-acquisition.

Perhaps someone could do one thing well and come up with a blogging platform for this nice project?

eps 4 hours ago 0 replies      
11 pages of Windows software! Who would've thought it exists :)


asymmetric 4 hours ago 0 replies      
OT, but it's heartening to see a link to an RSS feed next to Twitter and G+. I find that more and more sites are abandoning this public, open standard in favor of proprietary platforms.
fizixer 8 hours ago 0 replies      
- See also: suckless.org

- An LFS build off kernel.org (the kernel) and github (the rest of userland) would be an interesting experiment.

alanning 2 hours ago 0 replies      
Short examples would greatly enhance comprehension for me
denizozger 4 hours ago 0 replies      
I love the idea but not the implementation. Categorising software according to purpose and tech stack would be the best.
tete 8 hours ago 0 replies      
Disclaimer: Not my blog, but found it today and really loved it.
tretiy3 4 hours ago 1 reply      
Very good. Is there any way to subscribe (not counting Tumblr, RSS, or Twitter)?
zomg 1 hour ago 0 replies      
the original "product hunt"! :)
nXqd 6 hours ago 0 replies      
This site could be named unix_hunt :D
juef 8 hours ago 0 replies      
Alibaba Raises $21.8B in Initial Public Offering
212 points by SuperKlaus  3 days ago   106 comments top 10
skuhn 3 days ago 18 replies      
Alibaba going for a higher selling price just makes the continued undervaluing of Yahoo itself increasingly hilarious. At some point, I would think someone will want to buy them for all of this free money they have sitting around.

  YHOO market cap : 42.3bn
  + liabilities   : 3.7bn
  = 46bn (to own outright)

  Cash on hand                       = 1.1bn
  + Other assets                     = 15.3bn
  + Cash from Alibaba sale           = 8.3bn
  + Value of remaining Alibaba stock = 22bn
  + Value of Yahoo Japan stock       = 9bn
  = 55.7bn (of value, if you can unlock it)
So simply by buying them, firing everyone and selling everything in a firesale, without even cashing those checks from advertisers, you can make $9.7bn. Seems like a sweet deal.

bane 3 days ago 4 replies      
Gosh....that's barely more than WhatsApp. Woulda done better selling it to Facebook.

More seriously, the Economist put a valuation at $55-$120b. This puts it at $168b. Or 9 WhatsApps. The IPO was expected to raise $20b, so this is really good.

It took 3 years after founding in 1998 to reach profitability.

It defies most conventional notions of a startup, despite having started in somebody's apartment. It's unbelievably unfocused; it does pretty much every kind of business you can do on the Internet.

It might be one of the first Asian-style conglomerates to be born on the Internet.

There are plans for it to open brick and mortar stores.


piettro22 2 days ago 0 replies      
Well... you guys have to look at the bigger picture: Alibaba was rejected by the HK stock market the first time around, so Jack decided to take it to the NYSE. Of course $21B is nothing in comparison to the other big brothers like Facebook, but ask yourself this question: what is Alibaba really doing in China? For one, Alibaba is about trading on the international scene, and its seed Taobao is like a Chinese version of eBay; then they are into car manufacturing as well, not to mention Alipay being the PayPal of modern-day China. Clearly, the $21B he raised can't even cover a lot of the expenditures for development. What Jack is really doing is getting his company global recognition. Can any of you name ONE brand/company based in China off the top of your head? Well, that's what Jack is trying to do. You think he's unfocused? How about this: for someone who owns the e-commerce platform of the world's 3rd biggest country, he is utterly focused on becoming "China's international brand." Don't forget, China is the jumping board, and Jack is the pioneer. Once he has claimed first place, who will remember the ones in the subsequent ranks?
pstrateman 2 days ago 1 reply      
This seems like a pretty bad deal for foreign investors.


aresant 3 days ago 1 reply      
"[Yahoo] will have raised nearly $8.3 billion through the offering"

If I'm reading that right, Yahoo stands to triple their cash position almost overnight, which today stands at around $4b. (1)

Given Marissa Mayer's appetite for acquisitions, this puts them into a potentially interesting new ballpark of scale.

(1) http://finance.yahoo.com/q/bs?s=yhoo

dave1619 3 days ago 1 reply      
Here's a link to the Alibaba IPO Roadshow Presentation: https://www.youtube.com/watch?v=sa9R2SqZsHM

I highly recommend watching it. To save time, watch it on Chrome at 2x speed.

jacoplane 3 days ago 0 replies      
Planet Money had a decent piece on Alibaba recently: http://www.npr.org/blogs/money/2014/09/03/345310125/episode-...
baddox 3 days ago 1 reply      
> Though it did not claim the title of biggest initial public offering ever, Alibaba will still lay claim to having held one of the biggest stock sales on record, surpassing offerings from Facebook and General Motors.

If you're curious (I was), Wikipedia puts the Agricultural Bank of China as the largest IPO, with Facebook at number 6.


lurkylurk 2 days ago 2 replies      
A bit off topic: Is there a good site to keep track of all tech IPOs and including historical data and upcoming/withdrawn IPOs?
vrama 2 days ago 0 replies      
Yahoo can buy 30 Tumblrs with the Alibaba money. That's neat.
Transducers by Rich Hickey at Strange Loop [video]
197 points by sgrove  2 days ago   46 comments top 8
tel 1 day ago 3 replies      
First, to be clear, I really liked this presentation. The criticism below is both technical and small---all in all I greatly enjoy Rich Hickey's work and generally admire his ability to talk compellingly about complex technical topics.

That said.

I somewhat disliked Hickey's presentation of typing transducers here. I feel as though he builds a number of strawmen about typing and then tries to knock them down, suggesting that either Clojure has some kind of mystical mechanism that is ineffable to types or that the exercise of typing transducers is wasteful. I disagree on both counts, I suppose. I think types are useful for analysis and teaching.

The two major points he seems to make is that in order to "properly type" transducers you must

    1. Index the type of the "accumulation so far" so
       that it cannot be transformed out-of-order
    2. Implement early stopping "without wrapping anything
       except for the reduced value"
There may be other critiques as well, but I want to examine these two in the context of Haskell.

With respect to the first point, the major concern appears to be prohibiting behavior loosely described as "applying the reducing function, say, 3 times and then returning the first resulting accumulation". In some sense, the idea is to force us to be faithful in passing on the accumulating parameter. In code, a pathological setting is the following:

    transduce :: (r -> a -> r) -> (r -> a -> r)
    transduce reduce acc0 a =
      let acc1 = reduce acc0 a
          acc2 = reduce acc1 a
          acc3 = reduce acc2 a
      in  acc1
The concern is unfounded in a pure language, however, since calling `reduce` can have no side effects. This entails that all possible effects on the world of calling `reduce` are encapsulated in the return and, therefore, we can completely eliminate the steps producing `acc2` and `acc3` without worry.

    transduce :: (r -> a -> r) -> (r -> a -> r)
    transduce reduce acc0 a =
      let acc1 = reduce acc0 a
      in  acc1
Now, there may be concern here that we still want to index the `r` type somehow to allow for changes of accumulation to occur. This is not the case (in this simple model!) as in order to achieve the "baggage carrier independence" property the `r` type must be left unspecified until the transducer is actually applied. The cleanest way to do that is to use a higher-rank type (Hickey mentions these briefly and offhandedly toward the end of his talk)

    type Transducer a b = forall r . (r -> b -> r) -> (r -> a -> r)
which thus prohibits the implementer of a Transducer from affecting the values of `r` in any way whatsoever---they must be left anonymous until someone decides to use the Tranducer on a particular collection of values `a`.

(It must be noted that the model given above is isomorphic to a "Kleisli arrow on the list monad" which I described a little bit here http://jspha.com/posts/typing-transducers/. It should also be noted that this model includes neither (a) the ability to use local state to capture time-varying transductions or (b) the ability to terminate early)

With respect to the second point, I'd like to suggest that there is a difference between the semantic weight of wrapping the result types in an Either in order to indicate early termination and the implementation weight. I completely agree that using an Either to implement early stopping (as it's easy, if finicky for the library implementor, to do) will involve wrapping and unwrapping the "state" of the transduction continuously. I also would like to suggest that it's a very natural way of representing the "accumulation | reduction" notion Hickey uses in his own "Omnigraffle 8000 type system".

We really would like to capture the idea of the transducer state as being "either" in-progress or fully-reduced and act accordingly. If Clojure's implementation of that requires fewer runtime tags than an Either, so be it, but I personally fail to see a semantic difference except in the way one can play fast-and-loose with dynamic types over static types to begin with.


So, I gave above an implementation of Transducers in types which has some of their properties, but certainly not all. In fact, I abused the fact that there is no ambient state in Haskell in order to ensure that a certain property would hold (notably this doesn't require a type system at all, just purity). I also argued that using Either is a perfectly natural way to implement early termination in such a transduction pipeline.

I've also made an extension to the `(r->b->r) -> (r->a->r)` mechanism which enables local state to be enabled for various components of the transduction pipeline. A version without early termination is available here:


Notably, this uses most of the same typing tricks as `(r->b->r) -> (r->a->r)` but adds a "reduction local hidden state" variable which lets us implement `take` and `partition`. This takes Hickey's notion of needing to be explicit about the state being used to a whole new level.


So what is the point of all this?

I'd like to argue that Transducers do not present such a mysterious mechanism that they cannot be sanely typed in a reasonably rich language. I believe that I can capture most of their salient features in types without using the dependent indexing Hickey suggested was necessary.

More than this, the compartmentalized, hidden reducer-local state in the Gist implementation allows for each reduction step to include fairly exotic local states in their state machine. You could implement a kind of type indexing here if desired and no end-user would ever know of its existence.

I also absolutely concede that many type systems people regularly use could not achieve this kind of encoding.

Finally, what I really want to say is that type systems are not something to be denigrated. I believe some of the earliest "transducers v. types" argumentation took a nasty turn as amateur type theorists (myself included) rushed to write things like "Transducers are just X".

I want to apologize for any kind of bad feelings my own writing in that thread may have stirred up. I try not to be haughty or dismissive with this kind of writing, but I also make mistakes.

So what I'd really like to suggest is that types should not be taken as reductivist toward interesting techniques like Transducers, but instead as a tool for analyzing their construction and either improving it or better teaching it. Hickey himself often turns to some kind of "pseudotyping" to talk about how Transducers work---formalizing those notions should only lead to greater clarity.

Of course, implementations will differ in small ways. As I've noted abundantly here, a major difference between the Haskell and Clojure implementations is driven more by Haskell's purity than its typing. Hopefully, however, exploration of alternative implementations and the rich analysis produced by their typing can help to introduce new ideas.

For instance, the Gist implementation, if you strip the types away, is an interesting divergence in raw functionality from Clojure Transducers---if Clojure Transducers are "reduction function transformers" then the Gisted Transducers are "Moore-style (Infinite) State Machine transformers", and that difference allows the implementer to be extra explicit about the use of local state.

I'd rather see discussion about whether such InFSM transformation techniques have a place in the Transducers literature than a fight over whether or not it's possible or reasonable to "type transducers".

hawkice 1 day ago 2 replies      
I enjoyed the discussion of the types. I dig haskell (and clojure), and I think this is perhaps the perfect lens with which to view how to make choices between them. You can have an insanely complex typesafe haskell transducer, a still very complex but unsafe haskell transducer, a weaker and less flexible version of transducers with a simpler type encoding in haskell, some combination of those ideas, OR...

you just test your code out in the repl while developing in clojure and just kinda rely on the fact that core infrastructure or popular libraries will generally work.

kazagistar 1 day ago 2 replies      
I'm still a little confused and will have to go over some code or something to really understand the limitations of what transducers can do... can any transducer be used in a "parallel" context (like map and filter) or are they limited to a linear context (like the fold makes me suspect)?
raspasov 1 day ago 0 replies      
Great talk by Rich on transducers, instrumental in understanding the "hows" and "whys" behind the concept.
atratus 1 day ago 0 replies      
Removing conj is what finally made it click
iamwil 1 day ago 2 replies      
Transducers really sound like monads. Are they the same thing?
scythe 1 day ago 1 reply      
I threw together a toy implementation in Lua:


Granted, none of the cool out-of-order-iteration is there, but the reverse composition looks natural to me now, so I can sleep at night.

Animats 1 day ago 1 reply      
I think somebody just reinvented data flow programming.
Node.js Best Practices
213 points by finspin  2 days ago   61 comments top 10
Touche 2 days ago 5 replies      
Notice something not on the list of best practices: documentation. While Node itself has excellent docs (for the most part), the Node community is terrible about documentation. If you're lucky you'll get a single README.md file with the basics covered.

Pick any category of module and there's a good chance the most popular modules have little documentation; certainly nothing close to comprehensive.

general_failure 2 days ago 1 reply      
Just a few comments:

> if (!(this instanceof MyClass)) return new MyClass();

If you really want to, just throw an exception and kill the program. Catch programming errors in testing rather than doing some magic to 'autocorrect' code.

> var localFile = fs.createWriteStream('localFile.tmp');

Always catch 'error' events on stream objects. Otherwise, they might throw an exception at runtime.

localFile.on('error', /* do something */)

Coding style: In most cases, if you write your code properly, you don't need to nest more than 3-4 levels. If it gets deeper, split it out into separate functions. Otherwise, it's a perfect job for async.series.

whatthemick 1 day ago 1 reply      
I'm personally not a huge fan of the EventEmitter for cases where an event is only ever emitted once (an SQL query finishing, or similar).

For 2.0 of Sequelize we've moved almost the entire codebase to promises and will encourage users to interact with Sequelize through promises.

Kiro 2 days ago 3 replies      
I'm coding a fairly large application in Node and have never heard of EventEmitter or Streams. Does that mean I'm doing something wrong? The impression I get from the article is that these are such fundamental patterns that every serious application should use them.
pavlov 1 day ago 0 replies      
Not that long ago, anonymous functions and closures were the height of fashion in every language.

Now the recommendation is to name your functions and avoid closures. This brings us right back to what C programmers have been doing with function pointers since forever. Not that this is such a bad thing.

pedalpete 2 days ago 2 replies      
When they say 'avoiding closures', how does that relate to functions in your module? Your module is often exported as a function, so are they suggesting that every function be exported? I suspect I'm not understanding the logic behind 'stack based'. Why is it better to have your functions not contain other functions (or is it to not be contained)?
donbronson 2 days ago 4 replies      
Anyone have more insight into the statement "Avoid closures"? Or rather, an alternative to having private functions?
_random_ 1 day ago 0 replies      
0. Avoid using it, unless truly necessary.
EGreg 2 days ago 0 replies      
Very useful!
passfree 2 days ago 2 replies      
I have one word: CoffeeScript.
Against Sharing
189 points by smacktoward  2 days ago   139 comments top 32
tptacek 2 days ago 4 replies      
At some point in the next 10-15 years, there's going to be some kind of reckoning over the "sourcing and allocating independent contractors" model of business. Right now, companies can insulate themselves from labor laws almost entirely by (a) not providing employees with equipment and (b) using structured customer feedback and expectations in lieu of training and supervision. There are instances where this model can be fair to all parties, but there are also obviously instances where it's abusive.

My intuition, which is probably wrong, is that the model fails most straightforwardly when there are monopsonies playing the "source and allocate" role. The trouble is that many of these companies work on a "winner take all" go-to-market plan, which almost dictates that they end up controlling the market.

There have already been low-grade rumblings over this in the past. For instance, the Microsoft "Permatemp" scandal. But we can see the model taking hold across the economy --- see: drivers, housecleaning. My guess is that the long-term resolution for this is going to be legislative, and it's not going to make companies like Uber happy.

ameister14 2 days ago 5 replies      
I spoke with my UberX driver about unionization the other day; it's honestly the only way forward for the drivers.

Without unionizing, the prices will continue to be cut and they will continue to lose out. Going even further, lots of people are talking about the job creation boom associated with the sharing economy. However, in the case of Uber and Lyft, that's short term. In 10-15 years, does anyone actually think people will be driving cars for Uber in San Francisco? With the way things are going, auto-driving cars will take their place.

Unions could lobby the state legislature to make auto-driving cab or ride services illegal without a driver in the car. Right now, they could make a serious case for that as a safety thing, since the technology is unproven. In 5-10 years, that won't be the case.

So yes, from the driver perspective, they have to unionize. If they don't they're finished sooner or later.

graeme 2 days ago 3 replies      
How is Uber considered "sharing economy"? If the term is to have any meaning, then it refers to people having spare capacity in an asset and allowing others to use it.

Airbnb: Have a room in your home or can leave your home, and allow others to rent it.

Taskrabbit: Have extra time you can sell

Say what you want about the ethics of either company, but users are free to negotiate prices.

A "sharing economy" car company would monetize spare capacity in cars and allows drivers/passengers post rates.

But that would be complicated. So Uber has adopted a fixed-rate model. Whatever its virtues, it's not sharing. It's just service for hire at a fixed rate.

ef4 2 days ago 3 replies      
Uber is not a monopoly and the barriers to entry are lower than people think.

Now that the idea is proven, there's nothing stopping more competitors from building local app driven services to compete with them.

If you can organize thousands of drivers, you shouldn't bother with a union. You should just have them collectively fund their own driver-owned app and compete with Uber instead of asking for crumbs.

johnrob 2 days ago 2 replies      
Part of the blame needs to be on the drivers, who are likely bad at estimating the true cost of operating a vehicle. Uber could probably do better at making this clear, but it's not really in their interest. Long story short: traditional taxi pricing seems expensive but is actually pretty fair once you account for costs and a decent wage.

Airbnb has given the sharing economy an inflated reputation, because they feed off a great business model: renting a scarce asset (land) with minimal marginal cost.

mahyarm 2 days ago 2 replies      
The biggest unmentioned thing I'm noticing is: when are these car-hailing apps going to start competing on how much of a cut they take? The marginal cost of their software is near 0%.

When will amazon-uber come up and say to uber: "Your [20% profit] margin is my opportunity".

coldcode 2 days ago 1 reply      
I'm no Marxist, but this is exactly what Karl Marx railed against: exploitation of the masses by capitalists. Of course a market economist sees this as an optimization of a market. If people suffer enough, what usually happens is change, sometimes violent change.
hywel 2 days ago 1 reply      
Uber must have a lot of drivers if "Arman, an Uber driver in LA" isn't specific enough to be identified.

It would be tragically ironic if this article resulted in his being 'deactivated'.

sebst 2 days ago 1 reply      
Not sure if that article is really about shifting risks from corporates to workers and weakening their protections.

It is more like a kind of gatekeeper discussion. Digitalization has moved the gatekeeper role from media dinosaurs to agile startups and companies.

Uber can deactivate its users, just as Facebook can reduce the organic visibility of fan pages or Google may kick you out of its index. The open internet that we have is still built upon this gatekeeper behavior.

In the Uber case that clearly affects individuals, like taxi drivers, but in the case of Facebook, Twitter and Uber that can affect large companies, too.

So, may I read this article also as a manifesto against "attention" monopolists?

crazypyro 2 days ago 1 reply      
The problem is when one technology company that uses this model reaches critical mass stage, they create a chicken-and-egg problem for anyone that wants to attempt to disrupt their market. You can't get contractors without clients and if you don't have contractors, clients won't want to wait. The problem has been getting worse with the rise of the internet and global connectivity as these companies can more easily expand into an entire industry and quickly dominate the technology side.

This domination of the tech side of old industries makes it especially hard for tech companies that want to disrupt the "old" companies because the market often assumes there is only room for one technological provider/disruptor per market service. This is similar to the first to market principle. If you get your service out there and generating new users before your competitor, you have an innate advantage over all the competition AND a great advantage over future companies using a similar model to yours. In the process of shutting out new, small tech companies, the competition is narrowed down to the "old" companies/interests in which the tech company has a clear new, tech-inspired advantage over, else why would they be called a disrupting tech startup?

So you have situations where 1 tech company, slowly morphing into a giant, effectively shuts out both the entrenched interests who cannot adapt fast enough to stem the hemorrhage of customers and the new interests who can't compete on either price or availability without a long ramp up and generally necessary venture capital. This creates a situation in which the contractors are taken advantage of, due to the lack of competition and complete monopoly that is slowly being acquired by the tech company.

golemotron 2 days ago 1 reply      
When I first saw Uber it seemed obvious to me that drivers would be squeezed. In the regulated hodge-podge of companies that Uber and Lyft are replacing, drivers didn't make very much. Uber and Lyft drivers got a "raise" when they signed up, but that came from Uber and Lyft's cash-on-hand and their rally to sign up drivers.

The fact is that there is no scarcity of people who can drive, so prices will be driven down to the point where a consumer can make a choice between a college student who is part-timing with a ten year old car and a full time driver with a brand new Prius. You might pay a small premium for the latter, but the former will be dirt cheap and they'll be paid next to nothing.

AirBnb is a sharing company that won't have this squeeze because the supply of properties is more bounded than the supply of drivers.

sagevann 2 days ago 1 reply      
Against 'Sharing Economy'.

Maybe I'm splitting hairs, but does it bother anyone else that the 'Sharing Economy' is not sharing at all? Renting your assets out for use when you're not using them is entirely different from giving them to someone else to use and expecting nothing in return.

abakker 2 days ago 0 replies      
From a recent Oxford Economics study sponsored by SAP/SuccessFactors:

"As the economy evolves to a state where nearly everything can be delivered as a service, companies are increasingly tapping external expertise and resources they need and on an as-needed basis to fill skills and resource gaps and to accommodate rapidly changing business and customer demands. That means more temporary staff, more consultants and contract workers, and even crowd-sourced projects. In fact, of those companies surveyed as part of Workforce 2020, 83 percent of executives say they will be increasing the use of contingent, intermittent or consultant employees. " (Source, Workforce 2020 study, Oxford Economics, SAP)(http://www.successfactors.com/en_us/lp/workforce-2020-insigh... - need to register to download)

I laughed when I first heard the term "contingent labor". We all know that companies like Uber, and a whole host of companies that rely on contractor drivers and delivery drivers, will be the first to buy self-driving cars. That Uber would tout their job-creating function is pretty low. They would love to cut out the drivers and run 24/7. The sad part is, engineers could probably figure out optimizing algorithms that would make the service better, and maybe even cheaper, if it didn't have human drivers.

I haven't reached the point yet where I give up on these kinds of services, but it's close. I feel bad that we aren't coming up with jobs as fast as we are coming up with ways to get rid of them.

chrismcb 1 day ago 0 replies      
If any other company did what Uber does, they would be branded as evil, and the mob would have pitchforks out and be calling for boycotts. They exploit their employees, they skirt and outright break laws, they appear to commit fraud to disrupt competitors. Yet because they perform a service that many here like, many sing their praises... and keep using the service.
golemotron 2 days ago 0 replies      
I just want to know who owns the cars. To me, it is beyond belief that each of the Uber and Lyft drivers I've had ponied up and bought a fresh new car to participate in "sharing."

There's more polish on an Uber vehicle than any car or cab in my neighborhood. It seems odd to use the word "sharing" for something that seems like ultra professional service - even on the low UberX end.

Is Uber offering financing?

remoteone 2 days ago 0 replies      
The original Jacobins were mostly libertarians and would presumably like Uber's model. The idea that they were anti-capitalists is wrong. In the US, "Jacobin" was used as a pejorative to describe Jefferson.

It's unfortunate people associate the Jacobins and the Revolution with Robespierre, when it was much more Thomas Paine-esque.

edgyswingset 2 days ago 0 replies      
It seems like this article is about Uber, not the sharing economy as a whole. We don't have enough data to say whether a "sharing" economy is viable or ethical. Companies like Uber that skirt laws and exploit their employees (or whatever they choose to call them) don't reflect well on it, though.
cheald 2 days ago 0 replies      
Maybe it's just my lack of understanding of the subtleties of the situation, but I'm not quite sure why people are surprised that this is happening. Without regulatory restrictions on the number of people who can drive, the supply of drivers exceeds demand. And when multiple companies start competing in the same market, their prices will drop. Given the supply of drivers and the need for companies to compete on fares, why would anyone expect pricing not to reach an equilibrium where margins are as low as drivers will tolerate?
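The equilibrium argument above can be made concrete with a toy model (all numbers hypothetical): each driver has a minimum fare they will accept, and with more willing drivers than rides, fare competition falls to the level of the cheapest drivers needed to serve demand.

```python
# Toy sketch: seven willing drivers, each with a (hypothetical) minimum
# acceptable fare, but only four rides demanded. Competition pushes the
# going rate down to what the 4th-cheapest driver will still tolerate.
driver_reservation_fares = sorted([12, 6, 15, 8, 9, 18, 7])
rides_demanded = 4

# The marginal driver (4th cheapest) sets the market-clearing fare;
# drivers demanding more than this are priced out of the market.
equilibrium_fare = driver_reservation_fares[rides_demanded - 1]
print(equilibrium_fare)  # → 9
```

The point is the commenter's: with excess driver supply, the clearing fare sits at the low end of what drivers will tolerate.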
auggierose 2 days ago 0 replies      
Sounds a lot like another sharing economy: prostitution.
bbd 2 days ago 0 replies      
According to Wikipedia, a sharing economy is a socio-economic system built around the sharing of human and physical resources. For the ridesharing industry, those resources are the drivers and their personal resources. Uber/Lyft/... intend to become the gateway, dispatch, or distribution layer to some extent. They are competing to "own" these resources and become a monopoly. But for the drivers and the customers, it will always be in their interest to keep the market closer to "perfect competition". Some third party may have to come in to mediate this conflict of interest -- maybe government, a union, or others.
driverdan 2 days ago 1 reply      
I would love to see a system like Uber or Lyft that used variable pricing and a bid system for drivers. Drivers could enter the minimum fare they're willing to work for and users could take it or leave it.
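A minimal sketch of how such bid matching could work (all names and numbers are hypothetical, not any existing API):

```python
def match_driver(drivers, offered_fare):
    """Pick the bidding driver with the lowest minimum fare that the
    rider's offer covers; return None if nobody bids that low.
    (Hypothetical sketch of the bid system suggested above.)"""
    eligible = [d for d in drivers if d["min_fare"] <= offered_fare]
    if not eligible:
        return None
    return min(eligible, key=lambda d: d["min_fare"])

drivers = [
    {"name": "A", "min_fare": 12.0},
    {"name": "B", "min_fare": 8.5},
    {"name": "C", "min_fare": 15.0},
]

print(match_driver(drivers, 10.0)["name"])  # → B (lowest bid under the offer)
print(match_driver(drivers, 5.0))           # → None (no driver bids that low)
```

"Take it or leave it" is the second call: if no driver's minimum is under the rider's offer, there is simply no match.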
dwg 2 days ago 1 reply      
The article frames the argument incorrectly. Some drivers on Uber may have full-time jobs and just drive for Uber to earn a little extra in their spare time. Those people who treat Uber itself as a job, however, have little to do with the "sharing economy". IMO this is just another labor dispute, not really an issue with sharing. On a side note, it's too bad that a company so young and apparently game-changing should be risking a labor dispute so early in its life.
eykanal 2 days ago 1 reply      
The article argues pretty strongly that Uber is a horrible company, which isn't really news to anyone. I'm missing, though, the leap from that argument to the idea that (quoting the article tagline) "Sharing economy companies like Uber shift risk from corporations to workers, weaken labor protections, and drive down wages." There are numerous other examples (Lyft, TaskRabbit, JobRunners, even Amazon's Mechanical Turk) where this model seems to work pretty well.
herbig 2 days ago 0 replies      
A simple Google search and I was able to find the Facebook profile of the guy whose last name was not included "out of fear of retribution".
elwin 2 days ago 5 replies      
> Uber is just capitalism, in its most naked form.

Considering that the workers own the means of production, this is a curious thing to say.

shkkmo 2 days ago 0 replies      
It seems to me that Uber and Lyft are working hard to poach each other's partner drivers. Perhaps one of them will actually make the leap and create a partner-driver union with a seat on the board and a say in policy and pricing changes? Wouldn't that help them poach the other company's drivers?

Maybe I'm being overly optimistic here...

lnanek2 2 days ago 1 reply      
Doesn't seem like a very fair or intelligent article. It basically says Uber is doing nothing good and just exploiting workers. But if I wrote an Uber clone app, no one would download it, or even hear about it. Then if someone did download it, there would be only one or two drivers. Then if they traveled to another city and opened the app, there would be no drivers at all.

We developers can often write an app that is technically competent, but it never pulls in the users. In the case of Uber, they not only have a successful app and brand that people know and download; the app also has a large network of drivers behind it to provide good service. The vast majority of drivers I've ever had gave me their personal business card, but I never use them, since I'd have to call and negotiate a time and a price - and the price they start at is often more than just booking them through Uber or SuperShuttle or whatever. So these guys aren't getting the fares on their own; Uber is clearly providing some benefit.

maceo 2 days ago 0 replies      
Labor organizers ought to use the same tactics to reach Uber drivers as Uber used to reach Lyft drivers.
golemotron 2 days ago 0 replies      
Speaking of the sharing economy, why isn't Airbnb setting prices and using surge pricing?
sharer 1 day ago 0 replies      
This sounds like a criticism of Uber's practices, not of sharing in general.
pasbesoin 2 days ago 0 replies      
Economists could do some solid and useful work by researching historical as well as current cost/benefit scenarios. So much of general public knowledge and reporting in this realm is focused on "prediction". How about painting some clear pictures of what has happened, why, and who paid and who benefited?

And as our knowledge, e.g. of long-term health effects, continues to grow, such topics remain open for further analysis and refinement.

Perhaps Ken Burns could put together a series on the economic history of the U.S., for example. (Perhaps those spreadsheets are more interesting when subjected to a slow pan... ;-)

Seriously, though. Last year I got sucked into yet another discussion of recent politics, and I suggested putting the whole situation under a comprehensive cost accounting analysis. For the other parties in the discussion, it -- a bit surprisingly, to me -- seemed to be something of a "lightbulb moment".

vdaniuk 2 days ago 3 replies      
I have an intuitive understanding that these "socialist" publications about the dangers and perils of the sharing economy are absolutely useless.

Either developed countries implement some variation of basic income or the social order will go down in flames. Unskilled labour, such as taxi driving, will be commoditized and become interchangeable. If anti-Uber initiatives and quasi-unions get more power, it will only increase the incentive to get self-driving cars on the roads. The automation-worker conflict can't be won by humans and can't be wished away.

Socialists (an outdated term for the society to come) should choose their battles much more wisely and concentrate their efforts on developing and implementing sustainable models of creating and redistributing wealth geared towards 90% of the population.

Anti-Uber sentiment and the likes aren't sustainable and will fail.

To Get More Out of Science, Show the Rejected Research
201 points by andrewl  2 days ago   47 comments top 15
schrodingersCat 2 days ago 3 replies      
I haven't read all the responses, so I hope I am not repeating anyone's insight. I'm finishing my PhD in biophysics, and I wanted to share my perspective from conversations I've had with my boss/PI and other investigators. In life science research, there are huge incentives not to repeat others' research and, furthermore, not to publish negative results.

The first disincentive comes from funding bodies: NIH et al. (NIGMS, NIEHS, ...) don't like to pay for you to do "someone else's science". If you manage to get a grant, and it comes out in a progress report that you repeated too much of other people's work, be prepared to get that funding reduced or cut.

Academic departments strongly discourage new hires from publishing negative results and/or repeating other people's work (mostly because this will likely decrease their chances of getting published and funded).

Academic journals hate to publish negative results, but seemingly have no problem publishing bad science (yes, Nature, I'm looking at you: http://retractionwatch.com/2014/09/11/potentially-groundbrea...). Early in my PI's career, she tried to publish a very important negative finding in a high-impact journal. The article's rejection was accompanied by a personal letter from the editor urging her to consider other journals for negative results.

Another barrier, quite honestly, is ego. While it may sound as if my boss is "one of the good ones", alas, she is not. On the occasions that I have asked to repeat another group's seemingly unbelievable results myself, I've been flatly denied on the grounds that this kind of work does not express the sort of originality of research produced by her lab. In other words, nobody wants to be known as "that lab", the naysayers of the field, those who would dare to question a colleague's ideas.

Finally, this leads me to the last barrier I have observed: scientific communities/societies. If you are one of the lucky few who end up publishing negative results of major significance, prepare to not be invited to dinner at next year's Society for X annual meeting. Yes, in many ways life science is stratified just like high school. You have the cool kids on track for the Nobel, the weirdos in their corner pushing the boundaries of what is possible, the "jocks"/career scientists who manage to turn a couple of tricks and some charisma into a living, and finally the tattle-tales who seem to piss everyone off with their negative results. These are HUGE oversimplifications/generalizations, but I really think all of these barriers need to be addressed in some way to fix life science.

tjradcliffe 2 days ago 6 replies      
In physics, we have been aggressively publishing negative results for decades. There is an entire field dedicated to such work, called "Physics Beyond the Standard Model" (spoiler: there isn't any). I've seen entire careers of extremely good experimentalists dedicated to "failing to reject the null hypothesis" at more and more stringent limits (neutrinoless double beta decay is a good example of this), and I have been to week-long conferences where every single paper was either a crazy theory or a negative experimental result.

I left the field 15 years ago because I didn't want to spend my career measuring zero, and presumably over time the practice will eventually dry up. The conditions for its existence seem to have more to do with having a highly trained group of people who have exhausted all plausible avenues of research in a given area and are left chasing a few scraps. In areas where there are still plenty of positive results to be had, the tendency will always be to emphasize the positive.

As a partial solution to this tendency, in my applied physics work, where I did get positive results, I tried to include a section in papers entitled "Things That Didn't Work So Well" that sketched failed approaches, to save other people the trouble of trying stuff that seemed like a good idea but didn't pan out... At the very least we should expect that from the average publication, and be suspicious of any experimental paper that does not include some description of the blind alleys.

mherdeg 2 days ago 1 reply      
Just gonna leave a quick link to my favorite general-purpose academic journal: http://www.jasnh.com/

The Journal of Articles in Support of the Null Hypothesis collects experiments that didn't work. Not very much volume, not a huge area of prestige, but there should be no shame in publishing there. The content is very diverse and pretty fun.

Titles like "No Effect of a Brief Music Intervention on Test Anxiety and Exam Scores in College Undergraduates"; "Parenting Style Trumps Work Role in Life Satisfaction of Midlife Women"; "Does Fetal Malnourishment Put Infants at Risk of Caregiver Neglect Because Their Faces Are Unappealing?"; "Is There an Effect of Subliminal Messages in Music on Choice Behavior?". Plenty more cool stuff.

kyro 2 days ago 2 replies      
I have always had a huge problem with the non-reproducible nature of medical research and its acceptance within the field. Being in medicine, every day you hear a physician citing some study from 15 years ago conducted on a sample size of 20/50/100 or so patients as a way to justify their clinical decisions. And it always worries me that we tend to put so much faith in these "landmark" studies, as if their findings are somehow legitimate and true because statistical significance was reached at least once, and seem to forget all of the misaligned incentives, the game-playing that goes on in research, and the data burying by the FDA, all of which almost certainly influence data, publication, etc. in a negative way.

I tend to take the majority of medical research with a grain of salt, for the reasons listed here and in the article, unless there's some very convincing meta-analysis or successfully reproduced evidence. Call me overly cynical, but the idea that we calculate a parameter or administer a drug or change our methods because of some article you read last month in NEJM is beyond bogus.

vanderZwan 2 days ago 2 replies      
There are many more issues distorting or distorted by academic publishing. For example, grants being awarded to the most productive members. That sounds fine at face value, but compare biologists working with ecological systems and biologists working with DNA. In the first case, collecting data by definition has to take decades; in the other, it's a matter of hours these days. The ecologist needs more money because the research inherently takes longer, yet the grants are more likely to be awarded to the DNA researchers because they publish more.

DISCLAIMER: I'm not a biologist myself - this is a second-hand story from a biologist friend so if the story doesn't hold up under close scrutiny, my apologies.

smaldj 2 days ago 2 replies      
Ben Goldacre has been rallying around this cause for years now. He has: 1. presented a TED talk (http://www.ted.com/talks/ben_goldacre_battling_bad_science?l...), 2. authored books (http://www.amazon.co.uk/dp/0007350740/), and 3. written numerous articles (http://www.theguardian.com/profile/bengoldacre) on the misrepresentation of statistics within the pharmaceutical community.

Probably the best summary post of his is this one: http://www.theguardian.com/commentisfree/2011/nov/04/bad-sci...

unclebunkers 2 days ago 2 replies      
I don't think they are aware of how much bad science is out there and how many people are trying to publish it. It wouldn't be a journal every month, it would be a phone book every week. Corporations would simply drown out real science with papers designed to support whatever narrative they were promoting.

Where this may make sense is when Watson grows up, and you can aggregate the volume of garbage to fill in the holes of knowledge. But that's more than a couple years off I suspect.

CognitiveLens 2 days ago 1 reply      
The suggestion in the article to pre-register trials is a good one, but I'm extremely wary of a more general effort to "publish rejected research" because there is a huge quantity of very poorly conducted research that really does not deserve publication. Most fields are already drowning in a sea of journal articles - few researchers are aware of all the published studies that might be relevant to their own work - and greatly increasing the quantity of published material will dilute the pool even further.

Replicating findings should be given higher priority, pre-registering methods and analyses should be encouraged/required, but it's important to stop short of "publish all the things".

netcan 2 days ago 0 replies      
We tend to think of science as an embodiment of modernity, and therefore modern. What it is, though, is an institution, similar to academia, and a fairly old one. Human institutions take time to change.

In any case, I'm pretty excited that it's coming under pressure to improve. Publication is really a method of communication and the revolution in communication of the last generation is a profound step change in human history, in my opinion. To use some terms that our great predecessors would have been comfortable with, science is a way to uncover the truth using light. Experimentation, debate, publication, review: these are all ways of making light.

Bringing modern communication into science and the collaborative opportunities inherent in better communication is a potentially very bright light.

  Nature, and Nature's Laws lay hid in Night.
  God said, 'Let Newton be!' and all was Light.
    -- Alexander Pope
Reproducibility and negative results are two parts of the same problem, and a fundamental one in science since the beginning. A better method for solving it (using a computer (: ) is probably coming. If not now, within ten years. Maybe twenty. Soon.

Shivetya 2 days ago 0 replies      
There is the big problem now of negative results not being published, mainly because the competition for federal (and, worldwide, government) funding does not lend itself to proving something "otherwise".


tim333 2 days ago 1 reply      
The process generally seems a bit broken, what with the concerns in the article and with Elsevier making everything available only to those who pay loads. You'd think you could have something like arXiv plus a rating service to figure out which research was good, worth reading, and should be rewarded career-wise. Something for a YC startup to fix?
denzil_correa 2 days ago 1 reply      
There are two issues here: (1) irreproducible research and (2) negative results. (1) is clearly a problem and must be dealt with by the scientific community and its processes. Let me talk about (2).

For negative results to be published, they too should follow the basic patterns of positive results: innovative and scientifically rigorous. There are always more negative results possible than positive results. A negative result should be something that people would intuitively expect to work but that doesn't. For example, an apple falling from a tree and floating in thin air isn't a negative result, because we all know it's supposed to fall to the ground via gravity.

Edit - We also have a Journal of Negative Results


mnw21cam 2 days ago 0 replies      
For an instructive insight into the dangers of only publishing positive results, see http://xkcd.com/882/
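The comic's point is the multiple-comparisons problem, and a line of arithmetic makes it vivid: test enough true-null hypotheses and a "significant" result is more likely than not.

```python
# If 20 independent true-null hypotheses are each tested at p < 0.05,
# the chance that at least one looks "significant" by luck alone is:
alpha = 0.05
tests = 20
p_false_positive = 1 - (1 - alpha) ** tests
print(f"{p_false_positive:.2f}")  # → 0.64
```

If only that one lucky result gets published, the literature ends up looking far more positive than the underlying experiments were.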
vedtopkar 2 days ago 0 replies      
A lot of this has to do with incentive structures, especially in life science research. The grant landscape is intensely competitive, and writing up results is incredibly time-consuming. There is little incentive to take the time to write up and submit negative results to relevant journals. If institutions and grant committees were to require this practice, it wouldn't be nearly as big a problem.
bmh100 2 days ago 1 reply      
It is tragic to imagine the amount of time wasted by repeating the unpublished experiments of others. It is even more tragic to imagine that someone might be able to gain a hidden insight by finding the gaps in various negative results, which might remain undiscovered for a long time otherwise.
Remove Ex-Mode from Neovim
176 points by stepanbujnak  2 days ago   118 comments top 18
linguafranca 2 days ago 0 replies      
This is one of the reasons I very strongly support the Neovim project. They are taking an incredibly sensible and pragmatic approach to modernizing Vim.

Yes, there will be controversies. There always are. But even collaborative open source projects need strong-willed leadership, or they cannot grow properly.

epmatsw 2 days ago 2 replies      
For those who, like me, had never heard of Ex-mode:https://en.wikibooks.org/wiki/Learning_the_vi_Editor/Vim/Mod...
otikik 2 days ago 1 reply      
This is like dropping support for Internet Explorer 6 on a website - the code gets cleaner at the expense of a minor group of users getting a worse experience. I think it is a good tradeoff.
unclebunkers 2 days ago 0 replies      
Good riddance. It's cruft. I've used it exactly twice, and never was it something I couldn't have done on the command line. It might be useful on Windows, I'm not sure? But if you're on a *nix, there really isn't any point.
Slackwise 2 days ago 4 replies      
While I don't particularly care for ex-mode, I think it's weird to be calling it Neo 'vim' without it.

If it's going to become a more radical departure from Vim and start omitting features as well as adding them, I would rather they change the name of the project to one that alludes to its Vim heritage rather than having a prefix that means 'new'. Is it a new version of Vim? Or an editor that started as a fork of Vim, but only has the good parts?

Edit: Removed my insult of the name 'Neovim'. Just going to state that I dislike it.

bryanlarsen 2 days ago 1 reply      
AFAICT, ex-mode is usually used for scripting, as an alternative to sed, awk, and/or perl.

So even if you have neovim set up as your "vi", you'll probably still have an "ex" available on your command line. It'll just be symlinked to legacy vim, not neovim.

bstar77 2 days ago 3 replies      
I've been using vim for 7 years and I've never even attempted to use this mode. Am I missing out?
dorfsmay 2 days ago 1 reply      
As an old "vi" user, I used it quite a bit both while editing and to test ex scripts. To me, by removing it, they will compete with emacs and the likes but not with vi/vim.
krick 2 days ago 6 replies      
Removing a pretty powerful feature just because the author of the fork doesn't use it sounds... well, actually the author of a fork can remove whatever he likes, but presenting neovim as "refactored vim with more features" isn't fair or even plausible in this case. And for what reasons? Oh, I see, it makes the code complicated. Did he think vim's codebase is scary for no reason? Removing all the code from vim would be the ultimate simplification in that case. It's not a popular feature? Well, I believe there may still be more vim users who don't use macros; maybe remove those as well? Or maybe just replace vim with nano?

I, personally, used ex-mode only a couple of times, so even if I use neovim I don't think I would care. But that just doesn't sound like the right thing to do.

dllthomas 2 days ago 0 replies      
Can they implement ex over the same msgpack interface they want for GUIs? The ability to switch back and forth between ex mode and visual mode is much lower priority than the ability to interface with my editor over something resembling ex (which itself is not super high priority, but this definitely falls in the "cons" column for neovim).

Edited to add:


"As for the ex command-line utility, that can easily be implemented as separate program that talks to nvim via msgpack-rpc"

Great! I have little to no objection.

nicwest 2 days ago 2 replies      
I have been using Ex mode for its REPL-like qualities while learning Vimscript; it's a useful feature if you are writing vim plugins or doing complex search/replace operations.

Will a similar conversation happen with Replace mode? I use that mode less than Ex mode, and it has a useful key binding (R) that could be recycled.

mcantor 2 days ago 2 replies      
Hadn't heard of nvim until today.

Can anyone sell me on this project? I looked at the homepage and README, but they're both pretty hand-wavey.

What problems does nvim solve for experienced vim users?

erikb 2 days ago 0 replies      
Although I have no idea how well or badly this will turn out, I think it's a very good idea to kill edge features and instead focus on handling the edge cases of core features better.
icambron 2 days ago 0 replies      
I probably feel the same way about this mode as my mom feels in a terminal window.
4ad 2 days ago 1 reply      
Ugh. Some awful distributions remove ed(1) by default; when that happens, I use vi's ex mode as a substitute. Now these people want to remove that too - amazing. I can only hope this vim fork never becomes the default vi on any system I am forced to use...
co_dh 2 days ago 1 reply      
As a developer, I think it's good to remove rarely used features to reduce code complexity.

It's also good to offer this kind of feature as a plugin, which keeps the core simple.

hardikpandya 2 days ago 0 replies      
Very well.
adamors 2 days ago 1 reply      
Removing "NeoVim" from the title removed the context as well. As it is, the title doesn't make any sense.
How to buy a tank: a BRDM-2 story
188 points by 8ig8  3 days ago   108 comments top 22
yaakov34 2 days ago 2 replies      
Here's a story from someone who actually did buy some military vehicles for the guys in the South-East:


He says he bought two BRDMs (like the one in the article) for ~$40,000 each, and a BTR-80 (a relatively new fighting vehicle) for ~$80,000. The difference is that the vehicles are fresh (new old stock, straight out of conservation) and they "come with all the accessories, if you know what I mean" (wink-wink, nudge-nudge, say no more).

Maybe the guys from the South-East really did buy their stuff at the surplus store, like Putin said.

According to the same source, a tank (with all accessories included?) would cost ~$200,000.

smoyer 3 days ago 2 replies      
For those wondering ... the (current) value of those 100M Belarus Rubles is $9492.16 US.
blhack 2 days ago 1 reply      
40 liters per 100 km ≈ 10 gal per 60 mi.

It's about 6 mpg. Not great, not as bad as I would have guessed.
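The conversion checks out; a quick sketch using standard unit factors:

```python
# Convert 40 L/100 km to US miles per gallon.
litres_per_100km = 40
LITRES_PER_US_GALLON = 3.78541
KM_PER_MILE = 1.60934

gallons = litres_per_100km / LITRES_PER_US_GALLON  # ~10.57 gal to cover 100 km
miles = 100 / KM_PER_MILE                          # 100 km is ~62.14 mi
mpg = miles / gallons
print(f"{mpg:.1f} mpg")  # → 5.9 mpg
```

So "about 6 mpg" holds up.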

ojbyrne 2 days ago 3 replies      
Not actually a tank. No tracks, no turret, no armor piercing gun.

EDIT: it does have a turret. Still not a tank.

EDIT2: the main armament is a machine gun. Totally not a tank.

rurounijones 2 days ago 1 reply      
> Thankfully, a nearby construction site loaned us their crane.

Hahahaaaaa, brilliant! The kind of thing you would never be able to do in a Western country - "Think of the insurance liabilities!"

markvdb 2 days ago 1 reply      
Quote from the article: "The driver had a nice sense of humour and at the first three police stops he was like:
- What's this?
- That's for our guys on the South-East."


redwood 2 days ago 0 replies      
I thought all you needed was to be a school district and they came free
Erwin 2 days ago 1 reply      
This would have made a great episode of "Top Gear: Russia". I imagined an excited Clarkson narrating this.
justintocci 3 days ago 1 reply      
The horror! Don't empty the tires on a seven-ton vehicle!!
JackuB 2 days ago 0 replies      
Well, in the EU it doesn't seem like a big issue to buy a tank: http://www.armytechnika.cz/nabidka/pasova-technika/tanky/tan...
_mulder_ 2 days ago 1 reply      
For those in the UK and Europe, check out http://www.russianmilitary.co.uk/for-sale.php?headers=land. You can buy similar vehicles, fully working, for a similar price, in fact.
iguana 3 days ago 3 replies      
Would it be possible to import one into the US?
bipin-nag 2 days ago 0 replies      
It's like the APC in GTA 4. Corvette certainly is no match for the BRDM-2.
JacobEdelman 3 days ago 0 replies      
One part of me was thinking, who has the time and money to go and buy a tank? But another part just kept thinking about how the US supposedly has a huge number of tanks it isn't using...
edem 2 days ago 0 replies      
This is actually not a tank. It is an Armoured Fighting Vehicle, but not a tank. It looks like an Armoured Car/Security Vehicle instead.
_RPM 2 days ago 1 reply      
What is the motive for buying a tank? I just don't see how it could be useful.
viggity 2 days ago 0 replies      
there was a guy that drove a similar looking vehicle in my college town. His license plate said "HUM THIS"
sssilver 3 days ago 3 replies      
I guess the Russians enjoy a different kind of freedom that we don't experience in highly regulated countries.
jsonmez 2 days ago 0 replies      
Tanks a lot for this article.
galago 2 days ago 3 replies      
For $9k one could do a lot of amazing things... travel the world - not just the tourist crap, but visiting amazing places... seed some kind of business... just help someone in need... or buy obsolete military equipment. I'm not sure I get it. ???
       cached 22 September 2014 02:11:01 GMT