hacker news with inline top comments    26 Sep 2016 Best
Akamai takes Brian Krebs site off its servers after record cyberattack businessinsider.com
649 points by bishnu  2 days ago   435 comments top 50
parshimers 2 days ago 4 replies      
Quite impressive. You know your blog is good when folks will try to take down a CDN to suppress what's on it. He's also had heroin mailed to him in combination with a swatting attempt before: http://webcache.googleusercontent.com/search?q=cache:gEjqPfc...
headmelted 2 days ago 7 replies      
Still not a good move for Akamai, though.

I get him speaking out for them about the hosting having been free, but Akamai is now the CDN that got bullied into kicking someone off their service against their own will.

Terrible PR, and that mud will stick in tech circles. Akamai folds under pressure.

I know it's a crude comparison, but we don't negotiate with terrorists for a reason.

zx2c4 2 days ago 6 replies      
Isn't this the point at which Cloudflare is supposed to gain a handful of PR points for putting him back online, pro bono, and then doing a write up on how effortlessly they handled the bandwidth with eBPF?
xarope 2 days ago 3 replies      
Here's a "philosophical" question with regards to the internet, and perhaps even its future. Given that a currently anonymous attacker, likely not a "state" player (i.e. not a governmental entity with almost unlimited resources), has managed to DDoS a single website, does this portend that unless there are significant changes to the way internet infrastructure works, we are seeing the demise of the WWW?

Kind of like a reverse wild-wild-west evolution, where the previously carefully cultivated academic and company site presence, gradually degenerates into misclick-hell? And the non-technical, non-IT savvy masses, in a bid to escape this all, end up in a facebook-style future where media is curated and presented for consumption (or perhaps in future, facebook-type entities end up with their own wild-wild-west hell)?

I have a strange feeling that we are seeing the decline of a city/civilisation; once you used to feel safe walking out at night, knew everybody in the neighbourhood, could leave your doors unlocked... and now, you don't dare to go down the lane to the left in case you pick up a nasty virus, and if you hear a knock on the door at night/email from DHL, you don't dare to even look through the peephole/preview the JPG!

betaby 2 days ago 4 replies      
I would like to see stats from Tier1/Tier2/IX for that. Krebs claims it's 665Gbit/s https://twitter.com/briankrebs/status/778404352285405188 Such an attack must be visible in many places, yet not a single major ISP reported it on a mailing list. Previous, smaller attacks were reported as 'slowing down' some regional ISPs. Perhaps ISPs got better.
panic 2 days ago 1 reply      
This recent talk about DDoS attacks is worth a watch if you're curious about why it's a hard problem to solve: https://www.youtube.com/watch?v=79u7bURE6Ss
WhitneyLand 2 days ago 2 replies      
This is bad PR for Akamai and a tactical error for them to boot Krebs even if they were providing free service.

To some, the implication will be "they couldn't handle it", so why should I trust the DDoS protection they are heavily promoting on their site?

At minimum they should comment on the situation; at best, restore his service and learn how to deal with high-profile clients.

owenversteeg 2 days ago 2 replies      
The first thing a lot of people are thinking (and saying) is "switch to Cloudflare". But there's another name I think needs to be said: OVH. OVH can withstand a Tbps-scale attack as far as I know, and they provide this to pretty much anyone. They have a pretty good interface and some of their plans are extremely cheap. They're also great at standing up for free speech, which I really appreciate.
flashman 2 days ago 3 replies      
> I likely cost them a ton of money today.

But more specifically, whoever launched the attack cost them that money.

Also, ha:

PING krebsonsecurity.com ( 56 data bytes

reustle 2 days ago 1 reply      
It would be interesting to try out some of these new p2p website technologies like IPFS/WebTorrent with these high profile sites who are frequently attacked.
xarope 2 days ago 0 replies      
I tried to get to an article on Krebs' site from a Bruce Schneier blog post, and couldn't, then bumped into this post in HN.

It's a pity Akamai booted him off; on the one hand, I can understand that it would significantly impact their SLAs to other customers, but on the other hand it's a shame they don't have a lower-impact network to re-host him on, and use this as a lesson in how to better mitigate such DDoSes...

geofft 2 days ago 0 replies      

"Before everyone beats up on Akamai/Prolexic too much, they were providing me service pro bono. So, as I said, I don't fault them at all."

josho 2 days ago 3 replies      
I'd love to learn more about these botnets. I wonder about things like: What's the average time that a compromised computer stays in this net? What is the typical computer (grandma's old PC running XP)? Do the ISPs ever get involved to kill bots running on their networks?
ChuckMcM 2 days ago 0 replies      
Wow, I figured that everyone that had hired vDOS would be irritated, but that is pretty impressive. Still, it says a lot about how effective he has been at rooting out this stuff; it's not as if the Tier-N infrastructure folks have managed to track it down with their resources.
VertexRed 2 days ago 0 replies      
These 'attackers' give Krebs more publicity than he would ever be able to generate himself.

It's also useful to point out that Krebs hasn't been the only target; half a dozen other large targets were attacked: http://www.webhostingtalk.com/showthread.php?t=1599694

mirekrusin 2 days ago 2 replies      
Isn't this whole thing a bit silly? I mean, what's the point? They're just spending their time giving him the best marketing possible; he'll double his audience/readers, no?
Futurebot 2 days ago 0 replies      
One thing about the platform-centric world we're in now is that this sort of attack doesn't have the blocking power it once did: you can mirror your content on Twitter, FB, G+, etc. and cross-link so people can still read your stuff. This makes the "denial" part pretty watered down; it's a wonder people even bother with these sorts of attacks anymore for non-services (i.e., for regular media material like text, photos, etc.)

Of course, maybe the goal is to deny someone ad revenue, but that seems awfully low-status for such a high-profile attack: "Yeah, we really got 'em! Denied 'em AD REVENUE for a whole week!"

zaidf 2 days ago 3 replies      
He should get a Facebook page and publish a copy of all his posts on it.
dmix 2 days ago 0 replies      
If you're curious what the source of the DDOS attacks are from, here is a recent one that hit OVH:

> This botnet with 145607 cameras/dvr (1-30Mbps per IP) is able to send >1.5Tbps DDoS. Type: tcp/ack, tcp/ack+psh, tcp/syn.


This is much higher than the attack on Krebs at Akamai, too. Welcome to the wonderful side effects of the totally insecure firmware of IoT...

ckdarby 2 days ago 3 replies      
DDoS attacks seem to be getting larger these days.

I've recently seen a ~200 Gbit/s hit us.

Does anyone have good resources around mitigation? I was looking at BGP flowspec, but was hoping that someone might have come across other tactics?

rabboRubble 2 days ago 1 reply      
Here's a link to the last post from his website. Google did not appear to have this cached:


redorb 2 days ago 1 reply      
Cloudflare should pick up the site for good advertising..
desireco42 2 days ago 0 replies      
I understand that this is burning bandwidth for Akamai, but seriously, taking into account what is at stake here, I think they need to do their share and continue to support Brian.
marmot777 2 days ago 1 reply      
Brian Krebs is a hero. Are Akamai executives cowards for dumping him? I'd like to add that law enforcement are heroes.

And it's honorable he wants to meet Fly in person, recognizing him as a human being. I haven't read it yet but I'm assuming the reference to 12-step hints that Fly's having some post alcohol binge regrets.

I'm sure alcohol makes it easier to hurt other human beings, which is why violent people are often drunk. I'd be ashamed of myself if I woke up realizing that I'd spent my life actively trying to harm other human beings for money, feeling no remorse until Karma (here defined as law enforcement officials) finally caught up with me.

mirekrusin 2 days ago 0 replies      
It's funny: after reading "record cyberattack", my mom would wonder how many poor people died, when what it actually means is that somebody downloaded images from a website many times.
sfifs 2 days ago 1 reply      
I'm wondering if the rising scale of these attacks & the seeming ease with which sites can be taken down will ultimately result in an "authenticated" internet - ie. you can't even connect without identity verification.

We already see publishing through FB Instant Articles etc. moving in that direction. On top of the current internet, to combat these types of firehose attacks, the only solution may be to take authentication one level deeper, into the connection level.

That of course sounds good to security agencies as that's the end of anonymity online.

jsjohnst 2 days ago 0 replies      
There are a number of factors that come into play (did the site use custom SSL, which edge locations were providing caching, etc.), but had Krebs been a normal paying customer, this could easily have been over a million dollar bill (if it was sustained long enough to alter his 95th-percentile bracket) in the cheapest case. If things like custom SSL are in the mix (which Akamai charges absurdly high prices for), or lots of traffic from more expensive POPs, or a lack of pricing commensurate with high-volume traffic commitments, the bill could've been 5-10x that amount or more.
atombath 2 days ago 0 replies      
It's kind of stupid to me that the massive and advanced CDN of Akamai would protect something as unimportant as a blog against such a major DDoS attack. If they were doing it pro bono, wouldn't the prudent action be to mitigate DDoSes up to a certain threshold and then actually assess the value of what you are protecting? A good lesson to have learned, I believe.

But no, they'll drop this client, who must have continually given them good referrals.

tuna-piano 2 days ago 0 replies      
Some are guessing the DDOS was because of this recent post of his, about a large DDOS network.


exolymph 2 days ago 2 replies      
It would be interesting if he started writing on Medium (not saying technically advisable, just interesting). I wonder if he'd ever consider trying that.
EGreg 2 days ago 1 reply      
Why don't we switch to a distributed network with a DHT like freenet? So many benefits, including not being able to take down content via DDOS.
saganus 2 days ago 1 reply      
So if Akamai can't hold an attack of this size, who can?

Or is it that they actually can hold it off but it costs too much money?

Igalze 2 days ago 0 replies      
Unbelievable: they enjoyed a year of free publicity from the association with him, and this is how they repay him. It's bad enough that they couldn't handle the attack, despite all the bragging about their multi-Tbps capacity...
nodesocket 2 days ago 0 replies      
Brian Krebs' wasn't a paying customer right? Akamai provided the service pro-bono. Perfectly acceptable for them to suspend service if it becomes more than trivial in terms of cost or it puts their paying customers at risk.
nodesocket 2 days ago 1 reply      
I've always wondered if your domain is under a http DDoS attack, couldn't you in theory update your DNS A record to another ip and take other servers down (maliciously)?
Globz 2 days ago 2 replies      
At this scale it must also cost a ton of money to carry out this attack; I wonder if there's a vulnerability we don't know about that lets them do this so easily?
dragonbonheur 2 days ago 1 reply      
Are there web servers or software that blacklist IP addresses that disconnect after a short time, and redirect them to a static page?
csomar 2 days ago 1 reply      
I'm really interested to read his blog now. Any way I can find a readable version of his blog posts?
snowy 2 days ago 0 replies      
krebsonsecurity.com is now resolving to localhost. I guess he doesn't want to give the DDoSers a target.....
EJTH 2 days ago 0 replies      
Too bad, I had some nice reads on his website. Hopefully this will only be temporary...
shshhdhs 2 days ago 0 replies      
So the attackers win..
ttam 2 days ago 2 replies      
so much for using a CDN to protect from DDoS attacks...
hetfeld 2 days ago 0 replies      
You'll be redirected in... never redirected.
known 2 days ago 2 replies      
Is this in accordance with Akamai's terms and conditions?
dragonbonheur 2 days ago 1 reply      
Who profits from this attack?
pitaj 2 days ago 3 replies      
tl;dr Akamai was hosting his site pro bono. His site was being DDOSed, which cost Akamai a ton of money, so they kicked him off since they were literally only losing money on the deal.
yAnonymous 2 days ago 0 replies      
Time to use Github pages.
ninja-wannabe-7 2 days ago 0 replies      
Should've used CloudFlare.
rasz_pl 2 days ago 1 reply      
I think it's time for some serious financial incentives for ISPs to get serious about routing (or rather, not routing) garbage. Financial fines for every DoS originating from your AS, or blacklisting if you are a repeat offender.
codedokode 2 days ago 1 reply      
Such attacks are possible because the Internet is decentralized. There is no way to tell peers that you don't want to get traffic from some AS.

And investigation is difficult because attacking nodes might be in different countries, in some of which DDOS attacks are not illegal.

Maybe it is time to start building international firewalls to protect local infrastructure?

Announcing TypeScript 2.0 microsoft.com
550 points by DanRosenwasser  3 days ago   306 comments top 23
k__ 3 days ago 4 replies      

- Simplified Declaration File (.d.ts) Acquisition (via npm)

- Non-nullable Types

- Control Flow Analyzed Types

- The readonly Modifier

Finally! I was waiting for this for months :)

edroche 3 days ago 2 replies      
I can't wait to use some of the new features in our production apps. Typescript is/was my bridge into javascript development, because IMHO javascript was a broken language for a long time, and I am not sure if I could have ever done as much as I have without its existence.

Non-nullable types, tagged union types, and easy declaration file acquisition are definitely the biggest wins for me with this release.
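The tagged union types mentioned above let the compiler narrow a union by switching on a shared literal field. A minimal sketch of the idea (the shape types here are invented for illustration, not taken from the release notes):

```typescript
// A tagged ("discriminated") union: `kind` is the literal field
// the compiler uses to narrow the union.
interface Circle {
    kind: "circle";
    radius: number;
}

interface Square {
    kind: "square";
    side: number;
}

type Shape = Circle | Square;

function area(shape: Shape): number {
    switch (shape.kind) {
        case "circle":
            // Narrowed to Circle here, so `radius` is accessible.
            return Math.PI * shape.radius * shape.radius;
        case "square":
            // Narrowed to Square here.
            return shape.side * shape.side;
    }
}
```

The nice part is that adding a new member to `Shape` makes every non-exhaustive `switch` a compile-time find-and-fix exercise rather than a runtime surprise.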

dested 3 days ago 1 reply      
This isn't really the place for it, but I really wish that both WebStorm and ReSharper used the actual TypeScript compiler for their tooling (like VS Code does) vs. hand-rolling their own. Now I have to wait until WebStorm 2016.3 to see the full benefit of 2.0, rather than getting it for free by just updating TypeScript. Not to mention the obscene number of TypeScript edge-case inconsistencies in the warnings, errors, and refactorings.
HeyImAlex 3 days ago 1 reply      
Typescript is such a neat project. The js ecosystem is vast and diverse and the typescript team has the unique job of figuring out how to make common dynamic patterns type-safe and convenient. Like... that's so cool. Every little pattern is like its own type-system puzzle, and there's no _avoiding_ the issue like a ground-up language can do, because their job is literally to type the JavaScript we write today.

Also, how much money is MS pumping into TS? A lot of OSS has one or two super-contributors that carry the project on their backs, but typescript has a small army of smart people with significant contributions.

bsimpson 3 days ago 1 reply      
Control-flow analysis? I think that was Flow's differentiating feature.

Obviously, there are still differentiators between the projects (like TypeScript including a known set of transpilers vs. Flow delegating to Babel), but I'm curious to know if they are converging on their core feature (e.g. how to do type-checking/static analysis).
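For what it's worth, the control-flow analysis in question is the kind of narrowing sketched below (a toy example assuming TypeScript 2.0 semantics; Flow has done the same for a while, so the projects do appear to be converging here):

```typescript
function pad(value: string | number): string {
    if (typeof value === "number") {
        // Control-flow analysis narrows `value` to number in this branch,
        // so number-only methods are available without a cast.
        return value.toFixed(2);
    }
    // After the early return, `value` is narrowed to string here.
    return value.trim();
}
```

Before 2.0, TypeScript only narrowed inside the guarded block itself; flow-based narrowing extends it through early returns, assignments, and so on.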

jameslk 3 days ago 1 reply      
One of the things I've been really needing is buried in the wiki:

> Previously flagged as an invalid flag combination, 'target: es5' and 'module: es6' is now supported. This should facilitate using ES2015-based tree shakers like rollup.

So now I can add rollup to my production build pipeline to remove dead code. Nice!
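A sketch of how the combination pays off (file contents hypothetical): with `module: es6` the compiler emits ES5 code but leaves `import`/`export` statements intact, which is exactly the static structure a tree shaker needs to see:

```typescript
// Per the wiki note above, tsconfig.json for this setup would contain:
//   { "compilerOptions": { "target": "es5", "module": "es6" } }

// Imported somewhere downstream, so rollup keeps it in the bundle.
export function used(): number {
    return 42;
}

// Never imported anywhere; rollup can statically prove this and drop
// it from the production bundle.
export function dead(): string {
    return "eliminated";
}
```

With `module: commonjs` the exports are assigned to an object at runtime, so a tree shaker can't safely prove `dead` is unused.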

qwertyuiop924 3 days ago 16 replies      
Man, I feel like the only one here who doesn't really like static types. I like dynamic typing just fine (it's crazy, I know: it must be the lisp influence). And if I want static typing, TS feels a bit intrusive. Flow is much better in this respect.

I also don't think JS is the root of all evil, and I use Emacs rather than an IDE (although we do have really good integration with regular JS, in the form of the famous js2-mode, and flycheck's linter integration). I mean, do you really need your IDE checking your types as you type? It's not that slow, and us Emacs users have M-x compile, so we can run our code, and then jump back to the problematic line when an error occurs, and I know IDEs have similar functionality.

Don't get me wrong: static typing can be good at times, and optional static typing and compile-time type analysis are useful tools, and I'm glad TS, Flow, and the like exist. But I always see a flock of comments saying that they couldn't possibly live without static types, thanking TS for taking them out of the hell of dynamism, and wishing there was something similar in Ruby/Python/whatever.

I don't really get that.

easong 3 days ago 5 replies      
I really wish that MS would release typescript as a collection of plugins for babel that would handle only one thing at a time (eg, the type system). Having my production build, es6 transpiler, type system, JSX compiler and so on (including a bunch of features I would rather didn't exist at all) all in one package feels like a failure of separation of concerns.

I understand that people find Babel's plugin ecosystem confusing and intimidating (it is), but I don't think a separate monolithic typescript that reimplements popular babel functionality is the answer.

sdegutis 3 days ago 4 replies      
> In TypeScript 2.0, null and undefined have their own types which allows developers to explicitly express when null/undefined values are acceptable. Now, when something can be either a number or null, you can describe it with the union type number | null (which reads as number or null).

Great news, but I suspect it's going to be pretty difficult to migrate large codebases to account for this properly, even with the help of `--strictNullChecks`. Sounds like days worth of tedious work analyzing every function.
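As a rough sketch of what migrated code ends up looking like under `--strictNullChecks` (the function names here are mine, not from the announcement):

```typescript
// Under --strictNullChecks, `null` is no longer assignable to `number`;
// the possibility has to be spelled out as a union type.
function firstScore(scores: number[]): number | null {
    return scores.length > 0 ? scores[0] : null;
}

function describe(scores: number[]): string {
    const s = firstScore(scores);
    // The compiler forces this check before `s` can be used as a number;
    // after it, control-flow analysis narrows `s` to plain `number`.
    if (s === null) {
        return "no scores";
    }
    return "first: " + s.toFixed(1);
}
```

The tedious part of migration is deciding, for every function that previously returned `undefined`/`null` implicitly, whether the union belongs in the signature or the call sites should be guarded.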

smithkl42 3 days ago 1 reply      
Huge TypeScript fan here - been using it since its 0.8x days. And I'm very interested in the new --strictNullChecks compiler flag. But I'm trying to implement that on our current codebase, and I'm coming to the conclusion that it's still a bit premature. There are a lot of very common and harmless JS (and TS) patterns which this breaks, and for which it's been difficult (for me at least) to find a workaround.

Turning on --strictNullChecks flagged about 600+ compiler errors in our 10Kloc codebase. I've addressed about half of those so far, and I can't say that any of them have actually been a real bug that I'm glad got caught. On the contrary, because of the weird hoops it makes you jump through (e.g., encodeAsUriComponent(url || '')), I'd say that our codebase feels even less clean.

oblio 3 days ago 0 replies      

* npm replaces typings/tsd

* non-nullable types (has to be switched on)

* better control flow analysis (à la Facebook Flow)

* read-only properties

Arnavion 3 days ago 1 reply      

  $ npm view typescript 'dist-tags'
  { latest: '2.0.3',
    next: '2.1.0-dev.20160922',
    beta: '2.0.0',
    rc: '2.0.2' }
Yet https://www.npmjs.com/package/typescript says

> typescript published 5 months ago
> 1.8.10 is the latest of 447 releases

leeoniya 3 days ago 3 replies      

  class Person {
    readonly name: string;
  }
couldn't they have just reused `const`?
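Presumably because `const` already means something different: it makes a variable *binding* immutable, while `readonly` constrains a *property*. A sketch of the distinction (my own example, not from the release notes):

```typescript
class Person {
    readonly name: string;

    constructor(name: string) {
        // Assigning in the constructor is allowed...
        this.name = name;
    }
}

const p = new Person("Brian");
// ...but reassigning afterwards is a compile error:
// p.name = "someone else";   // Cannot assign to 'name' because it is read-only

// `const` freezes the binding `p`, not the object it points to; writable
// fields on a `const` object can still change, which is why reusing the
// keyword for properties would have muddied its meaning.
```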

Roboprog 2 days ago 4 replies      
Does Typescript have a facility to support partial function application?

Say I have a function of arity 4, and want to bind / partially apply (some might say "inject") 2 arguments to it to create a function of arity 2. Can TS infer the types of the remaining arguments, or even that the result is a function at all?

I use partial function application MUCH more than classes in the JS code that I write. There just seems to be less need for all that "taxonomy"-related refactoring.

"Stop Writing Classes", "Executioner" in "The Kingdom of Nouns" (not!), and all that sort of thing :-)
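For what it's worth, TypeScript's generics can type a partial-application helper like the one below, inferring the remaining arity at the call site; whether inference succeeds in every case depends on the compiler version, and this helper is my own sketch, not a stdlib function:

```typescript
// Binds the first two arguments of a 4-ary function; the types of the
// remaining two parameters and the return type are inferred generically.
function partial2<A, B, C, D, R>(
    fn: (a: A, b: B, c: C, d: D) => R,
    a: A,
    b: B
): (c: C, d: D) => R {
    return (c, d) => fn(a, b, c, d);
}

function format(prefix: string, sep: string, key: string, value: number): string {
    return prefix + key + sep + value;
}

// kv is inferred as (c: string, d: number) => string; no annotations needed.
const kv = partial2(format, "> ", "=");
```

(`Function.prototype.bind` also partially applies, but its typings at the time lost the parameter types; a typed helper like this keeps them.)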

netcraft 3 days ago 0 replies      
I've got to find the time to play with TypeScript and get the setup figured out. I've had several false starts, always running into duplicate type definition issues and similar problems; looking forward to seeing if the new @types from npm help things.
equasar 3 days ago 2 replies      
Awesome release! Yet, I'm still waiting for 2.1 to finally bring async/await and generators.
mhd 3 days ago 2 replies      
Seems the spread operator (a roadblock for a lot of JS interop) will have to wait until 2.1.
zem 3 days ago 0 replies      
typescript is a nicely conservative set of extensions to javascript, but if you're willing to venture a bit further afield, redhat's ceylon [https://ceylon-lang.org/] is well worth a look. it has an excellent type system, and seems to be under active development, particularly in terms of improving the runtime and code generation.

i played with it for a bit before deciding that for my personal stuff i prefer more strongly ML-based languages, but if i ever have to develop and maintain a large team project i'll definitely give ceylon a very serious look.

macspoofing 3 days ago 1 reply      
Why choose "readonly" as the modifier for immutable properties? A little long, no?
ihsw 3 days ago 1 reply      
Still no async/await for ES5 build targets, but other than that a plethora of excellent new features.
grimmdude 3 days ago 2 replies      
I gave TypeScript a try but honestly thought it was more trouble than it's worth in most JavaScript applications/libraries. Maybe that's just the lazy person inside me, or maybe my projects aren't big enough to make use of its features.
ggregoire 3 days ago 2 replies      
TypeScript is so strange for me. Each time I read something about it, I want to give it another try... then I read some code and I just can't go further. I like JavaScript as it is, without types on every line of code.

Unlike most people on HN, I like JavaScript. I've been building web apps since 2011. I liked working with jQuery, then Backbone and Grunt, then Angular and Gulp. Now I'm working with React, Webpack and Babel (ES6/ES7), and writing web apps has never been such a pleasure. JavaScript in 2016 is really fine for me. And the common thread in my JS experience from 2011 to 2016 is that dynamic typing has never been a problem. I also worked with strongly typed languages like Java and C# for years, and I still prefer the flexibility of JavaScript.

So it's strange, because I admire TypeScript. The work accomplished by its team is really amazing. And it's so nice to see a single library reconcile developers with JavaScript. But on the other hand, I prefer keeping my JS without types, because it just works fine for me and the teams I've worked with.

velmu 3 days ago 0 replies      
Ripgrep: A new command line search tool burntsushi.net
704 points by dikaiosune  2 days ago   185 comments top 35
losvedir 2 days ago 2 replies      
Meh, yet another grep tool.... wait, by burntsushi! Whenever I hear of someone wanting to improve grep I think of the classic ridiculous fish piece[0]. But when I saw that this one was by the author of rust's regex tools, which I know from a previous post on here, are quite sophisticated, I perked up.

Also, the tool aside, this blog post should be held up as the gold standard of what gets posted to hacker news: detailed, technical, interesting.

Thanks for your hard work! Looking forward to taking this for a spin.

[0] http://ridiculousfish.com/blog/posts/old-age-and-treachery.h...

ggreer 2 days ago 7 replies      
I'm the author of ag. That was a really good comparison of the different code searching tools. The author did a great job of showing how each tool misbehaved or performed poorly in certain circumstances. He's also totally right about defaults mattering.

It looks like ripgrep gets most of its speedup on ag by:

1. Only supporting DFA-able Rust regexes. I'd love to use a lighter-weight regex library in ag, but users are accustomed to full PCRE support. Switching would cause me to receive a lot of angry emails. Maybe I'll do it anyway. PCRE has some annoying limitations. (For example, it can only search up to 2GB at a time.)

2. Not counting line numbers by default. The blog post addresses this, but I think results without line numbers are far less useful; so much so that I've traded away performance in ag. (Note that even if you tell ag not to print line numbers, it still wastes time counting them. The printing code is the result of me merging a lot of PRs that I really shouldn't have.)

3. Not using mmap(). This is a big one, and I'm not sure what the deal is here. I just added a --nommap option to ag in master.[1] It's a naive implementation, but it benchmarks comparably to the default mmap() behavior. I'm really hoping there's a flag I can pass to mmap() or madvise() that says, "Don't worry about all that synchronization stuff. I just want to read these bytes sequentially. I'm OK with undefined behavior if something else changes the file while I'm reading it."

The author also points out correctness issues with ag. Ag doesn't fully support .gitignore. It doesn't support unicode. Inverse matching (-v) can be crazy slow. These shortcomings are mostly because I originally wrote ag for myself. If I didn't use certain gitignore rules or non-ASCII encodings, I didn't write the code to support them.

Some expectation management: If you try out ripgrep, don't get your hopes up. Unless you're searching some really big codebases, you won't notice the speed difference. What you will notice, however, are the feature differences. Take a look at https://github.com/BurntSushi/ripgrep/issues to get a taste of what's missing or broken. It will be some time before all those little details are ironed-out.

That said, may the best code searching tool win. :)

1. https://github.com/ggreer/the_silver_searcher/commit/bd65e26...

minimax 2 days ago 1 reply      
> In contrast, GNU grep uses libc's memchr, which is standard C code with no explicit use of SIMD instructions. However, that C code will be autovectorized to use xmm registers and SIMD instructions, which are half the size of ymm registers.

I don't think this is correct. glibc has architecture specific hand rolled (or unrolled if you will lol) assembly for x64 memchr. See here: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86...

cwillu 2 days ago 0 replies      
I wish more people actually took steps to optimize disk io though; my current source tree may be in cache, but my logs certainly aren't. Nor are my /usr/share/docs/, /usr/includes/, or my old projects.

Chris Mason of btrfs fame did some proof of concept work for walking and reading trees in on-disk order, showing some pretty spectacular potential gains: https://oss.oracle.com/~mason/acp/

Tooling to do your own testing: https://oss.oracle.com/~mason/seekwatcher/

_audakel 2 days ago 0 replies      

> "I'd like to try to convince you why you shouldn't use ripgrep. Often, this is far more revealing than reasons why I think you should use ripgrep."

Love that he added this

jonstewart 2 days ago 2 replies      
Nice! Lightgrep[1] uses libicu et al to look up code points for a user-specified encoding and encode them as bytes, then just searches for the bytes. Since ripgrep is presumably looking just for bytes, too, and compiling UTF-8 multibyte code points to a sequence of bytes, perhaps you can do likewise with ICU and support other encodings. ICU is a bear to build against when cross-compiling, but it knows hundreds of encodings, all of the proper code point names, character classes, named properties, etc., and the surface area of its API that's required for such usage is still pretty small.

[1]: http://strozfriedberg.github.io/liblightgrep

bodyfour 2 days ago 2 replies      
It would be interesting to benchmark how much mmap hurts when operating in a non-parallel mode.

I think a lot of the residual love for mmap is because it actually did give decent results back when single core machines were the norm. However, once your program becomes multithreaded it imposes a lot of hidden synchronization costs, especially on munmap().

The fastest option might well be to use mmap sometimes but have a collection of single-thread processes instead of a single multi-threaded one so that their VM maps aren't shared. However, this significantly complicates the work-sharing and output-merging stages. If you want to keep all the benefits you'd need a shared-memory area and do manual allocation inside it for all common data which would be a lot of work.

It might also be that mmap is a loss these days even for single-threaded... I don't know.

Side note: when I last looked at this problem (on Solaris, 20ish years ago) one trick I used when mmap'ing was to skip the "madvise(MADV_SEQUENTIAL)" if the file size was below some threshold. If the file was small enough to be completely be prefetched from disk it had no effect and was just a wasted syscall. On larger files it seemed to help, though.

lobster_johnson 2 days ago 1 reply      
Very nice. Not only fast, but feels modern.

Tried it out on a 3.5GB JSON file:

  # rg
  rg erzg4 k.json > /dev/null
  1.80s user 2.54s system 53% cpu 8.053 total

  # rg with 4 threads
  rg -j4 erzg4 k.json > /dev/null
  1.76s user 1.29s system 99% cpu 3.059 total

  # OS X grep
  grep erzg4 k.json > /dev/null
  60.62s user 0.96s system 99% cpu 1:01.75 total

  # GNU Grep
  ggrep erzg4 k.json > /dev/null
  1.96s user 1.43s system 88% cpu 2.691 total
GNU Grep wins, but it's pretty crusty, especially with regards to its output (even with colourization).

cm3 2 days ago 1 reply      
To build a static Linux binary with SIMD support, run this:

 RUSTFLAGS="-C target-cpu=native" rustup run nightly cargo build --target x86_64-unknown-linux-musl --release --features simd-accel

dikaiosune 2 days ago 0 replies      
Compiling it to try right now...

Some discussion over on /r/rust: https://www.reddit.com/r/rust/comments/544hnk/ripgrep_is_fas...

EDIT: The machine I'm on is much less beefy than the benchmark machines, which means that the speed difference is quite noticeable for me.

echelon 2 days ago 1 reply      
Rust is really starting to be seen in the wild now.
Tim61 2 days ago 0 replies      
I love the layout of this article. Especially the pitch and anti-pitch. I wish more tools/libraries/things would make note of their downsides.

I'm convinced to give it a try.

krylon 2 days ago 2 replies      
When I use grep (which is fairly regularly), the bottleneck is nearly always the disk or the network (in case of NFS/SMB volumes).

Just out of curiosity, what kind of use case makes grep and prospective replacements scream? The most "hardcore" I got with grep was digging through a few gigabytes of ShamePoint logs looking for those correlation IDs, and that apparently was completely I/O-bound, the CPUs on that machine stayed nearly idle.

chx 1 day ago 0 replies      
I am not sure how excited I am ... I readily accept this to be faster than ag -- but ag already scans 5M lines in a second for a string literal on my machine. Not having to switch tools when I need a recursive regexp is win enough to tolerate a potential 0.4s vs 0.32s everyday search.
pixelbeat 2 days ago 1 reply      
Thanks for the detailed comparisons and writeup.

I find this simple wrapper around grep(1) very fast and useful:


TheGrassyKnoll 16 hours ago 0 replies      

  Mega-Thanks to the authors of:
   grep (longtime user)
   ack (nice innovation)
   ag (outstanding work)
   rg (outstanding work)

h1d 2 days ago 0 replies      
"if you like speed, saner defaults, fewer bugs and Unicode"

Warning - Conditional always returns true.

fsiefken 2 days ago 2 replies      
nice, but does it compile and run on armhf? I don't see any binaries
xuhu 2 days ago 1 reply      
Why not make --with-filename default even for e.g. "rg somestring" ? That seems like it could hinder adoption since grep does it and it's useful when piping to other commands.

Is it enabled when you specify a directory (rg somestring .) ?

reubano 18 hours ago 1 reply      
Nice writeup! Any chance you'll support macports for those of us who never jumped ship to homebrew?
qwertyuiop924 2 days ago 0 replies      
That is really cool. Although I think this is a case where Good Enough will beat amazing, at least for me (especially given how much I use backrefs).
AlisdairO 1 day ago 0 replies      
Superb work, and a superb writeup. It's really great to see such an honest and thorough evaluation.
visarga 1 day ago 1 reply      
Great tool. Does there exist a faster implementation of sort as well? I once implemented quicksort in C and it was faster than Unix sort by a lot, I mean, seconds instead of minutes for 1 million lines of text.
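On the sort question: one well-known factor is that sort(1) does locale-aware collation by default, and plain byte-wise comparison (what `LC_ALL=C sort` gives you) is much cheaper. A minimal sketch of that byte-wise line sort, assuming the input fits in RAM:

```python
def bytewise_sort_lines(data: bytes) -> bytes:
    """Sort lines byte-wise, like `LC_ALL=C sort` (input must fit in RAM)."""
    trailing = b"\n" if data.endswith(b"\n") else b""
    lines = data.split(b"\n")
    if trailing:
        lines.pop()  # drop the empty element after the final newline
    lines.sort()     # lexicographic byte comparison, no locale overhead
    return b"\n".join(lines) + trailing
```

Note that real sort(1) also handles inputs larger than memory via external merge sort, which an in-memory quicksort comparison ignores.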
petre 2 days ago 2 replies      
Does it use PCRE (not the lib, the regex style)? If not, ack is just fine. My main concern with grep is POSIX regular expressions.
wamatt 2 days ago 1 reply      
On a somewhat related note.

There does not appear to be a popular indexed full-text search tool in existence.

Think cross-platform version of Spotlight's mdfind. Could there be something fundamental that makes this approach unsuitable for code search?

Alternatively, something like locate, but realtime and fulltext, instead of filename only.

justinmayer 2 days ago 2 replies      
Anyone have any suggestions regarding how to best use Ripgrep within Vim? Specifically, how best to use it to recursively search the current directory (or specified directory) and have the results appear in a quickfix window that allows for easily opening the file(s) that contain the searched term.
pmontra 2 days ago 1 reply      
It looks very good and I'd like to try it. However I'm lazy and I don't want to install all the Rust dev environment to compile it. Did anybody build a .deb for Ubuntu 16?
hxn 2 days ago 2 replies      
Looks like every tool has its upsides and downsides. This one lacks full PCRE syntax support. Does one have to install Rust to use it?
spicyj 2 days ago 5 replies      
rg is harder to type with one hand because it uses the same finger twice. :)
libman 2 days ago 1 reply      
Tragically the news that LLVM is switching to a non-Copyfree license (see copyfree.org/standard/rejected) has ruined everything... Nothing written in Rust can be called Free Software. :(
chalana 2 days ago 2 replies      
I'm never sure whether or not I should adopt these fancy new command line tools that come out. I get them on my muscle memory and then all of a sudden I ssh into a machine that doesn't have any of these and I'm screwed...
serge2k 2 days ago 1 reply      
> We will attempt to do the impossible

Oh well. Waste of time then.

kozikow 2 days ago 2 replies      
1. Ag has nice editor integration. I would miss emacs helm-projectile-ag

2. PCRE is a good regexp flavor to master. It has a good balance of speed, power and popularity. In addition to Ag, there are accessible libraries in many languages, including Python.

I think it would be good if everyone settled on Pcre, rather than each language thinking they will do regexps better.

zatkin 2 days ago 0 replies      
>It is not, strictly speaking, an interface compatible drop-in replacement for both, but the feature sets are far more similar than different.
wruza 2 days ago 1 reply      

  ...
  $ rg -uu foobar   # similar to `grep -r`
  $ rg -uuu foobar  # similar to `grep -a -r`
I knew it. The name is absolutely ironic. I cannot just drop it in and make all my scripts and whatever scripts I download work immediately faster (nor is it compatible with my shell typing reflexes). New, shiny, fast tool, doomed from birth.

Bike manufacturer sees huge reduction in delivery damage by printing TV on box medium.com
542 points by Someone  1 day ago   221 comments top 32
charlieegan3 1 day ago 2 replies      
Related: https://www.atheistberlin.com/study - Shoe company finds relationship between lost packages and package branding.
GigabyteCoin 21 hours ago 3 replies      
Popular Mechanics found different results when they did a similar study [0].

>"One disheartening result was that our package received more abuse when marked "Fragile" or "This Side Up." The carriers flipped the package more, and it registered above-average acceleration spikes during trips for which we requested careful treatment."

[0] http://www.popularmechanics.com/technology/reviews/a6284/whi...

analog31 22 hours ago 4 replies      
I should paint a TV on myself for when I'm riding my bike in traffic.
aluhut 1 day ago 9 replies      
It seems like the people who are responsible don't care anymore. Maybe it's the wages, the pressure or whatever. It looks like it's about time to remove even more humans from the equation.
lttlrck 1 hour ago 0 replies      
Bikes are not packed particularly well. The top and bottom staples pull out quite easily and could pop out under reasonable twisting. They really should be strapped.

I bought a bike from Jet and it arrived damaged, the box popped open, parts had fallen out. Returned that (trouble free which was nice) and ordered another from Amazon instead.

Amazon have a checkbox to have large deliveries that would normally not be in an Amazon branded box placed in one at no extra cost. Checked that box knowing it would act as a sacrificial outer layer.

delinka 1 day ago 2 replies      
For Science: Let's see if LG's willing to have some TV boxes printed with bicycles...
has2k1 1 day ago 1 reply      
This is analogous to Batesian mimicry [1].

[1] https://en.wikipedia.org/wiki/Batesian_mimicry

Darthy 14 hours ago 3 replies      
I see a possible solution here using technology:

Senders should add a small $1 "black box" recording acceleration data, and shipping companies should be able to query for a certain package and a certain timestamp, which employee was accountable at that moment.

Then when you receive a broken package, the black box tells you the timestamp when it was thrown to the ground, you tell that to the shipping company, which then finds the employee at fault and gives him/her a warning/sacks him/her.

WalterBright 1 day ago 3 replies      
Unfortunately, the boy who cried wolf will apply if this is more widely adopted, and then pity the poor folks who order TVs.
younghak 5 hours ago 0 replies      
In Korea, the magic phrase is 'contains kimchi' and you are guaranteed safe delivery. All hell breaks loose when kimchi leaks; boxes get wet and smelly, and kimchi stains don't come off easily, so delivery people take extra measures to prevent it.
massysett 1 day ago 2 replies      
I wonder if the number of stolen boxes (either while in shipment or when left on porches) went up?
userbinator 1 day ago 4 replies      
I wonder what sort of damage these bikes are receiving, because they're designed to be ridden by a person... a TV is definitely far more fragile.
jaimebuelta 13 hours ago 0 replies      
The details of shipping are quite interesting. Martin Guitars (a well-known brand) removes absolutely every reference to their brand or the fact that they are guitars or musical instruments from the external packaging, while keeping an internal box with their logo, etc... a box within a box.

They started doing so after having issues with "disappearing" guitars in transit (though with all the new tracking systems this is probably more complicated nowadays).

Their packaging is also quite protective, as you can imagine with a musical instrument...

xir78 1 day ago 1 reply      
Boeing puts a picture of a Lamborghini on their first class seats while in the factory in Everett to convey the cost of them -- amazingly they do cost about as much as one too.
hanoz 1 day ago 2 replies      
Printing a wolf on the box would get them some careful handling too, for a while...
0xmohit 13 hours ago 0 replies      

Now I hope that some car manufacturers would introduce new models that look like a TV thereby resulting in fewer accidents and lost lives.

satysin 1 day ago 1 reply      
Wonderful (part) solution. I love things like this that tap into the mind so subtly.
losteverything 20 hours ago 0 replies      
The "never get damaged" parcels are the live chicks we deliver (and the return empty).

If someone told me they improved shipping damage that much with a simple change to the outside, I would say they have poor parcel design and strength to begin with.

Daily I see idiotic mailers with improper packaging. Examples: diaper packs, normally sold on a grocery shelf, shipped with the open space on the underside (Amazon is famous for this) left exposed.

Liquids that spill over other unprotected parcels and slugs.

LPs with soft cardboard.

Anything sent from an Etsy source. It's a serious joke.

The article's claim is very questionable from my perspective. Even the worst package gets through unscathed. I deliver coconuts from Hawaii with only a stamp and Sharpie address.

The greatest factor in the proper safe arrival of a parcel is NOT the delivery BUT THE PACKING. Take that to the bank.

Nanite 14 hours ago 0 replies      
Pretty decent piece of stealth marketing! Catchy blog posting about a fragile goods shipping hack, raises brand awareness for a company, it's products and its mission.
kylehotchkiss 21 hours ago 1 reply      
When I ordered my bike from the UK (Evans Cycles is awesome), it shipped via DHL. They're pretty high on the meh scale. The box had double corrugated cardboard and the bike was packed for war. I'm sure it wasn't handled gently. That seems like the expectation with shipping. Super cool this hack is! Maybe one day they'll try a picture of a glass chandelier too.

This all said, 90% of the boxes I get from Amazon via UPS are in perfect condition - it's remarkable how well they handle small packages.

There's a national geographic show called "ultimate factories" that has an episode called "ups worldport". Super fascinating. I recommend it!

backtoyoujim 20 hours ago 0 replies      
I have a conspiracy theory that the entire delivery infrastructure in the US (all the U's, P's, S's, and F's) has been infiltrated by Scientology/Chuck E. Cheese.

I'm still fine tuning it.

williwu 23 hours ago 0 replies      
Genius idea. A similar idea applies to the iPhone's anonymous shipping packaging and plain envelopes for credit cards -> reduced theft.
slovette 1 day ago 1 reply      
This does not surprise me. To effect change, you don't need to control the person, you just need to control their perception of reality.
kalefranz 21 hours ago 0 replies      
This makes me smile inside. Hacking at its best.
gnipgnip 20 hours ago 1 reply      
I wonder if this works when flying :P
santoshalper 1 day ago 0 replies      
What a great idea, but this really feels like the kind of thing they should have kept quiet about.
kardashian007 17 hours ago 0 replies      
Handwritten address, "Do the right thing" and "family sentimental heirlooms" might also work.
seesomesense 23 hours ago 0 replies      
Time to replace the humans in the logistics chain with robots.
logicallee 1 day ago 2 replies      
True, but they could reduce damage even more by putting a picture of a stained glass window and giant letters "HIGHLY FRAGILE DELICATE STAINED GLASS WINDOW! HANDLE WITH EXTREME CARE!!" on it. That would certainly reduce damages further.

The problem is that it isn't one (a TV). Why would someone feel mortified if they accidentally drop a packaged bicycle from 2-3 feet (typical carrying height) when a fully assembled bike can be dropped from 2-3 feet, and this is packaged, so it should be even safer. On the other hand no one would feel free to drop a packaged LCD TV from even half a foot because people know it includes a giant pane of essentially glass, and they know that there are limits to what packaging can do.

So, yeah, by failing to meet expectations when it comes to packaging a bicycle, they can reduce damages by writing on it that it's a TV instead. All right.

But isn't this still them not meeting expectations exactly? If they write on it that it's a delicate stained-glass window, that would still be not meeting expectations. If the handler is the one with unreasonable expectations or behavior (if 2-3 feet isn't a reasonable drop height and should be considered a failure), then maybe educate the handler with some writing or warnings on the packaging.

Isn't the real issue here that the handler's expectations of bike packaging do not match the bike packaging's actual characteristics? So you could tackle it head-on by writing care instructions.

Alternatively, the article says only a 70-80% reduction in damages was achieved. Maybe by lying and saying it is delicate stained glass, handle with extreme care, they could up that to a 95% reduction. I guess I've just saved them 15% of their former damages (an even higher percentage of their remaining damages) with this one neat trick.

Theodores 1 day ago 1 reply      
Most things arrive fully assembled. With that TV you just plug it in and that is it. You don't have to adjust the HDMI sockets with a screwdriver or double check the earth lead is correctly bolted on. You don't have to get a spanner out to adjust that five degree tilt to one side in the base.

But with a bicycle, it is an entirely different story. The seat is not centered on the rails, nice and level. Much has to be assembled and that is understandable, however, the brakes and the gears rarely work as well as Shimano intended. The bike is part assembled and the consumer is left to do the rest. Rarely is the finished result as polished as the fit and finish that the TV arrives with.

If a bicycle manufacturer just got that final assembly together so that only seat height adjustment was needed, with nothing else needing a double check, then they might be able to sell to the end customer properly. As it is, there is no quality in the final delivery; bikes sent to the customer will be far from expertly 'tuned'.

orblivion 1 day ago 3 replies      
Clever, but seems ethically questionable.

Why do the shippers care about breaking a TV? Presumably there are repercussions, such as an insurance plan. So why don't those repercussions just apply to bicycles? If they're fined for enough bikes being broken, they should probably learn that they need to be more careful than they thought, right?

EDIT: Toning down my choice of words.

An Important Message About Yahoo User Security yahoo.tumblr.com
506 points by runesoerensen  3 days ago   341 comments top 54
nodesocket 3 days ago 22 replies      
You'd think this would affect the stock price, but currently YHOO is only trading down 8 cents (-0.18%). I honestly see this all the time. What sounds like really horrible news for a company does not affect the price. However, some random analyst or reporter who works at the Mercury Star Sun Inquirer writes a negative article or downgrade and the stock tanks. Doesn't make much sense.
supergirl 3 days ago 8 replies      
"state sponsored actor". I wonder how they decided that. did the hackers plant a flag inside yahoo's data center? or is any attack originating from outside US now considered state sponsored? of course, we will never see any proof of this.

also, did it take them 2 years to discover this breach? that's bad. or, do they just announce it now? that's worse.

nostromo 3 days ago 8 replies      
"The data stolen may have included names, email addresses, telephone numbers, dates of birth and hashed passwords but may not have included unprotected passwords, payment card data or bank account information, the company said."

What's the difference between "may have" and "may not have" in this context?

It seems like they're saying anything could have been stolen.

newscracker 3 days ago 13 replies      
Moving email addresses out of one provider and creating new ones elsewhere is more difficult than moving phone numbers (in the latter case, number portability could help, if available).

What exactly can an average/common end user do for such incidents, even if it is to avoid them in the future? I use different passwords across accounts, with all of them being somewhat complex or very complex.

I have looked at a few different paid service providers before, but they're all very expensive. Expensive for me is anything that charges more than $20 per year, or worse, charges that amount or higher for every single email address/alias on a domain. My use of email for personal purposes is writing about a handful of emails in an entire year, but on the receiving side, I get a lot of emails - most of them somewhat commercial in nature (like online orders, bank statement notifications, marketing newsletters I've explicitly signed up for, etc.). I also have several email addresses, each one used for a different purpose and with some overlap across them.

It seems like web hosting has become extremely cheap over time whereas email hosting has stagnated on the price front for a long time.

throwawayReply 3 days ago 1 reply      
Is there some kind of "statute of limitations" thing that means we're suddenly finding out about a string of breaches from 2012 now?

Or is there some group that is trading breach data privately that have themselves been compromised so that data coming from them is finally leaking out?

I'm now more worried about the 4 year delay in these things coming to light than the effect of the breaches themselves given how many times I now show up on haveibeenpwned.

jonbarker 3 days ago 1 reply      
Yahoo has recommended that users "check their accounts". What exactly would they be checking? Doesn't a compromised account look the same as an uncompromised account from a user perspective?
jap 3 days ago 1 reply      
I wonder if "500M" is a silly way of saying all user account details were stolen.
munk-a 3 days ago 1 reply      
Does the incredible delay of this announcement count as being grossly negligent?

Maybe they're trying to devalue their stock prior to the merger? Similar to what Caris did: http://www.law360.com/articles/684195/caris-employees-get-16...

mey 3 days ago 1 reply      
I found it rather perverse that the login and account recovery screens of Yahoo! have 3rd party ads running. Doesn't give me any confidence in their security (in addition to the breach).
soci 3 days ago 1 reply      
Wait, Yahoo believes the data was stolen by a "state-sponsored actor"!

If they have such evidence, why don't they present it? To me it looks like a tactic to put the focus on the "naughty" government instead of themselves.

Anyway, it will be an interesting read (if ever written) how Yahoo discovered the data had been stolen and by whom (which state?).

Also, if "the state" is really behind this, who will they prosecute till death? I bet it's the hacker :(

sambe 3 days ago 0 replies      
One of the more convoluted announcements I've seen. I have to be aware that Yahoo officially communicates via tumblr.com, and check two different announcement pages which may not yet be up (converting time zones). When I clicked one of them I had to find the notice "in my region", which had only one option (not my region) and linked to another (non-yahoo?) site with an image of a document. I can't imagine all 500M users will jump through these hoops and remember when they last changed their password.
St-Clock 3 days ago 1 reply      
"encrypted or unencrypted security questions and answers"

This is bad right? Like, worse than your hashed password and your mailing address.

The only good thing is that if I ever implement security questions, I'll remember Yahoo! and how it could end up in the wrong hands.

leesalminen 3 days ago 1 reply      
I wonder how many dummy accounts from the mid-2000s of mine were included in that.

I was born in 1990, and my insecure online behavior from 2000-2005 scares me. Hopefully HaveIBeenPwned gets their hands on this so I can scan for my teenage usernames.

geraldcombs 3 days ago 0 replies      
I wonder what percentage of those 500 million accounts correspond to real human beings. Much of the spam on the sites I run come from what appear to be fake @yahoo.com accounts.
mschoebel 3 days ago 0 replies      
FWIW... I just logged in to my Yahoo Account and removed the security questions. Just to be sure. I had already changed my password a few months ago when first rumors of this came up. I'm pretty sure that the option to remove the security questions wasn't there back then.
pavornyoh 3 days ago 2 replies      
Can anyone elaborate as to why this is being announced two years later? Why now and not when it happened?
zymhan 3 days ago 1 reply      
Does this mean Verizon would assume liability if the purchase closes before a lawsuit/fine is brought?
Spydar007 3 days ago 0 replies      
>Yahoo believes that information associated with at least 500 million user accounts was stolen

That tops the HIBP list for the most stolen.[1]

[1] https://haveibeenpwned.com/PwnedWebsites

mtgx 3 days ago 0 replies      
> We have confirmed that a copy of certain user account information was stolen from the company's network in late 2014 by what it believes is a state-sponsored actor.

GCHQ? Although GCHQ seems to have hacked them even earlier than that.


Fuzzwah 3 days ago 0 replies      
"We are recommending that all users who haven't changed their passwords since 2014 do so."

And then don't include an easy link to where users can do that? Great work yahoo.

I found my way to http://profile.yahoo.com but apparently from my machine at an AU University: "profile.yahoo.com's server DNS address could not be found"

heroiccyber 3 days ago 7 replies      
You can verify if your credentials have been compromised at https://heroic.com
slicktux 3 days ago 0 replies      
Interestingly, my account is not part of the compromise and my friends' are; I can confirm this because they received a message about the compromise and I did not... I asked them how long they've had their accounts and they said for about a year, whereas I have had my account for about 5 years. Interesting, no?
yAnonymous 2 days ago 0 replies      
Is it possible that certain companies leave their user data open for attacks to illegally share it with third parties?

At this point, it should at least be considered. There's obviously quite a bit of incompetence at Yahoo, but still...

norea-armozel 3 days ago 0 replies      
I think my only concern is what data I had attached to my Yahoo account (for Flickr) which I think they required me to tie to a phone number. So I guess that means I can expect people trying to abuse that phone number as a point of identification in identity theft attempts. Oh joy.
elorant 3 days ago 2 replies      
There's one thing I don't understand with this state sponsored actor. Say you are an oppressive regime and you target activists who use Yahoo Mail to publish your dirty laundry. Why on earth would you hack half a billion accounts just to get access to a few dozen ones? Doesn't make sense. You attract too much attention. A thing like that would never go unnoticed. If on the other hand you've found some exploit and target specific accounts which are numbered in the tens, say hundreds, you can easily get away with it.

BTW, I don't know if it's coincidental but just yesterday I received a notification from Yahoo to disable access to Mail from third party apps.

Alex3917 3 days ago 0 replies      
This is where the phrase "adverse material fact" comes into play.
sofaofthedamned 3 days ago 1 reply      
Yahoo will survive this regardless of their 'state sponsored' hand waving or not.

The day the same happens to Google or Facebook will be very different.

aaronkrolik 3 days ago 2 replies      
How is it that 200M user accounts are worth only $1800?
inestyne 3 days ago 0 replies      
The only reason they announced it was to avoid being guilty of an actual crime after being acquired and not announcing it beforehand.
itsnotlupus 3 days ago 0 replies      
The timeline seems close to that HN item: https://news.ycombinator.com/item?id=8416393 (dead link, cache at http://archive.is/PpCth )

It could be entirely unrelated.

Fej 3 days ago 0 replies      

More relevant than ever.

bpyne 2 days ago 0 replies      
Oddly, I changed the password for 2 Yahoo accounts only a month ago. I have to wonder if Yahoo filtered for people who recently changed passwords before designating me as a person who might be affected.
xorgar831 3 days ago 0 replies      
Yahoo won't let you enable two-factor auth with a Google Voice phone number. Oh well, time to delete my account.

Here's the magic link: https://edit.yahoo.com/config/delete_user

badthingfactory 3 days ago 0 replies      
I have an account that was definitely compromised. I had completely forgotten this account existed and never used it to sign up for anything else. I was rather surprised when I realized someone had that email and password.
finid 3 days ago 0 replies      
The system was hacked in "late 2014", but we only found out about it now, in 2016. That's almost 2 years.

Whoever the "state-sponsored" hacker is probably has lost interest in that access.

halayli 3 days ago 0 replies      
What evidence do they have that it's a state sponsored attack?
esaym 3 days ago 0 replies      
Long live yahoo...oh well. I've never used them for email, I only had an account for yahoo IM, which they just killed. I have no use for them at all now.
ashitlerferad 2 days ago 0 replies      
Interesting that they are moving beyond passphrase authentication towards an "Account Key". I wonder how that works...
ianai 3 days ago 1 reply      
Sometimes I play with the notion of a future without asymmetric information. If it is to be known it is known by all.
chris_wot 3 days ago 0 replies      
Will this be reported to the Australian Privacy Commissioner? I'm assuming it affected Yahoo Australia.
nradov 3 days ago 3 replies      
It seems bizarre that Yahoo would use a post on tumblr.com to make such an important announcement. From what I've seen Tumblr has become mostly a wasteland of worthless garbage in the past few years and no one takes it seriously any more. Isn't this the sort of thing that ought to be on the yahoo.com home page from a PR crisis management standpoint?
thereisnogadi 3 days ago 0 replies      
Wow. If you scroll up and look at the header, that's some very sexy UX. Good job, Yahoo!
willow9886 3 days ago 2 replies      
Yahoo's login experience has been horrible lately. This must be a contributing factor.
therealmarv 3 days ago 0 replies      
MD5 for password hashing? Seriously? That was waaaaaaay outdated even in 2014.
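For anyone wondering why that matters: MD5 is designed to be fast (and old deployments were typically unsalted), so attackers can test candidate passwords at enormous rates. A hedged sketch contrasting it with a deliberately slow, salted stdlib KDF (PBKDF2 as a stand-in for bcrypt/scrypt; the round count is illustrative):

```python
import hashlib

def md5_hash(password: bytes) -> str:
    # The fast, unsalted style of hash in question: cheap enough
    # that attackers can test candidates at enormous rates.
    return hashlib.md5(password).hexdigest()

def slow_hash(password: bytes, salt: bytes, rounds: int = 100_000) -> str:
    # PBKDF2 iterates deliberately to slow brute force; the per-user
    # salt defeats precomputed (rainbow-table) attacks.
    return hashlib.pbkdf2_hmac("sha256", password, salt, rounds).hex()
```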
tzakrajs 3 days ago 0 replies      
Yet another "state actor hit us with 0days" statement.
realraghavgupta 3 days ago 1 reply      
Using 2-factor authentication comes in handy in situations like this.
smaili 3 days ago 1 reply      
Anyone happen to know if this was the largest hack in history?
beezle 3 days ago 3 replies      
Anybody know what hash they use at yahoo for account passwords?
backtoyoujim 3 days ago 0 replies      
I should have never created that rocketmail account.
eternalban 3 days ago 0 replies      
local area askHN:

what to do if one had an ancient account that was abandoned but has one's name on it?

[p.s. forgot password, etc.]

shruubi 3 days ago 0 replies      
It's obviously so important that they posted it to Tumblr instead of on the yahoo website itself...
luckydata 3 days ago 0 replies      
why don't they die already?
justinv 3 days ago 2 replies      
"by what it believed was a "state-sponsored actor.""
mapletree 3 days ago 0 replies      
I purchased my own credentials from the hackers just to make sure nobody else gets them. So much easier than coming up with another password.
It Costs $30 to Make a DIY EpiPen technologyreview.com
504 points by Halienja  3 days ago   385 comments top 45
googamooga 3 days ago 12 replies      
Medicine is probably the second best place, after the military, to observe how greed and corruption are literally killing people.

I'm living in Russia and recently have been involved in the medical devices market here. The local market for cardiology stents (little springs inserted non-surgically into your heart to remove artery clogging and prevent heart attack or stroke) has long been occupied by three US companies. The Russian company I invested in made their own stent design and launched a production factory in Western Siberia. Our prices are three to four times lower than prices for the same class of stents from the US competitors, and the quality is the same or higher. We fought our way to 15% of the market over the last two years.

I have to say that almost 99% of all stents in Russia are installed at the cost of the state medical insurance - every person in Russia is covered by this insurance, and that insurance is simply funded by the state or local budget. The budget allocated to this kind of medical support is fixed, so if the yearly budget is 100M rubles (our local currency) and the cost of a procedure and a stent is 100K rubles, then you can install stents in 1000 patients in one year. If the price goes down four-fold, then it will be 4000 patients. And this stent procedure is a life saver in the true sense of the word. So basically, with our stents we can save four times more people's lives, which on the scale of Russia would be tens of thousands of people.

Here enter greed and corruption. One of the US companies approached one of the most powerful Russian oligarchs with good ties to the government. He lobbied for a government decree stating that this US company will be the single supplier for cardiology stents starting Jan 2017. So all hospitals and clinics are obliged to buy stents only from them, at the price they set. Tens of thousands of Russian people will die each year because of greed and corruption - and we can't do much about it.

suprgeek 3 days ago 7 replies      
The hacker groups are doing what they can to expose Mylan's (the EpiPen maker's) greed, which is laudable. What is really needed is also an explainer on why a bit of govt. leverage (socialism if you will) is good in medicine pricing as well.

Mylan is a really great user of buying legislation. They leveraged their 90% ownership of the epinephrine auto-injector market such that

[1] It lobbied hard to ensure that all parents of school-going kids (or taxpayers) paid for EpiPens by making it into a bill that politicians could easily justify.

Once the bill passed and schools all over the country purchased these by the boatloads, then they just kept raising the price over and over and milking the profits.

When it got too much and they could not ignore the patient backlash they have turned again to purchasing legislation..

[2] Now they want to make it so that the patients do not see the copays - instead every one suffers by paying more for health insurance.

[1] https://www.opensecrets.org/lobby/billsum.php?id=hr2094-113

[2] http://www.nytimes.com/2016/09/16/business/epipen-maker-myla...

With scumbags like these, is it any wonder that the USA has the most expensive healthcare system in the world?

SteveGregory 3 days ago 5 replies      
Probably too late to really contribute, but either way -

I feel like health is a degenerate case of free markets. In any free market, the price is set by the consumers assessing their utility for the goods or services purchased. In cases of pencils, productivity software, energy, raw materials, etc, consumers compare the methods of resolving the need, or at baseline the cost of not addressing the need.

In healthcare, there are lots of situations where the cost is X dollars vs. literal death. Of course, death is not an acceptable alternative, so an acceptable X ends up being very, very high for the treatment. Most people would pay their life savings to treat any life-threatening ailment.

I honestly believe that free markets setting prices is good for most industries, but I cannot see it working in situations where the benefit categorically supersedes any amount of money.

It seems like we need to either rethink IP law surrounding healthcare, or have a monopsony (single payer or something else) setting prices.

This is a hard thing for me to resolve, as somebody who normally likes a libertarian approach.

anaolykarpov 3 days ago 4 replies      
The title could be rephrased as "Cheap guys risk the lives of thousands of people by promising savings of a few bucks".

The problem is not with the "greedy corporations", but with the poorly designed legislation regarding intellectual property rights.

The state created the protectionist environment in which companies can become bullies and be sure that they won't be exposed to any economic competition.

Of course, a complete lack of IP laws would deter companies from investing in research, but overly strong IP laws have the same effect. Why would a company risk its money and do research once it has found a cash cow that can be milked for a long time, with the state guaranteeing it?

fernly 3 days ago 1 reply      
TIL that an "Autoject" is an inexpensive self-injection tool commonly used by diabetics[1] that can be carried safely and used easily. It can be loaded with insulin, or with any drug whatsoever. The OP article describes using it to inject epinephrine, stating that

> A 1mL vial of epinephrine costs about $2.50... Doses range from 0.01mL for babies, to 0.1mL for children, to 0.3mL for adults.

In other words, if your doctor will give you a prescription for the drug itself, you could assemble three epipen equivalents for less than $100.

[1] https://smile.amazon.com/AJ-1311-Autoject-Injection-Removabl...
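A back-of-envelope sketch of the comment's arithmetic (the ~$30 per autoinjector is an assumption taken from the article's headline figure, not a quoted price):

```python
vial_cost = 2.50      # $ per 1 mL vial of epinephrine (from the quoted article)
adult_dose = 0.3      # mL per adult dose
injector_cost = 30.0  # assumed cost per autoinjector, per the article's $30 figure

adult_doses_per_vial = int(1.0 / adult_dose)    # 3 full adult doses per 1 mL vial
three_pen_cost = 3 * injector_cost + vial_cost  # three loaded "EpiPencil" equivalents

print(adult_doses_per_vial, three_pen_cost)  # 3 92.5
```

So the "three epipen equivalents for less than $100" claim holds under that assumed injector price.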

mikekij 3 days ago 2 replies      
(preparing for an onslaught of down votes, but here we go.)

It's awesome to see a 'hacker' building a $30 EpiPen. But looking only at the materials cost for a medical device ignores the millions (sometimes billions?) of dollars spent on R&D, IP licensing, and (perhaps most significantly) regulatory compliance.

The pricing system for devices and drugs is definitely screwed up in the US, but Mylan's 36% gross margin on the devices doesn't seem criminal.

Perhaps they're padding their cost numbers. And perhaps there are IP shenanigans at play that I'm not aware of. But one needs a thorough understanding of the total costs to invent, develop, achieve regulatory clearance for, and market a medical device in order to assess the morality of the pricing.

sp527 3 days ago 9 replies      
This doesn't feel like the right platform for DIY. When someone needs an EpiPen, it's because they might be dying. Presumably, a large and well-capitalized organization will have tested their device extensively and can offer better guarantees about it actually working (I should stress presumably). There are a lot of ways in which the hacker mindset can be beneficial to society, but this particular application feels like an ethical gray area.
zaroth 3 days ago 1 reply      
So "Four Thieves Vinegar" says their DIY auto-injector works probably almost as well as the EpiPen. Sign me up! </s>

Are we really complaining about an "onerous regulatory process" for a device which untrained laymen need to be able to use in a high stress emergency situation?

I'd like to see Four Thieves Vinegar fund the necessary trials to prove their device is safe, gain FDA approval, bring the device to market, and defend themselves against the inevitable lawsuits, and then tell us how they can sell the device with less than 80% gross margins. The marginal cost of making one more pill or one more device is almost entirely irrelevant, and any article that tries to make a case for a medical product being overpriced based on COGS isn't worth reading IMO.

The price for EpiPens went up because no one else was able to make a competing product that didn't malfunction or deliver the wrong dose of epinephrine.

justinlardinois 3 days ago 5 replies      
I feel like a lot of the commenters here didn't read the article.

> Four Thieves Vinegar have created and uploaded the plans for the simple version, called the Epipencil. Also spring loaded, the parts are gathered over the counter. The epinephrine will still need to be acquired with a prescription.

This still involves an FDA-approved drug obtained through normal channels; the DIY part is the injector.

Creating DIY medical drugs would certainly be something to be concerned about, but I don't see the problem with DIY medical tools.

ncavet 3 days ago 3 replies      
"Hacking" medicine doesn't really work. See Theranos.

First, epinephrine degrades when exposed to light, so your EpiPencil may be ineffective from the start.

Second, when measured, parents took 2.5 minutes to fill a syringe with epinephrine, which is not fast enough in an emergency.

The price gouging is terrible. There are cheaper alternatives but Epi-pen has the most well known brand.

People buy Tylenol and Advil, not aspirin and ibuprofen.

Drs have appealed to ban drug advertising. Medicine should have no place in capitalism.

Source: http://www.aaaai.org/ask-the-expert/effect-of-light-on-epine...

agotterer 3 days ago 1 reply      
Michael gave a fantastic talk at HOPE this year titled "How to Torrent a Pharmaceutical", where he made Daraprim on stage for only a few cents. It's definitely worth watching: http://livestream.com/internetsociety3/hopeconf/videos/13073...
CodeWriter23 3 days ago 0 replies      
If you are persistent, you can get CVS to order the "generic" epinephrine injector for $5.


contingencies 3 days ago 2 replies      
I just checked, and here in China you can buy 10 x 1mg doses of epinephrine (which is actually adrenalin) over the counter/online for ¥4 (~$0.60 USD) ... https://world.taobao.com/search/search.htm?sort=price&_ksTS=...

That means the epinephrine (adrenalin) itself is essentially free. What do they charge in the US/Europe/Australia?

enoch_r 3 days ago 1 reply      
The hack here is simple: this group did not get FDA approval for their device. Greedy corporations have repeatedly tried to make money by competing with Mylan with cheaper Epipens, but they've been prohibited from doing so by the US government (not so in Europe, where the unfortunate Europeans suffer eight greedy corporations trying to drive prices down).


bayesian_horse 3 days ago 0 replies      
It may be better than having no epinephrine at hand. But other than that, there are a lot of problems: How sure are you that it will work when you need it? Can you fill the syringe cleanly enough? Will the epinephrine in the syringe degrade, or worse, develop a bacterial growth?

It may be a better idea to keep a syringe+vial combination on hand, prescribed by a doctor. Less convenient, and you need to learn how to use it (and preferably teach those close to you), but this may be a whole lot safer. The downside, of course, is the problem of self-administration when in anaphylactic shock.

snow_mac 3 days ago 1 reply      
I've had to use an EpiPen twice in my life. Oh my gosh, the terror in your heart when you're self administering it is real. I will never forget the experience for the rest of my life. I don't want to trust some hack with no FDA approval in that moment.

I don't give a damn if the product is $50 or $500. I will buy it; it's saved my life many times. It's not awesome to see a hacker point out that the materials are cheap.

jpalomaki 3 days ago 0 replies      
Wouldn't it be better to focus on the reasons why there is no viable competition for this company even though the business seems to be extremely profitable?

The article links to another that lists some of the issues: https://www.statnews.com/2016/09/09/epipen-lack-of-innovatio...

This points out (among other things) that the design is patent protected and FDA rules make it difficult to come up with other designs that don't violate the patent. It is also mentioned that the devices need to go through long and expensive regulatory process.

chillacy 3 days ago 4 replies      
Regarding the original price hike which motivated this project, I saw an interesting perspective on the matter: https://www.youtube.com/watch?v=RoMlxVimwiU

Now granted Shkreli is a controversial figure, but basically drug companies are businesses, and if you sort of detach yourself and look from a business perspective and value-based pricing, Epipen competes with the ER, and $600 is a bargain vs an ER visit.

And of course his ultimate conclusion is that maybe life saving drugs are more like water and power than cell phones and wine? Maybe the government should get involved in making generic drugs available.

jMyles 3 days ago 1 reply      
Somewhat tangential: I surmise that this title will be subject to editing by HN staff, but I think that "Hacker group creates $30 DIY Epipen to expose corporate greed and save lives" is an exemplary post title for HN and want to see more like it.
a-no-n 3 days ago 1 reply      
Just saw more Epipen Congressional testimony. The actual unit cost of the Epipen (whether branded or "generic") is around $67 USD. Assuming that this cost were not overly inflated beyond actual overhead and unit costs, in order to be sustainable, a reasonable retail price without distributors would be $134 USD... with distributors $200-238.

That said, the more downward pressure from competitors (commercial or nonprofit projects), the better for customers; especially where a monopoly existed, it's rational for customers to band together and attack excessive hegemony.

Enterprising folks need to jump on this and sell it as a kit (with or without the medication).
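The markup chain described above can be sketched as follows (the 2x "sustainable retail" multiplier is the comment's own assumption; the distributor ratios are derived from its $200-238 range):

```python
unit_cost = 67.0               # claimed actual unit cost of an EpiPen, USD
retail_direct = 2 * unit_cost  # the comment's "reasonable retail price without distributors"

# Implied distributor markup over the direct retail price, from the $200-238 range:
dist_markup_low = 200 / retail_direct
dist_markup_high = 238 / retail_direct

print(retail_direct)                                          # 134.0
print(round(dist_markup_low, 2), round(dist_markup_high, 2))  # 1.49 1.78
```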

iamflimflam1 3 days ago 0 replies      
"3.1.24 The health economics model assumes that people who receive adrenaline auto-injectors will be allocated two epinephrine pens (EpiPens) with an average shelf-life of 6 months. Each auto-inject EpiPen costs the NHS £28.77 (British National Formulary 60). This equates to £115.08 per person per year."


From 2011 - but I can't imagine the cost has gone up by that much.
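The quoted NICE figure is just two pens replaced twice a year (the 6-month shelf life), which a quick check confirms:

```python
pen_cost = 28.77           # GBP per EpiPen to the NHS (BNF 60, as quoted)
pens_allocated = 2         # pens carried per person
replacements_per_year = 2  # 6-month shelf life means pens are replaced twice yearly

annual_cost = pen_cost * pens_allocated * replacements_per_year
print(round(annual_cost, 2))  # 115.08
```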

smsm42 3 days ago 0 replies      
$30 is way too much; the production cost of an EpiPen is probably in the single-digit dollars, maybe even less. But that's not the point - nobody thinks an EpiPen costs $300 to produce.

The system is built in a very specific and deliberate way in the US - there are patented drugs that are expensive, by design, and pharma is supposed to finance R&D and FDA testing and so on from that money, instead of financing it through taxation, or venture investing, or other means. Now, one can claim that maybe Mylan is abusing the system and the money that was supposed to finance R&D is instead financing lavish salaries or whatever. And one can claim the system should not be built like this at all but should be built some other way. Maybe.

But completely ignoring the whole design and saying "ah, we've discovered it costs $30!" is useless. Yes, it actually costs even less than that to manufacture, way less. That's obvious. The reason Mylan charges more is not that it costs a lot to manufacture; it's that this is how the patented-drug market in the US works. If one wants to change it, one needs to understand how it works. It's not corruption, it's the design of the system.

wodenokoto 3 days ago 0 replies      
There are 3 epipens in this article.

The non-generic at ~$350

The generic at ~$150

And the homemade at ~$30

The homemade one is equivalent to the generic, and the difference between generic and non-generic is not clearly explained, so let's talk about the price of the generic EpiPen.

According to the article there is difficult bureaucracy to navigate and very large liability should an EpiPen fail. On top of that there's distribution and offices that need a cut or to be paid for.

Is 5x markup that horrible?

robomartin 3 days ago 0 replies      
There's an abysmal difference between hacking something together and manufacturing a reliable product at scale that people can bet their lives on. Everything, from R&D to the cost of lawsuits, FDA trials, and regulatory frameworks, makes these comparisons dumb and ridiculous.

I've been manufacturing products for thirty years. It's never simple for good products, not even a cup of coffee at Starbucks.

eggy 3 days ago 2 replies      
I am a hacker at heart, and I believe there are definitely some shady dealings with government and industry lobbyists, however, I like to look at things on both sides, since there is always another side.

Truth is, even if it was more than one hacker in this collective making the 'EpiPencil', they must have designed it, procured materials, and fabricated it all in less than an hour for the $30 figure to hold up against even minimum wage.

This does NOT speak to QA/QC, testing, insurance, FDA approval, legal costs or even their hacker lab overhead in equipment and energy to make one, let alone hundreds of thousands of these potentially life-saving products.

My guess is that $150 per EpiPen is close to what you need to fulfill all of the above requirements and then some. That's far from the $300-plus price hikes, so it was good they did this as an exercise in putting Mylan and the government in the spotlight. Bravo, really!

My belief is that it is not solely big bad corporations, but big bad government AND big bad corporations. Just look at the moral integrity of our two current POTUS candidates.

I am trying to become more financially literate in my old age, and I am trying to teach my children likewise, since financial illiteracy is a deterrent to poor people improving their lives, or hackers making a worthwhile dollar in conjunction with learning and exploration.

I tell my kids to think twice when they reactively say or answer:

"ASAP" - when is that? Point to a date on the calendar;

"It will take 5 min." - It never takes just a minute or five;

"It only cost $8 for the materials." - How much is your time worth? Learning is a benefit that cannot be quantified too easily, but for other matters, you need to value your time.

throw2016 3 days ago 0 replies      
The expected market response should have been a flood of alternatives at 1/10 or even 1/100 the price, since the base ingredient costs pennies. But these 'ideal' market scenarios that are in the public interest rarely come through.

What we often get instead are completely self-serving and crafty efforts, in collusion with 'NGOs' and lobbyists, to leech taxpayer subsidies and 'force' products onto institutions via legislation.

This pattern is repeated so often and so widely that it's predictable. Also predictable is the framing of it as a capitalism-vs-socialism issue to trigger and distract while the corruption continues unabated.

The problem is that healthcare is critical. If your checks and balances and idealised system do not work, you risk letting people feed on others' desperation and create demons. And these sociopaths then multiply within your society, killing it from within. This is the biggest argument for socialized healthcare.

pingec 3 days ago 0 replies      
heironimus 3 days ago 2 replies      
There are so many reasons the EpiPen costs $318, corporate greed being one of them. One huge reason that no one talks about is that they rarely actually sell for $318. It's priced at that, but insurance companies negotiate a lower (unpublished) price in most/all insured purchases. It's only those with no insurance, or who are buying without insurance, that pay the full price.

This is true for nearly all drugs, medical equipment, and medical procedures in the US. This is one of the huge problems with our system: everyone puts a huge price tag on their stuff knowing that insurance will negotiate it down.

To me, this seems like the biggest problem here.

amalcon 3 days ago 0 replies      
This is an interesting approach. I've been wondering about refilling the things -- once the injectors I have expire, I may disassemble one to see if I can work that out.

As long as the needle hasn't been used, and the refill is the same dosage as it came with, I'd expect it to be just as effective as a new injector.

(Disclaimer: I am not a doctor, even if I were you're not paying me, this is not medical advice)

This may be legal to do commercially as well, since you're not manufacturing new devices that could infringe the patent. Sorting out FDA issues would be the only hard part (though likely very hard).

(Disclaimer: Nor am I a lawyer, and you are still not paying me, this is not legal advice)

wyager 3 days ago 1 reply      
> corporate greed

Can we please give blame where it's due? http://slatestarcodex.com/2016/08/29/reverse-voxsplaining-dr...

bonoboTP 3 days ago 1 reply      
Is the EpiPen that well known among Americans? I (a non-American) had never heard of it until all the news about its price in the US.

Does it get prescribed more often in the US than other countries? Why didn't I know about the existence of this thing?

a3n 3 days ago 0 replies      
Other big grown-up companies have tried to make a precise injector and not done nearly as well as the EpiPen. It's not just a needle in front of a spring. (I haven't read the article; it won't load atm.)
KaiserPro 3 days ago 0 replies      
It does cost $30 to make an EpiPen.


It also needs to be proved to work, which is rightly arduous. Unlike in (most) software, you can't just fix it later. Defects kill. There needs to be a high bar of evidence to prove that:

A) the drug works

B) It doesn't cause your face to melt off

C) its reliable.

All of this is costly. Now, you have two choices: nationalise your drug R&D and recover the cost through a uniform charge spread over all drug classes or through general taxation; or sweep away all your regulations on drug prices and start again. (Like, why the fuck is Medicaid not allowed to collectively bargain on price? That's taxpayer subsidy right there...)

In the UK there is a thing called NICE, which is semi-autonomous and run by people who understand stats (i.e. not politicians). Its job is to evaluate the cost, and crucially the effectiveness, of all drugs prescribed within the NHS.

Is the drug actually effective? (Sure, it's 50% more powerful, but it costs 190% as much; just double up on the old one, etc.)

does it provide value for money?

is it safe?

are all the questions they ask. If a drug fails the tests it's either written out of the guidelines or, more unusually, banned.

ChuckMcM 3 days ago 0 replies      
Pretty neat. I wondered why you couldn't just use an autoinjector like diabetics use (answer: you need a larger-diameter needle). Still easily doable, and it's all off-the-shelf gear made by medical device manufacturers and drug makers, so it's not so much "DIY" as "repurposing existing medical gear to be more versatile".
KKKKkkkk1 3 days ago 0 replies      
How much does it cost to get and maintain FDA approval for marketing the EpiPen? What are the financial costs of the legal risks you are taking by selling it to patients? In other words, if it's so lucrative, why isn't anyone else doing it?
repiret 3 days ago 3 replies      
That's like saying pirated software exposes the greed of software companies. I don't think anybody believes that EpiPens themselves are very expensive at all - just like software, the cost comes from development, which in this market consists mostly of regulatory compliance and approval. If it were easy to bring an epinephrine injector to market, Teva would have already done so and Sanofi wouldn't have had to recall theirs. If there were more auto-injectors on the market, prices would go down.

The outrageous price of EpiPens is not a result of corporate greed so much as a failure of the FDA and Congress - but mostly Congress; the FDA is its subordinate. They failed to promulgate rules that maintained a competitive market for epinephrine auto-injectors.

JustSomeNobody 3 days ago 1 reply      
OK, so Mylan can get them made for $30 and sells them (now) for $150. Is a 5x sale price not acceptable? If not, why aren't people going after every single product manufactured and sold?

Don't get me wrong, I'm not defending anyone here; that whole industry needs some fixing. I'm just tossing out the question.

bahmboo 3 days ago 0 replies      
You should be wearing gloves and preserving a sterile field when making something injectable.
dang 3 days ago 0 replies      
Url changed from https://www.minds.com/blog/view/625077755582623755, which points to this.
red_blobs 3 days ago 3 replies      
tn13 3 days ago 2 replies      
If it costs $30 to make an EpiPen at home, why don't you create a company, sell it for $50, and solve the problem you're all complaining about?

Mylan deserves our appreciation for inventing the EpiPen when none of the other smarty-pants who claim they can make it for $30 bothered to help the needy.

endgame 3 days ago 0 replies      

I'm sure this is an interesting article but the only way we will stop this practice is if we stop giving user-hostile publications our eyeballs.

pweissbrod 3 days ago 0 replies      
Watch the video. All it describes is loading epinephrine into an autoinjector. This is great because it suggests the barrier to competition is relatively low-hanging fruit for those already in the drug-delivery market.

Also: screw mylan

Mao_Zedang 3 days ago 1 reply      
If the product is so expensive, and someone could viably make a competing product for less, I find it hard to believe that it hasn't been done. A fairer comparison would be "medical aid which wasn't subjected to the same regulations and testing is cheaper to make and distribute", aka "corporate greed".
crazy1van 3 days ago 3 replies      
If someone knows how to make a product for $30 that the competition charges $300 for, why not go into business and undercut their price by a huge margin? Millions of users' lives would be instantly improved with dramatically cheaper epipens. That will do far more to combat greed than a blog post.

However, I think that if someone were to try this, they'd find there are many more costs involved than the raw ingredients, and it might not be quite so simple to massively undercut the competition. But still, they should go for it! Competition is the best medicine for overpriced goods.

What every coder should know about gamma johnnovak.net
556 points by johnnovak  4 days ago   181 comments top 42
jacobolus 4 days ago 3 replies      
One thing I hate is that essentially all vector graphics and text rendering (Cairo, Quartz, MS Windows, Adobe apps, ...) is done with gamma-oblivious antialiasing, which means that apparent stroke width / text color changes as you scale text up or down.

This is why if you render vector graphics to a raster image at high resolution and then scale the image down (using high quality resampling), you get something that looks substantially thinner/lighter than a vector render.

This causes all kinds of problems with accurately rendering very detailed vector images full of fine lines and detailed patterns (e.g. zoomed-out maps). It also breaks WYSIWYG between high-resolution printing and screen renders. (It doesn't help that the antialiasing in common vector graphics / text renderers is also fairly inaccurate in general for detailed shapes, leading to weird seams etc.)

But nobody can afford to fix their gamma handling code for on-screen rendering, because all the screen fonts we use were designed with the assumption of wrong gamma treatment, which means most text will look too thin after the change.

* * *

To see a prototype of a better vector graphics implementation than anything in current production, and some nice demo images of how broken current implementations are when they hit complicated graphics, check this 2014 paper: http://w3.impa.br/~diego/projects/GanEtAl14/
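The thinning effect described above is easy to reproduce: averaging two sRGB-encoded pixel values directly gives a much darker result than averaging their linear light and re-encoding. A minimal sketch, using the plain gamma-2.2 power approximation of sRGB rather than the exact piecewise curve:

```python
GAMMA = 2.2  # simple power-law approximation of the sRGB transfer curve

def to_linear(v):
    # decode an encoded value in [0, 1] to linear light
    return v ** GAMMA

def to_encoded(lin):
    # re-encode linear light back to the gamma-encoded representation
    return lin ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Gamma-oblivious blend: average the encoded values directly.
naive = (black + white) / 2

# Gamma-correct blend: decode, average in linear light, re-encode.
correct = to_encoded((to_linear(black) + to_linear(white)) / 2)

print(round(naive, 3), round(correct, 3))  # 0.5 0.73
```

The naive result (0.5) is visibly darker on screen than the physically correct mix (~0.73), which is exactly why gamma-oblivious antialiased edges look too thin.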

cscheid 4 days ago 3 replies      
Hey, so gamma is not a logarithmic response. You claim that the delta you use in Figure 2 is a ratio, but your code, https://github.com/johnnovak/johnnovak.site/blob/master/blog... uses a fixed power. These are not the same thing.

f(x+eps)/f(x) ~= eps f'(x)/f(x) + 1

f(x) = x^2.2, so f'(x) = 2.2 x^1.2

f(x+eps)/f(x) ~= 2.2 eps/x + 1

Human response to light is not particularly well-modeled by a logarithmic response. It's --- no big surprise --- better modeled by a power law.

This stuff is confusing because there are two perceptual "laws" that people like to cite: Fechner-Weber and Stevens's. Fechner-Weber is logarithmic; Stevens's is a generalized power-law response.
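The first-order expansion above can be verified numerically (note the coefficient works out to 2.2, since f'(x)/f(x) = 2.2/x for f(x) = x^2.2):

```python
def f(x):
    # the gamma curve under discussion
    return x ** 2.2

x, eps = 0.5, 1e-4

exact = f(x + eps) / f(x)
approx = 2.2 * eps / x + 1  # first-order Taylor approximation of the ratio

# The two agree to well within the size of the second-order term.
assert abs(exact - approx) < 1e-6
print(exact, approx)
```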

inopinatus 3 days ago 5 replies      
Um. Curiously, that first example didn't work for me. Figures 1 & 2, under "Light emission vs perceptual brightness" are compared thus: "On which image does the gradiation appear more even? Its the second one!"

Except that for me it isn't. The first one, graded by emission rather than perception, appears more evenly graded to me. There is no setting I can find using the Apple calibration tool (even in expert mode) that does anything but strengthen this perception.

This raises only questions. Is this discrepancy caused by my Apple Thunderbolt Display? By my mild myopia? The natural lighting? My high-protein diet? The jazz on the stereo? The NSA? Or do I really have a different perception of light intensity?

And is anyone else getting the same?

Note: I have always had trouble with gamma correction during game setup; there has never been a setting I liked. Typically there'll be a request to adjust gamma until a character disappears, but however I fiddle things it never does.

Negitivefrags 4 days ago 1 reply      
Something important to note is that in Photoshop the default is gamma-incorrect blending.

If you work on game textures, and especially for effects like particles, it's important that you change the photoshop option to use gamma correct alpha blending. If you don't, you will get inconsistent results between your game engine and what you author in photoshop.

This isn't as important for normal image editing because the resulting image is just being viewed directly and you just edit until it looks right.

ansgri 4 days ago 2 replies      
Enough has been said about incorrect gamma (this and [0]), now I think it's high time to bash the software of the world for incorrect downscaling (e.g. [1]). It has much more visible effects, and has real consequences for computer vision algorithms.

In the course on computer vision at my university (which I help teach) we teach this stuff to make students understand the physics, but at the end of the lecture I always note that for vision it's largely irrelevant and isn't worth the cycles to convert images to a linear scale.

[0] http://www.4p8.com/eric.brasseur/gamma.html

[1] http://photo.stackexchange.com/questions/53820/why-do-photos...
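The downscaling artifact from [0] can be reproduced with the exact piecewise sRGB transfer functions: box-filtering a 50% black/white checkerboard in encoded space yields 0.5, while averaging in linear light and re-encoding yields about 0.735:

```python
def srgb_to_linear(v):
    # exact sRGB EOTF: linear toe below 0.04045, 2.4-power segment above
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(lin):
    # exact inverse transfer function
    return 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055

# A 2x2 checkerboard block of pure black and pure white pixels, to be
# reduced to a single pixel by a box filter.
block = [0.0, 1.0, 1.0, 0.0]

# Wrong: average the encoded values directly.
naive = sum(block) / len(block)

# Right: decode to linear light, average, re-encode.
correct = linear_to_srgb(sum(srgb_to_linear(v) for v in block) / len(block))

print(round(naive, 3), round(correct, 3))  # 0.5 0.735
```

This is the mechanism behind the darkened, muddy results in the linked examples: most image scalers do the "naive" computation.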

Alexey_Nigin 3 days ago 2 replies      
I tried viewing the article on 4 different monitors. All monitors had default settings except for brightness. Monitors A & B were on new laptops, monitor C was on a very old laptop, and monitor D was on a smartphone. Here are the results:

FIGURES 1 & 2. On monitor A, all bands of color in figure 1 were easily discernible. The first four bands of color in figure 2 looked identical. Figure 1 looked more evenly spaced than figure 2. On monitor B, all bands of color in figure 1 were easily discernible. The first five bands of color in figure 2 looked identical. Figure 1 looked more evenly spaced than figure 2. On monitor C, all bands of color except the last two in figure 1 were easily discernible. The first three bands of color in figure 2 looked identical. Figure 1 looked about as evenly spaced as figure 2. The result from monitor D was the same as from monitor A.

FIGURE 12. On monitors A and B, the color of (A) was closer to (B) than to (C). On monitor C, (A) appeared equally close in color to (B) and (C). On monitor D, the color of (A) was exactly identical to (B).

CONCLUSION: On monitor C, gamma correction had a neutral effect. On all other monitors, the effects were negative. Unfortunately, I was unable to find a standalone PC monitor for my comparison. It is entirely possible that a PC monitor would give a different result. However, since most people use laptops and tablets nowadays, I doubt the article's premise that "every coder should know about gamma".

kazinator 4 days ago 3 replies      
I was going to comment snarkily: "Really? Every coder? What if you program toasters?"

Then it immediately occurred to me that a toaster has some binary enumeration of the blackness level of the toast, like from 0 to 15, and this corresponds in a non-linear way to the actual darkness: i.e. yep, you have to know something about gamma.

crazygringo 4 days ago 3 replies      
This is one of the most fascinating articles I've come across on HN, and so well explained, so thank you.

But I wonder about what the "right" way to blend gradients really is -- the article shows how linear blending of bright hues results in an arguably more natural transition.

Yet a linear blending from black to white would actually, perceptually, feel too light -- exactly what Fig. 1 looks like -- the whole point is that a black-to-white gradient looks more even if calculated in sRGB, and not linearly.

So for gradients intended to look good to human eyes, or more specifically that change at a perceptually constant rate, what is the right algorithm when color is taken into account?

I wonder if relying just on gamma (which maps only brightness) is not enough, and whether there are equivalent curves for hue and saturation? For example, looking at any circular HSV color picker, we're very sensitive to changes around blue, and much less so around green -- is there an equivalent perceptual "gamma" for hue? Should we take that into account for even better gradients, and calculate gradients as linear transitions in HSV rather than RGB?

datenwolf 3 days ago 4 replies      
I think the deep underlying problem is not just handling gamma, but that to this day the graphics systems we use make programs produce their output in the color space of the connected display device. If graphics system coders in the late 1980s and early 1990s had bothered to think for a moment and look at the existing research, the APIs we're using today would expect colors in a linear contact color space.

Practically all the problems described in the article (which BTW has a few factual inaccuracies regarding the technical details on the how and why of gamma) vanish if graphics operations are performed in a linear contact color space. The most robust choice would have been CIE1931 (aka XYZ1931).

Doing linear operations in CIE Lab also avoids the gamma problems (the L component is linear as well), however the chroma transformation between XYZ and the ab component of Lab is nonlinear. However from a image processing and manipulation point of view doing linear operations also on the ab components of Lab will actually yield the "expected" results.

The biggest drawback of contact color spaces is that 8 bits of dynamic range are insufficient for the L channel; 10 bits is sufficient, but in general one wants at least 12 bits. In terms of 32 bits per pixel, a practical distribution is 12L 10a 10b. Unfortunately current GPUs suffer a performance penalty with this kind of alignment, so in practice one would use a 16-bits-per-channel format.

One must be aware that aside the linear XYZ and Lab color spaces, even if a contact color space is used images are often stored with a nonlinear mapping. For example DCI compliant digital cinema package video essence encoding is specified to be stored as CIE1931 XYZ with D65 whitepoint and a gamma=2.6 mapping applied, using 12 bits per channel.
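For comparison with the gamma=2.6 mapping mentioned above, here is the sRGB transfer function itself; the constants (12.92, 0.04045, 1.055, 2.4, 0.0031308) come from the sRGB standard, IEC 61966-2-1, while the function names are mine:

```python
def srgb_decode(u):
    """sRGB-encoded value u in [0, 1] -> linear light (IEC 61966-2-1)."""
    return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4

def srgb_encode(v):
    """Linear light v in [0, 1] -> sRGB-encoded value."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
```

Note the piecewise shape: a small linear toe near black (below encoded 0.04045) spliced onto a 2.4-power segment; the composite curve approximates a pure gamma of 2.2, and the toe avoids an infinite slope at zero.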

skierscott 4 days ago 0 replies      
I work on algorithms that can be applied to images, and was equally surprised when I saw a video called "Computer color is broken."

I investigated and wrote a post called "Computer color is only kinda broken"[1].

This post includes visuals and investigates mixing two colors together in different colorspaces.


tomjakubowski 3 days ago 1 reply      
Hi John!

If you're reading comments, I just thought you should know that the link to w3.org in the (color) "Gradients" section is broken.

It should point to https://lists.w3.org/Archives/Public/www-style/2012Jan/0607.... but there's an extra "o" at the end of the URL in your page's link.

Glyptodon 4 days ago 2 replies      
The thing that seems a bit weird to me is that the constant light intensity gradation (fig 1) appears much more even/linearly monotonic to me than the perceptual one (fig 2), which seems really off at the ends: it sticks to really dark black for too long at the left end, and shifts to white too fast at the right end.
elihu 4 days ago 2 replies      
This is very good and useful; I'll have to update my ray-tracer accordingly.

One thing not discussed, though: what to do with values that don't fit in the zero-to-one range? In 3-D rendering there is no maximum intensity of light, so what's the ideal strategy for truncating to the needed range?
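One common family of answers (not from the article): either clamp, or run values through a tone-mapping operator that compresses the unbounded range smoothly before gamma encoding. A sketch of both, using the well-known Reinhard operator:

```python
def clamp01(v):
    """Plain truncation: cheap, but blows out all highlights above 1.0."""
    return min(max(v, 0.0), 1.0)

def reinhard(v):
    """Reinhard tone mapping: maps [0, inf) smoothly onto [0, 1)."""
    return v / (1.0 + v)

# A very bright value keeps some gradation under Reinhard but not under clamping:
print(clamp01(4.0), clamp01(8.0))    # 1.0 1.0  (both saturate)
print(reinhard(4.0), reinhard(8.0))  # 0.8 0.888...  (still distinguishable)
```

Clamping is fine for mostly in-range scenes; tone mapping preserves detail in bright regions at the cost of globally reshaping intensities.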

panic 4 days ago 1 reply      
Nowadays GPUs are able to convert between sRGB and linear automatically when reading and writing textures. There's no more excuse for incorrect rendering on modern hardware!
jadbox 4 days ago 2 replies      
Interesting that Nim (lang) is used in the examples- good readable code too
mxfh 4 days ago 0 replies      
Good reminder about these persisting blending issues in the linear interpretation of RGB values, which was well explained for non-coders in a quite popular MinutePhysics video: https://www.youtube.com/watch?v=LKnqECcg6Gw

As others commented the gamma scaling issues seem even more relevant.

Just please, don't use the RGB color space for generating gradients. In fact, it's ill-suited for most operations concerning the perception of colors.

chroma.js: https://vis4.net/blog/posts/mastering-multi-hued-color-scale...

D3: https://bl.ocks.org/mbostock/3014589

Interesting excursion: historically the default viewing gammas seem to have lowered, because broadcasting defaulted to dimly lit rooms, while today's ubiquitous displays are usually in brighter environments.


mixmastamyk 3 days ago 0 replies      
The conclusion reminded me of the "unicode sandwich," i.e. decode on data load, process in a pure form, then encode before writing to disk.
kristofferR 3 days ago 1 reply      
I get that this is an article about gamma, but it should have mentioned that sRGB is on the way out. People who need to think about gamma also need to think about wider color spaces like DCI-P3, which the Apple ecosystem is moving to pretty quickly (and others would be dumb to not follow).
qwertyuiop924 4 days ago 0 replies      
I'd already seen most of this in a video (courtesy of Henry, aka MinutePhysics, https://m.youtube.com/watch?v=LKnqECcg6Gw), but it was nice to see a programmer-oriented explanation, nonetheless.
spunker540 4 days ago 1 reply      
I don't think this is something every coder should know about-- maybe every graphics coder
kevin_thibedeau 4 days ago 1 reply      
What is usually little mentioned is that the transfer function for LCDs is a sigmoid rather than a power curve. The latter is simulated for desktop displays to maintain compatibility with CRTs. Embedded LCDs don't usually have this luxury.
willvarfar 4 days ago 4 replies      
I'm divided; I really want the article to be true, and for everyone to realise what a huge mistake we've been making all along... but, as the legions of us who don't adjust for gamma demonstrate, ignoring it doesn't make the world end?!
emcq 3 days ago 0 replies      
Meh gamma is a simplistic nonlinearity to model the world; if you care about perception use a CIE colorspace, if you care about gaming they have developed more sophisticated nonlinearities for HDR.
chmike 3 days ago 2 replies      
Does it mean that when converting from sRGB encoding to physical intensity encoding we have to extend the number of bits used to encode the physical intensity values, to avoid rounding errors from the sRGB encoding?

I guess that the required number of bits to encode physical intensity values depends on the operations performed. The author suggests using floats, but this means 3x4 bytes, or 4x4 bytes with the alpha channel. Would 16-bit unsigned integers be enough? Floats are OK when using graphics cards, but not OK when using the processor.
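A quick experiment along these lines (gamma 2.2 approximation; all names are mine): store linear intensities in only 8 bits and count how many of the 256 sRGB codes survive a round trip. The dark end collapses badly, which is exactly the rounding problem asked about:

```python
def to_linear8(s):
    """8-bit sRGB code -> 8-bit linear-light code (gamma 2.2)."""
    return round(255 * (s / 255) ** 2.2)

def to_srgb8(l):
    """8-bit linear-light code -> 8-bit sRGB code."""
    return round(255 * (l / 255) ** (1 / 2.2))

survivors = {to_srgb8(to_linear8(s)) for s in range(256)}
print(len(survivors))           # well under 256: many dark codes merge
print(to_srgb8(to_linear8(5)))  # a dark grey that comes back as pure black
```

With the real sRGB curve (which has a linear toe near black), 16 bits per channel is generally regarded as sufficient for a linear intermediate, which is why 16-bit unsigned integers or half floats are the usual CPU-side compromise.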

nichochar 4 days ago 1 reply      
The design of your website, and its readability, is great! Good job.
reduxive 4 days ago 0 replies      
This article could really benefit from an image DIFF widget. Even animated flashing GIF images would be an improvement.

It needs something that not only permits comparable overlays, but (perhaps with a third diff layer) also highlights the ugly/wrong pixels with a high-contrast paint.

A handful of images are only somewhat obviously problematic, but for most of the images, I really had to struggle to find undesirable artifacts.

If it's that difficult to discern inconsistent image artifacts, one can understand why so little attention is often paid to this situation.

slacka 4 days ago 1 reply      
> The graphics libraries of my operating system handle gamma correctly. (Only if your operating system is Mac OS X 10.6 or higher)

Not just OS X. The majority of Linux games from the past two decades, including all SDL and id Tech 1-3 games, relied on the X server's gamma function. An X.Org Server update broke it about 6 years ago. It was fixed a few weeks ago.


daredevildave 3 days ago 0 replies      
And if you want to use a WebGL engine with gamma correct rendering... https://playcanvas.com ;-)


amelius 3 days ago 0 replies      
> sRGB is a colour space that is the de-facto standard for consumer electronic devices nowadays, including monitors, digital cameras, scanners, printers and handheld devices. It is also the standard colour space for images on the Internet.

Ok, does that mean that the device performs the gamma-transformation for me, and I don't need to worry about gamma?

(and if not, why not?)

j2kun 4 days ago 1 reply      
Is this why my computer screen's brightness controls always seem to have a huge jump between the lowest two settings (off and dimmest-before-off)?
sriku 3 days ago 1 reply      
When viewing this on a MacBook Air, the discussion around the two images in the section "Light emission vs perceptual brightness" appears weird. To me, the first image appears linearly spaced, and in the second image I can hardly make out the difference between the first few bars of black.
kevinwang 4 days ago 4 replies      
On my iPhone, for the checkerboard resizing, the sRGB-space resizing (B) is almost an exact match, while (C) appears much whiter.
anotheryou 3 days ago 0 replies      
Can anyone recommend a tool/library to output at least full HD image fades to a video with correct gamma? Preferably even with dithering for finer steps when fading slowly.

My main problem is that I'm not good at on-the-fly encoding and outputting frame by frame feels a bit excessive.

notlisted 3 days ago 2 replies      
Beautiful description and great examples. One thing confuses me. I'm actually using PS CS5 (supposedly the last 'correct' one?) and resizing figure 11 to 50% actually results in B, not C. Is there an option/setting I can use to fix this?
Retric 4 days ago 1 reply      
Did anyone else think the first set of bars was linear not the second? I could not notice any difference between the leftmost three bars on the bottom section. Or does this relate to how iPad renders images or something? ed: Same issue on PC.
catpolice 3 days ago 0 replies      
As a pedantic writer, it annoys me that the article starts by mentioning a quiz and making a big deal about answering yes or no to the questions... but there aren't actually any questions. The "quiz" is a list of statements. Each one can be understood by context to imply a question about whether you agree with the statement, but it's distracting because you can't answer yes to something that isn't a question.
wfunction 4 days ago 0 replies      
Can anyone get IrfanView to output the correct image? I'm trying the latest version I can find and it still gives me full gray.
pilooch 3 days ago 0 replies      
I do AI, I let my CNN eat the gammas :)
platz 4 days ago 2 replies      
Please don't use condescending titles barking what all coders should or shouldn't know (invariably the topic is a niche that the author wants to cajole others into caring about too).
optimuspaul 4 days ago 2 replies      
Lost me when he said the fig 2 appeared to have a more even gradation than fig 1, and not just because of the spelling error. The fig 1 looked more even to me, but I am colorblind.
twothamendment 4 days ago 0 replies      
I know and love my gamma. She makes the best cookies!
The MIT License, Line by Line kemitchell.com
516 points by monocasa  3 days ago   132 comments top 18
richardfontana 3 days ago 3 replies      
Despite the assumption of some newer open-source developers that sending a pull request on GitHub automatically licenses the contribution for distribution on the terms of the project's existing license (what Richard Fontana of Red Hat calls "inbound=outbound"), United States law doesn't recognize any such rule. Strong copyright protection, not permissive licensing, is the default.

That isn't quite what I mean by "inbound=outbound". Rather, inbound=outbound is a contribution governance rule under which inbound contributions, say a pull request for a GitHub-hosted project, are deemed to be licensed under the applicable outbound license of the project. This is, in fact, the rule under which most open source projects have operated since time immemorial. The DCO is one way of making inbound=outbound more explicit, and I increasingly think one that should be encouraged (if only to combat the practice of using CLAs and the like). But under the right circumstances it works even where the contribution is not explicitly licensed (I think this is what Kyle may be questioning). There are other ways besides the DCO of creating greater certainty, or the appearance of greater certainty, around the inbound licensing act, such as PieterH's suggestion of using a copyleft license like the MPL, or the suggestion of using the Apache License 2.0 (whose section 5 states an inbound=outbound rule as a kind of condition of the outbound license grant).

PieterH 3 days ago 10 replies      
This is a really good article. There's one part in particular that struck me:

"Despite the assumption of some newer open-source developers that sending a pull request on GitHub automatically licenses the contribution for distribution on the terms of the project's existing license (what Richard Fontana of Red Hat calls inbound=outbound), United States law doesn't recognize any such rule. Strong copyright protection, not permissive licensing, is the default."

In other words the fork + pull request + merge flow does not work on a project unless you have an explicit step like a CLA, or an alternative solution.

We faced this problem early on in ZeroMQ, that asking contributors to take this extra step increased the work for maintainers (to check, is this the first time person X contributes, and have they made a CLA?) It also scared off contributors from businesses, where this often took approval (which took time and was often denied).

Our first solution in ZeroMQ was to ask contributors to explicitly state, "I hereby license this patch under MIT," which let us safely merge it into our LGPL codebase. Yet, again, another extra step and again, needs corporate approval.

Our current solution is I think more elegant and is one of the arguments I've used in favor of a share-alike license (xGPL originally and MPLv2 more these days) in our projects.

That works as follows:

* When you fork a project ABC that uses, say, MPLv2, the fork is also licensed under MPLv2.

* When you modify the fork, with your patch, your derived work is now also always licensed under MPLv2. This is due to the share-alike aspect. If you use MIT, at this stage the derived work is (or rather, can be) standard copyright. Admittedly if you leave the license header in the source file, it remains MIT. Yet how many maintainers check the header of the inbound source file? Not many IMO.

* When you then send a patch from that inbound project, the patch is also licensed under MPLv2.

* Ergo there is no need for an explicit grant or transfer of copyright.

I wonder if other people have come to the same conclusion, or if there are flaws in my reasoning.

SamBam 3 days ago 2 replies      
> The phrase "arising from, out of or in connection with" is a recurring tic symptomatic of the legal draftsman's inherent, anxious insecurity.

Indeed. I'm trying to imagine a court saying "Well... there were damages, but they arose out of the software and not from the software, so therefore- oh, wait! The license actually includes arising "out of" the software as well as "from" the software, so I guess the limitation of liability stands. Case dismissed!"

gakada 3 days ago 1 reply      
It's funny that the MIT license has the reputation of being "The license you choose when you don't care about attribution, or it would be unreasonable to require attribution".

As the article points out, copies of MIT licensed code must include not only the copyright line but the whole damn license.

AstroJetson 3 days ago 1 reply      
Crushed under load, to get the cache


I've always wondered about the nuances around the different licenses, it's nice to get a non-lawyer guide to the MIT one.

tbirdz 3 days ago 2 replies      
Does anyone know if this will be part of an ongoing series, covering many open source licenses?
tunnuz 3 days ago 1 reply      
Also relevant, this free and very accessible book from O'Reilly http://www.oreilly.com/openbook/osfreesoft/book that explains the history and caveats of most open source licenses.
twhb 2 days ago 9 replies      
A bit off-topic, but I would be very interested in somebody making a case for why an OS license is better than a simple line like "This code is free for everybody to use as they wish." I've read about it plenty, but remain unconvinced.

The OS software I write is for the good of everybody, not just its own popularity or the OS community. I'm fine with all uses of it, in whole or in part, whether or not I'm credited. The license reproduction requirement therefore feels like unnecessary noise, and I'd like to think that courts are sane enough that the warranty disclaimer is unnecessary too - is there any real court case where somebody has been sued for a defect in free, OS software, without an explicit warranty, and lost?

kazinator 3 days ago 3 replies      
These licenses have little flaws upon closer examination. One day I was reading the BSD license closely, in the context of its use in the TXR project, and was astonished to find that it was buggy and required tweaking to make it more internally consistent and true to its intent. I added a METALICENSE document where the changes are detailed:



The main problem is that the original says that both the use and redistribution of the software are permitted provided that the "following conditions are met", which is followed by two numbered conditions, 1 and 2. But the two conditions do not govern use at all; they are purely about redistribution! Rather, the intended legal situation is that use of the software means that the user agrees to the liability and warranty disclaimer (which is not a condition). But the BSD license neglects to say that at all; it says use is subject to the conditions (just that 1 and 2), not to the disclaimer.

reitanqild 2 days ago 1 reply      
This is fantastic IMO. More of this.

I also think there could be room for something similar about code (but I haven't had time to read aosabook.org yet, so maybe that is where I'll find it.)

A note on the Crockford joke:

I think it is on IBM, not the lawyers: I think he describes somewhere the fun of getting a payment from IBM followed by sending an additional license entitling IBM to use the software for evil.

vonnik 3 days ago 0 replies      
We've worked with Kyle Mitchell. He's a smart guy.
nickpsecurity 2 days ago 0 replies      
Licenses like MIT and BSD should be avoided due to the patent risk in favor of licenses that explicitly grant patent protection like Apache 2.0. The patent troll risk is just way too high. As rando said, companies like Microsoft are even open-sourcing code while raking in hundreds of millions from patent suits against open-source software. That this is even working for them shows the permissive licenses need to eliminate that tactic entirely.
breakingcups 2 days ago 2 replies      
A very lovely article, I enjoyed it very much since it gives insight into the "syntax" of legal documents in the US.

This article brought up a point I find very interesting. The MIT license (and a bunch of other licenses as well) are very US-oriented when it came to their writing, provisions, etc. I'd love to read a similar article exploring licenses like these from a, say, European point of view. Would the same constructs hold up in a German court, for example. What language is missing or superfluous?

branchly2 3 days ago 0 replies      
This looks excellent, and I'm going to curl up with it and a cup of tea tonight to read it more carefully. Thanks!

Would love to see the author do one for the GPL. I realize the result would be quite a bit longer.

MindTwister 2 days ago 1 reply      
Huh? I submitted this 23 hours ago with the exact same URL. Glad to see some discussion though.
cpdean 3 days ago 3 replies      
Would someone care to elaborate why the "Good, not Evil" clause in the JSON license is bad?
rando832 2 days ago 2 replies      
cyphar 3 days ago 2 replies      
> The MIT License is the most popular open-source software license.

I'm fairly certain the GPL is still more popular.

Heavy SSD Writes from Firefox servethehome.com
467 points by kungfudoi  2 days ago   336 comments top 45
lighttower 2 days ago 13 replies      
Chrome, on my system, is even more abusive. Watch the size of the .config/google-chrome directory and you'll see that it grows to multi-GB in the profile file.

There is a Linux utility that takes care of all browsers' abuse of your ssd called profile sync daemon, PSD. It's available in the debian repo or [1] for Ubuntu or [2] for source. It uses `overlay` filesystem to direct all writes to ram and only syncs back to disc the deltas every n minutes using rsync. Been using this for years. You can also manually alleviate some of this by setting up a tmpfs and symlink .cache to it.

[1] https://launchpad.net/~graysky/+archive/ubuntu/utils
[2] https://github.com/graysky2/profile-sync-daemon

EDIT: Add link, grammar

EDIT2: Add link to source

Yoric 2 days ago 12 replies      
Hi, I'm one of the Firefox developers who was in charge of Session Restore, so I'm one of the culprits behind this heavy SSD I/O. To make a long story short: we are aware of the problem, but fixing it for real requires completely re-architecting Session Restore. That's something we haven't done yet, as Session Restore is rather safety-critical for many users, so this would need to be done very carefully, and with plenty of manpower.

I hope we can get around to doing it someday. Of course, as usual in an open-source project, contributors welcome :)

zbuf 2 days ago 4 replies      
I have been running Firefox for a long time with an LD_PRELOAD wrapper which turns fsync() and sync() into no-ops.

I feel it's a little antisocial for regular desktop apps to assume it's their place to do this.

Chrome is also a culprit, a similar sync'ing caused us problems at my employer's, inflated pressure on an NFS server where /home directories are network mounts. Even where we already put the cache to a local disk.

At the bottom of these sorts of cases I have, on more than one occasion, found an SQLite database. I can see its benefit as a file format, but I don't think we need full database-style synchronisation on things like cookie updates; I would rather lose a few seconds (or minutes) of cookie updates on power loss than over-inflate the I/O requirements.

RussianCow 2 days ago 6 replies      
Serious question: Is 12GB a day really going to make a dent in your SSD's lifespan? I was under the impression that, with modern SSDs, you basically didn't have to worry about this stuff.
rayiner 2 days ago 2 replies      
Doing all this work is also probably burning battery life. An SSD can use several watts while writing, versus as low as 30-50 milliwatts at idle (with proper power management).
blinkingled 2 days ago 11 replies      
Even better, just disable session restore entirely via browser.sessionstore.enabled. Since Firefox 3.5 this preference is superseded by setting browser.sessionstore.max_tabs_undo and browser.sessionstore.max_windows_undo to 0.

As I understand this feature is there so if the browser crashes it can restore your windows and tabs - I don't remember having a browser crash on me since the demise of Flash.
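In about:config terms, the comment above boils down to something like this user.js fragment (pref names as given in the comment; exact behavior varies between Firefox versions, so verify in about:config):

```js
// user.js in the Firefox profile directory
user_pref("browser.sessionstore.max_tabs_undo", 0);
user_pref("browser.sessionstore.max_windows_undo", 0);
```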

robin_reala 2 days ago 2 replies      
Its always annoying when an issue like this is reported yet no bugzilla reports are mentioned. Has anyone else filed this already, or shall I?
Someone 2 days ago 0 replies      
12GB/day is about 140kB/second, or one Apple 2 floppy disk every second.

It also is about single CD speed (yes, you could almost record uncompressed stereo CD quality audio all day round for that amount of data)

All to give you back your session if your web browser crashes or is crashed.

Moore's law at its best.

vesinisa 2 days ago 0 replies      
I've already moved all my browser profiles to `/tmp` and set up a bootscripts to persist them during boot / shutdown. E.g. for Arch Linux see https://wiki.archlinux.org/index.php/profile-sync-daemon

This is a far superior solution to fiddling with configuration options in each individual product to avoid wearing down your SSD with constant writes. Murphy's law has it such hacks will only be frustrated by next version upgrade.

And no, using Chrome does not help. All browsers that use disk caching or keep complex state on disk are fundamentally heavy on writes to an SSD. The amount of traffic itself is not even a particularly good measure of SSD wear, since writing a single kilobyte of data on an SSD cannot be achieved at the hardware level without rewriting a whole erase block, which is generally several megabytes in size. So changing a single byte in a file is no less taxing than a huge 4 MB write.

raverbashing 2 days ago 1 reply      
Are these writes being sync'd to disk?

Because FF may die but the OS will save it later. That's fine

Not every write to a file means a write to disk

CoryG89 2 days ago 1 reply      
Maybe I am not understanding this right, but is this saying that Firefox will continually keep writing to the disk while idle? Does anyone know more about this? Why would this be needed to restore session/tabs? Seems like it should only write after a user action or if the open page writes to storage? Even if it was necessary to write continually while idle, how could it possibly consume so much data in such a short period of time?
weatherlight 2 days ago 0 replies      
Spotify does some pretty evil I/O as well. https://community.spotify.com/t5/Desktop-Linux-Windows-Web-P...
towerbabbel 2 days ago 0 replies      
I observed something similar several years ago: http://www.overclockers.com/forums/showthread.php/697061-Whe...

I still think the worry about it wearing out an SSD is overblown. The 20GB per day of writes is extremely conservative and mostly there to avoid more pathological use cases. Like taking a consumer SSD and using it as the drive for some write heavy database load with 10x+ write amplification and when you wear it out demand a new one on warranty.

Backing up the session is still sequential writes so write amplification is minimal. After discovering the issue I did nothing and just left Firefox there wearing on my SSD. I'll still die of old age before Firefox can wear it out.

Falkon1313 2 days ago 1 reply      
I checked my system - Firefox wasn't writing much and what it is writing is going to my user directory on the hard drive instead of the program directory on the SSD, so that's nice. But still, I don't want my browser cluttering up my drive with unnecessary junk - history, persistent caching from previous sessions, old tracking cookies, nevermind a constant backup of the state of everything. I try to turn all that off, but there's always one more hidden thing like this.

If I want to save something, I'll download it. If I want to come back, I'll bookmark it. Other than those two cases and settings changes, all of which are triggered by my explicit choice & action, it really shouldn't be writing/saving/storing anything. Would be nice if there were a lightweight/portable/'clean' option or version.

When I tried Spotify, it was pretty bad about that too - created many gigabytes of junk in the background and never cleaned up after itself. I made a scheduled task to delete it all daily, but eventually just stopped using spotify.

alyandon 2 days ago 0 replies      
Yep, I have a brand new SSD drive that over the course of a few months accumulated several TERAbytes (yes - TERA) of writes directly attributable to the default FF browser session sync interval coupled with the fact I leave it open 24/7 with tons of open tabs.

Once I noticed that excessive writes were occurring, it was easy for me to identify FF as the culprit in Process Hacker but it took much longer to figure out why FF was doing it.

zx2c4 2 days ago 2 replies      
I have fixed this issue forever. I got a Thinkpad P50 with 64 gigs of ram. So, I just mount a tmpfs over ~/.cache.

I actually use a tmpfs for a few things:

  $ grep tmpfs /etc/fstab
  tmpfs   /tmp                 tmpfs   nodev,nosuid,mode=1777,noatime                     0 0
  tmpfs   /var/tmp/portage     tmpfs   noatime                                            0 0
  tmpfs   /home/zx2c4/.cache   tmpfs   noatime,nosuid,nodev,uid=1000,gid=1000,mode=0755   0 0

tsukikage 2 days ago 1 reply      
The interesting question here is, why is the browser writing data to disk at this rate?

If it's genuinely receiving new data at this rate, that's kind of concerning for those of us on capped/metered mobile connections. The original article mentions that cookies accounted for the bulk of the writes, which is distressing.

If it's not, using incremental deltas is surely a no-brainer here?

nashashmi 2 days ago 0 replies      
On a related note: also see http://windows7themes.net/en-us/firefox-memory-cache-ssd/

Just another firefox ssd optimization.

Edit: And see bernaerts.dyndns.org/linux/74-ubuntu/212-ubuntu-firefox-tweaks-ssd

It talks about sessionstore.

justinrstout 2 days ago 0 replies      
Theodore Ts'o wrote about a similar Firefox issue back in 2009: https://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/
joosters 2 days ago 2 replies      
Does firefox sync() the data? If not, these continuous overwrites of the same file may not even hit the disk at all, as it could all end up being cached.

Even if some data is being written, it could still be orders of magnitude lower than the writes executed by the program.

There are legitimate pros/cons of using sync() or not. Missing it out could mean that the file data is lost if your computer crashes. But if firefox crashes by itself, the data will be safe.

vamur 2 days ago 0 replies      
Using private mode and a RAM disk is a quick solution for this issue. Easy to setup on Linux and there is a free RAM disk utility on Windows as well.
leeoniya 2 days ago 2 replies      
I'm not seeing these numbers, using the I/O columns in Process Explorer. I'm running Nightly Portable with maybe 80 tabs open/restored.
Nursie 2 days ago 1 reply      
Firefox has been terrible for disk access for many years. I remember a post-install checklist (which I never actually automated) that I would run through on my Linux boxes back in about 2003 to cut down on this and speed up the whole system.

Basically chattr +i on a whole bunch of its files and databases, and everything's fine again...

digi_owl 2 days ago 1 reply      
I do wonder if their mobile version has a similar problem. I have noticed it chugs badly when opened for the first time in a while on Android, meaning I have to leave it sitting for a while so it can get things done before I can actually browse anything.
gcb0 2 days ago 0 replies      
> goes to the point of installing weird programs to be "pro" about their ssd life

> failed to read the very first recommendation on every single guide for ssd life: use ram disk cache for browser temp files.

yeah, let's upvote this

iask 2 days ago 1 reply      
So Firefox is also expensive to run in terms of energy consumption. No wonder the fans on my MacBook Pro always sound like a jet engine whenever I have several tabs open. Seriously!

Disclaimer: I dual boot (camp) windows 7 on my mac.

caiob 2 days ago 0 replies      
That goes to show how space/memory hungry and bloated browsers have become.
waldbeere 2 days ago 0 replies      
Simple solution: change the save time to 30 s.

Windows file compression: cookies.sqlite => 1 MB => 472 KB; sessionstore-backups => 421 KB => 204 KB.

Move the TMP cache folder to a RAM drive, e.g. ImDisk.
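The 30 s save time in the first suggestion maps to a single pref (the commonly cited default at the time was 15000 ms; verify in about:config):

```js
// user.js: write session state every 30 s instead of every 15 s
user_pref("browser.sessionstore.interval", 30000);
```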

tesla23 2 days ago 0 replies      
I'm sorry if I'm dropping this out of nowhere, but it seems not many people know about cache poisoning. I have always kept the suggested settings since the age of JavaScript.
rsync 2 days ago 0 replies      
I continue to be impressed with the content and community at servethehome - it's slowly migrated its way into my daily browsing list.
rasz_pl 2 days ago 0 replies      
For comparison ancient Opera Presto stores about 500 bytes per tab in Session file.
Animats 2 days ago 2 replies      
Firefox is relying too much on session restore to deal with bugs in their code. Firefox needs to crash less. With all the effort going into multiprocess Firefox, Rust, and Servo, it should be possible to have one page abort without taking down the whole browser. About half the time, session restore can't restore the page that crashed Firefox anyway.
Freestyler_3 2 days ago 0 replies      
I use Opera on windows, No idea how to check or change the session storage interval.

Anyone got ideas on that?

HorizonXP 2 days ago 5 replies      
Wow, that's really unfortunate.

I just built a new PC with SSDs, and switched back to Firefox. Even with 16GB of RAM on an i3-2120, Firefox still hiccups and lags when I open new tabs or try to scroll.

This new issue of it prematurely wearing out my SSDs will just push me to Chrome. Hopefully it doesn't have the same issues.

Sami_Lehtinen 2 days ago 1 reply      
uBlock also keeps writing "hit counts" to disk all the time, and for some strange reason they chose a database page size of 32k, so each update writes at least 32 kB.
yashafromrussia 2 days ago 0 replies      
Sounds sweet, I'll try it out. How does it compare to ack (ack-grep)?
bikamonki 2 days ago 0 replies      
Can this be avoided in FF and Chrome with private tabs?
falsedan 2 days ago 0 replies      
Using a display font for body text--
aylons 2 days ago 2 replies      
In Linux, where does this get written? Inside the home folder?

Maybe moving that folder to an HDD would suffice.

amq 2 days ago 0 replies      
Observed similar behavior with Skype.
PaulHoule 2 days ago 0 replies      
The whole "restore your session" thing is one of the most user-hostile behaviors there is.
kordless 2 days ago 1 reply      
I seriously dislike Firefox, but must use it at work due to browser incompatibility issues with Chrome and sites I use heavily. Anything that makes the experience better is much appreciated.
rackforms 2 days ago 3 replies      
Putting aside how this may not be all that bad for most SSDs, does anyone know when this behavior started?

Firefox really started to annoy me with its constant and needless updates a few months back; the tipping point was breaking almost all legacy extensions (in 46, I believe). That totally broke the Zend Debugger extension, and the only way forward would be to completely change my development environment. I'm 38 now, and apparently well beyond the days when the "new and shiny" holds value. These days I just want stability and reliability.

Firefox keeps charging forward and, as far as I can tell, has brought nothing to the table except new security issues and breakage of things that once worked.

I haven't updated since 41 and you know what, it's nearly perfect. It's fast, does what I need it to do, and just plain old works.

Firefox appears to have become a perfect example of development for its own sake.

Upgrade your SSH keys g3rt.nl
426 points by mariusavram  2 days ago   144 comments top 26
developer2 1 day ago 4 replies      
Seriously, the default options to ssh-keygen should be all anybody needs. If you need to pass arguments to increase the security of the generated key, then the software has completely failed its purpose. Passing arguments should only be for falling back on less secure options, if there is some limiting factor for a particular deployment.

There is absolutely no reason to pass arguments to ssh-keygen. If it is actually deemed necessary to do so, then that package's installation is inexcusably broken.

tete 1 day ago 2 replies      
Something I don't understand is the "hate" that RSA gets. Yeah, elliptic curves are promising and have benefits (smaller keys, faster operations).

But RSA isn't broken, it is well understood, is "boring" (usually a plus in security), has bigger bit sizes (according to people who know a lot more than me, that's a plus regardless of EC requiring smaller ones, because of certain attacks), isn't hyped and sponsored by the NSA, and isn't considered a bad choice by experts.

Not too many years ago Bruce Schneier was skeptical about EC because of the NSA pushing for it. Now, I also trust djb and I am sure that ed25519 is a good scheme, and there are many projects, like Tor, that actually benefit from it, increasing throughput, etc., but for most use cases of SSH that might not be the issue, nor the bottleneck.

So from my naive, inexperienced point of view RSA might seem the more conservative option. And if I was worried about security I'd increase the bit size.

Am I going wrong here?

matt_wulfeck 2 days ago 3 replies      
I disagree with the author. Before you go upgrading to ed25519, beware that the NSA/NIST is moving away from elliptic curve cryptography because it's very vulnerable to cracking with quantum attacks[0].

"So let me spell this out: despite the fact that quantum computers seem to be a long ways off and reasonable quantum-resistant replacement algorithms are nowhere to be seen, NSA decided to make this announcement publicly and not quietly behind the scenes. Weirder still, if you haven't yet upgraded to Suite B, you are now being urged not to. In practice, that means some firms will stay with algorithms like RSA rather than transitioning to ECC at all. And RSA is also vulnerable to quantum attacks."

Stick with the battle-tested RSA keys, which are susceptible, but not as much as ECC crypto. 4096 or, even better, 8192-bit lengths.

There are no perceptible user benefits to using ed25519, and it's not even supported everywhere. Also, you won't have to rotate all of your keys when workable quantum computers start crackin' everything.

[0] https://blog.cryptographyengineering.com/2015/10/22/a-riddle...

Achshar 1 day ago 4 replies      
Noob question here, why move just one step ahead. Why not 8192 or hell 16,384? I can see it can lead to higher CPU consumption on often used keys but for keys that are not accessed more than a couple of times a day, why is it such a bad idea to overdo it?
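
One way to get a feel for the cost: the RSA private-key operation is a modular exponentiation whose cost grows roughly cubically with the key size. A rough pure-Python sketch (this times a raw full-size modexp, not real RSA with CRT and a small public exponent, so treat the ratios as ballpark only):

```python
import random
import time

def modexp_seconds(bits):
    # One full-size modular exponentiation of the given bit length.
    # This is the core of the RSA private-key operation; its cost grows
    # roughly cubically with the modulus size, which is why jumping to
    # 8192- or 16384-bit keys gets expensive fast.
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # odd, full-length modulus
    e = random.getrandbits(bits) | (1 << (bits - 1))       # full-length exponent
    m = random.getrandbits(bits - 1)                       # message < n
    t0 = time.perf_counter()
    pow(m, e, n)
    return time.perf_counter() - t0

base = modexp_seconds(2048)
for bits in (2048, 4096, 8192):
    print(bits, "bits: ~%.0fx the 2048-bit cost" % (modexp_seconds(bits) / base))
```

So each doubling of the key size buys far less security margin per CPU cycle spent, which is the usual argument against overdoing it.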
LeoPanthera 2 days ago 4 replies      
Can someone explain to me why RSA 2048 is "recommended to change"? It's still the default for gpg keys and as far as I know is widely thought to be secure for at least a few hundred years!
brandmeyer 1 day ago 2 replies      
If you have servers too old to work with the latest keys, you can easily modify your ~/.ssh/config to automatically use a per-machine private key file:

  Host foo.example.com
      IdentityFile ~/.ssh/my_obsolete_private_keyfile

loeg 2 days ago 4 replies      
RSA 2048 is still the openssh default, i.e., best current advice from the openssh authors. The fact that this article's author labels that as "yellow" is a red flag.
jkirsteins 1 day ago 1 reply      
Can anybody elaborate on the idea that RSA <=2048 is potentially unsafe? Is it true? It seems that even 1024-bit keys haven't been factored yet, much less 2048, so why use anything else currently?


morecoffee 1 day ago 1 reply      
Ed25519 is fast, but I don't think the speed difference is significant enough to be an argument for using it. Running the boringssl speed tool on a Skylake mobile processor:

  Did 1083 RSA 2048 signing operations in 1017532us (1064.3 ops/sec)
  Did 29000 RSA 2048 verify operations in 1016092us (28540.7 ops/sec)
  Did 1440 RSA 2048 (3 prime, e=3) signing operations in 1016334us (1416.9 ops/sec)
  Did 50000 RSA 2048 (3 prime, e=3) verify operations in 1014778us (49271.9 ops/sec)
  Did 152 RSA 4096 signing operations in 1000271us (152.0 ops/sec)
  Did 8974 RSA 4096 verify operations in 1076287us (8337.9 ops/sec)
  ...
  Did 6720 Ed25519 key generation operations in 1029483us (6527.5 ops/sec)
  Did 6832 Ed25519 signing operations in 1058007us (6457.4 ops/sec)
  Did 3120 Ed25519 verify operations in 1053982us (2960.2 ops/sec)
RSA key verification is still extremely fast.

(also don't look at these numbers purely as speed, but as CPU time spent)

katzgrau 1 day ago 0 replies      
Security is not my specialty, but being a developer I obviously wade in this field. Having read this article, I will say this to OP and the author:

Thank you, I am sufficiently paranoid to change my keys now.

jlgaddis 2 days ago 1 reply      
If you have any RHEL machines, you might wanna keep an RSA (or ECDSA) key around. RHEL doesn't support Ed25519.

I haven't checked, but I presume this also goes for CentOS, Scientific Linux, and other derivatives.
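
A per-host fallback keeps the RSA key scoped to just those machines. A sketch of a ~/.ssh/config fragment (host name is hypothetical):

```
# ~/.ssh/config - keep an RSA key only for hosts whose OpenSSH
# predates Ed25519 support (e.g. RHEL-era machines):
Host legacy-rhel.example.com
    IdentityFile ~/.ssh/id_rsa

Host *
    IdentityFile ~/.ssh/id_ed25519
```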

tw04 1 day ago 3 replies      
This is my standard on new server setup (which is admittedly overkill, but I'd rather have it slightly slower and safer):

sources.list (if you're on an older version of debian):

  deb http://http.debian.net/debian wheezy-backports main

  apt-get -t wheezy-backports install --reinstall ssh

Regenerate the host keys:

  cd /etc/ssh
  rm ssh_host_key
  ssh-keygen -t ed25519 -f ssh_host_ed25519_key -a 256 < /dev/null
  ssh-keygen -t rsa -b 4096 -f ssh_host_rsa_key < /dev/null

(do not password protect server side keys)

sshd_config:

  Protocol 2
  HostKey /etc/ssh/ssh_host_ed25519_key
  HostKey /etc/ssh/ssh_host_rsa_key
  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

Client-side ~/.ssh/config:

  Host *
  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

Client key generation:

  ssh-keygen -t ed25519 -a 256 -f yourkey.key -C whateveryouwant

eatbitseveryday 1 day ago 0 replies      
Maybe a more technical/comprehensive read is this[1] writeup, which I see some others have linked to. Prior HN[2].

[1] https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

[2] https://news.ycombinator.com/item?id=8843994

perlgeek 1 day ago 1 reply      
So I once read somewhere that RSA is simpler to implement than most other algorithms, and hence a safer choice, because weaknesses typically come more from suboptimal implementations than from the cryptographic algorithm itself. (Unless you use known-broken things like MD5 or 3DES.)

And I think that was in the context of some DSA or ECDSA weakness, possibly a side channel attack or something similar. I forgot the details :(

What are your thoughts on this? Should we focus more on simplicity and robustness of the implementation, rather than just the strength of the algorithm itself?

tarellel 1 day ago 0 replies      
Something I found useful while setting up SSH on a recent server is Mozilla's OpenSSH guidelines - https://wiki.mozilla.org/Security/Guidelines/OpenSSH
Locke1689 1 day ago 0 replies      
What's the problem with ECDSA?
qzervaas 2 days ago 0 replies      
For those who have just updated to macOS Sierra: the default SSH client configuration no longer allows ssh-dss keys.

Follow these instructions to update your keys.

ComodoHacker 1 day ago 0 replies      
> RSA 2048: yellow - recommended to change

Could someone provide a link with decent explanation why? Is it solely out of fear that it will be cracked soon on quantum computer?

franciscop 1 day ago 0 replies      
Why not use the current year as the name for the SSH key? Then when you are using 2014.pub or 2013.pub you know it's time to upgrade.
otabdeveloper 1 day ago 0 replies      
Nobody is going to brute-force my git keys, especially when it's so trivial to gain access to the repos via social engineering.
stock_toaster 2 days ago 3 replies      
github doesn't support ed25519 keys does it?
jamiesonbecker 2 days ago 1 reply      
In Userify (ssh key manager that only distributes sudo roles and public keys -- you keep your private keys[1]) we're going to be disallowing DSS keys soon.

I like this post - it's good advice overall. Keys are easy to handle and in some ways more secure than certificate management (which relies on extra unnecessary infrastructure).

1. https://userify.com

aluhut 1 day ago 1 reply      
I wish this whole SSH business would be less complicated...
wyclif 1 day ago 0 replies      
> the need to generate fresh ones to protect your privates much better

Um, I'm pretty sure he meant privacy, not "privates." Time for an edit.

sztwiorok 1 day ago 0 replies      
Very good post about security!

Many people are still using RSA/DSA keys :/ and some are doing even worse things. Last week I saw a man who had shared his private key by email!

QWERTY people have to grow up!

The GitHub Load Balancer githubengineering.com
431 points by logicalstack  3 days ago   122 comments top 20
NicoJuicy 3 days ago 0 replies      
I notice a lot of negativity around here. Don't know why that is... but I'll add my five cents.

NIH - Not Invented Here and redoing an open source project:

- GitHub said they used HAProxy before, and I think GitHub's use case could very well be unique, so they created something that works best for them. They don't have to re-engineer an entire code base. When you work on small projects, you can send a merge request to make changes; I think this is something bigger than just a small bugfix ;). I totally understand them creating something new.

- They built on a number of open source projects, including haproxy, iptables, FoU and pf_ring. That is what open source is: use open source to create what suits you best. Every company has some edge cases, and I have no doubt that GitHub has a lot of them ;)


Thanks GitHub for sharing; I'll follow up on your posts and hope to learn a couple of new things ;)

otoburb 3 days ago 2 replies      
Given this is based on HAProxy and seems to improve the director tier of a typical L4/L7 split design, I'm led to believe GLB is an improved TCP-only load balancer.

But they also talk about DNS queries, which are still mainly UDP53, so I'm hoping GLB will have UDP load-balancing capability as gravy on top. I excluded zone transfers, DNSSEC traffic or (growing) IPv6 DNS requests on TCP53 because, at least in carrier networks, we're still seeing a tonne of DNS traffic that still fits within plain old 512-byte UDP packets.

Looking forward to seeing how this develops.

EDIT: Terrible wording on my part to imply that GLB is based off of HAProxy code. I meant to convey that GLB seems to have been designed with deep experience working with HAProxy as evidenced by the quote: "Traditionally we scaled this vertically, running a small set of very large machines running haproxy [...]".

jimjag 3 days ago 13 replies      
I am increasingly bothered by the "not invented here" syndrome where instead of taking existing projects and enhancing them, in true open source fashion, people instead re-create from scratch.

It is then justified that their creation is needed because "no one else has these kinds of problems" but then they open source them as if lots of other people could benefit from it. Why open source something if it has an expected user base of 1?

Again, I am not surprised by this. The whole push of GitHub is not to create a community which works together on a single project in a collaborative, consensus-based method, but rather lots of people doing their own thing and only occasionally sharing code. It is no wonder that they follow this meme internally.

Scaevolus 3 days ago 0 replies      
gwright 3 days ago 0 replies      
While I understand that NIH syndrome is a real thing, it is very disappointing to read many of the comments here.

I think very few HN readers are really in a position to have an informed opinion regarding GitHub's decision to build a new piece of software rather than use an existing system.

Personally I find this area quite interesting to read about because it is very difficult to build highly available, scalable, and resilient network service endpoints. Plain old TCP/IP isn't really up to the job. Dealing with this without any cooperation from the client side of the connection adds to the difficulty.

I look forward to hearing more about GLB.

Ianvdl 3 days ago 4 replies      
Given the title and the length of the post I was expecting a lot more detail.

> Over the last year we've developed our new load balancer, called GLB (GitHub Load Balancer). Today, and over the next few weeks, we will be sharing the design and releasing its components as open source software.

Is it common practice to do this? Most recent software/framework/service announcements I've read were just a single, longer post with all the details and (where applicable) source code. The only exception I can think of is the Windows Subsystem for Linux (WSL) which was discussed over multiple posts.

gumby 3 days ago 3 replies      
They talk about running on "bare metal" but when I followed that link it looked like they were simply running under Ubuntu. Is it so much a given that everything is going to be virtualized?

When I think of "bare metal" I think of a single image with disk management, network stack, and what few services they want all running in supervisory mode. Basically the architecture of an embedded system.

p1mrx 3 days ago 0 replies      
GitHub only speaks IPv4, so I would be extra-skeptical about using any of their networking code to support a modern service.
NatW 3 days ago 1 reply      
I'm curious if they looked into pf / CARP as part of their research into allowing horizontal scalability for an ip. See: https://www.openbsd.org/faq/pf/carp.html
yladiz 2 days ago 0 replies      
I'm of two minds about this. Part of me agrees with many of the commenters here, in that Not Invented Here syndrome was probably in effect during the development of this. I don't really know Github's specific use case, and I don't know the various open source load balancers outside of Haproxy and Nginx, but I would be surprised if their use case hasn't been seen before and can be handled with the current software (with some modification, pull requests, etc.). On the other hand, I would guess Github would research into all of this, contact knowledgeable people in the business, and explore their options before spending resources on making an entirely new load balancer. Maybe it really is difficult to horizontally scale load balancing, or load balance on "commodity hardware".

That being said, why introduce a new piece of technology without actually releasing it if you're planning to release it, without giving a firm deadline? This isn't a press release, this is a blog post describing the technical details of the load balancer that is apparently already in production and working, so why not release the source when the technology is introduced?

jedberg 3 days ago 0 replies      
Awesome. The whole time I was reading I was thinking "they need Rendezvous hashing". And then bam, last paragraph mentions that is in fact what they are using.
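
For readers unfamiliar with the term, a minimal generic sketch of rendezvous hashing in Python (not GitHub's actual GLB code; the node names and hash choice are illustrative):

```python
import hashlib

def owner(key, nodes):
    # Rendezvous (highest-random-weight) hashing: every node computes a
    # score for the key and the highest score wins. No shared state is
    # needed, and removing a node only remaps the keys that node owned.
    return max(nodes, key=lambda node: hashlib.sha256(
        (node + "/" + key).encode()).digest())

nodes = ["proxy-a", "proxy-b", "proxy-c", "proxy-d"]
print(owner("client-203.0.113.7", nodes))

# Drop a node: only its keys move; everyone else's assignment is stable.
keys = ["k%d" % i for i in range(1000)]
before = {k: owner(k, nodes) for k in keys}
after = {k: owner(k, [n for n in nodes if n != "proxy-b"]) for k in keys}
moved = [k for k in keys if before[k] != after[k]]
assert all(before[k] == "proxy-b" for k in moved)
print("remapped:", len(moved), "of", len(keys))
```

That stability under node removal is exactly why it's attractive for draining a director out of a load-balancing tier without disturbing existing connections.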
treve 3 days ago 1 reply      
I half expect a comment here explaining why Gitlab does it better ;)
lifeisstillgood 3 days ago 6 replies      
I love using GitHub and appreciate the impact it is having and has had. But this post is what is wrong with the web today. They have taken a distributed-at-its-plumbing technology, and centralised it so much that now we need to innovate new load balancing mechanisms.

Years ago I worked at Demon Internet and we tried to give every dial up user a piece of webspace - just a disk always connected. Almost no one ever used them. But it is what the web is for. Storing your Facebook posts and your git pushes and everything else.

No load balancing needed because almost no one reads each repo.

The problem is it is easier to drain each of my different things into globally centralised locations, easier for me to just load it up on GitHub than keep my own repo on my cloud server. Easier to post on Facebook than publish myself.

But it is beginning to creak. GitHub faces scaling challenges, I am frustrated that some people are on whatsapp and some slack and some telegram, and I cannot track who is talking to me.

The web is not meant to be used like this. And it is beginning to show.

contingencies 2 days ago 0 replies      
I am intrigued by their opening statement of multiple POPs, but the lack of multi-POP discussion further in the system description.

My understanding is that the likes of, for example, Cloudflare or EC2 have a pretty solid system in place for issuing geoDNS responses (historical latency/bandwidth, ASN or geolocation based DNS responses) to direct random internet clients to a nearby POP. Building such a system is not that difficult, I am fairly confident many of us could do so given some time and hardware funding.

Observation #1: No geoDNS strategy.

Observation #2: Limited global POPs.

Given that the inherently distributed nature of git probably makes providing a multi-pop experience easier than for other companies, I wonder why Github's architecture does not appear to have this licked. Is this a case of missing the forest for the trees?

lamontcg 2 days ago 0 replies      
Why not just use DNS load balancing over VIPs served by HA pairs of load balancers?

Back in the day we did this with Netscalers doing L7 load balancing in clusters, and then Cisco Distributed Directors doing DNS load balancing across those clusters.

It can take days/weeks to bleed off connections from a VIP that is in the DNS load balancing, but since you've got an H/A pair of load balancers on every VIP you can fail over and fail back across each pair to do routine maintenance.

That worked acceptably for a company with a $10B stock valuation at the time.

madmulita 2 days ago 0 replies      
We are in the process of moving all of our infrastructure to OpenStack, OpenShift, Ansible, DevOps, Microservices, Docker, Agile, SDN and what not.

There are some brainiacs pushing these magic solutions on us and one of the promises is load balancing is not an issue, even better, it's not even being talked about.

Please, please, tell me there's something I'm missing.

squiguy7 3 days ago 0 replies      
I know they mentioned their SYN flood tool but I recently saw a similar project from a hosting provider and thought it was neat [1]. It seems like everyone wants their own solution to this when it is a very common and non-trivial problem.

[1]: https://github.com/LTD-Beget/syncookied

bogomipz 3 days ago 1 reply      
Do the Directors use Anycast then? That wasn't clear to me.
tadelle 3 days ago 1 reply      
alsadi 3 days ago 0 replies      
I never liked GitHub's approach; they always use larger hammers.
Google's lawyers are asking to find Oracle's lawyers in contempt of court vice.com
389 points by ivank  2 days ago   138 comments top 19
grellas 2 days ago 6 replies      
It is huge that a lawyer would disclose in a public setting such important confidential numbers. I even have trouble seeing how something like that could be "accidental". It is basically a force of habit among experienced litigators to think and to say, in any number of contexts, "I know this may be relevant but I can't discuss it because it is the subject of a protective order" or "I know the attorneys know this information but it was disclosed under the protective order as being marked for 'attorneys' eyes only'". In all my years of litigating, I don't believe I have ever heard a casual slip on such information, even in otherwise private contexts (e.g., attorneys are discussing with their own client what an adverse party disclosed and are very careful not to disclose something marked for "attorneys' eyes only"). Certainly willful disclosures of this type can even get you disbarred.

But the significance of this breach is not the only thing that caught my eye.

These litigants have been entrenched in scorched-earth litigation for years now in which the working M.O. for both sides is to concede nothing and make everything the subject of endless dispute. Big firm litigators will often do this. It is a great way to rack up bills. Clients in these contexts do not oppose it and very often demand it. And so a lot of wasteful lawyering happens just because everyone understands that this is an all-out war.

To me, then, it seems that the big problem here (in addition to the improper disclosures of highly important confidential information in a public court hearing) was the resistance by the lawyers who did this to simply acknowledging that a big problem existed that required them to stipulate to getting the transcript sealed immediately. Had they done so, it seems the information would never have made the headlines. Instead (and I am sure because it had become the pattern in the case), they could not reach this simple agreement with the other lawyers to deal with the problem but had to find grounds to resist and fight over it.

I know that we as outside observers have limited information upon which to make an assessment here and so the only thing we can truly say from our perspective is "who knows". Yet, if the surface facts reflect the reality, then it is scarcely believable that the lawyers could have so lost perspective as to take this issue to the mat, resulting in such damage to a party. Assuming the facts are as they appear on the surface, this would be very serious misconduct and I can see why Judge Alsup is really mad that it happened.

mmastrac 2 days ago 3 replies      
While this is a good story, the headline misses by far the point that the body makes - the only reason this is an open secret is because an Oracle lawyer revealed it in public.

A better title might be:

"Google is trying to get Oracle in trouble for revealing confidential figures"

nkurz 2 days ago 4 replies      
As background, this opinion piece by the lawyer in question may be useful in understanding the mindset of the players. Hurst argues that because API's are not copyrightable, the GPL is dead and Oracle's valiant attempts to defend free software have been foiled:

The Death of "Free" Software . . . or How Google Killed GPL, by Annette Hurst (@divaesq)

The developer community may be celebrating today what it perceives as a victory in Oracle v. Google. Google won a verdict that an unauthorized, commercial, competitive, harmful use of software in billions of products is fair use. No copyright expert would have ever predicted such a use would be considered fair. Before celebrating, developers should take a closer look. Not only will creators everywhere suffer from this decision if it remains intact, but the free software movement itself now faces substantial jeopardy.



This wasn't an accidental "slip" by a poorly trained intern. This was a conscious disclosure made by one of Oracle's lead attorneys. She is one of the top IP lawyers in the nation: https://www.orrick.com/People/2/6/2/Annette-Hurst. It is in keeping with the "scorched earth" strategy that has been followed for this case. She knew what she was doing, and she (and her firm) should pay the consequences. If there are no consequences, it will legitimize and reward this strategy.

nikic 2 days ago 1 reply      
This article reads very weirdly to me. Are they arguing that disclosing confidential information, and subsequently opposing steps to contain the disclosed information, is perfectly fine because ... it can be found on the internet, precisely because of this disclosure? This makes absolutely no sense to me.
segmondy 2 days ago 2 replies      
Oracle should pay, they knew exactly what they were doing. If it was them, they would be suing too. Live by the sword die by the sword.
balabaster 2 days ago 1 reply      
Having read this article it reminds me somewhat of tactics in movies where lawyers deliberately ask an inflammatory question in front of a jury purely for the purpose of planting a seed, and before anyone can yell objection they immediately retract knowing that the damage has been done. The judge may strike it from the record, the judge may tell the jury to disregard it, but you can't unthink or unhear something that's been said. The bell has already been rung.

I don't (or can't, I'm unsure) believe that lawyers of this caliber make mistakes like this. So what was her play by doing this? Did it pay off?

yongjik 2 days ago 3 replies      
Off-topic, but I find it strange that money in the order of $1B can change hands between two mega-corporations without anyone outside having an inkling, while I could find websites saying exactly how much a low-level government worker earns in a social services center in my county. (Spoiler: much less than I used to earn as developer.)

Shouldn't the structure of accountability be in the other direction?

b1daly 1 day ago 1 reply      
Slightly off topic, but I've always had a hard time wrapping my head around the stance that somehow an API is distinct from code. I understand that it's an abstraction in programming, and that industry practice has been that it's acceptable to take an existing API that you didn't create and write a new implementation.

But since the API is "implemented" in code, it seems that for the purpose of copyright consideration the distinction is simply one of custom.

It's a programming abstraction; to create your own "implementation" of the API you still have to use code that is identical to the original.

Alsup's original, overturned ruling was that, as a matter of law, APIs couldn't be copyrighted because they express an idea that can only be expressed exactly that way, and traditionally this would not be allowed (you can't copyright an idea). As I understood it, his concept implied that to get IP protection over an API would require something more like patent protection. (I might be totally wrong on this.)

edgesrazor 2 days ago 2 replies      
Off topic: I may be old and cranky, but I simply can't stand articles with animated gifs - it just seems ridiculously unprofessional.
JadeNB 2 days ago 2 replies      
The judge tried to reveal the depth of this revelation by comparing it to that of the most secret thing he could imagine:

> If she had had the recipe for Coca-Cola she could have blurted it out in this court right now.


EDIT: I wasn't trying to be snarky or silly, just pointing out an aspect of the story that struck me as funny. Serious request: if that is inappropriate, please let me know rather than just silently downvoting. In that case, I apologise and will delete the post.

bitmapbrother 2 days ago 1 reply      
Regardless of the outcome her career in litigating high profile cases is pretty much over. You simply do not utter highly confidential company information accidentally. It was intentional and it was done to paint a picture to the jury about how much money Google was making from Android and what it was paying Apple.
wfunction 2 days ago 1 reply      
As someone who knows zilch about business, I don't quite understand why people knowing these numbers is so devastating. What will another company do with these two numbers that it otherwise wouldn't do?
1024core 2 days ago 1 reply      
> Oracle attorney Melinda Haag

God I hate that woman. When she was a US Attorney for SF, she went around and threatened to seize buildings where medical cannabis dispensaries were located, in full compliance with local laws. Because she couldn't do anything to the dispensaries directly, she threatened their landlords. This was after Obama had said that the DoJ would not interfere with dispensaries which were operating within state laws.

joering2 2 days ago 0 replies      
"... or Robin Thicke being forced to plunge his own toilet."

Can someone explain this one to me?

AceJohnny2 2 days ago 0 replies      
If a lawsuit of this scale can be considered the corporate equivalent of war, contempt of court is equivalent to being declared a war criminal.
c3534l 2 days ago 0 replies      
How can a public corporation keep those two numbers secret? Those are basic cost and revenue numbers that should be disclosed in their annual financial statements. The fact that it's legal to keep those numbers secret means there's something very wrong with how we do financial disclosure in America.
suyash 2 days ago 1 reply      
swehner 2 days ago 0 replies      
Why now? The blurting happened in January.
ocdtrekkie 2 days ago 3 replies      
If anything, my only sadness is that more of Google's dirty laundry wasn't aired. This illusion that Google search is winning because people prefer it and that Google doesn't make money on Android are both claims I'm happy to see debunked. Google's anti-monopoly claims fundamentally hinge on concepts like these.

And if a lawyer did break the law by doing it, I say she belongs on the same high pedestal people put Snowden on.

A Digital Rumor Should Never Lead to a Police Raid eff.org
346 points by dwaxe  3 days ago   163 comments top 20
danso 3 days ago 8 replies      
FWIW, the prospect of being suspected and questioned (but not necessarily raided) because of your IP location is one of the best metaphors to relate what it's like as a minority to be searched just because you are of the same race as a suspect in a nearby active case.

It is perfectly logical to say that if there was an assault on a college campus and that the victim said the perp is an "Asian male", for the police to not prioritize the questioning of all non-Asians in the area. And if the report was made within minutes of the incident and the suspect is on foot, it may be justifiable to target the 5 Asian males loitering around rather than the 95 people of other demographics. What logical person would argue otherwise?

But the problem creep comes in the many, many cases when police don't have a threshold for how long and wide that demographic descriptor should be used. Within 1000 feet of the reported attack? A mile? Why not 2 miles? And why not 2 days or even 2 weeks after the incident, just to be safe?

The main difference in the ISP/IP metaphor is that in the digital world, it's possible to imagine search-and-question tactics that aren't time-consuming for the police or for the suspect. Hell, the suspect might not even know their internet-records were under any suspicion. OTOH, there are definitely real-world places in which for the police (and their community and most specifically the politicians), hand-cuffing and patting someone down has been so streamlined and accepted by the powers-that-be that it isn't a bother for them (the police) either.

edit: To clarify, I don't mean to get in the very wide debate on racial profiling, etc. But when I worked at a newspaper, we had a policy to not mention race unless the police could provide 4 or 5 other identifiers. That led to readers cussing us out because, they'd argue, knowing that the suspect was black is better than nothing. My point here is that sometimes, nothing is not always better than something, and that is most explicitly clear when it comes to broad IP range searches.

soylentcola 3 days ago 3 replies      
A similar example, while not a raid, hit me closer to home a bit over a year ago.

I'm sure that if you follow US news at all, you heard about the looting and arson in Baltimore in the Spring of 2015. While the city was on edge in the wake of a citizen's death in police custody, there had already been some minor demonstrations and a brawl between protesters, baseball fans, and provocateurs downtown earlier in the month.

Then, on the day of the funeral held for the man killed in custody, word started to spread of plans for some sort of riot or mass havoc being planned later in the day. Later, authorities pointed to a digital "flyer" being passed around, yet nobody investigating this outside of the police has found any source or initial copy of this flyer dating from before it was published in the media. Trust me, we looked.

In response to this alleged threat to public order, cops with riot gear and a freaking mini-tank showed up at a major public transit hub right as school let out. Transit was shut down and everyone was corralled into a small area next to a busy street and without a way home for hours.

Eventually, tensions got high enough that when the first pissed off teenager or whoever chucked a bottle or a rock, it didn't take long for others to join in. In the ensuing vandalism and arson, hundreds of thousands in damage was caused, people got hurt, the city was put under curfew for a week, and to this day, businesses and residents have suffered from the reputation gained (worsened?) that day.

Looking back, the part that really sticks out to me is how the whole thing was triggered (assuming you don't think it was a deliberate provocation) by some "social media flyer" that claimed some teens were planning to run around starting shit after school. This rumor summoned riot police, shut down transit, and left loads of adults and teens stranded alongside the road, facing down a phalanx of police plus one armored tactical vehicle.

Would those shops and homes still have been damaged, or those stores looted and burned, in a wave of unrest without this rumor-inspired flashpoint? No idea. But it sure didn't help.

dtnewman 3 days ago 2 replies      
It starts off saying:

> If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional... Yet EFF has found that police and courts are regularly conducting and approving raids based on the similar type of unreliable digital evidence: Internet Protocol (IP) address information.

I'm not sure that these two are equivalent. A better example would be the police raiding my home based on an illegal phone call that came from my phone number. Sure, the fact that it comes from my phone number doesn't mean I did it, but it's certainly evidence that points to me, just as an IP address can be.

In general, the summary linked to above makes it sound like police should never use IP addresses. To be fair, if you read the whitepaper itself, it doesn't say this, but rather that police should be _careful_ in how they use IP addresses. Specifically, it recommends that police "conduct additional investigation to verify and corroborate the physical location of a particular device connected to the Internet whenever police have information about an IP address's physical location, and providing that information to the court with the warrant application".

pmoriarty 3 days ago 1 reply      
In the 1980s, some powerful senator's cell phone was snooped on, resulting in a major scandal when the contents of his phone calls were revealed in the press.

This resulted in Congress passing laws that made it illegal for radios to be capable of listening in on cell phone frequencies or being easily modified to allow them to do so.

It is likely that only similar widely publicized embarrassments and privacy violations of the rich and powerful will result in any meaningful legislative attempts to curtail the growth of the police state in the United States.

They clearly don't intend to do much about it unless they themselves are the victims of such abuses of power. As long as it's just "nobodies" or social and political outcasts who are the victims of the police and surveillance apparatus, it's doubtful that much will change.

eth0up 3 days ago 1 reply      
A few more examples of botched attempts at IP-based raids:


The one I'm familiar with is the Sarasota, FL incident, where a married couple was raided in the middle of the night in response to alleged child pornography. Their unit was in a condominium, practically on the edge of Sarasota bay, where various boats moor and dock. After further investigation, it was discovered that the traffic had originated from some guy in a boat using a high gain antenna. If I remember correctly, he had cracked their WEP key and illegally accessed their network to obtain nasty images, lots of them. The insecurity of WEP has been known about for a long time, presumably by LE too.

It is conjecture on my part, but a few things come to mind regarding alternative methods of investigation that may have avoided this. 1. Contact the ISP first (in this case I think it may have been Verizon). I remember Verizon having the ability to remotely reset router passwords, which possibly suggests the ability to remotely view associated client data, e.g. MAC addresses and hostnames and maybe even OS. This may have provided valuable clues. 2. Note the protocol used by the wireless router. 3. Wardrive a bit. 4. Maybe check for logs of any accounts the boat guy logged into while on their network.

Regardless, the raid was botched and pretty traumatic for the couple, considering they were operating a legal AP probably secured with what they thought was adequate encryption. At the time of this event, WEP was standard default, straight from the ISP. They'd done nothing wrong.

More info: http://www.heraldtribune.com/news/20110131/wireless-router-h...

rayiner 3 days ago 4 replies      
Not great to start an article off with sloppy reasoning:

> If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional.

> Yet EFF has found that police and courts are regularly conducting and approving raids based on the similar type of unreliable digital evidence: Internet Protocol (IP) address information.

When police go after an IP address, it happens after there is evidence linking it to some crime. That makes the situation wholly unlike an anonymous phone call, where there is no evidence a crime has even been committed, and where the identifying information itself is trivial to falsify.

Also, IP addresses give a lot more information than the article implies. Especially these days now that everyone has a home router that probably keeps the same IP address for weeks at a time if not months. Not enough to trigger a police raid, of course (if we want to argue that the police have too low a standard of evidence for initiating a raid, I agree) but it's probably a good lead to go on in the common case.

EDIT: I don't disagree with the rest of the article.

eth0up 3 days ago 1 reply      
pjc50 3 days ago 1 reply      
"If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional"

I thought that was how SWATting worked - anonymous denunciation by untraceable phone call?

s_q_b 3 days ago 0 replies      
If the use of IP addresses in this manner disturbs you, you should look into the proposed changes to Federal Rule of Criminal Procedure 41.

This is the EFF's article, which is either highly overzealous or highly prescient: https://www.eff.org/deeplinks/2016/04/rule-41-little-known-c...

stronglikedan 3 days ago 1 reply      
> If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional.

But they do this all the time, especially in low income areas. They just don't call it a raid. They call it a "welfare check".

xienze 3 days ago 3 replies      
> Put simply: there is no uniform way to systematically map physical locations based on IP addresses or create a phone book to lookup users of particular IP addresses.

Maybe today, but when we have wide deployment of IPv6 (heh), won't ISPs do away with NATing and give everyone their own block of IPs? Then I would think you could reliably tie a person to an IP address as long as the ISP cooperates.

vorotato 3 days ago 0 replies      
Otherwise the police become the weapons of criminals, which is, of course, backwards.
coldcode 3 days ago 0 replies      
(1) It's unreliable. (2) It's unconstitutional, assuming judges agree. (3) It's expensive if you screw it up: people die, lawsuits follow, embarrassment ensues. All of which is unlikely to change behavior unless everyone agrees.
bootload 2 days ago 0 replies      
"A call is an unknown source, talking about unreliable information, about a location. It is NEVER to be trusted NEVER...." -- Michael A. Wood Jr

An unverified call can never be trusted. Read the whole twitter thread by ex BPD, USMC Retd., Michael A. Wood Jr [0] to understand why.

[0] https://twitter.com/MichaelAWoodJr/status/778813281376931840

nv-vn 3 days ago 0 replies      
>If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional.

Isn't that exactly what happens when you get SWATted?

throwaway92314 3 days ago 0 replies      
I'll just point this out here. Reena Virk started as a rumour going around in schools, until eight days later her body was found. A little bit of prudence is necessary, but don't discount rumours out of hand.


PaulHoule 3 days ago 0 replies      
It's as much a "law and order" issue as it is a civil rights issue.

Cops have limited resources to deal with a number of problems and if they don't have the training and procedures to use internet evidence they are going to waste those resources tracking down stolen cars, child porn and whatever in the wrong places.

rocky1138 3 days ago 3 replies      
Why don't we just regulate any Internet-connected device? When you purchase one, you register your name and address and are given the IP address in return.

Then, we can simply look up the physical address of the IP address holder.

marcoperaza 3 days ago 2 replies      
>Law enforcement's over-reliance on the technology is a product of police and courts not understanding the limitations of both IP addresses and the tools used to link the IP address with a person or a physical location.

You can most certainly narrow down an IP address to a particular ISP customer. Is it possible that they have an open wifi? Yes. Is it possible to narrow it down to a single member of the household? Depends! Is it possible that a computer at the destination is being used as a proxy by the real attacker? Yes! But it's certainly not the black box that the EFF is trying to portray it as.

It's totally appropriate to execute a search warrant based on IP logs. A search warrant doesn't mean that any particular person is guilty, just that there is probable cause that there is information about a crime at a certain location.

matt_wulfeck 3 days ago 1 reply      
> IP address information was designed to route traffic on the Internet, not serve as an identifier for other purposes.

I think you're going to have a hard time here convincing a jury or judge with this argument. In general, LE isn't concerned with what an IP address was originally intended for. At least with today's ISPs, an IP address can be a reasonable approximation of a person or persons.

Microsoft aren't forcing Lenovo to block free operating systems mjg59.dreamwidth.org
366 points by robin_reala  4 days ago   235 comments top 33
Hydraulix989 4 days ago 4 replies      
Their spin that it is "our super advanced Intel RAID chipset" really plays in their favor, given that their BIOS uses a single goto statement to intentionally block access to the AHCI-compatible mode that the hardware so readily supports, as evidenced by the reverse-engineering work and by the fact that other OSes detect the drive once the AHCI fix is applied via a custom-flashed BIOS.

So, why are they reluctant to just issue their band-aid patch to the BIOS -- after all, it's really the path of least resistance here?

Yes, there has been some deflection of blame here. The argument that every single OS except Windows 10 is at fault for not supporting this CRAZY new super advanced hardware doesn't make much sense.

"Linux (and all other operating systems) don't support X on Z because of Y" doesn't really apply when "Z modified Y in a way that does not allow support for X."

To state it more plainly, this "CRAZY new super advanced hardware" has a trivial backwards compatible mode that works with everything just fine, but it is blocked by Lenovo's BIOS.

raesene9 4 days ago 8 replies      
Also worth noting Lenovo's official statement on the matter http://www.techrepublic.com/article/lenovo-denies-deliberate... confirming that they have not blocked the installation of alternate operating systems.

It was a shame to see the initial posts this morning hit the top of the page with no more evidence than a single customer support rep, who was unlikely to realistically have inside knowledge of some kind of "secret conspiracy" by Microsoft to block linux installs.

pdkl95 4 days ago 2 replies      
There has been a disturbing level of contempt for the people that were concerned about the future of Free Software. There has been a major shift towards more locked down platforms for years ever since iOS was accepted by the developer community. With Microsoft locking down Secure Boot on ARM and requiring it for Windows 10, it is prudent to be extra vigilant about anything strange that happens in the boot process. The alternative is to ignore potential problems until they grow into much larger problems that are harder to deal with.

Obviously vigilance implies some amount of false positives. It is easy to dismiss a problem once better information is available. It's great that this Lenovo situation is simply a misunderstanding about drivers, but that doesn't invalidate the initial concern about a suspicious situation.

AdmiralAsshat 4 days ago 2 replies      
The moral of the story is that you shouldn't trust a low-level support engineer as a source for official company policy.
WhitneyLand 4 days ago 0 replies      
There was way too much rush to judgement here. Suspicion and skepticism are great, let those fires burn. But let's not condemn or blame until the issue has been aired out from all parties.

- MS shouldn't be blamed based on what the CEO of Lenovo says, let alone what a tech or BB rep says.

- MS shouldn't be blamed for new crimes based on past behavior

Why care about MS or any other megacorp? Because this salem witch trial shit is toxic and should not be condoned against anyone.

Rush to suspicion and demanding answers is great. There is no downside to saving blame for after the facts are in.

rbanffy 4 days ago 1 reply      
Wasn't Lenovo the company that shipped unremovable malware with laptops? Considering the almost impossible to disable Intel management stuff is also there, I can only imagine the kind of parasite living on these machines.

Why would anyone buy their stuff?

hermitdev 4 days ago 1 reply      
For what it's worth, I've had issues with Intel RST under Windows as well in mixed-mode configs. My boot device is an SSD configured for AHCI and I've a 3 drive RAID array. On a soft reset of my PC, the BIOS won't see the SSD. The completely nonobvious solution? Make the SSD hot swappable. Not a Lenovo PC, either. Been going on for years. Had to do a hard reset every time I had to restart for years before I found a solution to this.
facorreia 4 days ago 2 replies      
> Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.
NikolaeVarius 4 days ago 0 replies      
Standard culture of outrage before actually taking more than 5 seconds to think about something and consider other possibilities.
rburhum 4 days ago 2 replies      
What is crazy to me is that Lenovo is usually the brand that people recommend for Linux laptops. They are shooting themselves in the foot here. They may think that the number of people on Linux is too small, but I bet it is bigger than they think. It is just that there is no easy way to accurately count the number of Linux users on their HW.
guelo 4 days ago 1 reply      
> Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot

The modder that flashed the custom BIOS was able to boot linux on his first try.

guelo 4 days ago 2 replies      
Without any comment from Lenovo or Microsoft this guy is speculating the same as everybody else.
seba_dos1 4 days ago 0 replies      
Pushing Intel to provide the drivers or at least documentation would be the best solution - the BIOS lock would become irrelevant.

However, I don't agree with the conclusion that Lenovo isn't to blame. They went out of their way to ensure that even power users playing with EFI shell won't be able to switch to AHCI mode.

I don't care about Microsoft here. Lenovo showed its bad side and I probably won't be buying their devices anymore - which is a pity, as I'm writing this on my Yoga 2 Pro, with my company's Yoga 900 (fortunately older, unblocked revision) nearby and I liked those devices.

rukuu001 4 days ago 0 replies      
I'm surprised at the incredulity expressed here, given MS's history of dealing with OEMs. See https://en.m.wikipedia.org/wiki/Bundling_of_Microsoft_Window...
StreamBright 4 days ago 0 replies      
Somebody should notify the guys who went really deep condemning Microsoft of cutting shady deals.


huhtenberg 4 days ago 3 replies      
Yeah, sure, Microsoft is now all white and fluffy. Best friends forever.

How about we pay some attention to the second part of:

 Lenovo's firmware defaults to "RAID" mode and ** doesn't allow you to change that **
Power savings or not, locking down the storage controller to a mode that just happens to be supported by exactly one OS has NO obvious rational explanation. Either Lenovo does that or Windows does. This has nothing to do with Intel.

fenomas 3 days ago 0 replies      
Meta: It seems really odd that this has been relegated to page two, considering that "MS and Lenovo secret agreement" headlines sat on the top page most of yesterday, largely unsubstantiated.

I could be crazy, but HN's algos seem much too aggressive about hiding articles due to flags. It often feels like the most interesting articles are to be found 2-3 spots into the second page.

youdontknowtho 4 days ago 0 replies      
It's amazing that Linux can so thoroughly have won in the device world and yet MS is still every fanboy's favorite boogeyman. This is such a non-event.
gnode 3 days ago 0 replies      
It sounds to me like it would be quite trivial to run Linux on this laptop, just by treating the "RAID" mode PCI ID like AHCI and employing the regular driver. I believe Linux supports forcing the use of a driver for a PCI device.
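[Editor's sketch of the approach described above, using the kernel's real sysfs `new_id`/`driver_override` interfaces. The vendor:device ID and PCI address below are hypothetical placeholders; the actual values would come from `lspci -nn` on the affected machine, and applying the writes requires root on real hardware.]

```python
from pathlib import Path

def ahci_override_writes(vendor="8086", device="282a", pci_addr="0000:00:17.0"):
    """Return the (sysfs path, value) writes that would ask the stock ahci
    driver to claim a controller hiding behind a "RAID" PCI ID.
    All IDs here are illustrative placeholders, not the laptop's real ones."""
    return [
        # Teach the ahci driver to also match this vendor:device pair
        ("/sys/bus/pci/drivers/ahci/new_id", f"{vendor} {device}"),
        # Or pin this specific device to ahci (driver_override, kernels >= 3.16)
        (f"/sys/bus/pci/devices/{pci_addr}/driver_override", "ahci"),
        # ...and then bind it
        ("/sys/bus/pci/drivers/ahci/bind", pci_addr),
    ]

def apply_writes(writes):
    """Perform the writes; must run as root on the target machine."""
    for path, value in writes:
        Path(path).write_text(value + "\n")
```

Whether this actually works depends on the controller really speaking AHCI behind the "RAID" PCI ID, which is what the custom-BIOS experiments discussed in the article suggest.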
sqldba 4 days ago 0 replies      
Click bait. It's one interpretation masquerading as the truth while decrying the other interpretation.

Until Lenovo issue a proper, detailed, official statement we need to keep the pressure on.

Self aggrandising posts like this don't help.

savagej 4 days ago 1 reply      
Why would anyone ever buy Lenovo? It's malware, spyware, and harmful to users. I buy HP or Samsung laptops to run Fedora. Just accept that Lenovo is not IBM hardware, and that it is lost to us.
aruggirello 3 days ago 0 replies      
I repost here the 39th comment, which gives a possible explanation of the issue:

 Storm in a teacup Date: 2016-09-22 09:17 am (UTC) From: [personal profile] cowbutt
"Intel have not submitted any patches to Linux to support the "RAID" mode."

Such patches are unnecessary, as mdadm already supports Intel Rapid Storage Technology (RST - http://www.intel.co.uk/content/www/uk/en/architecture-and-te... ) for simple RAID (e.g. levels 0, 1, 10) arrays, allowing them to be assembled as md or dmraid devices under Linux.

However, it would appear that the version of mdadm in shipping versions of Ubuntu (at least - maybe other distros too) doesn't support the Smart Response Technology (SRT - http://www.intel.com/content/www/us/en/architecture-and-tech... ) feature that's a part of RST and is used by Lenovo to build a hybrid one-stripe RAID0 device from the HDD with a cache on the SSD (I'm sure Lenovo have a good reason for not using a SSHD). Dan Williams of Intel submitted a series of patches to mdadm to support SRT back in April 2014: https://marc.info/?l=linux-raid&r=1&b=201404&w=2 . Perhaps now there's shipping hardware that requires them, there'll be the impetus for distro vendors to get them integrated into mdadm, and their auto-detection in their installers to use the functionality provided sanely.


I should add that mdadm is not present in Ubuntu live images by default - one has to pull it in by issuing "sudo apt[-get] install mdadm". BTW, I don't know if mdadm would detect the RAID controller/disk immediately upon installation, or it would require a reboot. In the latter case you may wish to use a USB key with enough spare room to save the system status and reboot. I'd use UNetBootin to prepare such a USB key.

The main issue here is that a user who doesn't even see a disk probably wouldn't know to go as far as installing mdadm. IMHO, given the broadening diffusion of NVMe and RAID devices, Debian, Canonical, Red Hat, Fedora etc. might wish to make mdadm part of their live images by default (and eventually strip it from the installed system if it's unnecessary).

Edit: clarified

youdontknowtho 4 days ago 0 replies      
Of course they aren't but how can I feel morally superior with that fact?
bsder 4 days ago 0 replies      
The setting is almost certainly because of Microsoft. It is almost certainly part of their license agreement to block installation of anything older than Windows 10.

The fact that Linux got caught in it is just collateral damage.

hetfeld 4 days ago 1 reply      
So why i can't install Ubuntu on my Lenovo laptop?
lspears 4 days ago 0 replies      
farcical_tinpot 4 days ago 1 reply      
Seeing a manufacturer use fake RAID, by default, on a single disk system, then unfathomably hardwiring this into the firmware so it can't be changed, then have a Lenovo rep actually admit the reason with the forum thread censored and then see this kind of defence is downright hilarious.

Garrett should be condemning Lenovo for not making a perfectly configurable chipset feature....configurable and defending Linux and freedom of choice on hardware that has always traditionally been that way. But, no, he doesn't. He defends stupidity as he always does.

colemickens 4 days ago 1 reply      
Oh it's funny to see the comments in this thread talking down about people on reddit when the misplaced outrage was just as loud here. In fact, I got buried here for pointing out that the claim was BS and unrelated to SecureBoot where at least Reddit took it thoughtfully and realized it was probably just a bullshit statement from a nobody rep that got blown out of proportion.

Sorry to be that guy, but the elitism is pretty misplaced anymore...

johansch 4 days ago 1 reply      
It's so sad to see this. (This entire thread, and its comments are down-voted.)

Let me try again. New Microsoft is awesome! Old Microsoft never happened. Double plus good!

farcical_tinpot 4 days ago 2 replies      
simbalion 4 days ago 4 replies      
throw2016 4 days ago 0 replies      
Some commentators seem to be more keen on labelling others conspiracy theorists than consider the possibility that MS and Lenovo could be up to no good.

The only way to convince these folks it seems would be a smoking gun or even better a signed confession from satya and lenovo admitting to shady behavior.

Since that's not how shady behavior works in the real world presumably many here are supporters of the camel in the sand approach with a zero tolerance policy towards non conforming camels.

intopieces 4 days ago 1 reply      

"For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is miniscule."

This is a really poor argument, and slightly disingenuous. Sometimes, people change their use for a device. Maybe they want to explore linux in the future, maybe they want to sell the laptop to someone who wants to use it for linux...

That the blame is being possibly misdirected ought not to detract from the fact that blame is necessary. If users don't vocally oppose measures like this, the industry will assume that this kind of restriction is reasonable. It's not. Yes, power management is important, but anyone who puts linux on their laptop will quickly learn there are limitations to the features of that device that were originally tailored to the OS the device shipped with. That's a good lesson, and a good opportunity for a community to develop around the device (if it's good enough) to mitigate those deficiencies and adapt them for the particular linux distro.

In short, Lenovo is at fault for not being up front about this limitation, for not explaining it, and for not devoting at least some resources to mitigating for their potential linux-inclined users.

Then again, perhaps a linux-inclined user might also be one of the many that don't trust Lenovo after their self-signed certificate scandal.

Ask HN: What are the must-read books about economics/finance?
440 points by curiousgal  3 days ago   266 comments top 113
reqres 2 days ago 3 replies      
Please do not look upon popular economics best sellers as a good way to get a rounded economics education. While many have value in critical insight and entertainment, they often offer only a narrow perspective on economics. Novice economists typically lack the ability to critically appraise them without a wider economic framework to work from.

An academic reading list (i.e. university course texts) will provide you a good theoretical foundation as to how economists interpret and model real economic issues. It's important to grasp the plethora of important economic concepts like diminishing returns, comparative advantage and concepts of market efficiency (among many others things) and how they apply within micro or macro economic issues.

With some foundational knowledge in place, a good economist then goes on to relax the underlying assumptions and look for analogues in the real world. This is where the popular reading list come in, often they take a deep dive in specific areas i.e. where traditional economic assumptions break down.

In short, the academic reading list gives you a framework to understand economics. The best seller list tempers that framework with real world exceptions, paradoxes and open questions.

It's a bit disappointing to see a real academic reading list so far down this comment page (I strongly recommend looking at oli5679's suggestions). I doubt HNers would suggest reading up on javascript as a good foundation for a computer science education. Yes, you can become a well rounded computer scientist by starting on javascript. But it's more important to have a grasp on core computer science ideas like algorithm design & analysis and automata.

davidivadavid 2 days ago 2 replies      
One approach is to go to the MIT OpenCourseWare website, look for the economics department, and look at their reading lists.

Of course, that's going to be mostly academic reading (textbooks, etc.). But if you want to learn the basics, it's probably safer to start there than the pop econ books (and I would dispense with most heterodox reading before you're able to assess them within a larger framework).

Two good books that haven't been mentioned here:

Economic Theory in Retrospect, by Mark Blaug. Very useful to get a good historical grounding in the main ideas that compose today's orthodox economics.

The Applied Theory of Price, by McCloskey. Your usual microeconomics textbook, but far more thorough, insisting a lot on grasping the intuition behind the concepts. Available for free from the author's website here: http://www.deirdremccloskey.com/docs/price.pdf

ohthehugemanate 2 days ago 2 replies      
Top of my list would be "The Ascent of Money", by Harvard Prof Niall Ferguson. It explains what money and financial instruments are, by telling the stories of their history. He's a great storyteller, and for each aspect of finance that he explains, there's a story of a famous piece of history which it caused. For example, the application of oriental maths to finance caused a huge boom for Italian bankers, especially including one family, the Medici. That financial boom was responsible for the artistic boom we call Renaissance art. Or how the Dutch republic triumphed over the enormous Hapsburg empire, because the world's largest silver mine couldn't compete with the world's first stock market.

Fantastic read, and a great way to gain financial literacy.

kevinburke 2 days ago 2 replies      
(Economics major and longtime econ book/paper reader here) I very much enjoyed The Cartoon Introduction to Economics as an introduction to microeconomic concepts: http://standupeconomist.com/cartoon-intro-microeconomics/

It's extremely readable and funny and covers most of the situations in real life where you can apply economic concepts to understand why something is the way it is.

Understanding why countries and economies grow (and why some grow faster than others!) doesn't always fall under the "economics" umbrella but is really useful for informing policy (and a useful reminder these days, when both US presidential candidates rail against trade agreements). "From Poverty to Prosperity" lays out a very readable and convincing argument for how countries have grown and become rich. https://www.amazon.com/Poverty-Prosperity-Intangible-Liabili...

For finance I very much enjoyed The Intelligent Investor, which also (apparently) inspired Warren Buffett's investing philosophy. https://www.amazon.com/Intelligent-Investor-Definitive-Inves...

AndrewKemendo 2 days ago 8 replies      
The following list will introduce you to Western Economic Philosophy as it relates to modern history specifically. This list is weighted heavily toward neo-classical economics and does not get into computational model based economics - specifically microeconomics, which comprises the bulk of economics education today:

Schumpeter - History of economic analysis

Adam Smith - Theory of Moral Sentiments

Keynes - The General Theory of Employment, Interest and Money

Marx - Capital

Benjamin Graham - The Intelligent Investor

Galbraith - The Affluent Society

Galbraith - The Great Crash

Milton Friedman - Capitalism and Freedom

Nassim Taleb - Black Swan

Ron Suskind - Confidence Men

Scott Patterson - Dark Pools

If you want to delve into heterodox economics afterward, start with the following:

Hayek - Individualism and Economic Order

Mises - Human Action

Rothbard - Man, Economy, State

soVeryTired 2 days ago 0 replies      
I work in a quant hedge fund - I'll give you my take. The first thing I would point out is that there is a massive difference between academic theory and practice. I don't want to turn this into an anti-academic rant, but I do want to emphasise that we value very different things. For this reason alone, most of what you read in most textbooks won't do you much good.

Personally I wouldn't place too much emphasis on outside knowledge. Basic knowledge of economics wouldn't hurt, but don't go nuts. Khan academy will give you more than enough theory. You don't want to spend all your energy developing a skill that a trained economist applicant will crush you at. Neither should you focus too much on e.g. stochastic analysis. In the real world, no-one cares whether a stochastic process is previsible or progressively measurable. But knowing how to derive Black-Scholes couldn't hurt.
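As a concrete illustration of the Black-Scholes point above, here's a minimal sketch of the standard European call-price formula (this is the textbook result, not anything specific to the commenter's fund; function names are my own):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes price of a European call option:
        C  = S*N(d1) - K*exp(-r*T)*N(d2)
        d1 = (ln(S/K) + (r + sigma^2/2)*T) / (sigma*sqrt(T))
        d2 = d1 - sigma*sqrt(T)
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call: spot 100, strike 100, 5% rate, 20% vol, 1 year
print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 4))  # ≈ 10.45
```

Being able to walk through where d1 and d2 come from (risk-neutral pricing, lognormal terminal prices) is the kind of derivation interviewers mean.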

So far I've mostly talked about what you shouldn't read. I'll try to talk a little bit about what you should. Read the financial press. The FT or the Wall Street Journal, depending on where you're based. Read finance blogs. Frances Coppola is good. So is the Bank of England's blog. Check out Alphaville at the FT too. You'll be expected to know what's going on in the world right now. Could you explain what QE is? For a finance job, that's more important than knowing what the IS/LM model says. What's been going on in China recently? What do you think about their currency outflows?

Know how to code. At least one of Python, Matlab or R for the buy side, one of Java or C++ for the sell side.

Most importantly, though, you should be able to demonstrate enthusiasm. Any given junior quant role will get hundreds of applications, and some demonstrable interest will put you head and shoulders above the pack. A link to some decent analysis on github would do (none of the hundred or so applicants to the last position we advertised did that). Play with some financial data. Quantopian is apparently a good resource.

I've talked about how to prepare for a general finance job. The specific reading you should do will depend on exactly what job you want. Do you want to be a quant? If so, buy side or sell side? Read up on the difference. Go check out efinancialcareers, have a look at the skills they're asking for within each sector, and take it from there.

RockyMcNuts 2 days ago 2 replies      
As a start, take some economics courses, intro Micro and Macro. (Check https://www.coursetalk.com/ or https://www.class-central.com/ )

Actually the first book I'd recommend would be The Worldly Philosophers, a readable history of economics


A couple of more right-leaning books:

Hayek, The Road to Serfdom: https://www.amazon.com/Road-Serfdom-Fiftieth-Anniversary/dp/...

Friedman, Capitalism and Freedom: https://www.amazon.com/Capitalism-Freedom-Anniversary-Milton...

Less right-leaning

The Marx-Engels Reader: https://www.amazon.com/Marx-Engels-Reader-Second-Karl-Marx/d...

oli5679 2 days ago 3 replies      
I'd recommend textbooks or Coursera rather than pop-econ books.

Mostly Harmless Econometrics - Angrist and Pischke

Principles of microeconomics - Mankiw (beginner)

Intermediate microeconomics - Varian (intermediate)

You also want to cover finance and time series - I don't know what would be good there.

bbayles 2 days ago 3 replies      
I read Dubner and Levitt's Freakonomics in 2005. It's lame to say that a pop-science book changed my life, but since then I've thought about economics every day.

I would recommend some pop-econ to become familiar with a stylized version of how economists think. I'd recommend Tim Harford's The Undercover Economist Strikes Back and The Logic of Life and Robert Frank's The Economic Naturalist. (Dubner's and Levitt's books are entertaining, but I wouldn't try to learn much about economics from them)

The world of professional economists has been fascinating to watch over the last 10 years, as academic economist blogs are very active and very high quality. Watching debates and commentary about the global financial crisis unfold on the blogs in real time was really something. Economist bloggers have real influence on policy now, and whole schools of thought have coalesced out of blogs (e.g. market monetarism).

There are some excellent economics podcasts out there now. EconTalk (with Russ Roberts) has been going since 2006. I'd recommend listening to some of his interviews with academic economists. Macro Musings (with David Beckworth) just started this year, and the policy discussions have been quite informative.

The Marginal Revolution University website has a fantastic series of videos on economics topics. I'd strongly recommend the "Development Economics" course - I wish I'd been taught the Solow Model in school.

Economics is a very interesting discipline to study from the outside. Learning a bit about it puts policy debates in a new light - I've become much more liberal on some topics and much less confident on a lot of topics. I find that reporting about economics issues is generally pretty terrible, so beware that if you get into economics you'll want to stop reading a lot of news analysis.

jnordwick 2 days ago 1 reply      
This one was recommended by the former head of NYMEX to me when I started my career in trading. Written about Jesse Livermore who made and lost his fortune multiple times. He was often blamed for rigging the market, but his lesson is simple: you basically can't rig the market; it will destroy you way more easily. Take what the market gives you and be happy it even decided to give you that:


And you'll see a lot of recommendations for everything from Hazlitt to Piketty, but my favorite you never see recommended for macro is The Way the World Works by Jude Wanniski. He was a lifelong Democrat who became a Reagan advisor (and basically turned back into a Democrat before passing away about ten years ago):


Besides that, this is a really broad question. There is stuff like John Hull for derivatives (this is what I survive on):


This is the game theory book I and many others have survived on in college and many years past. Haven't really found a better one yet:


lujim 2 days ago 2 replies      
If you're looking for the nuts and bolts on how capital markets around the world work this book is hands down the best there is.


Equities, Futures, Rate Swaps, Options, Credit, Treasury, Corporate, Municipal, Mortgage and Agency Bonds. Then the technology that supports it all.

It is not only a fantastic high-level view, but it gets granular enough to explain things like how US Treasury prices are quoted in 32nds of a dollar, how fixed income securities are identified by something called a CUSIP, or what a strike price is for an option. Granular enough to explain practical day-to-day concepts that would help you at your first job in a financial firm.
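To make the 32nds convention concrete, here's a small sketch (the quote format and function name are my own illustration, not from the book; real quotes can also carry half- and quarter-tick suffixes like "+", which this ignores):

```python
def treasury_quote_to_decimal(quote: str) -> float:
    """Convert a US Treasury price quoted in 32nds, e.g. "99-16",
    to a decimal price in percent of par. The digits after the dash
    count 32nds of a point, so "99-16" = 99 + 16/32 = 99.5.
    """
    handle, ticks = quote.split("-")
    return int(handle) + int(ticks) / 32

# "99-16" means 99 and 16/32 points, i.e. 99.5% of par
print(treasury_quote_to_decimal("99-16"))   # 99.5
print(treasury_quote_to_decimal("100-08"))  # 100.25
```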

vegancap 2 days ago 4 replies      
Henry Hazlitt - 'Economics in One Lesson'

Mises - 'The Theory of Money and Credit'

Adam Smith - 'Wealth of Nations'

Milton Friedman - 'Capitalism and Freedom'

Murray Rothbard - 'For a New Liberty'

Have been my personal, but somewhat one-sided favourites.

ddebernardy 2 days ago 1 reply      
IMO start with a recent book that spells out useful pointers to give the classics a critical read:

"Debunking Economics", by Steve Keen.

Keen gave a talk at Google a few years back that was a pretty good summary of what's in the book's first version.

If you're into stats and finance also check out the author's finance classes on youtube. Besides a bunch of videos that cover what's in his book, there are quite a few on financial modeling, and at least one video in there that delves into power laws and financial markets.

Also, try to throw in a few history books to your mix: history of the world, of science, and of ideas. History helps contextualize and make sense of what was going on in the mind of contemporaries as economic theories matured.

marmot777 2 days ago 1 reply      
These are important books but obviously not a comprehensive list.

* John Locke's Two Treatises of Government - It's political philosophy but it's hard to understand Classical Liberalism without having read some Locke.

* Adam Smith's Wealth of Nations - He and Locke are the two main guys to read for a solid start on Classical Liberalism, which is completely different from modern political liberalism. It's like having two features in an app with nearly the same name. Confusing as fuck.

* E. F. Schumacher's Small Is Beautiful: Economics as if People Mattered - This book will shift your perspective, useful to avoid becoming a mindless advocate for one school of thought or another.

* Marx is a tough one as Capital is massive and unreadable and The Communist Manifesto is a propaganda pamphlet but I think you need to at least find some articles that summarize the basics.

* Keynes and Hayek - This hip hop battle is a decent start: https://www.youtube.com/watch?v=d0nERTFo-Sk then read Keynes' The General Theory of Employment, Interest and Money and Hayek's The Road to Serfdom.

* Milton Friedman - Yes, read Capitalism and Freedom. I hesitated to include it, as the guy's so good at making the case that it can turn you into a market-advocate bot. Please resist that.

Can someone help me on this: is there a book to balance Hayek and a book to balance Friedman? I'm sorry, but Keynes doesn't do it for me. Look at the difference in titles between Hayek and Keynes. It's hard to get motivated to read the Keynes book, but nobody ever has trouble reading Hayek.

I see these ideas come up on HN a lot. What I don't like so much is when someone becomes an advocate for a particular ism. To me, all isms are rubbish. All of them. Understand, but do not become a shill for an ideology.

bennesvig 3 days ago 2 replies      
Basic Economics by Thomas Sowell is the book that got me interested in economics. It's a large but easy to understand read.
tom_b 3 days ago 1 reply      

I found Larry Harris' Trading and Exchanges: Market Microstructure for Practitioners a solid introduction to market making and trading. Terms and concepts are easy to pick up from the text. I was comfortable enough after reading it to skim stats journal papers talking about market making models. The Stockfighter team had mentioned it in older threads here. It's expensive, but I just borrowed it from the library at my university instead of buying.

I also like The Elements of Statistical Learning which is free from the authors (http://statweb.stanford.edu/~tibs/ElemStatLearn/download.htm...). Although it isn't specifically about economics or markets, you should at least read it.

I'm at a loss on general economics books.

scott00 2 days ago 0 replies      
My first read of your request made me think you were looking for books mainly for personal intellectual growth. There are a lot of answers in that vein, as well as a few that seem suitable replacements for an undergrad econ degree. A second read made me wonder if you're actually asking for practical advice about what you should read in order to get a job in finance, given you won't take many econ or finance courses. I'll answer in the second vein, as it seems to be somewhat underrepresented.

Investment Banking/Private Equity/Investment Analysis

McKinsey & Co, Koller, Goedhart, Wessels: Valuation

Damodaran on Valuation

Trading or Quant

Hull: Options, Futures, and Other Derivatives

Joshi: Introduction to Mathematical Finance

Harris: Trading and Exchanges: Market Microstructure for Practitioners

There should probably also be a category for what I think of as quantitative fundamental investing. For an idea of what I mean, look at what the investment firm AQR does. I'm not sure of good books in this area though.

loeber 2 days ago 1 reply      
Debt: the First 5,000 Years by David Graeber is a controversial but rather important recent publication. I haven't seen it mentioned yet, so I wanted to recommend it.
geff82 2 days ago 2 replies      
"Economics in one lesson" is a classic worth reading and thinking about. While you don't necessary have to follow the libertarian way of thinking it guides you to, it still shapes your critical thinking about economic policies a lot.
gtrubetskoy 2 days ago 0 replies      
To better understand our monetary system I highly recommend watching the "Money as Debt" movie. It's on youtube as well as http://www.moneyasdebt.net/ (which I think links to y/t anyway). It provides a pretty good explanation of gold-backed vs credit-backed money and is fun to watch.
tmaly 3 days ago 3 replies      
Economics in One Lesson by Henry Hazlitt

Human Action by Ludwig von Mises

malloryerik 2 days ago 0 replies      
Aside from strict econ, finance and trading books, I'd heartily suggest economic history.

One of my personal favorites:

Global Capitalism, Its Fall and Rise in the Twentieth Century by Jeffry Frieden.


From the Journal of International Economics' review:

Perhaps the greatest merit of Frieden's book is that it allows the reader to see the themes of winners and losers, risk and uncertainty, integration, economic growth and technological change emerge clearly from the deep forest of contemporary history. One gains a greater appreciation for the timelessness of these phenomena and how to begin to get a grip on the bigger picture of policy making and the global economy.

I found that quote on the author's site. http://scholar.harvard.edu/jfrieden/pages/global-capitalism-...

longsangstan 2 days ago 3 replies      
If you know Chinese, there is a must-read: Economic Explanation by Steven Cheung.

If you don't, you can read Economic Explanation: Selected Papers of Steven N. S. Cheung. (Same book name but different content - a collection of essays vs. a book on theories.)

Why Steven Cheung? As a close friend of Ronald Coase, he too focuses on empirical research (the real world) rather than blackboard economics (the imaginary world); hates the use of math for its own sake; and emphasizes testable implications (positive economics).

His classic paper The Fable of the Bees is a great example of how empirical work destroys blackboard economics.

jmcgough 2 days ago 0 replies      
A Random Walk Down Wall Street.

Great introductory book on investing, especially if you're interested in personal finance.

gawry 2 days ago 0 replies      
A nice place to start might be the CFA study guide


branchless 2 days ago 0 replies      
Progress and Poverty:


Why is there so much poverty amongst all our progress? Georgism and land value tax. Essential reading IMHO and an enjoyable read also.

randcraw 2 days ago 0 replies      
For readable bios of the major economic thinkers, I like:

New Ideas from Dead Economists: An Introduction to Modern Economic Thought, by Buchholz (foreword by Feldstein)

The Worldly Philosophers: The Lives, Times And Ideas Of The Great Economic Thinkers, by Heilbroner

And for a readable sample of the economists' thought in their own words, there's:

Teachings from the Worldly Philosophy, by Heilbroner

orthoganol 2 days ago 0 replies      
"Global Capitalism: Its Fall and Rise in the Twentieth Century" by Jeffry Frieden is a masterpiece. It will give you a thorough, expansive view of the global financial world - the major events and trends - as they unfolded over the last century. This book is regularly assigned as a text book in Ivy League economic history classes, so even though it's short on math/ econometrics, it's a serious work.
pmilot 2 days ago 0 replies      
Surprisingly, a lot of people in this thread hesitate to recommend Thomas Piketty's "Capital in the Twenty-First Century". I'm not sure why this book is surrounded by such overblown controversy.

I think it is an excellent book on historical economics. His conclusions are drawn from an extremely large dataset that is publicly available and downloadable here: https://www.quandl.com/data/PIKETTY

It's by no means an Economics 101 book, but it should definitely be part of any economist's personal library in my opinion.

stephenbez 2 days ago 0 replies      
I found Milton Friedman's "Free To Choose" fundamental and very readable.


qwrusz 19 hours ago 0 replies      
I work in finance. I agree with other comments that suggested the CFA Program curriculum.

Specifically, the CFA Level 1 textbooks are among the best introductions to finance and economics I've found. You don't have to sign up for the CFA exam; the textbooks can be bought separately. They might not be the most fun reading, but they're a very practical foundation (and will help put future readings in context).

You say you hope to get into finance but don't know almost anything about it. How did you decide to get into finance without knowing much about it?

I enjoy it but it's not for everyone. Finance is also huge. Economics is less relevant to finance than many realize (most roles do not require having studied econ and Goldman's CEO recently called the firm a "tech company").

May I humbly suggest, prior (or in addition) to spending precious time reading finance/econ books, speaking to a few people who work in finance and reading finance sites to get a better feel for it.

Books can be amazing, even if just read for intellectual curiosity, but they take a long time to read. There are other ways to learn which are quicker/more relevant to you vs. entire books.

Lastly, one "must-read" book is The Intelligent Investor by Ben Graham. The revised edition with notes from Jason Zweig is excellent. The industry is still obsessed with the book ~70 years after it came out and for good reason. Even if you disagree with it or think it's outdated (and many do), the book comes up so often it's worth reading to be in the loop.

CFA Program: https://www.cfainstitute.org/programs/cfaprogram/Pages/index...

TII book wiki: https://en.wikipedia.org/wiki/The_Intelligent_Investor

*This is my first HN comment. Apologies for any noob mistakes.

shmulkey18 2 days ago 0 replies      
A brief but profound paper: The Use of Knowledge in Society by Hayek (http://home.uchicago.edu/~vlima/courses/econ200/spring01/hay...).

As others have said, the EconTalk podcast is excellent.

zhte415 8 hours ago 0 replies      
A must-read no one has mentioned is macro-man.blogspot.com

Written by a macro hedge fund guy (and friends), it brings cogent analysis of the highest level, plus a bunch of finance slang you're going to need to get used to.

astazangasta 2 days ago 0 replies      
I will resist the urge to tell you what NOT to read and merely recommend a few favorites:

1. I am a big fan of John Kenneth Galbraith, who writes very clearly about a few things. I recommend both "The New Industrial State" and especially "The Affluent Society", where he argues that economics is insufficient to deal with post-scarcity.

2. Deirdre McCloskey's "If You're So Smart" is a great skewering of the blinkered nature of economic inquiry. Much of what is wrong with economics is what is wrong with scientific inquiry generally (being stuck in a formalism, confusing their models with reality); this is an excellent criticism.

3. Anything by Ha-Joon Chang. He writes intelligently about development and globalization; he is unorthodox in his economic practice, and his arguments are simple and drawn from history. There are a lot of "My god, it's full of stars!" moments in his work.

4. Still looking...

hkmurakami 3 days ago 2 replies      
A random walk down wall Street

Reminiscences of a Stock Operator

When genius failed

Unconventional success

And generally just read financial news and follow markets until you develop a sense for spotting BS.

n00b101 2 days ago 1 reply      
Options, Futures, and Other Derivatives by John Hull

Principles of Corporate Finance by Richard Brealey, Stewart Myers, Franklin Allen

Traders, Guns and Money: Knowns and unknowns in the dazzling world of derivatives by Satyajit Das

meigwilym 2 days ago 1 reply      
After a few pop-sci economics books (freakonomics, the undercover economist...) I progressed to Ha-Joon Chang's Economics: The User's Guide.

It covers all the major schools of thought, along with their pros and cons. I highly recommend it.

2T1Qka0rEiPr 2 days ago 0 replies      
For a truly fun read I'd suggest Dan Ariely's "Predictably Irrational". It's less academic than "Thinking fast and slow" by Daniel Kahneman (which is also great), but I found that refreshing.
edge17 2 days ago 0 replies      
Everyone seems to be addressing the finance part of it without the "growing intellectually" part of it. I've been fortunate to be surrounded by economists my whole life. Economists are also tremendous historians; reading a lot of history and recasting what you know about history into economic frameworks will greatly sharpen your intellectual abilities. As with most things involving learning, having and seeking out intellectual peers is a valuable way to challenge all your ideas.
logfromblammo 2 days ago 0 replies      
Try the Society of Actuaries / Casualty Actuarial Society study resources [0] for exams P (probability) [1], FM (financial mathematics) [2], MFE (models for financial economics) [3] or S (statistics and probabilistic models) [4]. Look at the PDF syllabus documents, and there will be a section on "suggested texts".

Looking up the suggested texts for previous test years (or for obsolete tests) may also reveal texts that may be cheaper now or available as used copies.

You could probably get something like Price Theory and Applications (Landsburg) or Principles of Corporate Finance (Brealey, Myers, Allen) for cheap.

[0] http://beanactuary.org/exams/preliminary/?fa=preliminary-com...[1] https://www.soa.org/education/exam-req/edu-exam-p-detail.asp...[2] https://www.soa.org/education/exam-req/edu-exam-fm-detail.as...[3] https://www.soa.org/education/exam-req/edu-exam-mfe-detail.a...[4] https://www.casact.org/admissions/syllabus/index.cfm?fa=Ssyl...

qubex 2 days ago 0 replies      
Mathematically trained economist here.

Why Stock Markets Crash by Didier Sornette

The complete oeuvre of Paul Wilmott

(The Computational Beauty of Nature, by Gary W. Flake because it's wonderful and puts you in the right frame of mind)

misiti3780 2 days ago 0 replies      
All must reads in my opinion

Fooled By Randomness - Taleb

The Black Swan - Taleb

Antifragile - Taleb

When Genius Failed - Lowenstein

Liars Poker - Lewis

The Big Short - Lewis

Flash Boys - Lewis

Too Big To Fail - Sorkin

Against the Gods - Bernstein

One Up On Wall Street - Lynch

The Intelligent Investor - Graham

Henchilada 2 days ago 1 reply      
pjc50 2 days ago 0 replies      
Note that "finance" and "economics" are separate disciplines, roughly corresponding to applied vs theoretical.

The book which changed my thinking the most was "The Other Path" https://www.amazon.co.uk/Other-Path-Economic-Answer-Terroris...

It would be easy to give it the traditional libertarian gloss of "reducing regulation to improve the economy", but it's much more subtle than that. It looks at the costs of being outside the "system", and the benefits of simplifying the system so as to include more people and businesses. Along with land reform to reflect the actual reality of buildings.

Also, short and entertaining, but with lots of insights into principal-agent problems and bubble mentality: "Where Are the Customers' Yachts?" https://www.amazon.co.uk/Where-Are-Customers-Yachts-Investme...

baristaGeek 2 days ago 0 replies      
People have mentioned different authors across different schools of economic thought, such as Mankiw, Rothbard, Friedman, Hayek, Smith, Keynes, etc. One that's also been mentioned, which I would particularly avoid recommending, is Piketty.

Those are the best recommendations.

I would like to give a recommendation that might be a little bit different: 'Why Nations Fail' by Acemoglu.

yomritoyj 2 days ago 0 replies      
Since you are already in a quantitative field I think it would be good to quickly get to the heart of what economists actually do. I would suggest

Varian, 'Intermediate Microeconomics'

Luenberger, 'Investment Science'

Wooldridge, 'Introductory Econometrics'

for the undergraduate background, and then at the graduate level Jehle and Reny for microeconomics, Duffie for asset pricing theory, Tirole for corporate finance, and Campbell, Lo and MacKinlay for econometrics.

ElonsMosque 2 days ago 0 replies      
This might sound unconventional but in terms of Economics I would recommend a comic book called "Economix" by Michael Goodwin. According to financial advisor David Bach:

"You could read 10 books on the subject and not glean as much information."

Personally I believe that's because the subject and history of economics is presented in such an accessible and fun way in this book, without compromising quality or historical accuracy.

unixhero 2 days ago 0 replies      
End-to-end exploration and explanation of how and why the global economy works: Peter Dicken, Global Shift, https://uk.sagepub.com/en-gb/eur/global-shift/book242137

Any corporate finance textbook, probably; Brealey & Myers, Principles of Corporate Finance, https://www.amazon.com/Principles-Corporate-Finance-Richard-...

Watch the Yale OpenCourseWare lectures on Financial Markets with Shiller: http://oyc.yale.edu/economics/econ-252-11

Nassim Taleb, The Black Swan; https://www.amazon.com/Black-Swan-Improbable-Robustness-Frag...

Harry Markopolos, No One Would Listen, https://www.amazon.com/No-One-Would-Listen-Financial/dp/0470...

Michael Lewis, Liars Poker, https://www.amazon.com/Liars-Poker-Norton-Paperback-Michael/...

"Leveraged Sellout", Damn It Feels Good To Be A Banker, https://www.amazon.com/Damn-Feels-Good-Be-Banker/dp/14013096...

marginalcodex 2 days ago 1 reply      
There are no must-read books for economics (or almost any other field of study). Non-fiction economics books are meant to teach the reader something new. As economics represents a set of ideas owned by no one individual, the best overview of economics will contain all of the important, integral ideas of the subject.

Any summary of economics that introduces the core concepts will be great and serve its purpose.

chiliap2 2 days ago 0 replies      
One I don't see recommended very often is Fortune's Formula. It describes the lives of Claude Shannon and Ed Thorp (author of Beat the Dealer) and how they used the Kelly formula in both gambling and investing. The Kelly formula, as the book explains, is a formula for determining the optimum amount to bet on a wager (or investment) if you know the edge you have over the house.
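For the Kelly formula the book describes, here's a minimal sketch of the standard result (f* = (b·p − q)/b for a bet paying b-to-1 with win probability p; the function name is my own):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Optimal fraction of bankroll to wager on a bet that pays
    b-to-1 with win probability p (Kelly criterion):
        f* = (b*p - q) / b,  where q = 1 - p.
    A non-positive result means the bet has no edge: bet nothing.
    """
    q = 1.0 - p
    f = (b * p - q) / b
    return max(f, 0.0)

# Even-money bet (b=1) that wins 60% of the time:
# f* = (0.6 - 0.4) / 1 = 0.2, i.e. bet 20% of bankroll
print(kelly_fraction(0.6, 1.0))  # ≈ 0.2
```

Note the fraction scales with your edge: at p = 0.5 on even money there is no edge and the formula says to bet nothing.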
Osiris30 2 days ago 0 replies      
On topic article on economist Paul Romer's view on the current state of macro: http://www.economist.com/news/finance-and-economics/21707529...
mempko 2 days ago 0 replies      
The best book you can read first is "Debt: The First 5,000 Years" by David Graeber. He is an anthropologist, and the book outlines many economic topics, giving you historical context.

Then go and read all the standard literature and you will be surprised how terrible and unscientific it all is.

crdoconnor 2 days ago 0 replies      
Traders, Guns and Money - Satyajit Das

Debunking Economics - Steve Keen

The Volatility Machine - Michael Pettis

D_Alex 2 days ago 0 replies      
"Where are the customers' yachts?" by Fred Schwed is a must read for anyone interested in investing in stocks.

An absolute must.

mkempe 2 days ago 0 replies      
For a thorough understanding of free markets and the laws of economics: Capitalism: A Treatise on Economics by George Reisman. Economic Sophisms by Frédéric Bastiat. Socialism by Ludwig von Mises.
dash2 2 days ago 0 replies      
Academic economist(ish) speaking here. Be aware of the distinction between books of economics (the discipline) and books about economics by non-economists. Both can be great - I loved The Big Short.

Strongly recommend Keynes, and surprised nobody has mentioned Minsky or Kindleberger - outsiders now receiving recognition.

eth0up 2 days ago 2 replies      
Road to Serfdom, by F.A. Hayek?
yodsanklai 2 days ago 1 reply      
As far as economics is concerned, I recommend Mankiw's Principles of Economics. It's widely used as a textbook for economics undergraduates. It's very well written and entertaining. In my opinion, it's better than popularizations aimed at a general audience.
fitchjo 2 days ago 1 reply      
It is not a book, but Matt Levine is the way I get my daily finance news. He is fantastic.
dxbydt 2 days ago 0 replies      
David Ruppert's "Statistics and Finance" is the classic you are looking for. It is a standard textbook in most finance curriculums in the US. Roughly 50% of the book is plain statistics as applicable to finance. The rest is finance with a statistical flavor.
fitzwatermellow 3 days ago 1 reply      
Alternatively, don't learn from books, but the markets themselves. Open a Paper Trading account via Think or Swim. Begin a steady diet of Bloomberg / WSJ / CNBC every day. Whenever a word or idea is mentioned that you don't understand, Google it or consult Investopedia. Figure out what the Fed actually does. How debt and credit markets work. The microstructure of physical and electronic commodities trading. Maybe skim an online "Stochastic Calculus" class. Join Quantopian and master every algorithmic strategy known to humankind. Dive deep into cryptocurrency and blockchain technologies.

And who knows, perhaps one day you'll invent something that obviates the need for a global system of monetary trust ;)

cubey17 2 days ago 0 replies      
Can't recommend this book highly enough: A Concise Guide to Macroeconomics, Second Edition: What Managers, Executives, and Students Need to Know http://a.co/jicSNc9
Dowwie 2 days ago 2 replies      
Why finance? Finance was the major viable option for people with your background but that's not the case anymore. Every industry needs people with your background -- more than ever!

However, if you're hell-bent on going into finance and you're going to read one economics book, read Thomas Piketty's "Capital in the Twenty-First Century".

These are the classic economics tomes that you'd read over a lifetime:

Karl Marx, "Capital"

Adam Smith, "The Theory of Moral Sentiments"

Adam Smith, "The Wealth of Nations"

FA Hayek, "The Road to Serfdom"

justifier 2 days ago 0 replies      
at the moment i'm uninterested in the arc of economic studies in academia, so the opencourseware reading lists seem the wrong place to start for me

can anyone suggest reading to understand how contemporary banks function, where can i get an understanding of a bank or credit union from a software engineer's perspective: dependencies, steps to start, challenges of running, protections from common problems, interesting emerging disruptions;

fiatjaf 3 days ago 1 reply      
I have news for you: statistics is totally unrelated to economics. Better rethink everything you think you know.

For people wanting recommendations, I suggest Carl Menger's Principles of Economics, which saved me from stupidification in college.

jaynos 2 days ago 0 replies      
Derivatives Markets by Robert McDonald is a great textbook. I would not suggest reading it cover to cover, but it's a great reference for truly understanding bonds, options, etc.

I'd also recommend anything by Matt Taibbi, but only if reading about the shadiness of Wall Street interests you. His books are well written and fact checked, but definitely have a bias that you may not care for.

hellogoodbyeeee 2 days ago 0 replies      
There are some really good suggestions here about economics and finance in general. I think having a solid understanding of the financial crisis is valuable in today's world. I recommend "All the Devils Are Here" by Bethany McLean. It offers a well-rounded, facts-first approach to explaining the crisis. It does not point fingers or assess blame, which is a valuable perspective.
xapata 2 days ago 0 replies      
"The Great Transformation" by Karl Polanyi. It's a tough read, because it was translated from Hungarian. It's an important read, because it provides an alternative analysis to both Smith and Marx. Polanyi was informed by recent developments in anthropology which contradicted the major theories of how modern economies had formed.
known 2 days ago 0 replies      
Financial Intelligence for Entrepreneurs Karen Berman & Joe Knight

Simple Numbers, Straight Talk, Big Profits Greg Crabtree

The 1% Windfall Rafi Mohammed

Accounting Made Simple Mike Piper

How to Read a Financial Report John A. Tracy

Venture Deals Brad Feld & Jason Mendelson

And http://www.bloomberg.com/news/features/2016-05-30/the-untold...

danvesma 2 days ago 0 replies      
A scholarly work on how ethics and CSR can be a positive influence on the economic model:


crispytx 2 days ago 1 reply      
The Intelligent Investor by Benjamin Graham

One Up on Wall Street by Peter Lynch

kresimirus 2 days ago 0 replies      
How an Economy Grows and Why It Crashes by Peter Schiff: https://www.amazon.com/How-Economy-Grows-Why-Crashes/dp/0470...

Very short read - economics basics from a libertarian view.

aminorex 2 days ago 0 replies      
Kuznetsov,A. - The Complete Guide to Capital Markets for the Quantitative Professional (2006)
chromaton 2 days ago 0 replies      
Understanding Wall Street by Jeffrey Little gives a good overview of many kinds of financial instruments, including stocks, bonds, and options. It's NOT an investment flavor-of-the-week book and is now on its 5th edition, the first having come out over 30 years ago.
robojamison 2 days ago 0 replies      
Whatever Happened to Penny Candy [1] is a fascinating book and a short read.

[1]: https://www.amazon.com/Whatever-Happened-Explanation-Economi...

JSeymourATL 3 days ago 0 replies      
> from a philosophical point of view; positive vs normative economics...

Thought-provoking on a variety of levels - Seeking Wisdom: From Darwin to Munger by Peter Bevelin > http://www.goodreads.com/book/show/1995421.Seeking_Wisdom

joshuathomas096 2 days ago 0 replies      
Thinking, Fast and Slow by Daniel Kahneman. He won a Nobel Prize in Economics in 2002 for his work in Behavioral Economics. I truly believe understanding human behavior and decision making is a key foundation to anything else you read in Economics.

This book changed my life, I highly recommend it.

p4wnc6 3 days ago 0 replies      
In addition to the many fine recommendations already on the thread, I enjoyed Winner's Curse and Irrational Exuberance.
frankyo 1 day ago 0 replies      
Economics in One Lesson by Henry Hazlitt changed my life. If you want to read only one book, read this one. It's short, easy to understand, and ruthlessly logical.
kgwgk 2 days ago 0 replies      
I was going to recommend Malkiel's book but of course it has been already mentioned several times. So I'll add to the list Zweig's "The Devil's Financial Dictionary" (funny but also educational) and Sharpe's "Investors and Markets" (more academic).
SRasch 2 days ago 0 replies      
Capitalism and freedom by Milton Friedman
kingmanaz 2 days ago 0 replies      
"Fail Safe Investing" by Harry Browne and "The Intelligent Investor" by Benjamin Graham.

Discussion of the former here:


sonabinu 2 days ago 0 replies      
Adam Smith - Wealth of Nations

Keynes - The General Theory of Employment, Interest and Money

Ben Bernanke - Essays on the Great Depression

Robert Shiller - Irrational Exuberance

Levitt and Dubner - Freakonomics: A Rogue Economist Explores the Hidden Side of Everything

Daniel Kahneman - Thinking, fast and slow

karanbhangui 2 days ago 0 replies      
Haven't seen this one posted yet: http://www.mcafee.cc/Introecon/

I prefer the 2007 version, it's more mathy.

waleedsaud 2 days ago 0 replies      
Economics in One Lesson by Henry Hazlitt: https://mises.org/library/economics-one-lesson
tiatia 2 days ago 1 reply      
Niederhoffer did his PhD in statistics. He is nuts, but he basically invented quantitative trading. Maybe read his book "The Education of a Speculator" and the New Yorker article about him ("The Blow-Up Artist").
bronlund 2 days ago 0 replies      
mitchelldeacon9 3 days ago 3 replies      
Here is my short list of favorite books on finance and economics:


Bruck, Connie (1988) Predator's Ball: Inside Story of Drexel Burnham and Rise of Junk Bond Raiders

Draper, William (2011) Startup Game

Graham, Benjamin and Jason Zweig (2006) Intelligent Investor, revised ed.

_________ and David Dodd (2008) Security Analysis, 6E

Greenblatt, Joel (1999) You Can Be a Stock Market Genius

Greenwald, Kahn, Sonkin, Biema (2001) Value Investing: From Graham to Buffett and Beyond

Henwood, Doug (1997) Wall Street: How It Works and for Whom

Levitt, Arthur (2003) Take on the Street: How to Fight for Your Financial Future

Lewis, Michael (1989) Liar's Poker: Rising Through the Wreckage on Wall Street

_________ (2010) Big Short: Inside the Doomsday Machine


Ayres, Ian (2007) Super Crunchers: Why Thinking by Numbers is the New Way to Be Smart

Bernstein, Peter (1996) Against the Gods: Remarkable Story of Risk

Kahneman, Daniel (2011) Thinking, Fast and Slow

Silver, Nate (2012) Signal and the Noise: Why So Many Predictions Fail, but Some Don't

Taleb, Nassim Nicholas (2005) Fooled by Randomness, 2E

_________ (2010) Black Swan: Impact of the Highly Improbable, 2E


Christensen, Clayton (1997) Innovator's Dilemma

Stone, Brad (2013) Everything Store: Jeff Bezos and the Age of Amazon

Wallace, James and Jim Erickson (1992) Hard Drive: Bill Gates and Making of the Microsoft Empire

Walton, Sam with John Huey (1992) Sam Walton: Made in America

Wilson, Mike (1996) Difference between God and Larry Ellison: Inside Oracle Corp


Arrighi, Giovanni (1994) Long Twentieth Century

Braudel, Fernand (1979) Civilization & Capitalism 15th-18th Century, vol. 3: Perspective of the World, trans. Siân Reynolds

Brechin, Gray (2006) Imperial San Francisco: Urban Power, Earthly Ruin

Heilbroner, Robert (1999) Worldly Philosophers: Lives, Times & Ideas of Great Economic Thinkers, 7E

Marx, Karl (1867) Capital, vol. 1

Stiglitz, Joseph (2003) Roaring Nineties: A New History of the World's Most Prosperous Decade

_________ (2010) Freefall: America, Free Markets and the Sinking of the World Economy

Vallianatos, E.G. (2014) Poison Spring: Secret History of Pollution and EPA

Vilar, Pierre (1976) A History of Gold and Money: 1450-1920

Yergin, Daniel (1992) Prize: Epic Quest for Money, Oil and Power

Would enjoy email correspondence with anyone interested in these subjects: mitchelldeacon9@gmail.com

All the best

tezza 2 days ago 0 replies      
When I entered Financial Services in London I was recommended this book as the bible:

"How to Read the Financial Pages"

This book really breaks down the finance industry from a component and historical point of view: stocks, dividends, bonds, T-bills, Eurobonds...

bronlund 2 days ago 0 replies      
saganus 2 days ago 0 replies      
Not sure how popular this take on finance is, here in HN (really I have no idea), but I found these two very interesting.

"The New Depression: The Breakdown of the Paper Money Economy"

and "The Dollar Crisis: Causes, Consequences, Cures" both by Richard Duncan

rubyn00bie 2 days ago 0 replies      
Preface: For a bit of I suppose... uhh, qualification, I took nearly every single upper division Economics class my university offered (~25). I did so because I LOVE Econ. Also, sorry for the rambling nature of this.

First things first, finance is only sort of economics; it's really just finance. I'd highly recommend taking an accounting class (or reading an accounting book) and grabbing an intro finance book. Accounting will really help with jargon and some really basic things (like balance sheets). Also, "Security Analysis" [0] is the "only" book you'll ever need; Warren Buffett recommended it to Bill Gates, and now Bill Gates recommends it to everyone.

Back to Economics... There are two primary "groups" of thought... sort of like twins separated at birth who grow to hate each other.

----------------------------------
The First: Neoclassical Economics
----------------------------------

Focuses primarily on microeconomics and is largely mathematical. Its birth is largely due to economists wanting to make econ a "true science" like the physical sciences (biology, chemistry, physics). It starts around the late 1800s and really picks up steam around the time of Einstein. Math was hot and being applied everywhere.

A really interesting period to research and study is right after Black Tuesday (and before the Great Depression) and what the central bank didn't do (before central bank intervention in markets). While I really detest the bastard, Milton Friedman's work on monetary policy is pretty solid science and generally good here. [1],[2].

I'm a Keynesian (I suppose-- econ gets deep fast), and so you'd be nowhere without reading some of what Keynes did to get our asses out of the Great Depression (i.e. government spending). It's also more or less the birth of macroeconomics... You'll know you're good when you laugh at forgetting: Y = C + I + G + (X - M). Some good things to get started with are the IS-LM [3] model and the AS-AD [4] model.
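
The identity quoted above is just expenditure-side accounting, so a reader can sanity-check it directly. A minimal sketch with made-up figures (these are invented for illustration, not real national accounts):

```python
# National income accounting identity: Y = C + I + G + (X - M)
def gdp(consumption, investment, government, exports, imports):
    """Expenditure-side GDP: domestic spending plus net exports."""
    return consumption + investment + government + (exports - imports)

# Illustrative (invented) figures, in billions:
y = gdp(consumption=700, investment=200, government=250, exports=300, imports=350)
print(y)  # 1100 -- a trade deficit (M > X) subtracts from measured output
```

Note the sign convention: imports enter negatively only because C, I, and G already include spending on imported goods.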

That gets you into the 60s - 70s. Tall Paul Volcker is the unsung hero of the 80s; read about him (he ran the Federal Reserve). After that, microeconomics starts to fragment into things involving game theory and behavioral economics (Daniel Kahneman is the man).

Econometric analysis mathematically speaking is just multivariate regression analysis for time series or cross-sectional data. More "modern" analysis is probably using panel data [5] (combination of cross sectional and time series). Calculus, linear algebra, and differential equations should prepare one plenty for everything but panel data analysis. The real "econ" part is applying solid econ theory to the mathematics you're using, a textbook will help [6]. For finance this is your bread and butter.
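
The "multivariate regression" described above can be sketched in a few lines with NumPy's least-squares solver. The data here is synthetic and the coefficient values are invented; a real econometrics workflow would use a dedicated package with standard errors and diagnostics:

```python
import numpy as np

# Synthetic data: y depends linearly on two regressors plus small noise.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.5 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.1, size=n)

# OLS: design matrix with an intercept column, solved by least squares.
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # approximately [1.5, 2.0, -0.5]
```

With enough observations the estimated coefficients recover the true ones, which is the basic promise (and limitation) of regression on well-behaved data.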

Game theory will apply a lot of different mathematical tools. You will need to love pure math. To really get into it requires pain or love. I like a healthy amount of both.

----------------------------------
The Second: Heterodox Economics
----------------------------------

So as it turns out, neoclassical economics is at most half of economics. Heterodox economics is really where the "philosophy" comes into play. You're gonna need a quick history lesson to sort of see its subject matter. Economics really didn't exist before... the 1500s. You can try to apply economics to earlier times, but you could also just make shit up and post it to Twitter. Both would be equally likely to contain truth.

Economics came into existence around the time the Dutch began developing trade routes (1550s). A byproduct of all this trade is tons of cash and goods-- currency (silver, metals, whatever) starts to actually be used in society (before that it was mostly just a status symbol). It pisses off a lot of _institutions_, most of all "the church" and monarchies, because money is allowing people to gain power. It's usurping power from them. This is the rise of the "merchant class," and now, thanks to money (trade really, but whatever, it's complicated), people are liberating themselves from the social status they're born into. Eventually modern republics appear, and governments form. Nations trading globally becomes more common (Dutch, English, Spanish) and we get to Adam Smith, David Ricardo [7], et al.

Now it's the 1800s. People are seeing the birth and growth of capitalism, industry, and corporations, and the tumultuous death of agrarian life. The way the "common person" lives their day is dramatically changing; for a few it was better, for most it was worse. Some economists begin to ask why we are replacing these now-defunct _institutions_ with equally shitty, or possibly shittier, ones. This more or less becomes the birth of heterodox economics, which largely studies the more abstract ideas like "institutions"; by its very nature the content tends to be philosophical.

By the 1920s heterodox economics is falling by the wayside. The content is less able to be tested like a physical science (i.e. no math/stats), so it's treated like a misbegotten child... By the 1950s heterodox content was marginal at best-- the Cold War and fear of communism made (makes) people insane. Economists pretty much had to be pro-capitalism or face being called "commies" and thrown in jail, or worse, being named in a witch hunt. This was more or less the nail in the coffin of mainstream heterodox economics (at least for research in the Occident). After the Cold War ended the nail got pulled out, but I wouldn't say it's really outta the coffin yet.

This book [8] isn't great but it's quickly digestible and will point you in the appropriate directions.


Some Rambling to Finish

I'd highly recommend not just learning how to use the tools, but why we have them and where they came from. Economics is vastly deeper than the average person will ever know. That depth is greatly empowering and guiding when using its lenses to see and solve problems. One last thing, know there's no going back, you will see the world differently.

[0] https://www.amazon.com/Security-Analysis-Foreword-Buffett-Ed...

[1] "The Role of Monetary Policy." American Economic Review, Vol. 58, No. 1 (Mar., 1968), pp. 1-17. JSTOR. Presidential address to the American Economics Association.

[2] "Inflation and Unemployment: Nobel Lecture", 1977, Journal of Political Economy, Vol. 85, pp. 451-72. JSTOR.

[3] https://en.wikipedia.org/wiki/IS%E2%80%93LM_model

[4] https://en.wikipedia.org/wiki/AD%E2%80%93AS_model

[5] The course I took on panel data, http://web.pdx.edu/%7Ecrkl/ec510/ec510-PD.htm

[6] https://www.amazon.com/Using-Econometrics-Practical-Addison-...

[7] He more or less invented trade theory (competitive advantage) https://en.wikipedia.org/wiki/David_Ricardo

[8] https://www.amazon.com/Age-Economist-9th-Daniel-Fusfeld/dp/0...

Edit: for formatting.

hendzen 2 days ago 0 replies      
If you want to learn about quantitative trading:

1) Active Portfolio Management: A Quantitative Approach for Producing Superior Returns and Controlling Risk

2) Quantitative Equity Portfolio Management: Modern Techniques and Applications

dilemma 2 days ago 0 replies      
The Ownership of Enterprise talks about different types of organizational forms (corporations, cooperatives, etc.) and how the form affects its function, and vice versa.
bgilroy26 2 days ago 0 replies      
Capital Ideas: The Improbable Rise of Modern Finance takes a historical approach to the development of finance.

It was striking to me how recent many developments are!

TheSpiceIsLife 2 days ago 0 replies      
Bourgeois Dignity: Why Economics Can't Explain the Modern World by Deirdre McCloskey, and presumably the other books in the series[1]

1. Bourgeois Dignity: Why Economics Can't Explain the Modern World

kejaed 3 days ago 1 reply      
I'm curious, what exactly is an engineering degree in statistics?
fatdog 3 days ago 0 replies      
Mark Joshi's "The Concepts and Practice of Mathematical Finance" came recommended to me by some people in the field as a foundation. I found it quite readable.
anacleto 2 days ago 0 replies      
+1 Mostly Harmless Econometrics - Angrist and Pischke
damptowel 2 days ago 0 replies      
Debunking Economics by Steve Keen. Though be warned, you might not quite appreciate economic textbooks afterwards.
itscharlieb 2 days ago 0 replies      
The Alchemists: Three Central Bankers and a World on Fire - Neil Irwin

A great account of central banking in general and of central banking policy during the 08-09 crisis in particular!

damptowel 2 days ago 0 replies      
Debunking Economics by Steve Keen, though be warned, you might not quite enjoy economic textbooks afterwards.
brudgers 3 days ago 0 replies      

Wealth of Nations

zallarak 3 days ago 0 replies      
Books about economics- read Keynes and Friedman.

Economic history - Lords of Finance, Too Big to Fail.

Most quant finance books are low quality and I'd suggest avoiding them.

ob 2 days ago 1 reply      
I still think one of the best textbooks on economics is Paul Samuelson's Economics. Assuming you mean macro-economics that is.
kesor 2 days ago 0 replies      
Eliyahu Goldratt books, especially ones that include his explanation about Throughput Accounting.
JustUhThought 2 days ago 0 replies      
I suggest building reading lists based on a list of Nobel Prize winners for the subject.
trader 2 days ago 0 replies      
Read 10-Ks and 10-Qs and build operating models in excel.
haney 2 days ago 0 replies      
I'd highly recommend The Intelligent Investor.
dmfdmf 2 days ago 0 replies      
One of the best articles on Economics is Ayn Rand's "Egalitarianism and Inflation" in her anthology "Philosophy: Who Needs it".
colinmegill 2 days ago 0 replies      
Econned is essential reading
kerrynusticeNkJ 2 days ago 0 replies      
gates on musical notes
How to build a robot that sees with $100 and TensorFlow oreilly.com
362 points by nogaleviner  3 days ago   61 comments top 10
bernardopires 3 days ago 3 replies      
Just a nit, but the author keeps talking about object recognition while what he was actually doing is image classification. Object recognition actually consists of two tasks: one is classifying the object (this is a beer bottle) and the other is saying where in the image the object is. Additionally, it can/should detect multiple objects in the image. This is more complex than classification, which only associates one category with the image.
rbanffy 3 days ago 3 replies      
> recognizing arbitrary objects within a larger image has been the Holy Grail of artificial intelligence

The Holy Grail is general AI. Recognizing objects is a side quest, perhaps a required step, but, by no means, the end goal.

icemelt8 3 days ago 1 reply      
This was amazing, I am amazed at your command of both hardware and software technology. Even as a Software Engineer, I have a hard time trying to make TensorFlow do something for me.
urvader 3 days ago 1 reply      
I would like to know how long it "thinks" - it is clear the camera is paused for a while while the robot parses the image.
salex89 3 days ago 3 replies      
My biggest current question is: which keyboard is that, in the image in the article?!


visarga 3 days ago 1 reply      
Great project. Locomotion and vision are pretty advanced compared to grasping and complex handling of objects. If we had a workable arm, it would open up much more interesting applications.
dharma1 3 days ago 1 reply      
Did the author publish a repo for this? It's easy getting TensorFlow going for basic image classification, but the hard part is actually making the robot move in a way that makes sense - using the camera and the sonar data to make decisions and then drive the motors. Or is this not autonomous?
criddell 3 days ago 1 reply      
This reminds me of a low res vision system I read about 20 years ago:

I've always been kind of intrigued by what is possible with very simple hardware.

nojvek 2 days ago 0 replies      
Oh my god. You are trying to build the exact thing I am trying to build. Albeit you've made much more progress.

I'm still soldering wires into the motors. You should take off the paper from acrylic. The transparent effect makes it look awesome.

My goal is to make a raspberry pi bot that plays indoor fetch. I would love to have a chat with you.

forgotAgain 3 days ago 0 replies      
Sorry for the off topic but is anyone else getting very high cpu usage from O'Reilly websites? Any known resolution or work around?

With Chrome developer tools I see one error:"Uncaught SecurityError: Failed to read the 'localStorage' property from 'Window': Access is denied for this document."

Original bulletin board thread in which :-) was proposed cmu.edu
348 points by ZeljkoS  2 days ago   145 comments top 31
kelvich 2 days ago 1 reply      
Nabokov's interview with The New York Times [1969]:

-- How do you rank yourself among writers (living) and of the immediate past?

-- I often think there should exist a special typographical sign for a smile -- some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question.

jgw 2 days ago 5 replies      
It makes me a bit of a luddite (and a heck of a curmudgeon), but it always makes me a little sad when good ol' ASCII smileys are rendered all fancy-like. There's something charming and hackerish about showing it as a 7-bit glyph.

I think the Internet fundamentally changed when that happened.

Tangentially-related, I can't fathom why someone would post YouTube videos of `telnet towel.blinkenlights.nl`.

benbreen 2 days ago 1 reply      
Apropos is this debate about whether an intentional :) shows up in a 1648 poem:


Here's the verse:

Tumble me down, and I will sit

Upon my ruines (smiling yet :)

I think that the article does a fairly convincing job of showing that this is just weird 17th century typography, but then again, there was enough experimentation with printing at the time that it also wouldn't surprise me if it was intentional, at least at some point in the typesetting process.

artbikes 2 days ago 1 reply      
Like most of the cultural inventions of virtual communities there was prior art on PLATO.


kjhughes 2 days ago 2 replies      
I vividly remember having the following conversation with a fellow CMU undergrad around this time:

Me: What's with all the :-) in the posts?

Friend: It indicates joking.

Me: Why?

Friend: What's it look like?

Me: A pinball plunger.

Friend: Rotate 90 degrees.

Me: Ohhhhhh.


ZeljkoS 2 days ago 6 replies      
Interesting thing to note is that before Fahlman suggested the ":-)" symbol, Leonard Hamey suggested "{#}" (see the 17-Sep-82 17:42 post). After that, someone suggested "\__/" (see the 20-Sep-82 17:56 post). But only ":-)" gained popularity.

It is funny to imagine how emoticons (https://en.wikipedia.org/wiki/List_of_emoticons) would look today if one of the alternative symbols had been accepted.

milesf 2 days ago 0 replies      
Ah bulletin boards :)

For years I have been searching for a copy of Blue Board (https://en.wikipedia.org/wiki/Blue_Board_(software)), a popular BBS program in the Vancouver, BC, Canada area written by the late Martin Sikes http://www.penmachine.com/martinsikes/

I even talked with the owner of Sota Software, the publisher, but I never heard anything back.

If anyone has a copy, PLEASE let me know! I've been wanting to setup a memorial telnet Blue Board site for decades now.

hvass 2 days ago 0 replies      
This is gold:

"Since Scott's original proposal, many further symbols have been proposed here:

(:-) for messages dealing with bicycle helmets

@= for messages dealing with nuclear war"

minivan 2 days ago 6 replies      
"o>-<|= for messages of interest to women"

I'm glad we are past that.

xyzzy4 2 days ago 2 replies      
I'm sure :-) has been independently invented a million times.
p333347 2 days ago 1 reply      
I see one Guy Steele in that thread. Is he the Guy Steele? Glancing wikipedia suggests he was asst prof at CMU around that time. Just curious.
emmet 2 days ago 1 reply      
| I have a picture of ET holding a chainsaw in .press file format. The file exists in /usr/wah/public/etchainsaw.press on the IUS.


wmccullough 2 days ago 0 replies      
I love how different the conversations were on the internet then.

Nowadays, if a thread came about to propose the ':-)', people would devolve into a debate about the proper use of the parenthesis, and at least one user would claim that '(-:' was a better choice, though it is the dark-horse option for the community.

chiph 2 days ago 0 replies      
Interesting that there are both left-handed and right-handed smileys in the thread. :-) (-:
yitchelle 2 days ago 2 replies      
Interestingly, before I read this post and the comments, I had always thought that :-) means a smiling face, i.e., to convey a sense of a smile after writing a message - not an "I am joking" message.

Well, I learned something today.

soneca 2 days ago 1 reply      
And the proposal to have a separate channel for jokes is as old as the smiley. There is always that guy.

Has anyone thought about creating a separate HN for jokes?

danvoell 2 days ago 1 reply      
I wonder at what point the nose was removed :)
backtoyoujim 2 days ago 0 replies      
I wonder how many times the initial turn head, grok, smile -- mirroring back to the pareidolia itself, has happened.
Imagenuity 2 days ago 0 replies      
Monday Sept 19th would've been the 34th "smilaversary".
dugluak 2 days ago 1 reply      
love birds

 (@>   <@)
 ( _)   (_ )
  /\     /\

_audakel 2 days ago 0 replies      
"Read it sideways." Hahaha, love this!
f_allwein 2 days ago 1 reply      
19-Sep-82 11:44, Scott E Fahlman invents the ':-)'.

Nice. :-)

hammock 2 days ago 1 reply      
Reading these BBS threads always makes me think how much nerdier computer people were back then than they are now. Or am I off base?
pcunite 2 days ago 0 replies      

I see you

david-given 2 days ago 0 replies      
I... now find myself morbidly curious as to whether you could use Unicode diacritic abuse to draw actual pictures.

Pasted in example stolen from Glitchr, mainly to see how well HN renders them:

- ...

anjc 2 days ago 0 replies      
Wow that's interesting


equivocates 2 days ago 0 replies      
guessmyname 2 days ago 0 replies      
Here is a list of popular emoticons: https://textfac.es/
chalana 2 days ago 1 reply      
Usenet archives are also a treasure trove for this kind of thing. Searching old posts on Usenet feels like modern-day archaeology.
artursapek 2 days ago 0 replies      
This is creepy. I just opened a PR on GitHub and set the description to ":-)". Then I opened HN and saw this.
How Norway spends its $882B global fund economist.com
281 points by punnerud  4 days ago   157 comments top 11
kristofferR 4 days ago 5 replies      
"It is run frugally and transparently" is a dubious claim, at least according to claims made on NRK's Folkeopplysningen (a show like Penn and Teller: Bullshit, just better).

The fund spends a lot on being actively managed, one manager received ~$60 million in bonuses in 2010. However, they won't reply when people ask if bonuses are actually financially beneficial.

https://tv.nrk.no/serie/folkeopplysningen/KMTE50009215/seson... @ 28:30

cs702 4 days ago 2 replies      
A little over a decade ago, when Norway's fund was called "the Petroleum Fund" and had "only" $147B, an article in Slate magazine explained what was special about it:

"Norway has pursued a classically Scandinavian solution. It has viewed oil revenues as a temporary, collectively owned windfall that, instead of spurring consumption today, can be used to insulate the country from the storms of the global economy and provide a thick, goose-down cushion for the distant day when the oil wells run dry."[1]

Since then, the fund has grown six-fold.

[1] http://www.slate.com/articles/business/moneybox/2004/10/avoi...

atheg33 3 days ago 1 reply      
As a Canadian I feel so cheated learning about Norway's Oil Fund.

Our government has hardly saved a dime of our oil income.

We have been taking a small cut of the hundreds of thousands of barrels of oil we have been producing daily for the past 100+ years and spending it as fast as we possibly can.

>Most of the oil companies exploring for oil in Alberta were of U.S. origin, and at its peak in 1973, over 78 per cent of Canadian oil and gas production was under foreign ownership and over 90 per cent of oil and gas production companies were under foreign control, mostly American. [0]

[0] https://en.wikipedia.org/wiki/Petroleum_production_in_Canada...

harryh 4 days ago 3 replies      
882 B / 5.2 Million ~= $170k for every citizen of Norway.

At 4% a year that's $6,800 each in annual income. Not bad!
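
The parent's arithmetic checks out; a quick sketch using the rounded figures from the comment (real fund values and population fluctuate):

```python
fund = 882e9        # fund size in USD, as in the headline
population = 5.2e6  # approximate population of Norway

per_capita = fund / population     # roughly $170k per citizen
annual_income = 0.04 * per_capita  # at a 4% withdrawal rate

print(round(per_capita))     # 169615
print(round(annual_income))  # 6785
```

The 4% figure is the conventional sustainable-withdrawal assumption, not a property of the fund itself.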

netcan 4 days ago 4 replies      
Norway's oil money story is one of the weirdest. Are there any examples in history where a country has saved up such a big stash? Are they planning to retire young, as a nation?
Lythimus 4 days ago 1 reply      
Is there an index or ETF which follows this pension fund's investments?
terda12 3 days ago 7 replies      
Visiting Norway, I always thought it was kind of a weird country. On one hand, it's one of the richest countries in the world. On the other hand, I've seen so many young Norwegian women working hard cleaning toilets and hotel rooms. Such jobs would be considered "low rung" in the US, but in Norway they treat their low-rung jobs as something to be proud of.
shardinator 3 days ago 0 replies      
Related to a lot of discussion comments, I highly recommend "A Random Walk Down Wallstreet" by Burton Malkiel. https://www.amazon.com/Random-Walk-Down-Wall-Street/dp/03933...
rer 3 days ago 2 replies      
In the "Top of the World" graph in the article, there's a dip for Saudi Arabia. Does anyone know why?
rogaha 3 days ago 0 replies      
I see lots of comments talking about the return on investment (~4% YoY) and the ~$60M in bonuses, etc. But I don't see anyone questioning why there is so much money invested in other companies outside Norway.

I'm curious to know:

1) Why do we have a savings fund double the annual GDP? Should we have a limit? Why is the excess not invested locally?

2) Is there an existing plan to define when the money will be directed into the Norwegian economy? The current GDP per capita is around $68K, which doesn't seem that much compared to the amount of money in the country's savings account. Why not invest in education and/or technology?

3) Why are a few people earning so much money (e.g. ~$60M bonus) to manage the country's assets? Is the real purpose to make money or to save it for future generations?

Two years spent spamming spammers back medium.com
392 points by beweinreich  12 hours ago   103 comments top 40
kalleboo 11 hours ago 3 replies      
This is a great idea. Waste the spammer's time and it's no longer worth it.

The phone version of this is Lenny[0], a set of audio files/Asterisk script which pretends to be a senile, doddering old man (who has a duck problem). There's a reddit user who runs a number you can forward your sales calls to, and he'll pick out the best ones and put on YouTube[1]. The record is keeping a caller on the phone for 56 minutes.

[0] https://www.reddit.com/r/itslenny/

[1] https://www.youtube.com/playlist?list=PLduL71_GKzHHk4hLga0nO... (edit: if you sort the user's videos by most popular, the top one is something quite amazing)

grecy 11 hours ago 8 replies      
> Imagine if this type of thing happened in real-life. You walk out the door in the morning and youre immediately attacked by Parul, Kevin, and Amelie.

I laughed out loud at this, because it's exactly what I'm experiencing now in West Africa.

Street vendors are aggressive about selling whatever they have, and they seem to assume I want it - almost like I owe it to them to buy it - I'm not sure if it's because I'm White, or it's just their standard procedure for everyone that walks by.

On my 3 minute walk to the local store, I get a minimum of 10 people in my face, trying to sell me cell phone recharge cards, peanuts and limes. Every single day I say no thanks, every single day they try again, sometimes even on the walk back.

I've tried ignoring them or not responding at all, and that usually makes it worse - they'll yell louder and louder (assuming I have not heard), hiss, make a kissing noise, and eventually put themselves in my way so I'm forced to acknowledge them.

Amazingly, even when I do buy something, and I clearly have it in my hand (a bunch of carrots for example), every single street vendor selling carrots will still try with 100% effort to sell me carrots.

titomc 6 hours ago 1 reply      
One spammer realised he was talking to a bot and asked it about the Three Laws.


wanderr 10 hours ago 0 replies      
Back in the olden days, when the ping of death causing a Windows BSOD was a thing, if I was online when I got spam I would immediately look up the spammer's IP and send them a ping of death. I could tell it often worked because I'd get the same spam again 10 minutes later, so I'd do it to them again, then I'd get spammed again and ping them again until eventually they gave up.

I assume their mass mailing program would just start at the top of an email list and send them one by one, without tracking progress, so when the computer crashed they would have to start over. After a few crashes in a row hopefully the spammer would blame the spam sending program for crashing the computer and give up, maybe even demand a refund from whoever sold it to them.

mmwako 39 minutes ago 0 replies      
I was just wondering: if every person did this with the spam they get (or maybe Gmail automated it), spammers would be overwhelmed with bot answers to their spam emails and would not be able to differentiate between a potential victim's response and all the bot replies. This has the potential to actually SOLVE the problem of spam. Think this could work?
chrissnell 5 hours ago 0 replies      
This reminds me of a script I wrote about a decade ago to deal with phishing sites. My script generated first and last names, email addresses, passwords, and credit card numbers that actually passed checksum validation. It would submit these fake entries to a phishing form just as fast as the remote end would take them, polluting their database/inbox/whatever with thousands of bogus submissions. Besides wasting their time and resources, it also smokescreened any legitimate submissions that might have come through.
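A card number "passing checksum validation" almost certainly means the Luhn mod-10 check (the comment doesn't name the scheme, so that's an assumption). A minimal sketch of generating numbers that validate:

```python
import random

def luhn_checksum(digits):
    """Luhn mod-10 checksum; `digits` is most-significant digit first."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # same as summing the two digits of d
        total += d
    return total % 10           # 0 means the number validates

def fake_card_number(prefix="4", length=16):
    """Random number with the given prefix whose Luhn check digit is valid."""
    body = [int(c) for c in prefix]
    body += [random.randint(0, 9) for _ in range(length - len(body) - 1)]
    # pick the final digit so the whole number sums to 0 mod 10
    check = (10 - luhn_checksum(body + [0])) % 10
    return "".join(map(str, body + [check]))

print(fake_card_number())  # a 16-digit number starting with 4 that validates
```

Flooding a form with such numbers (plus plausible names and addresses) is what makes fake entries indistinguishable from real ones at submission time.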
MaxLeiter 7 hours ago 0 replies      
If you find this funny, I highly recommend this TED talk: "This is what happens when you reply to a spam email"


koytch 10 hours ago 0 replies      
Effing hilarious. Some years ago I spent a few days writing to a 'Russian bride'. It became instantly clear that all replies were scripted; there was no connection at all with what I said (The full text of 'I, Robot'? Oh, what interesting things you do). So I'd say many if not most of the spam scenarios are automated, and the whole thing becomes too meta.
Exuma 12 hours ago 1 reply      
This is great, and it would be even more brilliant if it could integrate some sort of Markov chain, like https://www.reddit.com/r/subredditsimulator

I'd love to see it have random answers that are unique based on the question. Then you make it a global service that hundreds of thousands of people can forward messages to, and then you waste spammers time en masse.
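A word-level Markov chain in the subreddit-simulator style is only a few lines; here is a minimal sketch (the training replies are invented):

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that have followed it in the corpus."""
    chain = defaultdict(list)
    for text in corpus:
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start, max_words=20):
    """Random-walk the chain until a dead end or the word limit."""
    words = [start]
    while len(words) < max_words and chain[words[-1]]:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)

# tiny made-up training set of "victim" replies
replies = [
    "Thanks for reaching out, I am very interested in your offer",
    "I am interested but I need more details before I can proceed",
    "Thanks for the details, can you confirm the payment terms first",
]
chain = build_chain(replies)
print(generate(chain, "I"))  # a plausible-sounding reply, different each run
```

Seeding the walk from words in the spammer's own message would make each user of such a service send back a different, vaguely on-topic reply.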

brightball 12 hours ago 1 reply      
These guys operate on an ROI basis. If you waste their time you decrease their ROI.

Great tool.

ortusdux 11 hours ago 0 replies      
Reminds me of the cities that set up robocallers to cut down on illegal signs.


sztwiorok 12 hours ago 1 reply      
Great idea!

I'm sure that the GitHub community will help to make it even better


chrischen 12 hours ago 4 replies      
It would be great if someone could implement this as a free public service, using neural algorithms to generate responses.
robrenaud 2 hours ago 0 replies      
There is kind of an interesting Turing test scenario for AI here. Design an AI to maximize the number of replies (or total text written) by the spammer. The internet is vast and full of spammers; you'll never run out of real humans providing responses to optimize your system.
verroq 11 hours ago 1 reply      
Should just connect two or more spammers together and let them offer their products to each other.
cxmcc 12 hours ago 1 reply      
Love it! For physical spam mail with a business reply envelope, I always fold everything back into the envelope and send it back.
eljimmy 7 hours ago 0 replies      
I once made the mistake of sending a joke reply to a spammer from my legitimate email.

Turned out they pulled my phone number from the WHOIS info on my domain, which I can only assume they sold to some marketing companies, as I received about a dozen cold calls from various "web agencies" in the States. A lot of them were relentless, calling me repeatedly and leaving voicemails.

wojcikstefan 10 hours ago 0 replies      
Aren't most of these spam emails automated anyway? If it's just two bots talking, you're not really wasting anybody's time/resources.
codingdave 10 hours ago 3 replies      
Sure, great idea, funny and clever and all that.

But I disagree with the idea that inboxes are sacred, and disagree with the attitude of "how dare people send marketing to me!" Fraudulent spam is one thing. Plain old marketing or sales cold calls, though... you know people are going to do it. It is their job. And I'd much rather get emails that I can quickly delete and ignore vs. phone calls. And once in a while, someone actually hits on a service that is useful to me.

So I don't think the real-life scenario of people badgering you outside the door is accurate. The better metaphor would be one comparing your inbox to your actual mailbox. Sure, junk mail is annoying and most of it gets thrown out. But sometimes that restaurant down the street does send coupons.

the_duke 10 hours ago 2 replies      
Hilarious idea.

But one of the first things I would have coded is preventing the same message from being sent again.

The examples are full of that.

wdr1 6 hours ago 0 replies      
Reminds me of the guy who set up an automated voice script on his landline to thwart telemarketers:


Animats 7 hours ago 1 reply      
The one for phones has been on HN before. This one for spam is nice, but not yet smart enough. With more smarts and some understanding of the messages, it could keep spammers going forever. It doesn't need to be very intelligent; it just needs to get up to the Eliza level.

If it detects a spam related to search engine optimization, it should have a list of about a hundred plausible questions it can ask on that subject, for example. There aren't that many spammed subjects.
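At the Eliza level this is just keyword matching into per-topic question banks; a sketch (the topics and questions are invented examples):

```python
import random

# hypothetical per-topic question banks; a real one would have ~100 entries each
QUESTIONS = {
    "seo": [
        "How quickly would my site reach the first page of Google?",
        "Do you also build backlinks, or only do on-page optimization?",
        "Can you share results you achieved for a site like mine?",
    ],
    "loan": [
        "What interest rate can you offer a first-time borrower?",
        "Is there a penalty if I repay the loan early?",
    ],
}
FALLBACK = ["That sounds interesting, can you tell me more?"]

def pick_reply(message):
    """Keyword match, no understanding - exactly the Eliza trick."""
    text = message.lower()
    for keyword, bank in QUESTIONS.items():
        if keyword in text:
            return random.choice(bank)
    return random.choice(FALLBACK)

print(pick_reply("We offer affordable SEO services for your website"))
```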

Most email spam, though, is promoting a link, and can't handle an email reply. You'd need something smart enough to go to a web site and sign up with fake credentials.

gus_massa 12 hours ago 0 replies      
Can anyone forward an email to that address? :)

Do you have localized versions? [I'm from Argentina and most of my spam is in Spanish. I guess no. :( ]

abhianet 10 hours ago 0 replies      
What happens if I send it a mail from an mlooper address? Can I get it to set up an infinite loop?
maouida 11 hours ago 0 replies      
Gmail and other popular mail providers should implement something like this.

It would be a big step forward in spam fighting.

aomix 4 hours ago 0 replies      
I thought spamd passive-aggressively insulting spammers and tarpitting their connections was a good effort at wasting their time. This is a big step up from that.
TheOtherHobbes 11 hours ago 0 replies      
Brilliant! But... of course some of the spammers are bots themselves.
Lxr 11 hours ago 0 replies      
This is hilarious, I love it. I would also love to see this made into a public service with some clever ML algorithms to keep the conversation going as long as possible.
Dagwoodie 10 hours ago 0 replies      
Based on the sample of 24 messages back and forth, it looks like the spammer also had a reply bot, because a lot of the messages were exactly the same canned response.
tamersalama 11 hours ago 0 replies      
Great idea! This has the potential of reducing worldwide spam/scam if it is implemented by email service providers.
slinger 9 hours ago 0 replies      
I'm laughing out loud at these MLooper conversations. Made my day :D

PS: Nice project btw

kensai 10 hours ago 0 replies      
The dude deserves a Nobel Peace Prize for this. Literally pacifying the interwebs! :D
sztwiorok 12 hours ago 1 reply      
Great tool

Please share this on GitHub, so we will be able to add our suggestions to the list of answers!

partycoder 8 hours ago 0 replies      
Two related funny stories if you have the time:

- The 7 legged spider story.

- The guy who tricked Nigerian spammers into acting out the Dead Parrot sketch from Monty Python

imaginenore 12 hours ago 0 replies      
Would be cool to have this in GMail.
GirlsCanCode 11 hours ago 1 reply      
Spam doesn't bother me anymore, but unsolicited phone calls do. The majority of phone calls that come in are not legitimate.
countryqt30 11 hours ago 0 replies      
qgaultier 11 hours ago 0 replies      
brilliant!
mooveprince 12 hours ago 0 replies      
Made my day :)
tomrod 11 hours ago 0 replies      
To connect the conversation to David Brin (futurologist, philosopher, and author), this sounds an awful lot like the crystal spheres in Existence.
Super Mario 64 1996 Developer Interviews shmuplations.com
335 points by Impossible  3 days ago   96 comments top 14
dmbaggett 3 days ago 8 replies      
>The N64 hardware has something called a Z-Buffer, and thanks to that, we were able to design the terrain and visuals however we wanted.

This was a huge advantage for them. In contrast, for Crash Bandicoot -- which came out for the PS1 at the same time -- we had to use over an hour of pre-computation distributed across a dozen SGI workstations for each level to get a high poly count on hardware lacking a Z-buffer.

A Z-buffer is critical, because sorting polygons is O(n^2), not O(n log n). This is because cyclic overlap breaks the transitive property required for an O(n log n) sorting algorithm.

The PS2 got Sony to parity; at that point both Nintendo and Sony had shipped hardware with Z-buffers.
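The reason a Z-buffer sidesteps the sorting problem entirely is that visibility is resolved per pixel with one depth compare, so submission order (and cyclic overlap) stops mattering. A toy sketch of the idea, not any console's actual implementation:

```python
WIDTH, HEIGHT = 4, 4
INF = float("inf")

# one depth value per pixel, initialized to "infinitely far away"
zbuffer = [[INF] * WIDTH for _ in range(HEIGHT)]
framebuffer = [["."] * WIDTH for _ in range(HEIGHT)]

def draw_fragment(x, y, depth, color):
    """Keep the fragment only if it is nearer than what is already stored."""
    if depth < zbuffer[y][x]:
        zbuffer[y][x] = depth
        framebuffer[y][x] = color

# two overlapping "polygons" submitted near-to-far, i.e. the order
# that would break a back-to-front painter's algorithm
for x in range(WIDTH):
    draw_fragment(x, 1, depth=2.0, color="A")  # nearer polygon, drawn first
for x in range(WIDTH):
    draw_fragment(x, 1, depth=5.0, color="B")  # farther polygon, drawn second

print("".join(framebuffer[1]))  # AAAA - the nearer polygon wins regardless of order
```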

intsunny 3 days ago 6 replies      
I'll always remember the first time I saw Super Mario 64 in front of my very eyes in ToysRUS. It was as if every other 3D game in history suddenly didn't matter anymore. Here was the future of 3D gaming. Here was a game with unbelievably fluid controls in really large levels clearly designed to be explored.

Unlike most previous Mario games, there was no timer either. This only further encouraged players to really explore the 3D environment, collect the side-quest coins, and not be stressed out.

corysama 3 days ago 2 replies      
Shameless plug: We collect this kind of material over in https://www.reddit.com/r/TheMakingOfGames/

There's also https://www.reddit.com/r/VideoGameScience for more technical material.

BTW: The linked site has a whole lot more articles like this one http://shmuplations.com/games/

baconomatic 3 days ago 3 replies      
If you're at all interested in speedruns, this is a great video that takes it to the extreme for Super Mario 64: https://www.youtube.com/watch?v=kpk2tdsPh0A
johnm1019 2 days ago 0 replies      
The opening quote totally blows my mind as a humble non-game dev. I never thought of it this way.

> Miyamoto: Ever since Donkey Kong, it's been our thinking that for a game to sell, it has to excite the people who are watching the player - it has to make you want to say, "hey, gimme the controller next!" ...

The simple approach is to say, "to make a great game, it should be fun for the person playing it." But they've already taken a step back and approached it from the perspective that great gaming happens socially. Maybe this is one reason I cherished all the Nintendo games as much as I did. It's because the memories of playing them are always with other people and we're all having fun. It wasn't a solo act.

mr_pink 3 days ago 3 replies      
Holy crap, Super Mario 64 is 20 years old now. I've never felt my age more than I do right now...
TazeTSchnitzel 3 days ago 1 reply      
> The way Mario's face moves is really great too. Like in the opening scene.

> Miyamoto: That actually came from a prototype for Mario Paint 3D (that we're still going to release).

I wonder, was Miyamoto referring to Mario Artist?

wodenokoto 3 days ago 0 replies      
I really loved this game and have always been sad that they didn't do a proper sequel.

Reading this interview now, it sounds like they had plenty of ideas for new stuff.

russellbeattie 3 days ago 0 replies      
I should probably make an effort to finish that game someday. Not that I've finished many Super Mario games. I think I've purchased every one, but have completed maybe two of them. So many levels incomplete... I wonder if game devs feel bad working on higher levels, knowing only a tiny portion of players will actually ever see them?
Insanity 2 days ago 0 replies      
I have fond memories of this game, and a lot of what they said in the interview about what gamers enjoyed rang true for me. The movement of Mario did feel great, and I had a lot of fun exploring the environment, jumping in the water to swim, or seeing how Mario's movement differed across environments. (I did notice his centre of gravity as well, and it seems like a great fit.)

It is great to read that they actually had players like me in mind when they created the game. This article actually makes me want to dig up the game and play it through again.

spdustin 3 days ago 1 reply      
Awesome interview.

The other comments reminded me of this fan-made video of an Unreal Engine-powered Super Mario 64. It's stunning.


Note: keep playing past 0:50 - it's not just the non-Mario environment.

ravenstine 2 days ago 0 replies      
I would love to see something similar for Goldeneye/Perfect Dark. I've been slowly but surely working on building a demo FPS engine using a very minimalist implementation to learn about game dynamics, and I'd love to hear what sort of technical challenges were faced at Rare and how they developed their (albeit simplistic) enemy AIs with pathfinding.
pcunite 3 days ago 1 reply      
I think the first time I played this game was with the Nemu64 emulator using a good computer and LCD monitor. The monitor alone made for a better experience than the scaly TV sets typical of the day. Also, being able to pause, save, and replay an area was nice.
racl101 2 days ago 0 replies      
Really cool.

Super Mario 64 was such an amazing game back in the day. It totally changed my life.

House Passes Employee Stock Options Bill Aimed at Startups morningconsult.com
344 points by endswapper  2 days ago   222 comments top 25
grellas 2 days ago 8 replies      
The original point of ISOs was to offer to employees the opportunity to take an economic risk with stock options (by exercising and paying for the stock at the bargain price) while avoiding the tax risk (by generally not recognizing ordinary income from that exercise and being taxed only at the time the stock was sold, and then only as a capital gains tax).

AMT has since emerged to devour the value of this benefit. By having to include the value of the spread (difference between exercise price and fair market value of the stock on date of exercise) as AMT income and pay tax on it at 28%-type rates, an employee can incur great tax risk in exercising options - especially for a venture that is in advanced rounds of funding but for which there is still no public market for trading of the shares. Even secondary markets for closely held stock are much restricted given the restrictions on transfer routinely written into the stock option documentation these days.

So why not just pass a law saying that the value of the spread is exempt from AMT? Of course, that would do exactly what is needed.

The problem is that AMT, which began in the late 60s as a "millionaire's tax", has since grown to be an integral part of how the federal government finances its affairs and is thus, in its perverse sort of way, a sacred cow untouchable without seriously disturbing the current political balance that is extant today.

And so this half-measure that helps a bit, not by eliminating the tax risk but only by deferring it and also for only some but not all potentially affected employees.

So, if you incur a several hundred thousand dollar tax hit because you choose to exercise your options under this measure, and then your venture goes bust for some reason, it appears you still will have to pay the tax down the road - thus, tax disasters are still possible with this measure. Of course, in optimum cases (and likely even in most cases), employees can benefit from this measure because they don't have to pay tax up front but only after enough time has passed for them to realize the economic value of the stock.

This "tax breather" is a positive step and will be helpful for a great many people. Not a complete answer, but perhaps the best the politicians can do in today's political climate. It would be good if it passes.

Edit: text of the bill is here: https://www.congress.gov/bill/114th-congress/house-bill/5719... (Note: it is a deferral only - if the value evaporates, you still owe the tax).

matt_wulfeck 2 days ago 8 replies      
> Only startups offering stock options to at least 80 percent of their workforce would be eligible for tax deferrals, and a company's highest-paid executives would not be able to defer taxes on their stock under the legislation.

I understand the desire to avoid a regressive taxation system, but why is it that every tax rule we create comes with 2x the amount of caveats and rules? Our tax system is becoming a mess.

At this rate soon nobody will be able to file their own taxes without an accountant to sort through the muck. And complicated systems tend to benefit the wealthy.

djrogers 2 days ago 1 reply      
This is good news, but it may not go anywhere -

"the Administration strongly opposes H.R. 5719 because it would increase the Federal deficit by $1 billion over the next ten years." [1]

So a really bad tax rule is in place, but since it happens to bring in ~$100M/yr, we shouldn't fix the rule?


calcsam 2 days ago 4 replies      
This is amazing news. Some context:

It's quite common to owe taxes today for gains on the value of your stock -- which is an illiquid asset you can't sell. This puts employees in the position of shelling out cash to keep something that rightfully belongs to them, or simply abandoning it (failing to exercise) when they leave the company. This bill would defer taxes on gains up to 7 years, or until the company goes public.

If you are awarded stock options, and you exercise them, you have to file an 83(b) election within 90 days or else you are liable for taxes on all paper gains in the value of your stock.

Even if you file an 83(b) election, you are still liable for paper gains between the value of your options when you were granted them and their value when you exercised.

For example, if you were awarded options with a strike price of $5 and the company raised a new round of funding and the 409A valuation (& strike price of the new options) has risen to $15 per share, the IRS considers that you now owe taxes on $10 of income / share. In other words, it costs you not $5 / share to exercise but ~$8.50 including taxes.

So the tricky part about options is that they require money to exercise, money that you often don't have ready, in order to obtain an asset that is (a) not liquid and (b) may decline in value (c) you often can't sell due to transfer restrictions.

For example: one early engineer at Zenefits had to pay $100,000 in taxes for exercising his stock... and then all the crap hit the fan, and he likely paid more in taxes than his shares will end up being worth. Ouch.

As a result of this problem with options, many startups -- especially later-stage ones like Uber -- choose instead to offer RSUs, which are basically stock grants as opposed to stock options. You don't have to pay any money to "get" them like you do for options.

However, the IRS considers stock grants, unlike options, immediately taxable income. If you get 10,000 RSUs per year, and the stock is valued at $5/share by an auditor, you now have to pay taxes on $50,000 of additional income, for an asset that you likely have no way of selling.

Some startups allow "net" grants -- which basically means they keep ~35% of your stock in lieu of taxes. That solves the liquidity problem, but offering this is completely at the discretion of the startup and some don't, which leaves employees at the mercy of the IRS, again having to pay cash on paper gains of an illiquid asset.

asah 2 days ago 2 replies      
Can someone explain: if you exercise and hold the shares (eg leave the company) do you owe tax after year seven, even if the shares remain illiquid?

That's the core issue: the IRS is taxing individuals on truly illiquid assets.

jnordwick 2 days ago 2 replies      
Most employees get hit by the AMT and the step up in basis when exercising their incentive employee stock options, and from just skimming the bill, I don't see how that is prevented.
martin_ 2 days ago 2 replies      
This sounds great, though requiring "offering 80% of the workforce stock" and excluding the highest-paid executives seems vague - is this at time of hiring, when stock is issued, when fully vested, when taxes are due, somewhere in between? I parted ways with a startup in the valley last year and exercised some shares on January 13th. If I had exercised just two weeks earlier, I'm told I would've been hit with north of $50k in AMT. I have until next year to figure it out now, but I wonder if I'm eligible. Also curious how long it typically takes for a bill to get from the House through the Senate and passed.
gtrubetskoy 2 days ago 2 replies      
I still don't understand why taxes are owed. If an option at the time of grant is worth $0 (which is how it's typically done or is that not the case?), then you don't owe anything to the IRS until you exercise the option, i.e. buy shares at the option price and sell them at presumably higher valuation and make some money, at which point you will need to part with some of it because it's income.

But if you never exercise the options, then you never owe any tax. What am I missing here?

revo13 2 days ago 7 replies      
More evidence as to why the income tax should be replaced with a consumption tax. Just let people make their damned money already and apply a simple tax when they spend it. Windfalls wouldn't be "dangerous" or punitive in that model, and savers would be rewarded.

--Of course I oversimplify the consumption tax, and safeguards would need to be in place to ensure it is not regressive with respect to necessities...

zkhalique 2 days ago 0 replies      
Meanwhile, the USA actively encourages companies to offshore their money with their tax code:


adanto6840 2 days ago 1 reply      
The bill text is here and is pretty easy to decipher: https://www.congress.gov/bill/114th-congress/house-bill/5719
ap22213 2 days ago 0 replies      
What the House needs to do is regulate startups' shady options agreements. I see way too many developers getting burned out striving for that big payout that may never come. It's the classic con game.
nullc 1 day ago 0 replies      
Perhaps I'm misreading the law, but it looks like it solves the wrong problem: It addresses a cash-flow issue rather than the tax liability issue.

Say you have options at FooCorp and you leave. FooCorp is illiquid and you have 90 days to exercise your 10,000 options. Your FC options have a $5 strike, but the company currently has a 409a valuation of $100/share.

To exercise the options you would need to pay $50,000 to FooCorp, and you would then have a "realized gain" of $950k (($100 - $5) * 10,000), on which you would owe 28% in taxes that year, or $266k. So you would need access to $316k in total in order to exercise these options.

Two issues arise: (1) You may not have $316k just kicking around. (2) THE SHARES ARE ILLIQUID AND MAY BE WORTH $0 WHEN YOU CAN ACTUALLY DO ANYTHING WITH THEM.

The bill appears to help with (1) by letting you pay that $266k not now, but later, when the company's shares become liquid or after 7 years (whichever comes first). But it does nothing about (2) - you might exercise and then the company goes bust, and seven years later you owe $266k and your current position is worth -$50k... and because the taxes are AMT, you can't meaningfully write your losses off against the taxes you owe.

This kind of failure doesn't require FooCorp to fail. You could have options at $5, exercise at $100, and have things go liquid at $7 - ignoring taxes, this would have been a $20k gain. But with the taxes you're still $246k in the hole.

The issue all along wasn't that someone needed extra money. The issue was the potential huge losses. If it weren't risky you could find a lender to cover the execution price and taxes in exchange for a return when the asset becomes liquid. (E.g. having to pay the $266k up front but getting it returned later when the asset becomes worthless and you write it off)

If anything this makes the situation worse by encouraging more people to commit financial suicide by making it less obviously a bad idea while being just as risky as it always was.
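The arithmetic above is easy to reproduce; a simplified sketch (flat 28% AMT on the spread, no exemptions or credits - not tax advice):

```python
def exercise_cost(shares, strike, fmv_409a, amt_rate=0.28):
    """Cash needed to exercise: strike price plus AMT on the paper spread."""
    exercise = shares * strike
    spread = shares * (fmv_409a - strike)
    amt = spread * amt_rate
    return exercise, amt, exercise + amt

ex, amt, total = exercise_cost(10_000, strike=5, fmv_409a=100)
print(ex, round(amt), round(total))  # 50000 266000 316000

# the downside case: shares eventually go liquid at $7,
# but the AMT bill was fixed at exercise time
gain_ignoring_tax = 10_000 * (7 - 5)
print(round(gain_ignoring_tax - amt))  # -246000, i.e. "$246k in the hole"
```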

mrfusion 2 days ago 2 replies      
Does anyone have experience buying stock options from employees. I really want to own shares in a few companies that would never hire me :-(
jkern 2 days ago 0 replies      
How does this relate to the push for startups to change from a 90-day to a 10-year exercise window? It seems like that's a better option than this bill, since it gives employees a larger time window to make an exercise decision, during which the likelihood of options actually resulting in something liquid is much higher.
koolba 2 days ago 3 replies      
> Only startups offering stock options to at least 80 percent of their workforce would be eligible for tax deferrals, and a company's highest-paid executives would not be able to defer taxes on their stock under the legislation.

Is this why I keep seeing nominal $1 salaries?

cdbattags 2 days ago 0 replies      
How would this affect the concept of phantom stock options? I worked at a startup that handed out phantom options instead of normal options, using "no taxes due" as the main selling point.

"Phantom stock can, but usually does not, pay dividends. When the grant is initially made or the phantom shares vest, there is no tax impact. When the payout is made, however, it is taxed as ordinary income to the grantee and is deductible to the employer."


AdamN 2 days ago 0 replies      
I wish this was retroactive :-(
stevenae 2 days ago 0 replies      
The article appears to get the "seven years" qualification wrong. The bill states that tax must be paid at:

>> the date that is 7 years after the first date the rights of the employee in such stock are transferable or are not subject to a substantial risk of forfeiture, whichever occurs earlier

Which implies that transfer-restricted stock grants do not start this clock ticking.

k2xl 2 days ago 1 reply      
I'm confused. I bought shares this year and would be hit with a $50K tax bill from AMT next year.

Does this mean I won't owe the AMT addition next year?

tmaly 2 days ago 0 replies      
I am wondering if there will be additional complexity added in the rule-making phase if this becomes law.

While this amendment is short in length, it seems to add additional complexity to an already complex tax code. I would have liked to have seen an even simpler proposal.

throwaway6497 2 days ago 1 reply      
Dumb question: does this mean it is law now?
ulkram 2 days ago 1 reply      
When does this go into effect?
chillydawg 2 days ago 4 replies      
Nice to see that tax laws for the rich can get passed, while substantive changes to do with criminal justice, healthcare, etc. go nowhere.
How to Get a Job in Deep Learning deepgram.com
307 points by stephensonsco  3 days ago   85 comments top 15
csantini 3 days ago 5 replies      
TL;DR: Deep Learning will become a commodity. Software will eat Deep Learning too.

I'd like to clear the air a bit of the hype fog:

DL gives amazing results only when you have big sets of labelled data. Hence it will be much cheaper for companies to buy Google/Microsoft Vision/Audio REST APIs than to pay the combined costs of cloud compute + finding data + deep learning experts. So I don't think we will see massive growth in DL gigs.

e.g. Google Vision API: https://cloud.google.com/vision/
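For a sense of what "buying the API" looks like in practice: label detection via Cloud Vision is a single authenticated POST. The endpoint and payload shape below follow the public v1 REST docs as I understand them, but treat the details as illustrative rather than authoritative:

```python
import base64
import json

VISION_URL = "https://vision.googleapis.com/v1/images:annotate"

def build_label_request(image_bytes, max_results=5):
    """Request body for Vision v1 images:annotate with LABEL_DETECTION."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }

payload = build_label_request(b"...image bytes...")
print(json.dumps(payload)[:40])

# Sending it is one POST with an API key (not executed here):
#   requests.post(VISION_URL + "?key=" + api_key, json=payload)
```

No training data, no GPUs, no DL expert on staff - which is the economic point being made above.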

Except in those areas where your own CNN implementation is needed (automotive, industrial automation), Deep Learning will be another "library" in the ever-increasing Software Engineering mess of gluing many open source libraries and REST APIs together to get something useful done. You need 1 guy training a Neural Network for every 100 software monkeys maintaining the infrastructure complexity. There are now many Software Engineering jobs because it's hard to glue and maintain publicly-available code to solve some specific business problem.

I think the the same applies for many Data Scientist jobs, which are these days more about fetching/cleaning/visualizing data than making machine learning on it.

tom_b 3 days ago 5 replies      
I am curious about demand for this skill in the market.

But I just don't see it - machine/statistical/deep learning gigs just seem really rare.

I know this isn't a great metric, but searches on Indeed.com:

  "deep learning" - 873
  "machine learning" - 9,762
  "statistical learning" - 65
  java - 72,802
  javascript - 43,785
Same searches on LinkedIn:

  "deep learning" - 646
  "machine learning" - 6,952
  "statistical learning" - 34
  java - 43,845
  javascript - 30,818
Even the "machine learning" search on Indeed, with 9K+ results has 1300+ from Amazon, followed by a much smaller number (in low hundreds each) from Microsoft, Google, others (including some that look like staffing companies).

Even in HN's Who's Hiring thread for September 2016, the phrase counts are: 14 for "deep learning", 79 for "machine learning".

I completely agree with the idea that being able to use some deep/machine/statistical learning is going to be a toolset that data hackers need to have. I even think that there is a bit of the "build it and they will come" magic waiting out there.

But I think the best way forward is to be working in data and figure out how to generate value with deep learning - this will be much more productive than trying to seek out a deep learning gig in terms of promoting deep learning in the workplace. Heck, that's a suggestion I would be wise to take myself . . .

orthoganol 3 days ago 3 replies      
My question is, it feels like machine learning is reaching its "Rails" stage. You can implement the latest Bi-directional NN or LSTM-RNN using a high level API that already sits on top of another high level framework. Even beyond the core setup it will do the peripherals - smart initializations, anti-overfitting, split up your data, etc.

Do people who implement (albeit real, useful) deep learning systems, but who have no formal machine learning background - who don't really know much or care about implementing derivatives or softmax functions because the frameworks abstract all that away - get offered jobs?

protomikron 3 days ago 0 replies      
My advice:

Do not label yourself as a data scientist or machine learning expert. Go for the domain, i.e. become comfortable with the actual data and the methods used there:

- predict land use in aerial imagery - become comfortable with photogrammetry, geography, etc.

- predict biological tissue(s) - become comfortable with specific branches of biology or medicine

- predict $something_relevant

I actually stole this advice from the epilogue of some text about programming, and it really stuck with me. Otherwise your expertise is just too generic and you compete with a big pool of people who call themselves machine learning experts, because they can write a for loop in Bash.

xor1 2 days ago 5 replies      
>Speaking of math, you should have some familiarity with calculus, probability and linear algebra

Curious to know if anyone has had success learning/re-learning these as a mid-20s or older adult who works full-time, and whether you could provide a list of books/courses to go through. I personally never learned anything past geometry (in high school). The most advanced math class I took in college was College Algebra, which means I never learned trig or anything past it (so no calc, linear algebra, or probability), and I'm sure most people on HN surpassed me math-wise sometime in high school :)

I've been able to skate by with my embarrassing lack of math knowledge/skills as a developer, but I feel like it's only a matter of time until the mathematical steamroller becomes a serious threat career-wise and I get crushed.

partycoder 3 days ago 2 replies      
"A job in deep learning".

It is highly unlikely that you will get a job in which you exclusively use deep learning alone, and not any other ML/AI technique.

Once you learn DL, it's "congratulations... here are 100 other topics you might need to know about before getting a job". http://scikit-learn.org/stable/tutorial/machine_learning_map...

thefastlane 3 days ago 3 replies      
i just want to be a software engineer without having to continually burn away evenings and weekends studying the latest shiny, continually for the next two decades, just to keep my career afloat. is that even an option anymore?
imron 3 days ago 0 replies      
> I built a twitter analysis DNN from scratch using Theano and can predict the number of retweets a tweet will get with good accuracy

I imagine a product like this could actually charge a fair bit of money helping companies and people improve the 'virality' of their tweets.
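That commenter's actual model isn't public, so purely as a toy illustration of the idea: a linear "retweet" predictor trained from scratch by gradient descent, where the feature names and data are entirely invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tweet features (made up for illustration):
# log follower count, tweet length, hour posted -- all standardized.
X = rng.normal(size=(200, 3))
true_w = np.array([3.0, -1.0, 0.5])               # synthetic ground truth
y = X @ true_w + rng.normal(scale=0.1, size=200)  # stand-in "retweet counts"

# Plain gradient descent on mean squared error -- the "from scratch" part
# that a framework like Theano would otherwise differentiate for you.
w = np.zeros(3)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.1 * grad
```

A real product would of course need real features and a nonlinear model, but the training loop is the same shape.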

cbgb 3 days ago 1 reply      
This is just a nit, but Andrej Karpathy was never a professor at Stanford; he received his PhD from Stanford and now works at OpenAI.
FT_intern 3 days ago 1 reply      
This should be titled "How to Learn Deep Learning".

"How to get a job in deep learning" would include:

- What specific topics will be asked during interviews

- What the interview question format is like

- How to prepare for the interviews

- How to get interviews without a PhD. What do you need to show competence in your self learned skills?

bbctol 3 days ago 1 reply      
You may not need a PhD or tons of experience to learn Deep Learning, but what about the gap between that and getting a job?
Xcelerate 3 days ago 1 reply      
This is applied deep learning. There are a ton of jobs available for taking someone's library on GitHub and applying it to a bunch of data. But other than DeepMind, FAIR, Google Brain, OpenAI, Vicarious, and Microsoft Research, who is hiring for theoretical machine learning? That's what I'm interested in: developing better algorithms that eventually approach AGI.
max_ 2 days ago 1 reply      
Why is the Machine Learning subreddit so toxic?
stephensonsco 3 days ago 2 replies      
I just wrote a blog post that I think a lot of folks will like if they're looking for a job in ML/DL.

Would love to hear if I missed something!

jamisteven 3 days ago 2 replies      
1st sentence should be: 1. Be a fuckin math whiz.

Had it been, I would have clicked the back button.

Twitter may receive formal bid, suitors said to include Salesforce and Google cnbc.com
265 points by kgwgk  2 days ago   340 comments top 42
imagist 2 days ago 18 replies      
IMO, Twitter is the poster child for the tech bubble. They have users, which is their only claim to viability, but notably, they have never made a profit. Currently valued at around $10 billion with 350 million active users, that's about $29 per user. You'd be hard-pressed to find an investor so foolish that they would invest $29 in each of their users and hope to make it back if it were stated in those terms, but people have rushed to invest in a company which only has users, and whose attempts to monetize users through advertising have correlated strongly with loss of users. There can be little argument that Twitter's price has become entirely detached from its value.

That doesn't, of course, mean you couldn't make money by investing in Twitter. You can make money by investing in overvalued companies as long as you don't hold onto your share until it busts. One profitable route would be if Twitter does get bought by a larger company. The market as a whole will lose on Twitter, but local maxima can be more profitable than the whole.

But at a personal level, don't be naive about this. A lot of people are investing, not just money, but time and energy, in Twitter or startups like Twitter. If you find yourself thinking that Twitter is a company with any real value, you should take a step back and evaluate whether you're being wise, or whether you've fallen prey to the unbridled optimism of the tech bubble. Twitter's position as poster child for the tech bubble makes it a good litmus test for people's understanding of the industry, and I suspect it will correlate very strongly with who loses everything when the tech bubble collapses.

owenwil 2 days ago 10 replies      
Google acquiring Twitter is actually the best end result here. Salesforce is probably the worst. Lots of people hate the idea of a Google acquisition, but I think it's well suited because:

- Google learnt from its mistakes with Google+ and is eager to not repeat them

- The company is a very different one now from years ago

- Google doesn't want to mess up identity again, so that wouldn't be an issue

- Google mostly just wants a social graph

- Twitter is a bad public company that makes irrational decisions

- Merging Google engineering/leadership with Twitter might actually give direction and ease the financial pressure that seems to drive the company's poor engineering decisions

thr0waway1239 2 days ago 4 replies      
Can someone actually explain to me how the situation came to this point where it practically looks as if Twitter's fate is being decided and played out in the media via endless speculation? It is not like Twitter is a tiny company with an unknown brand, few users and no possibility of improving its profit margins. I am not aware of what they are trying to do, but at the same time it is not as if they could have exhausted all the possibilities. Remember Facebook's beacon? That failed, but FB still managed to repackage the same crap into something more lucrative didn't it? Is this just impatience from stockholders?

For example, let us just say, hypothetically, something really damaging comes out about FB (e.g. the news about the fake video view metrics) and advertisers start fleeing from it. Wouldn't Twitter be the beneficiary of at least some of that exodus? Do they really have no option of an end game?

the_duke 2 days ago 7 replies      
What's unclear is what kind of "deal" they are talking about.

A considerable buy in? A full acquisition?

And, assuming a full acquisition... what would be the gain?

Google has a bad track record with social media, apart from YouTube (bought Orkut and killed it; tried Google Plus, which went nowhere). Twitter is hard to make profitable without alienating the users with too many ads.

For Google, it would probably be an acquisition like YouTube. With the knowledge that it might never be profitable, but intended to get control over a significant asset. But sharing Google infrastructure and resources could probably bring down operating costs in the medium term.

We'll see.

aresant 2 days ago 2 replies      
Salesforce feels like a bizarre choice, although I personally agree with their digital chief's "I love Twitter" sentiment.

I use twitter every day as my primary method of content discovery.

So at their core the BUSINESS should revolve around monetizing my eyeballs, eg advertising.

So to me it's Facebook or Google that should grab it, w/FB at the lead considering their relatively smooth / unhurried / and successful takeovers of whatsapp / instagram

deepfriedbits 2 days ago 0 replies      
Twitter's real value to Google is real-time search, in my opinion. They already license results from Twitter for search results, but having access to all of that sweet, sweet real-time data is nice.

The social graph is nice, but between Chrome and Gmail, Google already knows quite a bit about everyone.

majani 2 days ago 0 replies      
Bad idea. I think the social networks whose main purpose is to feed people's vanity will not stand the test of time, since they're not solving a real problem and are merely novelties.
erickhill 2 days ago 1 reply      
I think Twitter's recent foray into becoming a content streaming source (see: NFL) is very interesting and a natural next step, albeit a late one. The user base is already there to essentially compete with Twitch and other streaming providers.
hornbaker 2 days ago 6 replies      
My bet is GOOG. They need a streaming newsfeed product in which to insert ads.
sp527 2 days ago 0 replies      
Seeing a lot of arguments about profitability that don't make sense. At Twitter's scale, profit sensitivity to even minor tweaks in ad rate/targeting/placement is massive. They could also go into 'maintenance mode' tomorrow and turn a massive profit (it would just be stupid).

Active users is a poor metric for Twitter. It's much more about the views. A relatively smaller number of people on Twitter can command an outsized influence. It's a fundamentally different kind of network.

Twitter's future will probably be more about monetizing its viewership. It's definitely not going to disappear anytime soon.

mattjaynes 2 days ago 0 replies      
OH: "Twitter is a Friendster whose Facebook hasn't appeared yet."

From Jessica Livingston yesterday: https://twitter.com/jesslivingston/status/778948962724315136

cpsempek 2 days ago 0 replies      
Is Google the suitor, or, is Alphabet the suitor? The article says Google, but this could be out of habit. I am not sure that answering my question changes much about the news. But it might say something about how Alphabet views Twitter based on who they decide owns the acquisition and where Twitter would fit within the company (as subsidiary or under Google).
hornbaker 2 days ago 1 reply      
My bet is GOOG. They need a streaming newsfeed product in which to insert ads, especially on mobile where FB is killing it.
encoderer 2 days ago 0 replies      
Anecdotally, I use Twitter to advertise my SaaS monitoring product, Cronitor, with far more success than we found with AdWords. The ad platform feels easier to use, and promoting content on Twitter is less of a time investment than selecting, culling, and optimizing sets of keywords.
MollyR 2 days ago 4 replies      
I really can't see Salesforce buying Twitter. I think a social media giant would gain something from Twitter, but not much else.
fideloper 2 days ago 0 replies      
I wouldn't blame Jack one bit for wanting to get back to Square full-time (not that it's necessarily fully his decision to sell).
mark_l_watson 2 days ago 1 reply      
Would it be a bad idea for Twitter to charge a small yearly fee for use? The reason why I think this might work is that some users are very loyal and might not mind spending $20/year for an advertisement-free Twitter service. They have about 350 million users, according to a comment here, and if 50 million users would stay, that would be $1 billion in revenue per year. With many fewer users their cost of doing business would be reduced, but with reduced network effects the service would not be as valuable for users. I like Twitter and I would pay $20/year in return for no promoted tweets.
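Sanity-checking the arithmetic in that proposal (the 50-million retention figure is of course just a guess):

```python
paying_users = 50_000_000   # assumed retained subscribers
fee_per_year = 20           # dollars per user per year

revenue = paying_users * fee_per_year
# 50 million users at $20/year works out to $1 billion per year
```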
fabiandesimone 2 days ago 0 replies      
To me Twitter has always been real time news.

Why haven't they worked on a way to 'validate' tweets around a story? I can't understand it (from a biz perspective, not technically).

With validation they become, instantly, the #1 news agency in the world.

yalogin 2 days ago 0 replies      
It doesn't matter; it would still not cover my losses in the stock. Talk about crappy decisions.
samlevine 2 days ago 0 replies      
I would pay $10 a year to use Twitter. It's just that good for live news.
WA 2 days ago 1 reply      
Stock is through the roof right now. About +19%
dcgudeman 2 days ago 1 reply      
By Salesforce?? RIP twitter.
sorenjan 2 days ago 0 replies      
At the end of 2015 Twitter had 3,900 employees [0]. Why would they need anywhere close to that many? How many could a new owner fire without noticeably affecting day-to-day operations?

[0] https://www.statista.com/statistics/272140/employees-of-twit...

bsparker 2 days ago 0 replies      
I wish Slack would somehow take over Twitter. Expertly adding in custom channels would save the social platform.
edbaskerville 2 days ago 3 replies      
By any normal metric, Twitter is a huge success. 300 million people find their service useful.

But it was already a huge success when it had no business model. Moreover, what is fundamentally valuable about Twitter to its users--sharing and discovering little bits of textual expression over a publicly visible social network--is not very expensive.

From the perspective of the users who find it valuable, why does it need a for-profit model at all? Why can't we just subsidize it as a non-profit via grants and donations, a la Wikipedia? I'm pretty sure you could do the important thing that Twitter does--ignoring all the extras devoted to figuring out how to extract more money from the data--at a small fraction of its $2 billion in revenue.

I'm not being naive here--it's quite obvious why things are the way they are. But there are many examples out there of making a big impact while making a decent living (just without anyone trying to become a billionaire). Social networking is ripe for more of this approach. The attempts so far have failed not because of their business model, but for the usual reason: poor execution.

happy-go-lucky 2 days ago 0 replies      
If you try to make people want what you make, you end up making something disposable, a short-lived romance.
aikah 2 days ago 1 reply      
Amazon or Facebook. It would make sense for the latter, as it is its only real competitor.
k2xl 2 days ago 1 reply      
Why are Twitter's expenses so high? Don't get me wrong, scaling is hard... But at the same time, they don't have the issues of scaling photos or videos (like Facebook).
randomestname 2 days ago 0 replies      
They aren't investing in the present value, they are investing in the future value. Is Twitter going to be more or less relevant in 5 years? 10 years?
eddiecalzone 2 days ago 0 replies      
I had to come here to read some saner comments; the comment section on the linked CNBC article is, uh, deplorable.
rch 2 days ago 1 reply      
Why no mention of Oracle? That's a more likely acquirer than Salesforce.

Other Oracle acquisitions: Datastax, push.io, Collective Intellect, etc.

wslh 2 days ago 1 reply      
Is it logical to buy Twitter at $16B? I think it is too expensive considering a lot of their actual users are bots.
nvk 2 days ago 4 replies      
Twitter should be a public utility.
_kyran 2 days ago 1 reply      
Just going to leave this here (from last month):


"Twitter will be sold in six months - Kara Swisher"

plg 2 days ago 0 replies      
If google buys twitter, I'm out
ben_jones 2 days ago 0 replies      
Anyone else see S20E02 of southpark? All I'm going to say is it put Twitter in a very interesting perspective.
mandeepj 2 days ago 0 replies      
Any idea why advertising is working on Facebook and not on Twitter?
susan_hall 1 day ago 0 replies      
Perhaps off-topic, but does anyone know why Twitter gave up on its effort to monetize its API? There was a moment, circa 2010, when that seemed like the obvious move. When Twitter first began to shut down access to its full firehose, it seemed clear that there were businesses willing to pay for its information. But Twitter suddenly turned away from that idea, and focused on advertising. Considering how many sites compete for advertising dollars, it seems crazy that Twitter felt that was the right way to go.

But then Facebook went down the same path, first promoting its API, then largely giving up on any attempt to monetize it.

And before that, way back in 2006, I tried to build a business that would rely on Technorati's API, which they briefly promoted, then gave up on.

There are a lot of companies that make money by selling information via an API. And there is tremendous competition for ad dollars. These 2 facts would lead me to expect more companies might try to make money from their APIs. But what happened in Twitter's case?

smegel 1 day ago 0 replies      
If there is anyone who could mess up Twitter worse than Twitter it is Google.
lcnmrn 2 days ago 2 replies      
There are better alternatives to Twitter out there. It's time for everybody to move on.
sidcool 2 days ago 1 reply      
Salesforce would be an interesting prospect. Not sure how it would fit in their business plan.
alex_hitchins 2 days ago 1 reply      
Might sound strange, but I think this would be a great purchase for Apple. They have the cash, and they certainly have the engineers and UI skills it desperately needs. iMessage works brilliantly, but closer integration with a Twitter-style feed makes real sense to me.
Snapchat Releases First Hardware Product, Spectacles wsj.com
326 points by Doubleguitars  1 day ago   307 comments top 59
keithwhor 1 day ago 11 replies      
I think this is brilliant. Even the press details seem perfectly crafted, with one article referencing Evan's "supermodel girlfriend."

Snapchat can win here based on brand alone. The hardware features are a plus, but they're going to sell a lifestyle. Think GoPro + Versace. Commenters here are caught up in the tech. It's not the tech. Get a few celebrities in these, people will buy them and barely use the recording features. They're cheaper than Ray-Bans and I bet you and half of your friends own a pair of those.

Snapchat can assemble an AR powerhouse from the ground up with brand goodwill. Evan and his team have figured out the best market strategy to do so. Google is not "cool" and could never attempt to pull this off.

I have tremendous respect for Evan Spiegel right now. Bold move. Amazingly positioned. I wish them the best of luck. Dare I say, it has the scent of Jobs to it - the vision, the risk ("we make sunglasses now!") and definitely the "cool-factor." Don't misinterpret - this isn't the iPhone, not yet anyway, but I think they're on to something very big.

primigenus 1 day ago 4 replies      
This fixes everything broken about Google Glass. It's almost disturbing how much more on point this is:

Of _course_ they're sunglasses.

Of _course_ it's focused completely on video.

Of _course_ it's marketed as being about sharing your memories as you lived them.

Of _course_ you can only record 10 second videos at a time.

Of _course_ snaps automatically sync to the app.

Of _course_ they're designed to appeal to young fashionable people.

Of _course_ the charge lasts all day.

This is one of those things where once you see it it's just obvious this is what it was supposed to be all along.

fowlerpower 1 day ago 7 replies      
I think this is significantly better than what Google did with Google Glass.

It's better because it focuses on the one thing that is really easy to do well. It does not try to do everything at once. It doesn't try to give you apps in your glasses and everything under the sun. This is the right approach to products. Do one thing but do that well.

Before you criticize me, think back to the original iPhone: it didn't start with an App Store and everything under the sun like the Apple Watch did. And yet the iPhone is an icon and the watch is no big deal.

rdmsr 1 day ago 0 replies      
This is definitely the result of Snapchat's 2013 acquisition of Epiphany Eyewear[1], a startup that made something very similar.


CodeWriter23 1 day ago 2 replies      
Hype and grumbles aside, I believe optimizing "I want to record what I'm seeing right now" down to a tap near your temple is pretty compelling. Fumbling to get my camera out of my pocket, or even just grabbing it from the tabletop and swiping to the camera, often takes long enough to miss that precious moment with my daughter.
ftrflyr 1 day ago 5 replies      
Why? You need to seriously question the motives behind such a launch. IMHO:

[1] Snapchat is an online multimedia application.

[2] The infrastructure required to move from online to hardware requires significant investment (beyond the $1.8B they recently raised), which I don't believe Snapchat can fund without a serious re-monetization strategy beyond ads. It is only a matter of time before FB moves into Snapchat's market even more than it already has.

[3] This is an unproven market. Google tried it and didn't succeed. A better play: let someone else test the market a bit more, then move in with a solid ad monetization strategy around the Spectacles.

[4] Why hardware?! Seriously? I believe Evan is overplaying his hand with so much VC capital coming his way.

leetrout 1 day ago 2 replies      
Even though I'm not "inb4" Glass comparisons, this really does hit a market that I think is untapped. I used to have a "flipcam". It was before I had a phone that could take HD video and before a GoPro was an option for me because of cost (I still don't have a GoPro).

The ability to have cheaper, stylish, handsfree video recording of my POV has a lot of potential. How-to videos, the "capturing memories" as noted in the article, even just easily recording benign life experiences (police stops, for instance) seamlessly and without hassle is huge.

I do hope there is a tattletale light or something so that the average user can't surreptitiously record things and otherwise easy privacy controls... and I hope it's not long before someone hacks this or they unlock the product to do more than 10 second clips...

If I were GoPro I'd be nervous.

Edit: Actually a second thought- this would be a lot better than body cams in a lot of situations (or certainly a good companion) because it would capture the officer's line of sight.

orbitingpluto 1 day ago 0 replies      
Now everybody can be Spider Jerusalem...

Just like Google Glass users being called Glassholes, SnapChat glasses will probably be called something like SnapChads, because only white rich guys in pastel shorts and rugby shirts named Chad will use them. The aesthetic just isn't there for wide adoption.

WhitneyLand 1 day ago 1 reply      
Looks like they have learned from the glasshole debacle.

1.) The messaging emphasizes it's just "a toy," a low-volume experiment. The more playful and humble approach makes it a smaller target for ridicule.

2.) Pricing at $149 also makes it less pretentious and, more importantly, puts it in the "what the heck, I'll give it a shot" range of discretionary income.

whitecarpet 1 day ago 3 replies      
Another huge innovation which is more about software than hardware is the new circular video format: you can rotate your phone and the video keeps its orientation.

Quite impressive, you have to see it in action:


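Snap hasn't published how the rotation trick works; one plausible sketch is to record a circular frame and counter-rotate the sampling coordinates by the device's roll angle read from the motion sensors at playback. Everything below is an assumption for illustration, not the actual implementation:

```python
import math

def sample_point(u, v, roll_rad):
    """Map a display point (u, v), measured from the circle's center,
    back into the recorded circular frame by undoing the device roll."""
    c, s = math.cos(-roll_rad), math.sin(-roll_rad)
    return (c * u - s * v, s * u + c * v)

# With the phone rolled 90 degrees, the point that is "up" on screen
# samples from the side of the recorded circle, so the video stays upright.
x, y = sample_point(0.0, 1.0, math.pi / 2)
```

Because the recorded region is a circle, any rotation of the crop still fits inside the captured frame, which is what makes the effect possible at all.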
josephpmay 1 day ago 4 replies      
Being someone in the AR space, I find this a smart but risky move. If they're marketed right and become "cool" I'll definitely have to cop a pair (and at $130 they're almost disposable). Spectacles will make it way easier for me to post to Snapchat at parties/concerts/etc without having to break out of the moment by taking my phone out. Strategy-wise, this is a Trojan horse into the AR hardware space, which Evan has wanted to get into for years. However, they fit way better into Snap's image of being a media company vs. directly launching an AR headset.
technofiend 1 day ago 1 reply      
If this means I can go to a public performance and no longer have to try to look past the sea of upthrust arms and glare of 1000 brightly lit screens to see what I came to see then it can't come quickly enough!

Particularly since I feel it will inspire the next product, which is an IR floodlight that renders all digital cameras useless, since there are so many people oblivious to the fact that by trying to capture the experience for themselves they're detracting from the experience for everyone else.

Letting people who need a digital memento silently get one without intruding on the experience of those of us just there to enjoy and be in the moment is a great compromise.

slackoverflower 1 day ago 1 reply      
Snapchat has a huge opportunity in its hands which it has so far failed to take full advantage of: starting a revenue-share program with influencers on the platform. Facebook has yet to do it, and Snapchat, which is flush with VC dollars, can attract a lot more influencers to its platform. I think the companies on Discover are already in some sort of revenue-sharing agreement with Snapchat, but bringing this to the massive number of young influencers unlocks huge opportunities for Snapchat.
nitrogen 1 day ago 0 replies      
I'm amazed the top-rated top-level comments are all so positive. We have enough people shoving cameras into devices and situations where they don't belong. At least we know what they look like now so we can ostracize anyone wearing them.
arcticfox 1 day ago 4 replies      
> (Spiegel argues that rectangles are an unnecessary vestige of printing photos on sheets of paper.)

It's also the shape of nearly all screens in the world. Perhaps I'm not visionary enough, but I don't foresee a circular computer or phone screen really improving the current situation...

bunkydoo 1 day ago 0 replies      
Well, I'll be completely straight and say this isn't anything new (you've been able to buy similar video glasses from China for about 5 years now), but if it can properly integrate with the app and slim down a LOT more - to the point the camera is unnoticeable - they could finally start making some money. Well, until the Chinese knockoffs start rolling in.
k_sh 1 day ago 0 replies      
> Why make this product, with its attendant risks, and why now? Because it's fun.

The way they framed this product is _so_ refreshing.

AndrewKemendo 8 hours ago 0 replies      
I think what people are overlooking is that this device has stereo cameras by default. That means every snap likely has reasonably quality depth for each snap. With the scale of users they will likely have the largest consumer based depth capture platform in the market. That's actually a big deal for building infrastructure needed for the AR ecosystem.
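If the two lenses really do give usable stereo pairs, depth follows from the standard pinhole stereo relation. The focal length and baseline below are illustrative guesses, not Spectacles specs:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Standard pinhole stereo relation: Z = f * B / d.
    # focal_px: focal length in pixels; baseline_m: distance between lenses;
    # disparity_px: horizontal shift of a feature between the two views.
    return focal_px * baseline_m / disparity_px

# Illustrative values: 1000 px focal length, 10 cm baseline;
# a feature shifted 50 px between the views sits about 2 m away.
z = depth_from_disparity(1000, 0.10, 50)
```

The wider the baseline and the longer the focal length, the finer the depth resolution at a given distance, which is why even glasses-width spacing is enough for room-scale depth.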
cobookman 1 day ago 2 replies      
For those wondering what these are, why the styling, why they exist... etc: well, I don't think the target market for these is Hacker News readers. I will say that they do look awesome. Way easier to use these than a GoPro or holding a camera/phone. Hopefully it's not just locked down to Snapchat.
nvr219 1 day ago 0 replies      
I'm into this! I think selling it as a toy is the right approach.
robbles 1 day ago 0 replies      
This article mentions Snapchat's hundreds of employees and multiple offices. This is one of the most obvious examples of the "what are they all doing?" question for me. I know it must take quite a few people to run operations at that scale, and of course they have an advertising business too, which likely explains the need for multiple offices. But it seems like Snapchat is still an extremely minimal app with only a couple of extra features being added over the years. Instagram had only 13 employees when it was acquired, so what role are most of these people in?
nappy 1 day ago 0 replies      
If this leads to fewer people holding out their phones at concerts... Then I'm especially excited ;)
xeniak 1 day ago 0 replies      
> initially appears to be a normal pair of sunglasses

While it's less offensive than Google Glasses, this doesn't look like "normal" glasses.

NTDF9 1 day ago 0 replies      
This is genius. Really. You want to know what kind of crowd will drop $129 on this?



TeMPOraL 1 day ago 2 replies      
I like it. Seriously, "creepy" is just a word that means "I can't accept the reality doesn't work the way I'd like it".

That said, I worry about implementation. My guess is that it's going to be directly and permanently tied to Snapchat itself. Which significantly reduces the potential usefulness of this product - not everything you record is something you only want to have sent directly to Snapchat. Personally, I want files. Plain, old files. Is that so hard to understand for all those cloud-first companies?

hellogoodbyeeee 1 day ago 2 replies      
I don't understand why all these software companies are in a rush to make hardware. With the lone exception of Apple, all hardware seems to devolve into race-to-the-bottom commoditization with paper-thin margins.
anonbiocoward 1 day ago 0 replies      
They really should have consulted with the Warby Parker folks, or pretty much anyone who actually designs glasses.
bobsil1 1 day ago 0 replies      
>he was the best product visionary I'd met in my entire life.

This person has never seen the Snapchat interface.

mkagenius 1 day ago 0 replies      
When deciding about products, try to think if cute Minions (from Despicable Me) would like that? - Evan Spiegel
p4mk 1 day ago 0 replies      
I love the execution of "circular videos", surprising that no one has implemented this before!


mathewsanders 1 day ago 1 reply      
There's been an empty store on exchange place in NYC financial district (near Tiffany's) that for a couple weeks has had a huge Snapchat logo taking up the entire window. I wonder if they're also gonna explore retail along with hardware.
ajamesm 1 day ago 1 reply      
Great product for people who want to film women in public, but not be noticed. Game changer
jondubois 1 day ago 0 replies      
Maybe Snapchat will sell some of their users' videos to porn companies (for VR porn)... There are two cameras - Obviously for VR; and given Snapchat's history as a sexting app, I think it's clear where things are heading here.
BinaryIdiot 1 day ago 2 replies      
I can't be the only one who thinks this is going to eat GoPro's lunch, can I? Sure, the initial version may not be as high quality as a GoPro and the time limit isn't as good, but those are easy things to fix, and they have a monstrous social network (something GoPro is sorta trying to break into).

If anything kills GoPro it's something like this.

pmontra 1 day ago 1 reply      
Where are those 10-second videos stored - at Snapchat, on the phone, in the glasses? That dramatically changes the privacy implications of both the glasses and Snapchat. Remember what he said: he watched videos from one year ago. Snapchat has been all about deleting everything until now.
bradleybuda 1 day ago 1 reply      
Friday night media release? Surprising.
listic 1 day ago 0 replies      
How do I read the full story? I tried signing in with Facebook, but it redirects to http://www.wsj.com/europe.
hackerews 1 day ago 0 replies      
I love the difference between this and Glass - 'capture life's moments in style' (spectacles.com) vs. 'join the future' (http://marketingland.com/wp-content/ml-loads/2014/05/glass-h...)
mrharrison 1 day ago 0 replies      
I think Apple needs to acquihire Snapchat and promote Evan as the new Apple CEO. I have zero hate for Cook and think he is a great CEO. But Evan is shaping up to have some of the most modern product prowess out there. I don't know if these spectacles will be a hit, but I think his choices are in the right direction.
mankash666 1 day ago 1 reply      
2015 revenues of $59M. Assuming an above average salary range, 1000 employees cost about $250M. If they were a public company, they'd get slaughtered on the stock markets.
Dwolb 1 day ago 1 reply      
On the design end, I don't like the look of the camera lense.

Are they able to darken the lens glass to hide the camera a bit? Maybe they could match the black of the camera sensor to the black of the glass a little more. Otherwise it looks a lot like two cameras on your face.

rabboRubble 1 day ago 0 replies      
I reserve judgement until I see a pair in color. And better yet, in person. And see more detail about the power situation.
adamnemecek 1 day ago 0 replies      
I wonder what is the intended use case of this. The response will be lackluster which will make creating a V2 harder.
idlemind 1 day ago 0 replies      
Glasshole meet Snaptwat.
tomkinstinch 1 day ago 0 replies      
What about people who wear prescription glasses, but can't wear or dislike contact lenses? Is it possible to replace the lenses with prescription ones?
clydethefrog 1 day ago 0 replies      
Reminds me of SeeChange from Dave Eggers' The Circle. I wonder if Clinton will wear them during this election.
Multiplayer 1 day ago 0 replies      
The most interesting part of the article to me is how useless the WSJ comment section is.

I cannot believe this is still an issue for major publications.

oliv__ 1 day ago 0 replies      
I'm just going to wait until this "Spectacle" self-deletes after a few months...
vasanthagneshk 1 day ago 0 replies      
Am I the only one who does not want to read the article because I cannot read it anonymously?
JustSomeNobody 1 day ago 0 replies      
These don't look comfortable at all.
dmritard96 1 day ago 0 replies      
I'll wait for the generic model that posts to any social network...
superJimmy64 1 day ago 0 replies      
This is a ridiculous product... reminds me of the classic upper management/CEO "ideas". You know the kind: obsolete, neglects societal concerns (security???), nobody around to tell them it's a bad idea.

> (Why make this product, with its attendant risks, and why now? Because its fun, he says with another laugh.)

Sometimes you can look at something and just KNOW that there is not a chance that pile of junk is gonna gain traction.

drivingmenuts 1 day ago 0 replies      
Nice design.

How do they solve the personal privacy issues that arose with Google Glass? Or have they even bothered?

smegel 1 day ago 0 replies      
Goofy but clever. The kind of thing that might be a hit with a certain youthful demographic. And you need to be "cool" to pull something like this off - i.e. not Google.
PercussusVII 1 day ago 0 replies      
Fuck off, Snapchat
amingilani 1 day ago 2 replies      
How do I get around the paywall?
Bud 1 day ago 1 reply      
throwaway28123 1 day ago 1 reply      
I'd just like to give everyone a reminder:

>The most important principle on HN, though, is to make thoughtful comments. Thoughtful in both senses: civil and substantial.

"Google Glass 2.0" and similar cheap bashing isn't just against the rules, it's boring and petty.

Take it to 4chan, you'll get the attention you're after.

nefitty 1 day ago 0 replies      
This is exciting for the wearable headset market. If even a fraction of Snapchat's users get this, it will normalize the space much more than Google Glass was able to. This is especially true considering the young demographic Snapchat caters to, which I assume is more open to new technologies.
Getting Press for Your Startup themacro.com
286 points by endswapper  3 days ago   32 comments top 14
Briel 3 days ago 0 replies      
Here's the process that works (with some persistence) even if you don't have a revolutionary product:

1. Go to Google, toggle to news and enter the name of similar startups

2. Go through each recent article and add the journalist to a spreadsheet

3. Go on Email Hunter or Email Format to find how the publication formats their email addresses to guess the journalist's. Journalists also tend to use firstnamelastname@gmail.com for their personal emails.

4. Email your pitch in 3-5 sentences max. Don't just describe what your startup does; use an interesting angle or story to show its impact.

Instead of: "We do delivery logistic optimization."

Story: "Why the heck is your technician always 5hrs late? Cause it took forever to fix the issues of the guy before you.

We're helping our customers like Comcast and Oracle smart schedule all their appointments based on data like a) how long issue x typically takes to fix and b) real-time traffic conditions.

In high school, I worked as a taxi dispatcher, seeing firsthand the inefficiencies in coordinating drivers."

Journalists don't want to advertise your startup for free, they care about writing a story that entertains and educates their readers. Feed one to them.

Good roundup of pitch templates using different angles: http://www.artofemails.com/pitch-press
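Step 3 above (guessing a journalist's address from the publication's email format) can be sketched in a few lines. The names, domain, and format list below are made up for illustration; a real workflow would verify candidates against a tool like Email Hunter:

```python
# Generate candidate email addresses for a journalist from common
# publication formats. The patterns and domain are illustrative guesses,
# not an exhaustive or verified list.
def candidate_emails(first, last, domain):
    first, last = first.lower(), last.lower()
    patterns = [
        "{f}.{l}@{d}", "{f}{l}@{d}", "{fi}{l}@{d}",
        "{l}.{f}@{d}", "{f}@{d}",
        "{f}{l}@gmail.com",  # the personal-address fallback mentioned above
    ]
    return [p.format(f=first, l=last, fi=first[0], d=domain) for p in patterns]

print(candidate_emails("Jane", "Doe", "example-news.com"))
```

Feeding each candidate through a verification service (or checking the format against a known address at the same outlet) avoids bounced pitches.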

andreasklinger 3 days ago 1 reply      
Small tips I learned when doing interviews (as a journalist and as a tech founder):

- Ask the journalist what kind of story s/he wants to do and what role you play in it.

It doesn't matter if it's about you or a general topic - understand what kind of basic arc/message/POV s/he wants. In 99% of cases their job is not to be "investigators" but to tell entertaining stories, and they are usually happy to share their idea of the article.

- Help them get the content s/he needs. Share the right info, the right contacts, industry insights. Essentially, help as much as you can in creating the ideal article.

- Also make sure to create soundbites that work as quotes. Quotes tend to be highlighted in articles.

- Make sure to connect the journalist with more useful contacts and help them find new ideas for articles.

- Last but too often forgotten: have good press images of you and your product ready. Some that work in portrait, some that work in landscape. Some that show a business look, some that show a more personal look (depending on the story the journalist goes for).

hth - good luck!

bestmomproducts 3 days ago 0 replies      
There is a lot of great advice here. I started as Larry Ellison's handler almost 20 years ago, wrote a book on publicity (endorsed by Barbara Corcoran), and teach an online PR class.

Media has changed a lot in the last 5 years. It is important to familiarize yourself with the outlet and how the journalist writes. For example: is the outlet known for listicles (an article that opens with something like "7 ways to crush it on a start-up budget")? Or do they prefer pitches based on their editorial calendar?

Identify your target customer and find out what they read, listen to and watch.

If you have a tech product, focus on where a more technical audience might be, like podcasts. You'd be surprised, but it's not always the most well-known outlet like Mashable that will drive sales or users. While that kind of coverage is great credibility and exciting, there are many other opportunities out there.

RESOURCE: HARO (www.helpareporterout.com) is a good resource to sign up for - free opportunities 3x a day that journalists are posting. Since you will already have the "lead", keep your response short and to the point.

NEWSWORTHY: To make something newsworthy, look at what is trending in the media (e.g., Angelina Jolie and Brad Pitt getting divorced) and then think about all the relevant angles.

Angles could include being a divorce attorney and contacting your local news to talk about the issues each of them faces, or, if you have created a divorce app that helps with custody sharing, discussing how that would work.

We are in an era of high content consumption online and outlets like Forbes, Entrepreneur and Huffington Post rely on contributors. Forbes, for example, turns out 300 articles a day ... a DAY! There are more opportunities for your company to be featured now than in the past so that is good news for you.

Try not to get discouraged. It can take time to figure out what works, so come back to the journalist with different angles. I've found consistent follow-up works.

I know that I was rambling a bit but hope that helps.

I have some free info on my site www.rachelaolsen.com if you're interested including audio interviews with a Forbes contributor and a writer for US Weekly, Men's Health and Rolling Stone.

mwseibel 3 days ago 0 replies      
Man - I wish I could be in here answering questions about the post but I'm on the YC World Tour and currently meeting startups in Lagos - I'm glad you all liked the post
sssparkkk 3 days ago 1 reply      
Any advice on how to get some traction when entering a foreign market?

In my experience the media mostly write about startups that already have significant traction, or about new products released by the big boys (Facebook, Google etc).

Acquiring users through Facebook/Twitter advertising might be an option for some, but could also easily mean you'll be spending tens of dollars per active user.

So what's left? I think it basically boils down to Hacker news & Product Hunt to get the ball rolling.

ozgune 3 days ago 0 replies      
Good read! I also found the following post useful and practical for getting press for startups: http://www.craigkerstiens.com/2015/07/21/An-intro-PR-guide-f...
hamhamed 3 days ago 0 replies      
Warm intros work better, but I'd also wager that good cold emails work just as well. For example, if you can connect with the writer about one of his articles and transition into your startup, there's a good 25% chance of getting covered (assuming your story is good).

Don't just submit a "tip" or go to the contact page; actually find the person who has written similar stuff and find their email.

devindotcom 3 days ago 0 replies      
I just participated in a panel on this very topic. The takeaways were to know who you're pitching, build a relationship, and be honest and succinct. If you have a good product relevant to that publication's readers, a good news editor or writer will pick up on that.
danieltillett 3 days ago 0 replies      
I have often wondered if it would just be better to get straight to the point and outright bribe journalists. Most are making a pittance, and a few thousand dollars in cash handed under the table should make any startup story come out like it is the new Uber.
jldugger 3 days ago 1 reply      
> i.e. wait until youre 25% past the milestone to announce - so that youre that much closer to announcing the next milestone.

I wouldn't be surprised to hear this violates a securities law.

hrgeek 3 days ago 3 replies      
Does it really work?
untilHellbanned 3 days ago 2 replies      
Lol b/c other YCers say don't waste time getting press.
amirhirsch 3 days ago 0 replies      
Flybrix is having a very successful launch today driven by a PR strategy that goes against Steps 2 and 3 of this advice. We hired a great PR firm to manage contacts and we got blanket coverage everywhere because of a press embargo.

I don't think I could have managed this on my own and on the timeline we did it.

trjordan 3 days ago 3 replies      
It's common knowledge around here that PR is wasted effort for startups. So why do this?

For startups and for any company, brand makes things nebulously easier. Sales require one fewer call, hiring pipelines are slightly more full, fundraising intros are easier. The point about creating News is really at the core: if you can make News one of the consistent outputs of your company, and you can see the results of News on your actual work, then you should do it.

Like everything else at a startup, brand is one tool. Don't use that tool unless the founders are strong with it and there's a well-defined path between that brand and traction.

Show HN: InstaPart Build circuit boards faster with instant parts snapeda.com
300 points by natashabaker  1 day ago   102 comments top 23
natashabaker 1 day ago 8 replies      
Dear HN Community,

I wanted to introduce our startup SnapEDA to the HN community. We recently completed Y Combinator, and have been quiet about the platform while we've been improving it. With that said, we'd love to get feedback from the HN community!

Our goal is to build a canonical library for making circuit boards: one trusted, centralized place to get digital models. These digital models include PCB footprints, schematic symbols, and 3D models. The library exports to a growing set of popular EDA tools: EAGLE, Altium, KiCad, Cadence OrCad/Allegro (Beta), & Mentor PADS (Beta).

The library is free because we believe in making this data widely accessible to enable innovation. The purpose of this new feature, InstaPart, is to give designers an option to "skip the queue" and get a part quickly if it doesn't yet exist in the free library. Once that part is made, it is then made available for the entire community to download for free. Growing the library is a top area of focus, so we hope to eventually render the InstaPart feature obsolete and just have everything available natively. :-)

In terms of standards, all new libraries are being made to IPC, and we also source models by partnering with component manufacturers. To ensure quality, we have an automated verification checker on each part page that provides a pass/fail result on common manufacturing issues that we plan to expand with additional checks.

Thanks HN!


zbrozek 1 day ago 3 replies      
I'm a professional hardware engineer, and doing this kind of work is something I do frequently and begrudgingly. Having this kind of service available will be a huge help. Some thoughts:

- Who writes the style guide? How do you make aesthetic decisions?

- Will you support multiple symbol styles? Would it be possible to upload stylesheets or specially annotated schematics and then regenerate already-extant parts in that style?

- Is there any intention of making the file writers open-source? Altium, in particular, has a stupid and annoying file format and it would be a gift to the community to be able to write good PcbLib and SchLib files. It would make it easy for me to write a linting and style-casting tool.

- Is there any chance of bringing down the latency, possibly with the application of more money? My rule of thumb is that it's worth spending about a hundred dollars to save myself an hour. A typical smallish library part takes me about five or ten minutes if I have to do both schematic and footprint, so waiting a day isn't really attractive at any price. But if I could throw money at you to get a result turbospeed, that'd be worthwhile.

compumike 1 day ago 4 replies      
Shocking that it's 2016 and the component manufacturers aren't providing this data free and online, right next to the datasheet link.

Then again, after working with these companies, it's not so surprising! Congrats on the launch.

zbjornson 1 day ago 3 replies      
Not an EE. Am I correct that you replace this lengthy process? https://learn.sparkfun.com/tutorials/designing-pcbs-smd-foot...
BrooklynRage 1 day ago 1 reply      
Bookmarked this site. The database seems super useful.

I have mixed feelings about the custom footprint service -- I've worked with ~900-pin BGA SoC-type parts, and paying $30 to do that would be a no brainer, but paying the same for a 8-pin LDO would be a tougher sell -- maybe scale prices with part complexity?

The tougher sell to me is trust / verification of the InstaPart models before the community can vote them up or down. For most teams that I've been on, the most time-consuming part isn't really the pinout generation, but rather the checking of large parts (often 2 engineers checking pin-by-pin to ensure that footprint matches data sheet). I'd be much more comfortable using it if you outlined how pinouts are verified before sending them out to the customers.

I hope this works out! Making footprints is a huge PITA, and I'd love to be rid of it.

Tom1971 1 day ago 1 reply      
I just created an account with the LinkedIn login option.

When I get back to the front page, it shows my full name, for everybody to see, announcing that I just joined SnapEDA.


FreedomToCreate 1 day ago 1 reply      
As a designer, this looks like a great convenience, since making a symbol and footprint usually takes half an hour on average. The 3D feature is the most useful part, as a proper model takes much longer, but at $79 it could be expensive for people like me who design boards with many ICs and unique components. With that said, in a very time-constrained project with a lot of new components which we have no symbols for, if I could select and buy everything I need in a packaged deal, that would be appealing.
Nanite 1 day ago 0 replies      
This is actually quite useful. As an occasional user of a variety of layout packages, once in a while I run into that one rare part which isn't in a library, and end up wasting an hour finding the specs, measuring the part, and figuring out how the component editor works.
jazzychad 1 day ago 1 reply      
Very cool! I can't seem to find any links to click on to actually order parts from manufacturers when I find a part - e.g. https://www.snapeda.com/parts/MCP23017-E/SP/Microchip/view-p...

There's the whole section at the top about availability and average price, but no link to go buy it? In addition to the InstaPart revenue, are you also going to make money with affiliate sales to parts sites (a la Octopart)?

nzjrs 16 hours ago 0 replies      
What's the liability situation if you deliver an incorrect footprint?
LAMike 1 day ago 1 reply      
Is this where people should go to make production-level PCBs for products that are prototyped on something like a Raspberry Pi? Or is it serving a different market?
madengr 1 day ago 1 reply      
Why? Making schematic symbols and PCB footprints is not time consuming, and at least you'll have the proper paste mask. Altium has an IPC footprint generator wizard, and 3Dcontentcentral has lots of user contributed parts in STEP format.
timecop 14 hours ago 0 replies      
Anyone who can't spend a few minutes in their EDA tool of choice to create a symbol or footprint, and instead chooses to pay $29 here, should not have a job (or a hobby) making electronics.
pmorici 1 day ago 1 reply      
Seems like an interesting service. I'm kind of confused by the site though.

On the landing page it says, "Get any schematic symbol and PCB footprint delivered in 24 hours. Just $29." Then I did a search for a part and clicked request, and after signing up for an account it said that to get it in 24 hours I needed to pay $79, and that $29 was for 5-day service. I also somehow ended up on a page at one point that said you could request any part for free. So which is it, really?

I also found the social network aspect of the site off-putting, particularly since there was no mention of it. After I signed up for an account to give the request-a-part service a try, I saw my user name plastered on the site's front page in a feed of recently signed-up users, and deactivating my account doesn't remove that.

legulere 1 day ago 1 reply      
Your terms and conditions page has its third paragraph duplicated: https://www.snapeda.com/about/terms/
new299 23 hours ago 1 reply      
What license are the footprints provided under? I couldn't see this on the main page?
honkhonkpants 1 day ago 1 reply      
I think your footprints should be shown with dimensions, because before I drop hundreds or thousands of dollars on a run of PCBs, I'm going to have to check your footprints to make sure they are correct, and that could actually take me longer than drawing them from scratch.
IgorPartola 22 hours ago 1 reply      
I mostly use Fritzing. Will it be supported at some point?
necdetalpmen 1 day ago 0 replies      
Looks great, someone had to do this :-) Can't wait to give it a try!
wyager 1 day ago 0 replies      
Awesome! Making footprints is the single most annoying part of PCB design. Looks great!
sandGorgon 1 day ago 0 replies      
you should producthunt this!
transfire 1 day ago 0 replies      
Machine Learning: Models with Learned Parameters indico.io
305 points by madisonmay  3 days ago   31 comments top 6
antirez 3 days ago 4 replies      
I strongly advise everybody with a free day (and nothing much better to do) to implement a basic fully connected feedforward neural network (the classical stuff, basically) and try it against the MNIST handwritten digits database. It's a relatively simple project that teaches you the basics. On top of that, the more complex stuff becomes more approachable to understand. To me this is the parallel of implementing a basic interpreter in order to understand how higher-level languages and compilers work. You don't normally need to write compilers, just as you don't need to write your own AI stack, but it's the only path to fully understanding the basics.

You'll see it learn to recognize the digits; you can print the digits it misses and you'll see they are sometimes actually hard even for humans, or sometimes you'll see why it can't understand a digit that's trivial for you (for instance, it's an 8 but the lower circle is very small).

Also, backpropagation is an algorithm which is simple to develop an intuition about. Even if you forget the details N years later, the idea is one of those things you'll never forget.
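As a concrete starting point for the exercise described above, here is a minimal sketch of a fully connected network trained with backpropagation. It uses a tiny XOR dataset instead of MNIST so it stays self-contained (swapping in the MNIST images is the real exercise); the layer sizes, learning rate, and iteration count are arbitrary choices:

```python
# Minimal fully connected network + backprop, trained on XOR instead of
# MNIST to stay self-contained. The mechanics (forward pass, chain rule,
# gradient descent) are the same ones the MNIST exercise needs.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error loss, chain rule layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent update (gradients summed over the 4 samples)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically converges toward [0, 1, 1, 0]
```

Replacing X/y with flattened MNIST images and one-hot labels (and the output layer with 10 units) turns this toy into the exercise antirez describes.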

nkozyra 3 days ago 2 replies      
This is well-written and I applaud any step toward demystifying the sometimes scary sounding concepts that drive much of the ML algorithms.

Knowing you can pretty quickly whip up a KNN or ANN in a few hundred lines of code or fewer is one of the more eye-opening parts of delving in. For the most part, supervised learning follows a pretty reliable path and each algorithm obviously varies in approach, but I know I originally thought "deep learning? ugh, sounds abstract and complicated" before realizing it was all just a deep ANN.

Long story short: dig in. It's unlikely to be as complex as you think. And if you've ever had an algorithms class (or worked as a professional software dev) none of it should be too daunting. Your only problem will be keeping up the charade if people around you think ML/AI is some sort of magic.

djkust 3 days ago 1 reply      
Hi folks, authors here in case you have questions.

This is actually part 3 in a series. For developers who are still getting oriented around machine learning, you might enjoy the first two articles, too. Part 1 shows how the machine learning process is fundamentally the same as the scientific thinking process. Part 2 explains why MNIST is a good benchmark task. Future parts will show how to extend the simple model into the more sophisticated stuff we see in research papers.

We intend to continue as long as there are useful things to show & tell. If there are particular topics you'd like to see sooner rather than later, please leave a note!

yodsanklai 3 days ago 1 reply      
I took Andrew Ng's ML class on Coursera. It was certainly interesting to see how ML works, but I'm not sure what to do with it. In particular, I'm still unsure how to tell beforehand if a problem is too complex to be considered, how much data it'll require, or what computing power is needed.

Are there a lot of problems that fall between the very hard and the very easy ones? and for which enough data can be found?

throwaway13048u 3 days ago 3 replies      
So this may be a place as good as any -- I've got a decent math background, and am teaching myself ML while waiting for work to come in.

I'm working on understanding CNNs, and I can't seem to find the answer (read: don't know what terms to look for) that explains how you train the convolutional weights.

For instance, a blur might be

[[ 0 0.125 0 ] , [ 0.125 0.5 0.125 ] , [0 0.125 0]]

But in practice, I assume you would want to have these actual weights themselves trained, no?

But, in CNNs, the same convolutional step is executed on the entire input to the convolutional step, you just move around where you take your "inputs".

How do you do the training, then? Do you just do backprop on each variable of the convolution stem from its output, with a really small learning rate, then repeat after shifting over to the next output?

Sorry if this seems like a poorly thought out question, I'm definitely not phrasing this perfectly.
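For what it's worth, the usual answer: yes, the kernel weights themselves are trained, and because the same weights are used at every output position, each weight's gradient is the sum of the contributions from all positions (one backprop pass, not a per-shift update). A sketch of that sum, using a made-up input and the blur kernel above, checked against a numerical gradient:

```python
# Sketch of how a shared convolution kernel gets its gradient: each kernel
# weight appears at every output position, so its gradient is the SUM of
# the per-position contributions.
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' cross-correlation (what most DL frameworks call conv)."""
    H, W = x.shape; kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 6))                       # made-up input patch
k = np.array([[0, .125, 0], [.125, .5, .125], [0, .125, 0]])  # the blur above

y = conv2d_valid(x, k)
g = rng.normal(size=y.shape)     # stand-in for dL/dy from upstream layers
# Analytic gradient: dL/dk[a,b] = sum_{i,j} g[i,j] * x[i+a, j+b],
# which is itself a valid cross-correlation of x with g.
dk = conv2d_valid(x, g)

# Numerical check of one entry confirms the weight-sharing sum is right
# (for the toy loss L = sum(y * g), so dL/dy is exactly g).
eps = 1e-6
k2 = k.copy(); k2[1, 1] += eps
num = (np.sum(conv2d_valid(x, k2) * g) - np.sum(y * g)) / eps
print(abs(num - dk[1, 1]) < 1e-4)  # True
```

So there is no tiny learning rate or shifting loop: the shifts are already accounted for because the sum over positions falls out of the chain rule.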

aantix 3 days ago 1 reply      
There's been a couple of times where I needed to classify a large set of web pages and used a Bayes classifier.

I would start to get misclassified pages and it was so difficult to diagnose as to why these misclassifications were occurring. Bad examples? Bad counter examples? Wrong algorithm for the job? Ugh.

I ended up writing a set of rules. It wasn't fancy but at the end of the day, I understood the exact criteria for each classification and they were easily adjustable.
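One way to make a naive Bayes setup like this easier to debug is to keep the per-word log-probabilities inspectable, so each misclassification can be broken down word by word rather than guessed at. A toy sketch (the training data and class names are made up):

```python
# A tiny multinomial naive Bayes where each class's per-word
# log-probabilities stay inspectable, so "why was this misclassified?"
# is answerable term by term.
import math
from collections import Counter, defaultdict

class InspectableNB:
    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)   # class -> word counts
        self.priors = Counter(labels)        # class frequencies
        for doc, lab in zip(docs, labels):
            self.counts[lab].update(doc.split())
        self.vocab = {w for c in self.counts.values() for w in c}

    def word_logp(self, lab, word):
        # Laplace smoothing so unseen words don't zero out a class
        c = self.counts[lab]
        return math.log((c[word] + 1) / (sum(c.values()) + len(self.vocab)))

    def explain(self, doc):
        """Per-class score plus each word's contribution."""
        out = {}
        for lab in self.priors:
            terms = [(w, self.word_logp(lab, w)) for w in doc.split()]
            out[lab] = (math.log(self.priors[lab]) + sum(p for _, p in terms),
                        terms)
        return out

    def predict(self, doc):
        return max(self.explain(doc).items(), key=lambda kv: kv[1][0])[0]

nb = InspectableNB()
nb.fit(["cheap pills buy now", "meeting notes agenda", "buy cheap meds"],
       ["spam", "ham", "spam"])
print(nb.predict("cheap buy"))  # "spam": the spam-leaning words dominate
```

Printing `explain(doc)` for a misclassified page shows exactly which words dragged the score toward the wrong class, which is the diagnostic step the hand-written rules provided for free.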

Sublime Text 3 Build 3124 sublimetext.com
322 points by tiagocorrea  4 days ago   224 comments top 36
guessmyname 3 days ago 8 replies      
> With these latest changes, Sublime Text 3 is almost ready to graduate out of beta, and into a 3.0 version.

Wow, finally! I have been using ST3 for several years (wow, years) and always wondered what was keeping the developer from labeling that version as stable. Of all the issues reported here [1], I have never encountered one while using the editor for pretty much all my work. Those $70 are definitely worth every penny. Sometimes I cringe at videos featuring ST with an unregistered license; this week it happened with a course from Google engineers via Udacity. Google engineers!!! As if they don't have a miserable $70 to buy a license. I assumed they were in a rush and didn't have time to set the license, which I hope they bought.

Anyway, thanks for all the hard work Jon, and recently Will.

[1] https://github.com/SublimeTextIssues/Core/issues

spdustin 4 days ago 1 reply      
From the release notes [0]:

> Minor improvements to file load times

I didn't even realize there was room to squeeze out more performance here. Sublime Text is wicked-fast opening pretty much everything I throw at it.

[0]: https://www.sublimetext.com/3

derefr 4 days ago 4 replies      
> a menu entry to install Package Control

If Sublime is going to acknowledge Package Control, why not just ship with it? I'm sure the Package Control folks would be glad to move their repo upstream.

gravypod 4 days ago 5 replies      
I wish the Sublime Text people would open source their code. I'd buy it from them in that event, and I'd finally have a text editor to recommend. Atom, VS Code, and everything else are completely blown out of the water by ST. There's a reason it's still around: ST is the only thing that can even think of doing what Sublime Text does.

Good work to the people behind it; it's an amazing feat, no doubt. Just please consider making it free software, for those of us who care about that. Amazing work nonetheless.

wkirby 4 days ago 3 replies      
Actually, the only thing that keeps me from switching back to ST3 is Atom's first-class support for `.gitignore` and excluding files from the quick-open menu.

I know there's a package that claims to update the file ignore pattern to match the open project, but it really doesn't work well at all.

connorshea 4 days ago 7 replies      
I really, really wish it was open source. I understand why it isn't, but with its main competitors being Atom and VSCode, it's hard to warrant using a closed source text editor even if it's so much faster and I'm used to it.
statictype 3 days ago 2 replies      
The only thing I really want from Sublime (or VSCode) is an API that lets me display an output panel/sidebar with an html engine embedded in it.

Atom provides this - it also provides arbitrary html in the editor itself which is cool but also what makes it slow.

I just want it for the supplementary panels that show build outputs, documentation or other contextual information.

That's enough to let me customize it for our team's usage.

supergetting 3 days ago 0 replies      
When I first started using Sublime, I disliked the occasional popups, and thought I'd just keep using it without paying $70 for a text editor?!?!

But I HAD to buy the thing! Not because I wanted to avoid the annoying popup, but because of everything we know about Sublime today; performance, simplicity and intuitiveness of the UI, packaging system, etc.

The article mentions that they're coming out of beta in the near future! nice! and I just noticed they're already mentioning sublime text version 4 (under sales FAQ page).

modeless 4 days ago 3 replies      
I'm surprised so many people here are using Sublime to edit >100 MB files. Yes, it handles them (as long as the lines aren't too long), but it always has to load the entire file before displaying the first line. Aren't there some editors that don't have to do that?

On a related note, large files are often binary. I appreciate that Sublime can display binary files but it's pretty bare bones, and there's no editing support. I'd love to see what Sublime HQ could do if they worked on binary editing support for a couple of milestones. For example, the ability to locate and edit strings in binary files would be cool, as would a basic hex editor.
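The "locate and edit strings in binary files" part doesn't strictly need the whole file in memory; a sketch of the idea using Python's mmap, with a made-up file and a same-length in-place patch:

```python
# Locate and patch a string in a binary file via mmap: the OS pages data
# in on demand, so even very large files aren't read up front. The path
# and file contents here are fabricated for the demo.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "firmware.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 1024 + b"Hello, world" + b"\xff" * 1024)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        off = mm.find(b"Hello")            # locate without a full read
        if off != -1:
            mm[off:off + 5] = b"HELLO"     # same-length in-place edit only
        mm.flush()

with open(path, "rb") as f:
    f.seek(1024)
    print(f.read(12))                      # b'HELLO, world'
```

A length-changing edit would force rewriting the tail of the file, which is exactly the part a real editor has to engineer around.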

Manishearth 3 days ago 1 reply      
> Also new in 3124 is Show Definition, which will show where a symbol is defined when hovering over it with the mouse. This makes use of the new on_hover API, and can be controlled via the show_definitions setting:

Is this just an API hook which a plugin can add a definition resolver to, or does this automatically find definitions for all builtin languages? If the latter, this is super cool!

If the former, I'm going to try and update https://packagecontrol.io/packages/YcmdCompletion for this


Edit: omg works out of the box. Seems to be a simple grep-based thing (so it lists all definitions of the same name), but that's still quite useful!

martanne 3 days ago 3 replies      
A number of people expressed the need to edit large files. For the development of my own editor[0] I would be interested to know what kind of usage patterns most often occur. What are the most important operations? Do you search for some (regex) pattern? Do you go to some specific line n? Do you copy/paste large portions of the file around? Do you often edit binary files? If so what types and what kind of changes do you perform?

[0] https://github.com/martanne/vis

mangeletti 4 days ago 1 reply      
I have one single complaint about Sublime Text:

In order to truly clear your history (files open, last searches, etc.), I have to maintain a script with the following:

  find ~ -name '*.sublime-workspace' -delete
  rm ~/Library/Application\ Support/Sublime\ Text\ 3/Local/Session.sublime_session
Other than that, see https://news.ycombinator.com/item?id=12553515

sagivo 4 days ago 2 replies      
Sublime is by far my favorite editor: fast, with lots of plugins, and especially good if you work with big files. I sometimes need to work with files larger than 150MB and it takes a few seconds to open them. Atom crashes and can't even open the files.
TsomArp 3 days ago 0 replies      
I'm an UltraEdit user. I have tried Sublime Text because of all the nice comments, but I don't see it. Can somebody who also uses, or used to use, UE tell me what I'm missing?
onetom 3 days ago 0 replies      
Here comes a piece of history. I sent this reply to the Sublime Text 2 purchase confirmation email I got from Jon Skinner on 2011/08/30:

> hi jon,
>
> my salary was reduced by 30% just yesterday, but when i woke up today,
> they 1st thing i did was purchasing sublime. it's that fucking awesome!
> i wish it would be open source, so people could learn from it...
> but, hey, i doubt many open source developers could contribute quality
> code to it.. :)
>
> if u could implement the elastic tab stop feature (which has some reference
> implementation on the nickgravgaard.com/elastictabstops/ site), then i
> would be happy to pay another 60bucks for it.
> actually, u could sell separate license for the version which has this
> feature...
> i know it would be quite elitist, but it worked well with the black macbooks
> back then...

putlake 3 days ago 0 replies      
On Mac I use TextWrangler for quick editing and VS Code as the IDE. I never need to open super large files so after reading this discussion I tried to open a 177MB text file in TextWrangler and it opened quickly and was editable. Searching within the file was also super fast.
jayflux 3 days ago 0 replies      
The official Sublime Rust package supports this update: https://github.com/rust-lang/sublime-rust/pull/87
ricardobeat 3 days ago 0 replies      
The addition of Phantoms [1] is the killer feature in this release for me. This will allow embedding custom HTML [2] inline in the editor, which is something I've been dreaming of - the power of Atom's nice plugin UIs with no compromise in speed!

[1] https://www.sublimetext.com/docs/3/api_reference.html#sublim...

[2] https://www.sublimetext.com/docs/3/minihtml.html

woodruffw 4 days ago 0 replies      
Awesome! I'm especially liking the Phantoms API - there's a ton of potential there for richer plugins and graphical inlining.

I've moved between maybe half a dozen editors over the past half-decade, but I always end up coming back to Sublime.

0xmohit 3 days ago 2 replies      
nonbel 3 days ago 0 replies      
If this issue with the SublimeREPL package could be resolved, it would be perfect: https://stackoverflow.com/questions/27083505/sublime-text-3-...

Anyone have a guess as to why this happens? It causes me headaches using R as well.

jeffijoe 3 days ago 0 replies      
Been using Sublime Text 3 for years (and I do have a license), and been trying out Atom/VSCode lately. Atom can get real slow, but I feel like the extensions for Atom are of higher quality (linters, TypeScript integration). I think it might have something to do with HTML/CSS/JS vs Python for plugin development.
dman 4 days ago 0 replies      
One minor feature request - can you please simplify the setting of font size for the tree browser and for the menu entries? (I know that the tree browser font size can be set by the theme, but it is a bit non-trivial using PackageResourceViewer to patch theme files to do this.) I still haven't found a way to change the font size of the menu entries.
codepunker 3 days ago 0 replies      
Amazing! ST is the only editor I considered good enough to pay for and I can see it's getting even better!
makapuf 3 days ago 0 replies      
I suspect ST3 still being in beta is a service to ST2 users, who paid for it a relatively short time before the first ST3 betas, and whose license key is valid for the ST3 betas but not for ST3 once it's out. Not that I am in this case at all.
muktabh 3 days ago 0 replies      
Thanks for the hard work sublime text team for making programming so enjoyable. I already find ST3 so flawless that to think it still can be improved is beyond me. Once again, great going.
pbnjay 4 days ago 0 replies      
Whoa very nice new features! Now I need GoSublime to support them!
barpet 3 days ago 1 reply      
I am an Emacs person who converted from ST (on Windows/Linux) and TextMate, but I have always preferred ST/TextMate over anything else available.
realraghavgupta 3 days ago 0 replies      
Atom is really good, but after using it and some others, I am back to Sublime Text. They are still working on version 3.0, but even the beta is very stable.
dikaiosune 4 days ago 1 reply      
Very cool to see a screenshot from servo's codebase.
ld00d 3 days ago 0 replies      
My favorite new feature:

> Settings now open in a new window, with the default and user settings side-by-side

brightball 3 days ago 0 replies      
Sublime is slowly making me end my hold out that Textmate will one day take over again.
alexmorenodev 3 days ago 0 replies      
I'm craving a transparent background.
niahmiah 3 days ago 0 replies      
nobody cares. Atom ftw
gotofritz 3 days ago 2 replies      
There are a few annoying things about ST3 which aren't so on Atom

- no engagement with the developers. For $70 I expect to be able to file bug reports and maybe some feature requests. Without being banned.

- multi file search is ridiculously poor. I can't save search patterns, the long text box with all the file patterns is hard to navigate (on OS X if you put the cursor at the end it starts scrolling), but most of all the result pane doesn't stick as it used to. I have to search again every time I click on a file from the results then close it.

- copy and paste is STILL buggy on OS X. Sometimes you paste a string and it puts it in the line above the one where you have your cursor.

- package control is not included. It's just common sense

- the scrollbars are invisible on OS X. I don't want a minimap; it uses too much space and adds too much noise

- I use BracketHighlighter. Every time I want to customise the highlight colour it's a royal pain in the neck because of ST3's crazy architecture

I'd much rather use atom these days.

bcherny 4 days ago 2 replies      
Does Sublime still exist? With all the hubbub about VSCode and Atom, I've sort of forgotten about it.
The decline of Stack Overflow (2015) hackernoon.com
288 points by abaschin  9 hours ago   253 comments top 70
Latty 8 hours ago 13 replies      
People always talk about how SO is so terrible, and yet it's still the best resource by a mile. People have just forgotten that the lack of that carefully curated environment created the complete mess that was all the forums we used to have before.

Sure, it's not ideal - SO could do better in terms of helping people understand the site's goals and enforcing the rules in a less hostile way, but they are working on that (a lot of the more hostile rule enforcement tropes are banned and filtered against), and it's a hard problem.

SO isn't dying any time soon, and the content is still good. If you want to kill it, please go ahead and solve the issue of explaining to users how to contribute quality content and getting them to take that in instantly.

I used to contribute a lot, which tapered off as I had other things filling my time. Towards the end though, the help vampires were getting to me, and I understand why people are harsh towards new users in some situations, it's an easy trap to fall into. Trying to fix that problem by stopping the curation of the content is insane, however. That's a fast route to going from some people being turned away to having no decent content.

sklivvz1971 8 hours ago 10 replies      
Disclaimer: I work in the Q&A team at Stack Overflow.

What people don't understand is that SO serves literally all developers. No matter what choices we make there will be upset people. Do we reduce moderation? Help vampires take over. Do we keep moderation as is? People complain we are too strict (or not enough). Do we allow only English? We are elitist. Do we allow SO in other languages? We are fragmenting the community.

After a few years, I've heard it all.

dismantlethesun 8 hours ago 4 replies      
I've found that if you go to Stackoverflow for a fast-moving topic like Javascript, the first answer isn't correct in modern terms. The next set of answers, without check marks, are likely more correct.

So you have to read the entire page to get a glimpse of the truth.

This can be bad if you're just looking for an immediate solution; for learning, however, it's fantastic, because it shows you a recorded history of how things evolved over time in a framework.

makecheck 7 hours ago 5 replies      
While Stack Overflow is still an outstanding resource, they never really solved the "why bother?" problem for the really difficult subjects.

Your choices are: answer a trivial JavaScript question and get a million points, or spend an hour painstakingly explaining a hard problem for maybe one point (where, half the time, the original submitter doesn't even bother to accept your answer, so you lose even that small point boost).

There needs to be some scaling factor. For example: if the number of questions anywhere on a certain topic is quite small then popular answers in that area could receive more credit; or, perhaps individual parts of a post could be voted on to increase value (say, you see a really detailed example so you tip that answer for taking the time to write it all out).

tedmiston 7 hours ago 2 replies      
> However, a 2013 study has found that 77% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions.

Is this actually a problem?

I've always viewed Stack Overflow as a "write once, read many" community. For top questions, it looks kind of like Quora.

Checking my own stats: I've been a member for 7 years, asked 4 questions, and given 45 answers while reaching ~86k people [1]. My karma is not super high at 936. However, I've gleaned so much value from the SO community (and saved so much time), that these numbers only begin to scratch the surface.

Using these numbers as metrics doesn't reflect the value of SO's knowledge base. I don't think Stack Overflow is on the decline at all.

[1]: http://stackoverflow.com/users/149428/tedmiston?tab=profile

V-2 5 hours ago 1 reply      
> Someone at Hacker News expressed a common experience for many programmers (experienced or not) when trying to participate on Stack Overflow.


> 4. Respond to comment about a missing semicolon that got deleted when I was cutting/pasting/formatting my code. (Despite the error msg making it clear that the missing semicolon isn't the issue)

Who cares. I would just ignore this.

> Duplicate answer person complains that their entry was posted first. I advise them that the timestamp indicates the other poster was first and they reply that it is a time zone bug.

Yes, very "common experience" ;) Anyway, who cares? I wouldn't waste my time on such an idiotic argument at all

> 10. Check back one more time and see that someone has downvoted my question

> 11. Email the mods to get the downvote removed

Oh for the love of God, who cares??? So it got downvoted, big deal, why do you need to bother the mods about it... (Did you wonder how they can get cranky sometimes?)

Some people really should chill out, or consider themselves lucky for having way too much free time on their hands

drinchev 7 hours ago 0 replies      
Well can we just think the other way around.

Stack Overflow is a community-driven website for a society that is used to following rules and being disciplined.

Nowadays, the popularity of the website has put it in a bad position: handling millions of impolite pseudo-developers who've heard that it can help them with their problems.

In other words, the community and popularity changed, not Stack Overflow. And no, it's not dying, it's just working! Sorry for those of us who remember the times when questions were mostly high-quality, but I don't think there is a way to prevent collateral damage in this case.

I prefer to read "opinionated question" instead of 10 paragraphs about a problem that in the end is unsolvable by logical reasoning.

cezarywojcik 7 hours ago 1 reply      
On the opposite end of the "new user" perspective that is trying to ask a good question, as someone who is seeking to sometimes answer questions, it is pretty hard to actually find a good question. The vast majority of the questions I run into _are_, in fact, duplicates, poorly worded/incomprehensible, far too broad, etc. (one example that has stuck with me is "how do I install HTML/JavaScript on my computer?").

Though perhaps there are good, legitimate questions out there, it is also conceivable that some frustrated new users think that their question is appropriate for SO when it is not. This most often happens, from what I've seen, in the "far too broad" category. Just looking at SO right now, for example, I saw a question that was asking how to pass data on an iOS app from one place to another. In that user's mind, he/she has probably been trying to figure out the basics of making an iOS app, and this seems a legitimate question. However, this is an incredibly broad architecture/design question. SO isn't a resource to hand-hold you when you're learning something new. It's a resource for asking specific questions when you can't find the solution anywhere else (and you've actually tried).

justinlardinois 7 hours ago 0 replies      
I don't really agree with this article's assessment.

If your question isn't clear, it's not possible to get good answers. If your question is based on a fundamental misunderstanding of the technology you're using, the best possible answer is "You're fundamentally misunderstanding the technology you're using."

Stack Overflow encourages users to edit other people's questions for clarity and formatting, which I think is helpful for a lot of new users that don't know how the site works yet. And my experience has been that when there isn't enough information in a question, you tend to get comments asking for more, which is productive.

I do agree that Stack Overflow can be a bit daunting for new users, especially because you're not allowed to comment right off the bat. I believe the threshold is 50 reputation, which can be hard to get early on, because questions get answered so quickly on Stack Overflow that it's hard to find questions that still need an answer.

shagie 4 hours ago 0 replies      
Consider this, regarding the "I keep finding closed but interesting questions on SO" complaint - the "what is the best linter for PHP" or "what is your favorite cartoon" or "where is the best place to meet female programmers (for romance)" questions (ref http://i.stack.imgur.com/x9ik2.jpg ) - why not ask those questions _here_?

If the answer to that is "because that isn't a good place to ask those questions" or "because HN isn't set up for answering those types of questions" then consider the possible response of "maybe SO isn't set up for answering those types of questions well either?"

When there is more noise than signal in a question and answer page, it is useless. Go dig around https://community.oracle.com/community/java and consider why you don't put site:community.oracle.com in your search (or for that matter, see how well one can find the answer to an error message in /r/javahelp). When there are dozens of answers that consist of "try libXyz" it isn't useful - you're going to have to dig through each of those to see if it works or not for you... and you might as well have done a google search instead.

When the questions are "how do you make a triangle with '*'" a dozen times over in September, those questions need to be closed so they don't waste the time of people who are trying to find good questions to answer.

Dowwie 8 hours ago 2 replies      
I disagree with this. SO helped me with Python, Postgres, and SQL countless times.

If we are going to consider the value of SO, I would argue that it is a cornerstone.

mhomde 8 hours ago 1 reply      
I'm a bit divided about Stack Overflow. On the one hand, it's simply one of the best, if not the best, resource for programmers. On the other hand, it's become somewhat toxic and counterproductive. The better you become as a programmer, the less value you derive from it. The true niche experts are less and less to be found (i.e. product/project owners and Microsoft/Google/Apple etc. employees), and the other replies will often be an exasperating mixture of answers people have googled and complaints about some meta-aspect of the question.

Any community that reaches a certain size will face unique problems and I think Stack Overflow has some of the same problem as reddit does: you have to be very careful on how you give power to users. Power corrupts and becomes a goal/game in itself. Karma/power is a great incentive in the beginning of a community, but can become destructive in the long run. That some programmers have a certain type of personality is probably not helping either.

There seem to be many småpåvar (Swedish, roughly "petty tyrants") on Stack Overflow who love to wield the small amount of power they've accrued without actually contributing that much. On the other hand, you have to enforce rules and curation to keep quality up.

It's a very fine balance and hard to get right. It's mostly about human psychology and incentives. I think there's some tweaks they could do to improve things but I also understand that from their perspective why change something that works?

The danger is that the true experts stop helping/answering questions on Stack Overflow because they find the community becoming too toxic. Might turn into a downwards spiral until there's mostly trolls and newbies left.

inanutshellus 1 hour ago 0 replies      
I found myself a little incensed at this article, rather than by it. Yup, there are limits on new users' privileges. Yes, there are users that play The Reputation Game. Yes, there are many questions that don't get answered to one's satisfaction. Yes, there are trolls. Yes, there are disillusioned users. Yes, yes, yes.

Even so, I also find all of this to be Perfect-As-Enemy-of-Good whinging. Instead it could've been a plea full of suggestions. A Call To Action!

So let's do that now, though I may come off as a prick here because I kinda think all of this whining is the real problem of the internet.

> The privilege limits

IMHO they're rational and reasonable, but your specific use-case was about new folks not being able to leave comments. So how could we solve this? Perhaps new users ought to be able to, but only the author'd be able to see them. When either the Q or A author upvoted it, it'd become visible to the world?

> Troll responses to your bad/incorrect/misleading answer

That sounded like a bummer. You know you can flag these comments already so... punish these bad actors in the provided way and move on.

> Respond to comment that says my question is a duplicate (it's not, which I clarify to avoid it being closed as duplicate)
> [...]
> Another issue with this is that duplicates show up despite the crotchety moderators complaining about it.

You can't have your cake and eat it too. It's casual, usually-helpful internet folk doing their (usually) altruistic best to help. Most of the time, this works great. Blog posts like this point out the exception, and are valuable, and they're very enticing clickbait. But when they offer only boo-hoo's and no ah-hah's, move on.

sylvinus 8 hours ago 1 reply      
What most people fail to understand (but was explained very well by Joel in some talks) is that SO is primarily optimized for Google & read-only users just looking for the answer to a common issue. By that metric they are extremely successful.

Many things people complain about are deliberate design choices that actually made SO popular in the first place.

superJimmy64 6 hours ago 0 replies      
There are obvious exceptions; however, I have come to understand the greatness that is SO. All that is needed is a well-thought-out question with a little bit of work to show on the side.

I have a theory about why many complain about SO (please don't comment about this line, there are obviously exceptions):

There has been a ridiculous sense of entitlement with the growth and recent appeal of tech jobs in the past 5 years.

All this crap about "trolling" getting out of hand, not enough diversity (THE FIELD WAS PRIMARILY FILLED WITH NERDS OFTEN LACKING ANY SOCIAL SKILLS, no one else wanted to look or hang out with "that guy who is good with computers") etc.

It's a field that was mainly driven out of the desire and enjoyment that would be had messing around on CPU's. Therefore most of the good ones (among the diluted masses of "experts" nowadays) spent a great deal of time on these things. I'm not surprised that somebody would get pissed off if another came around and started asking for the answers to things without any real effort or drive being shown.

SO will forever be a poor resource for the huge incoming population of coders.

emodendroket 1 hour ago 0 replies      
Frankly I'm tired of reading these pieces. For one thing, they always focus on being mean to new users, which in my opinion isn't the problem. My biggest complaint about Stack Overflow is that you get the same amount of points for answering an easier question or a hard one, so complex questions languish while "please write a regex for me" questions get five answers in half an hour.
firefoxd 8 hours ago 3 replies      
No, SO is not declining. All the common questions have already been answered.

The main problem is people don't research their problem for more than a minute before asking a question.

fma 7 hours ago 0 replies      
One person has a few bad experiences and says it's in decline. I'll counter his bad experiences with my positive experiences of both asking and answering questions - and of finding more answers there than anywhere else.

There - now the universe is back in balance.

AdrienChey 8 hours ago 0 replies      
The article is probably right about some factors that explain the "77% of users only ask one question". But IMHO it misses an important one: most users find the answer to their question just by searching.

In fact, once you get rejected (by a troll or not) for asking a question whose answer you could have found on SO, you become more careful before posting.

Also don't forget that a huge majority of net users are ghosts / read-only :)

dustingetz 8 hours ago 2 replies      
Point based moderation generates toxicity, hacker news gets toxic too. Stack Overflow went all in on moderation before they understood the social consequences of it. There's an opportunity here for a social network with a moderation system that cares about how it makes people feel and how feelings impact user generated content.
menacingly 7 hours ago 0 replies      
Stack Overflow is useful, but a lot of that usefulness comes from its complete dominance of search results. By keyword stuffing in the sidebar (and last I checked the nofollow links) they maintain a strong search presence for virtually every topic.

So, this position as top tech knowledge hub is sort of artificially propped up at this point. You type in your query, arrive at a page with something vaguely like the question you asked, and are faced with pedantic flags about how the question you came to have answered is somehow unfit, plus maybe some useful info. Also maybe some outdated info with real, unaccepted answers below it.

I'm not aware of another site that so heavily depends on content it itself seems to consider unworthy, siloed into so many vaguely overlapping sub-sites.

okket 9 hours ago 3 replies      

Previous discussion: https://news.ycombinator.com/item?id=9837442 (1 year ago, 39 comments)

tedmiston 7 hours ago 0 replies      
To me this reads as if the author is not asking questions in a way consistent with Stack Overflow's guidelines [1]. They make these guidelines really explicit and clear in the FAQ.

> How do I ask a good question?

> Search, and research

> Write a title that summarizes the specific problem

> Introduce the problem before you post any code

> Help others reproduce the problem

> Include all relevant tags

> Proof-read before posting!

> Post the question and respond to feedback

> Look for help asking for help

However, writing questions in this way where you provide a MCVE, explain everything you've tried, and relate it to existing questions to help reviewers is time consuming. It shifts part of the time burden of a good question onto the asker vs reviewers or early answerers.

As someone who's done many reviews on Stack Overflow, I think following these guidelines is the best way to not get downvoted or flagged.

[1]: How do I ask a good question? http://stackoverflow.com/help/how-to-ask

oskarth 7 hours ago 0 replies      
> However, a 2013 study has found that 77% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions. With this article, I'm exploring the likely reasons for that very, very low percentage.

This is a power law distribution. It is to be expected.

ams6110 8 hours ago 0 replies      
Sounds very much like the Wikipedia problem: a bunch of long-term members working as mods don't have patience for new people and are hostile to people who aren't already in the club.
bootload 1 hour ago 0 replies      
"77% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions."

Old SO user, started during the beta (user:2092), never asked a question. I don't bother with SO because a) sub-standard login (OAuth); b) the time it takes to answer a question correctly with sufficient detail eats into time I could spend on other things; c) SO mods/responders are mostly arseholes.

I also hate how original authoritative documentation is drowned out on google with crappy code examples or empty questions, locked, down voted or ignored.

kjhughes 7 hours ago 0 replies      
None of the Stack Overflow moderation irritations matter to me in the end.

All that really matters to me is that SO has employed a reputation system that provides a strong quality signal for answers to questions that I have. This allows me to quickly assess the quality of any given answer based upon the reputation of the answerer and the upvotes that the answer has received.

All other SO problems, including any aggravations encountered while trying to give back to that community, are relatively insignificant.

hellofunk 5 hours ago 1 reply      
A bizarre problem I have had on SO is that, after years of gaining reputation into the 5 digits, now if I ask a question on a topic that is new to me, I often get the response that I should know better if I have a high reputation. It's weird. If I have my colleague ask the same question, a guy with much less reputation, the answers are helpful. It's as if the quality of the question is judged by how much the asker must already know. Which is ridiculous if you explore a lot of different technologies.

Then there are genuine trolls who, despite massive reputation, seem to have the sole goal of proving everyone wrong. It's also weird.

This image has never rung more true to me than when interacting with so many "power" users: http://i.imgur.com/Oy6mA.jpg?fb

Safety1stClyde 2 hours ago 0 replies      
There is a problem with giving moderation powers to entrenched individuals who are not experts.

I've seen this on the English language stack exchange, the Japanese language stack exchange, the Physics stack exchange, and on stack overflow. The people who are there on the site gathering points and up and down voting aren't necessarily experts, and in many cases they're amateurs or people who know something about one thing, yet have moderation powers over things they really don't know about.

The same applies to the dustier corners of Wikipedia, where entrenched non-experts often reduce articles to the level of their own ignorance. Since nobody is getting paid for their participation in these sites, it's hardly surprising that these people end up predominating.

yladiz 8 hours ago 0 replies      
Sure, Stack Overflow kind of sucks sometimes and people are sometimes really up their own asses about things being exactly right and having a question worded explicitly, but it is by far the best resource for programmers to find help from other programmers due to its ubiquitousness in the programming community. I haven't found any other alternatives better; Quora isn't useful for programming questions (a general Q&A/opinion style question is better suited for Quora, but not "What's wrong with my code?"), and I wouldn't use ExpertsExchange. Are there any other notable places to ask these kinds of questions?

So it sucks, but there's no resource to replace it. You might not like it, and since it's not something you're obliged to help with, you are free to stop contributing (unlike at work, where you can't just say "I don't like this, I'm going to stop working on it") - but why not help everyone and contribute, either to Stack Overflow or by making a new, better resource, rather than being grumpy and only helping your pessimistic self feel better?

An aside: The header image lags considerably when I scroll (on Safari 10)... Why isn't it just a static image on the page, and have JS to detect when the aspect ratio changes on a page level rather than an image level?

achikin 4 hours ago 1 reply      
I've answered 21 questions, got upvotes on 15 answers, 3 answers accepted by owners, asked 2 questions by myself and both were upvoted. I have no idea what these people in the article are talking about. SO is super friendly and super helpful.
obj-g 7 hours ago 0 replies      
I worship SO like the all-knowing deity it is, but I don't love it. It's a cruel god, like that of the Old Testament. I've never contributed to the site, though I've used it for years, and I never will -- simply because it's too arcane and silly and I don't wish to play the reputation game just to leave a simple comment or whatever. When a new god appears, I'll surrender completely.
fbreduc 1 hour ago 0 replies      
I've never been a user of SO; I don't find it useful except for really obscure stuff. I always reach for docs first, and those usually have my answer, and typically a better one. There are many occasions when co-workers will come and say "LOOK, HERE'S THE SO ANSWER, THE GOLDEN KEY" and that answer was often just wrong, or half the time didn't apply at all to what was happening.
kozikow 2 hours ago 0 replies      
I recently replied to SO question with a "New feature have been added that supports your use case: link" that was genuinely solving a problem.

The reply got removed by a moderator saying that I need to describe the solution and can't just provide the link. I didn't bother. Currently the only answer suggests sub-optimal solutions, and I am not replying to SO questions again.

It felt like a land grab - it wouldn't help the questioner, but it would help SO, by bringing more data to their platform.

notacoward 3 hours ago 0 replies      
Every user that has high karma/reputation/whatever on a site - including this one - should be forced to experience it from a noob's perspective once in a while. It can be a real eye-opener, as the experience for the in-group and the out-group can be almost totally different. How many times have even regular users of a site - not just noobs - pointed out a problem only to be downvoted, harassed, or even banned by the "senior users" who don't see the problem precisely because they are so senior? They're like the "senior architects" on a software project, who no longer contribute actual code (or novel opinions on this side of the analogy) but always sit in judgment of others'. I guess it's a universal human tendency, but the point is that karma systems should be designed to attenuate it instead of reinforcing it.
TheHippo 7 hours ago 0 replies      
I'm apparently in the top 0.54% for this year. Whoever says SO hates new users has never used the moderation tools and read the stuff new users write on the site.
tomc1985 6 hours ago 0 replies      
A large portion of my rep comes from questions I wrote that were (considerably later) marked "closed", for various reasons. All of them continue to accumulate thousands of views and the occasional point or two... that's never made much sense. Kind of a statement on how nonsensical StackOverflow's become.

It feels almost like the variables in their little machine are out of tune. If they scaled back the free privileges and hired some trained moderators, they could probably clean things up a bunch.

jawarner 8 hours ago 0 replies      
Similar community problems happened on Wikipedia. Give many users moderation power, and they'll each enforce their own view of what the site should be.
ijafri 6 hours ago 0 replies      
Glad someone brought it up; I had guessed it was only the case with me. It's still a great resource to search for answers, but posting your own would only lead you to humiliation with down-votes.

What I guessed before I stopped posting questions is that they only want to answer anything that's general, not specific, at least at this point. I still use SO a lot, but only to search for possible answers; I have never dared post again after being trolled and downvoted.

WhitneyLand 3 hours ago 0 replies      
Interesting that one of the people called out as ruining SO may be in prison for some scary sex/murder fantasies/photos involving minors.

Is this the same guy?

executesorder66 8 hours ago 1 reply      
What we need instead of SO is a wiki similar to the Arch wiki, but for all IT related troubleshooting problems.

Instead of removing "duplicate questions" we can just refine the details of that section of the wiki with more information.

employee8000 3 hours ago 0 replies      
The one thing missing from most sites is the meta-evaluation of the mods, the way Slashdot had it. That was one of the more innovative things about Slashdot back in the day.

It would randomly ask users how accurate a particular moderation was. I'm supposing that if enough people voted against it, the moderation would be reversed and the mod's privileges removed.

Places like Reddit and SO need this in order to control the mods themselves.
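The Slashdot-style mechanism described above (randomly sample user votes on a moderation, reverse it if enough people call it unfair) could be sketched roughly like this; the class name, threshold, and minimum-vote count are all invented for illustration:

```python
from collections import defaultdict

REVERSAL_THRESHOLD = 0.7   # assumed: fraction of "unfair" votes that reverses a mod
MIN_VOTES = 5              # assumed: minimum sample size before acting

class MetaModeration:
    """Hypothetical sketch of Slashdot-style meta-moderation."""

    def __init__(self):
        # moderation_id -> list of booleans (True = "fair", False = "unfair")
        self.votes = defaultdict(list)

    def record_vote(self, moderation_id, fair):
        self.votes[moderation_id].append(fair)

    def should_reverse(self, moderation_id):
        # Reverse a moderation only once enough randomly-sampled users
        # have weighed in and most of them called it unfair.
        ballots = self.votes[moderation_id]
        if len(ballots) < MIN_VOTES:
            return False
        unfair_fraction = ballots.count(False) / len(ballots)
        return unfair_fraction >= REVERSAL_THRESHOLD
```

The thresholds are the interesting design knobs: too low and mods get second-guessed constantly, too high and the check is toothless.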

Cozumel 7 hours ago 0 replies      
SO is an invaluable resource but I don't ask questions because of the hostility there, plus a lot of times in thinking about a question to phrase it properly I end up answering it myself.

But one of the major problems I see there is the people who comment solely for the sake of commenting; it seems like they must get paid by the word, they comment so much, and it's never helpful, always snarky and irrelevant.

twblalock 3 hours ago 0 replies      
It sucks when questions answered five years ago are considered the final word on the topic, and newer questions about the same topic are marked as duplicates.

I want to know the best way to solve the problem in 2016, not five years ago. In that time, we've gone from Java 6 to Java 8, almost everything in the JavaScript world has changed, etc.

ourmandave 5 hours ago 0 replies      
A lot of SO people downvote a new user's question with the intention of removing the downvote once it's improved.

But the person getting downvoted to hell doesn't know that!

They just see a community telling them to go to hell.

Think they'll stick around to read the friendly guidelines?

nicholassmith 7 hours ago 0 replies      
I'm a top 4% user for the year apparently, I've asked 40 questions & posted 318 answers. SO is a minefield.

It's an excellent resource and I've gotten so much value out of it, but the moderation is hyper-aggressive at times, and duplicate marks often point at old behaviour in old libraries. I think it needs a clean sweep at some stage: keep the current content, but drop everyone back to 0.

exabrial 8 hours ago 1 reply      
It's the best resource by a mile, but the site is filled with grumpy people!
tomrod 4 hours ago 0 replies      
I agree with a lot of this, despite the criticisms being seen as cliche.

I've found the reddit programming communities to be more helpful as a user, even though finding similar enough content is difficult. Further, I've noticed reddit and blogs edging in on SO's Google results. I wonder if this will be the trend.

SO has been great for so long because the content was trustworthy. The trolls are removing that edge.

DrNuke 7 hours ago 0 replies      
It's natural that complex, evolving organisms accumulate more and more entropy, but their decline, judging by what you HNers are saying here, is due to the unavoidable predominance of more and more web-idiots, mostly coming from 3rd and 4th world countries, which I nevertheless understand in full: they are looking for recognition as human beings in a global world, guidance to make their talents work, and opportunities in richer environments while starting from nothing. It's the same reason a lot of web-idiots, me among them, are writing here on HN from 1st and 2nd world countries. That said, noise is reduced by stricter filters, or by changing the definition of noise and your attitude towards it. Your choice.
mbroshi 7 hours ago 0 replies      
It's funny, because many of the points made about SO in this article remind me of my experiences with HN. My first couple of comments were downvoted, as I did not yet know the HN etiquette and unspoken rules of discourse, and that can be pretty discouraging for a newcomer. I'm still pretty reticent to comment for fear of being downvoted.

There is also the familiar rush to be the first to post some news story, which is almost impossible.

All that said, I've had overall extremely positive experiences on both SO and HN. The breadth of collective expertise and depth of comments on both sites is really awe-inspiring. I view the strict rules of engagement as a feature, not a bug.

a3n 7 hours ago 1 reply      
I've idly wondered how I could restrict queries to questions that have been closed as too <whatever> or not enough <whatever>, as those are often the most interesting and educational items, even if they don't answer my specific question.
monksy 7 hours ago 0 replies      
I'm the author of the SKH/SO article (Link was right here: https://theexceptioncatcher.com/blog/2012/09/stackoverflow-i...)

So far, I still haven't participated in SO very much. For the most part it isn't even worth asking a question. (That's both a good thing and a bad thing.) I've also noted that they try to prevent you from deleting: after you click that submit button, it's their property, apparently.

soyiuz 4 hours ago 0 replies      
I think a part of SO's problem is its size. There are many other smaller communities in the Stack Exchange network that seem to be more friendly. The size of SO prevents it from becoming a community as such, which manifests in the lack of shared norms and ethics which can then be meaningfully enacted in practice.
k__ 7 hours ago 0 replies      
For most of my bugs I've switched to reading the source on GitHub. Works better for me.
nojvek 6 hours ago 2 replies      
I've had some very bad experiences with Stack Overflow. They are very hostile towards new users, and the 50-point system that doesn't allow commenting is absurd. If I ask a question, I should be able to comment for clarification, right?

I wonder if they ever A/B test their point system. I only use stack overflow when Google points me to it.

I think SO can do a much better job at making users actually ask/answer questions rather than just use it as a read only site.

chain18 6 hours ago 0 replies      
It seems like many people are talking only about their experiences as users looking for answers and not from the perspective as a user answering questions.
bsder 5 hours ago 1 reply      
The bigger problem I have with SO is that its answers are static; they are stuck in amber for eternity.

This is not fine for something like Rust which has changed significantly within the last year or two. Many of the highest voted answers are now WRONG, but there is no way to dislodge them.

pknerd 7 hours ago 0 replies      
Couldn't agree more! The way SO is being moderated, it's soon gonna be the next Experts Exchange. IYKWIM
godelski 8 hours ago 1 reply      
One interesting issue with duplicate answers is that if you later notice a typo in your answer and edit it, you get pushed to the bottom of the answer stack, despite having had the first answer. This seems like an issue SO could solve to help contributors. I doubt many would abuse the system by completely rewriting an answer to stay on top.
StreamBright 7 hours ago 0 replies      
I am not sure why, but I am very lucky with StackOverflow, almost always. It might be because I ask detailed questions that are 99% of the time valid, and because I use languages that have nice, welcoming communities and usually aren't extremely popular, so trolling is minimal.
kwhitefoot 4 hours ago 0 replies      
I think some people on SO should ponder the adage:

There are no stupid questions, only stupid answers.

jimjimjim 7 hours ago 1 reply      
stack overflow is one of the best dev resources available.

do people not remember the dark days before trying to unblock the answers in expert sexchange?

djfm 6 hours ago 0 replies      
the question about "this" really is dumb
gjvc 6 hours ago 0 replies      
Just buy (and read) a book :-)
rodgerd 7 hours ago 0 replies      
The complaint is essentially describing how it's turned into Wikipedia, where the author is describing a space where "working the rules" is more important than trying to achieve the goal those rules are meant to enable.
throw2016 7 hours ago 1 reply      
There is an air of arrogance on SO that is unpleasant. Help is supposed to be open, friendly and relaxed, not arcane rules, criteria, and deciding you know best.

If experts are uptight and arrogant the desire to learn quickly vanishes. Being friendly does not preclude being firm.

I have noticed many questions being closed in an arbitrary manner and, worse, in a mean-spirited way. The first may be OK, but the second is simply unacceptable. Who are all these people on power trips, and why does SO allow it?

throwaway1974 6 hours ago 0 replies      
Has everyone forgotten "Sexperts Exchange" ? SO is very much useful
CrocODundee 7 hours ago 0 replies      
Let's be honest, Stack Overflow and its network of sites have been in decline for years. Sadly it has turned into a vast content farm and SEO ploy, full of search spam and users who copy/paste material from legitimate sources. To top it off, Stack Overflow uses "nofollow" on outbound links to make sure the true source of the material gets no credit in the eyes of Google, Bing, etc. Rinse and repeat for years, and spammers have taken over the asylum, with some trolls for good measure too, and that's unfortunately much of Stack Overflow today.

How did this happen?

In short, they appeared to receive preferential treatment from Google after complaining very publicly and loudly about not ranking at the top of search results. Ever since then, they have widely dominated search results for any vaguely related technical query. Ironically, at that point in time their primary complaint was about other content farms scraping their content.



>> JS: All of these sites that go to Stack Overflow, scrape our content, and reprint it with garbage ads, Google Adsense-encrusted pages. They're basically producing worse versions of our pages and they use these slimy SEO techniques, so they actually rank higher than us.

>> For a long time, we were getting enormous complaints from our own users that they'd search on Google and they'd find Stack Overflow content that had been stripped from its useful form but SEO'd like crazy and encrusted in ads and thrown up willy-nilly. And these sites were getting a lot of traffic. So that was his complaint and of course he phrased it in this larger frame of "Is Google losing their edge, etc. etc.?"

>> BI: It got a lot of attention. Do you think Google's doing a good job of fixing this sort of problem?

>> JS: They fixed it. They called us up at the time and said, "Thank you for bringing that up. You have lit a fire under the team that is supposed to be working on that problem that has not been delivering."

Matt Cutts, then the head of Google Web Spam, posted to Hacker News about this to "fix" the problem of sites outranking Stack Overflow.


What seemed like newly preferential treatment directly impacted hundreds of other sites that lost huge volumes of their traffic to the newly crowned king-of-search Stack Overflow. For example:


>> As many of you know, DaniWeb was hit by a Google algorithm update back in November 2012 and we lost about 50% of our search traffic. In investigating the issue, I discovered that DaniWeb, in addition to most other programming forums out there, all lost their google traffic to StackOverflow.

That, in turn, perpetuated the reposting/scraping activity and blatant spam posts to the StackOverflow network, since the site network ranks dominantly in every vaguely related query. Now years later, an even larger volume of material on Stack Overflow is not original content and doesn't even pretend to be. It's absolutely littered with copied/pasted content and blatantly spammy/promotional posts from around the web.

From the outside looking in, it appears that Stack Overflow has become exactly what they once actively complained about.

fiatjaf 7 hours ago 0 replies      
I'm for more hostility. It should be more difficult to ask questions; there are way too many people asking idiotic questions there. Allowing this makes the site worse for people who really have important questions.
Park.io automating tasks to make $125k per month indiehackers.com
331 points by csallen  10 hours ago   146 comments top 26
jondot 7 hours ago 5 replies      
I might get downvoted for this, but here's a story. We just finished picking a brand name, after 2-3 months of intensive work.

Being fond of .io's, I naively googled my <brandname>.io and found that park.io owns it - this happened last week. I immediately sent an email to inquire. We considered the price, and then when I came to buy it today, a week later, the price had tripled. This was a fixed-price domain, NOT a bid.

That's clever price manipulation: detect when someone wants something, let it sit, and when they're ready, triple the price. Maybe that's a hint as to how he made so much money? In any case, we'll just go with get<brandname>.io or something like that, as a compromise. Thanks for being a douche, park.io!

And then, magically, this is now on HN :)

20years 9 hours ago 3 replies      
I don't understand some of the negative comments here. This guy built a million dollar business in a year providing a service that people want to pay for. He did it all on his own with no other co-founders or employees. I say "Congrats!"
csallen 10 hours ago 5 replies      
For those getting a 502, I'm working on the server, Elastic Beanstalk didn't scale fast enough >.<

Cached version here: https://webcache.googleusercontent.com/search?q=cache:F0QdH3...

Or if you refresh a few times, it should come up.

poorman 8 hours ago 0 replies      
NIC.IO now has backordering. For park.io to continue being successful at landing and selling premium domains, he must be appraising the value of each domain and his chance of selling it in one of his auctions, then weighing that against the NIC.IO backorder price of 60 EUR (67.35 USD) plus a 60 EUR registration fee, and finally backordering it himself far enough in advance, before someone else does (because only one backorder can be placed on NIC.IO).

Interview here: http://www.domainsherpa.com/mike-carson-parkio-interview/
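The appraisal-versus-fee trade-off described above amounts to a simple expected-value check. A rough sketch, where the 60 EUR fees come from the comment and everything else (exchange rate, probabilities, function name) is an illustrative assumption:

```python
EUR_TO_USD = 1.12          # assumed exchange rate
BACKORDER_FEE_EUR = 60     # NIC.IO backorder fee (from the comment)
REGISTRATION_FEE_EUR = 60  # registration fee on a successful catch (from the comment)

def should_backorder(appraised_value_usd, p_sell, p_catch=1.0):
    """Return True if expected profit on a backorder is positive.

    p_sell:  estimated probability the domain sells at the appraised price
    p_catch: probability the backorder actually lands the domain
    """
    # Backorder fee is sunk either way; registration fee is only paid on a catch.
    cost = (BACKORDER_FEE_EUR + p_catch * REGISTRATION_FEE_EUR) * EUR_TO_USD
    expected_revenue = p_catch * p_sell * appraised_value_usd
    return expected_revenue > cost
```

With a roughly $135 all-in cost, even a 50% chance of selling a $1,000 domain clears the bar comfortably, which is presumably why the model scales.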

cjhanks 2 hours ago 0 replies      
I find it shocking that you would post on HN: "Hey guys, I make $125k/mo making other people's lives harder".

Given the way your current infrastructure is configured (vulnerabilities and all)... somebody could probably cost you ~$30-70k/mo in AWS resource utilization at a cost of ~$600/mo. The moment you park on the domain of someone who shares your internet ethics, that will be an interesting day for you.

mef 9 hours ago 1 reply      
His hn profile:


Other projects, including what appears to be an initiative to start a new religion? (http://consciousness.io/)


chatmasta 5 hours ago 2 replies      
This thread is a great example of why not to publicize your revenue or internal systems.

At best, you'll get hate and resentment. At worst, you'll get hate, resentment, and a new competitor.

throwawaysept 8 hours ago 1 reply      
Mike Carson has put park.io up for sale. Asking price is $1.5M.


simple10 6 hours ago 0 replies      
I'm getting S3 access denied errors.

Here's the cached version if you need it: https://webcache.googleusercontent.com/search?q=cache:F0QdH3...

fixxer 6 hours ago 0 replies      
This couldn't be more click-baity if the title involved "this one weird trick".
bambax 4 hours ago 1 reply      
Slightly OT and probably naive: what is the attractiveness of the .io TLD? If the .com isn't available, why is .io more desirable than any other? Is there a hard reason, or is .io just fashionable?
intrasight 10 hours ago 1 reply      
He gives some good advice for indie developers if you manage to get over the "self-promotional" aspects of the article and read to the end.
Tinyyy 9 hours ago 4 replies      
I don't understand how this can survive in the long run - what's stopping someone else from setting up an identical service at lower prices? There is literally no lock in because users can sign up for multiple services and potentially pay a lower price (depends on which one snags the domain).

Or is the technology behind that unique?

z3t4 8 hours ago 0 replies      
This guy has found the secret recipe for making money: make a service where users directly earn or spend money, then automate it in order to scale.
roflchoppa 5 hours ago 1 reply      
lmao, man i liked when the term "hacker" was directed toward blackhats. This whole "im a hacker" gig is hilarious.
Quanttek 6 hours ago 0 replies      
Just to clarify: why does he call it "backordering"? I mean, the domains haven't expired yet, or does he park them?
rrtwo 6 hours ago 1 reply      
The interview mentions parked domains as the major lead generator... are those domain names owned by park.io or by its clients?
imaginenore 7 hours ago 0 replies      
econner 6 hours ago 0 replies      
Why the .io domains for everything?
mbenson2147 7 hours ago 0 replies      
For some reason the URL doesn't work unless I remove the '-1' at the end.
asdfologist 8 hours ago 1 reply      
Jealousy and cognitive dissonance. This guy is more successful than them, so clearly he's done something bad.

Welcome to HN: home of the insecure narcissists who like to argue over programming languages, humblebrag about their gifted childhoods, and prove that they're superior to anyone more successful than them.

countryqt30 9 hours ago 3 replies      
congratulations :)
asdfologist 7 hours ago 2 replies      
FYI dang censored my comment about how HNers react to these stories with a jealous and insecure attitude:


hmans 9 hours ago 2 replies      
Domain squatting as a service.
EGreg 9 hours ago 1 reply      
So this guy is a domain squatter?

And all you need to do to become one is add a script? I always assumed there was special access for top-tier domain snatchers, or that they had some sort of high-speed-trading setup with a fast uplink.

paulpauper 9 hours ago 5 replies      
You backorder the domain. If we get it and you are the only bidder, you pay $99 and the domain is yours.

lol then why bother paying $99. just pick it up from any registrar for $9 after it drops
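For what it's worth, a naive version of the "just add a script" idea might look like the sketch below. It polls the public rdap.org RDAP aggregator, where a 404 usually means a domain is unregistered; real drop-catching services race registry endpoints at the instant a name drops, so a polling loop like this would almost always lose that race, which is exactly the gap a service like park.io charges for. All names here are illustrative:

```python
import time
import urllib.request
import urllib.error

def rdap_status(domain):
    """Return the HTTP status of an RDAP lookup for the domain."""
    try:
        with urllib.request.urlopen(f"https://rdap.org/domain/{domain}") as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def looks_available(status):
    # In RDAP, a 404 for a domain lookup generally means "not registered".
    return status == 404

def watch(domain, interval=30):
    """Poll until the domain looks unregistered, then signal a registration."""
    while not looks_available(rdap_status(domain)):
        time.sleep(interval)
    print(f"{domain} appears available -- fire the registration request now")
```

The polling interval illustrates the problem: a 30-second loop against a drop that is contested within milliseconds is why "$99 to catch it for you" is worth paying over "$9 at any registrar, if you get there first".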

       cached 26 September 2016 02:11:01 GMT