hacker news with inline top comments    25 Sep 2016 Best
Akamai takes Brian Krebs site off its servers after record cyberattack businessinsider.com
646 points by bishnu  2 days ago   430 comments top 50
parshimers 2 days ago 4 replies      
Quite impressive. You know your blog is good when folks will try to take down a CDN to suppress what's on it. He's also had heroin mailed to him in combination with a swatting attempt before: http://webcache.googleusercontent.com/search?q=cache:gEjqPfc...
headmelted 1 day ago 7 replies      
Still not a good move for Akamai, though.

I get him speaking out for them about the hosting having been free, but Akamai is now the CDN that got bullied into kicking someone off their service against their own will.

Terrible PR, and that mud will stick in tech circles. Akamai folds under pressure.

I know it's a crude comparison, but we don't negotiate with terrorists for a reason.

zx2c4 2 days ago 6 replies      
Isn't this the point at which Cloudflare is supposed to gain a handful of PR points for putting him back online, pro bono, and then doing a write up on how effortlessly they handled the bandwidth with eBPF?
xarope 1 day ago 3 replies      
Here's a "philosophical" question with regards to the internet, and perhaps even its future. Given that a currently anonymous attacker, likely not a "state" player (i.e. not a governmental entity with almost unlimited resources), has managed to DDoS a single website, does this portend that unless there are significant changes to the way the internet infrastructure works, we are seeing the demise of the WWW?

Kind of like a reverse wild-wild-west evolution, where the previously carefully cultivated academic and company site presence, gradually degenerates into misclick-hell? And the non-technical, non-IT savvy masses, in a bid to escape this all, end up in a facebook-style future where media is curated and presented for consumption (or perhaps in future, facebook-type entities end up with their own wild-wild-west hell)?

I have a strange feeling that we are seeing the decline of a city/civilisation; once you used to feel safe walking out at night, knew everybody in the neighbourhood, could leave your doors unlocked... and now, you don't dare to go down the lane to the left in case you pick up a nasty virus, and if you hear a knock on the door at night/email from DHL, you don't dare to even look through the peephole/preview the JPG!

betaby 2 days ago 4 replies      
I would like to see stats from Tier1/Tier2/IX for that. Krebs claims it's 665Gbit/s: https://twitter.com/briankrebs/status/778404352285405188 Such an attack must be visible in many places, yet not a single major ISP reported it on a mailing list. Previous, smaller attacks were reported as 'slowing down' some regional ISPs. Perhaps ISPs have got better.
panic 2 days ago 1 reply      
This recent talk about DDoS attacks is worth a watch if you're curious about why it's a hard problem to solve: https://www.youtube.com/watch?v=79u7bURE6Ss
WhitneyLand 1 day ago 2 replies      
This is bad PR for Akamai and a tactical error for them to boot Krebs even if they were providing free service.

To some, the implication will be "they couldn't handle it", so why should I trust the DDoS protection they are heavily promoting on their site?

At minimum they should comment on the situation; at best, restore his service and learn how to deal with high-profile clients.

owenversteeg 1 day ago 2 replies      
The first thing a lot of people are thinking (and saying) is "switch to Cloudflare". But there's another name I think needs to be said - OVH. OVH can withstand a Tbps scale attack as far as I know, and it provides this to pretty much anyone. They have a pretty good interface and some of their plans are extremely cheap. They're also great at standing up for free speech, which I really appreciate.
flashman 1 day ago 3 replies      
> I likely cost them a ton of money today.

But more specifically, whoever launched the attack cost them that money.

Also, ha:

PING krebsonsecurity.com ( 56 data bytes

reustle 2 days ago 1 reply      
It would be interesting to try out some of these new p2p website technologies like IPFS/WebTorrent with these high profile sites who are frequently attacked.
xarope 2 days ago 0 replies      
I tried to get to an article on Krebs' site from a Bruce Schneier blog post, and couldn't, then bumped into this post in HN.

It's a pity Akamai booted him off; on the one hand, I can understand that it would significantly impact their SLAs to other customers, but on the other hand it's a shame they don't have a lower-impact network to re-host him on, and use this as a learning experience in how to better mitigate such DDoSes...

geofft 2 days ago 0 replies      

"Before everyone beats up on Akamai/Prolexic too much, they were providing me service pro bono. So, as I said, I don't fault them at all."

josho 2 days ago 3 replies      
I'd love to learn more about these botnets. I wonder about things like: What's the average time that a compromised computer stays in this net? What is the typical computer (grandma's old PC running XP)? Do the ISPs ever get involved to kill bots running on their networks?
ChuckMcM 1 day ago 0 replies      
Wow, I figured that everyone that had hired vDOS would be irritated, but that is pretty impressive. Still, it says a lot about how effective he has been at rooting out this stuff; it's not as if the TierN infrastructure folks have managed to track it down with their resources.
VertexRed 1 day ago 0 replies      
These 'attackers' give Krebs more publicity than he would ever be able to generate himself.

It's also useful to point out that Krebs hasn't been the only target; half a dozen other large targets were attacked: http://www.webhostingtalk.com/showthread.php?t=1599694

mirekrusin 1 day ago 2 replies      
Isn't this whole thing a bit silly? I mean, what's the point? They're just spending their time giving him the best marketing he could ask for; he'll double his audience/readers, no?
Futurebot 1 day ago 0 replies      
One consequence of the platform-centric world we're in now is that this sort of attack doesn't have the blocking power it once did: you can mirror your content on Twitter, FB, G+, etc. and cross-link so people can still read your stuff. This makes the "denial" part pretty watered down; it's a wonder people even bother with these sorts of attacks anymore for non-services (i.e., for regular media material like text, photos, etc.)

Of course, maybe the goal is to deny someone ad revenue, but that seems awfully low-status for such a high-profile attack: "Yeah, we really got 'em! Denied 'em AD REVENUE for a whole week!"

zaidf 1 day ago 3 replies      
He should get a Facebook page and publish a copy of all his posts on it.
ckdarby 2 days ago 3 replies      
The ddos attacks seem to be getting larger these days.

I've recently seen a ~200 Gbit/s hit us.

Does anyone have good resources around mitigation? I was looking at BGP flowspec but was hopeful that someone might have come across other tactics?

marmot777 1 day ago 1 reply      
Brian Krebs is a hero. Are Akamai executives cowards for dumping him? I'd like to add that law enforcement are heroes.

And it's honorable he wants to meet Fly in person, recognizing him as a human being. I haven't read it yet but I'm assuming the reference to 12-step hints that Fly's having some post alcohol binge regrets.

I'm sure alcohol makes it easier to hurt other human beings, which is why violent people are often drunk. I'd be ashamed of myself if I woke up realizing that I'd spent my life actively trying to harm other human beings for money, feeling no remorse until Karma (here defined as law enforcement officials) finally caught up with me.

rabboRubble 1 day ago 1 reply      
Here's a link to the last post from his website. Google did not appear to have this cached:


redorb 2 days ago 1 reply      
Cloudflare should pick up the site for good advertising..
dmix 1 day ago 0 replies      
If you're curious what the source of the DDOS attacks are from, here is a recent one that hit OVH:

> This botnet with 145607 cameras/dvr (1-30Mbps per IP) is able to send >1.5Tbps DDoS. Type: tcp/ack, tcp/ack+psh, tcp/syn.


This is much higher than the attack on Krebs at Akamai, too. Welcome to the wonderful side effects of the totally insecure firmware of IoT...
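A rough back-of-the-envelope check of those figures, sketched in TypeScript; the 10 Mbps average is an assumed midpoint of the quoted 1-30 Mbps range, not a number from the source:

```typescript
// Aggregate bandwidth of the reported botnet: devices * average per-device
// rate, converted from Mbps to Tbps.
function aggregateTbps(devices: number, avgMbps: number): number {
  return (devices * avgMbps) / 1_000_000; // 1 Tbps = 1,000,000 Mbps
}

// 145,607 cameras/DVRs at an assumed ~10 Mbps each:
console.log(aggregateTbps(145607, 10).toFixed(2)); // ≈ 1.46 Tbps
```

That lines up with the claimed ">1.5Tbps" if the average per-device rate is a little above 10 Mbps.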

mirekrusin 1 day ago 0 replies      
It's funny how my mom, after reading "record cyberattack", would wonder how many poor people died, when what it actually means is that somebody downloaded images from a website many times.
sfifs 1 day ago 1 reply      
I'm wondering if the rising scale of these attacks & the seeming ease with which sites can be taken down will ultimately result in an "authenticated" internet - i.e. you can't even connect without identity verification.

We already see publishing through FB Instant Articles etc. moving in that direction on top of the current internet; to combat these types of firehose attacks, the only solution may be to take authentication one level deeper, to the connection level.

That of course sounds good to security agencies as that's the end of anonymity online.

jsjohnst 1 day ago 0 replies      
There are a number of factors that come into play (did the site use custom SSL, which edge locations were providing caching, etc.), but had Krebs been a normal paying customer, this could easily have been over a million-dollar bill (if it was sustained long enough to alter his 95th-percentile bracket) in the cheapest case. If things like custom SSL are in the mix (which Akamai charges absurdly high prices for), or lots of traffic from more expensive POPs, or a lack of pricing commensurate with high-volume traffic commitments, the bill could've been 5-10x that amount or more.
atombath 1 day ago 0 replies      
It's kind of stupid to me that the massive and advanced CDN of Akamai would protect something as non-important as a blog against such a major DDoS attack. If they were doing it pro bono, wouldn't the prudent action be to mitigate DDoSes up to a certain threshold and then actually assess the value of what you are protecting? A good lesson to have learned, I believe.

But no, they'll drop this client, who had to have continually given them good referrals.

tuna-piano 1 day ago 0 replies      
Some are guessing the DDOS was because of this recent post of his, about a large DDOS network.


exolymph 2 days ago 2 replies      
It would be interesting if he started writing on Medium (not saying technically advisable, just interesting). I wonder if he'd ever consider trying that.
Igalze 1 day ago 0 replies      
Unbelievable: they enjoyed years of free publicity from association with him, and this is how they repay him. It's bad enough that they couldn't handle the attack, despite all the bragging about their multi-Tbps capacity...
saganus 1 day ago 1 reply      
So if Akamai can't hold an attack of this size, who can?

Or is it that they actually can hold it off but it costs too much money?

nodesocket 1 day ago 0 replies      
Brian Krebs wasn't a paying customer, right? Akamai provided the service pro bono. Perfectly acceptable for them to suspend service if it becomes more than trivial in terms of cost or puts their paying customers at risk.
nodesocket 1 day ago 1 reply      
I've always wondered: if your domain is under an HTTP DDoS attack, couldn't you in theory update your DNS A record to another IP and take other servers down (maliciously)?
Globz 2 days ago 2 replies      
At this scale it must also cost a ton of money to carry out this attack, I wonder if there's a vulnerability that we don't know about that let them do this so easily?
csomar 1 day ago 1 reply      
I'm really interested to read his blog now. Any way I can find a readable version for his blog posts?
desireco42 1 day ago 0 replies      
I understand that this is burning bandwidth for Akamai, but seriously, taking into account what is at stake here, I think they need to do their share and continue to support Brian.
snowy 1 day ago 0 replies      
krebsonsecurity.com is now resolving to localhost. I guess he doesn't want to give the DDoSers a target.....
EJTH 1 day ago 0 replies      
Too bad, I had some nice reads on his website. Hopefully this will only be temporary...
shshhdhs 1 day ago 0 replies      
So the attackers win..
ttam 1 day ago 2 replies      
so much for using a CDN to protect from DDoS attacks...
hetfeld 1 day ago 0 replies      
You'll be redirected in... never redirected.
known 2 days ago 2 replies      
Is it according to terms/conditions of Akamai?
dragonbonheur 2 days ago 1 reply      
Who profits from this attack?
EGreg 1 day ago 1 reply      
Why don't we switch to a distributed network with a DHT like freenet? So many benefits, including not being able to take down content via DDOS.
dragonbonheur 1 day ago 1 reply      
Are there web servers or software that blacklist IP addresses that disconnect after a short time, and redirect them to a static page?
pitaj 2 days ago 3 replies      
tl;dr Akamai was hosting his site pro bono. His site was being DDOSed, which cost Akamai a ton of money, so they kicked him off since they were literally only losing money on the deal.
rasz_pl 1 day ago 1 reply      
I think it's time for some serious financial incentives for ISPs to start getting serious about routing (or rather, not routing) garbage. Financial fines for every DoS originating from your AS, or blacklisting if you are a repeat offender.
yAnonymous 1 day ago 0 replies      
Time to use Github pages.
ninja-wannabe-7 1 day ago 0 replies      
Should've used CloudFlare.
codedokode 1 day ago 1 reply      
Such attacks are possible because the Internet is decentralized. There is no way to tell peers that you don't want to get traffic from some AS.

And investigation is difficult because attacking nodes might be in different countries, in some of which DDOS attacks are not illegal.

Maybe it is time to start building international firewalls to protect local infrastructure?

Announcing TypeScript 2.0 microsoft.com
546 points by DanRosenwasser  2 days ago   299 comments top 23
k__ 2 days ago 4 replies      

- Simplified Declaration File (.d.ts) Acquisition (via npm)

- Non-nullable Types

- Control Flow Analyzed Types

- The readonly Modifier

Finally! I was waiting for months :)

dested 2 days ago 1 reply      
This isn't really the place for it, but I really wish that both WebStorm and ReSharper used the actual TypeScript compiler for their tooling (like VS Code does) vs. hand-rolling their own. Now I have to wait until WebStorm 2016.3 to see the full benefit of 2.0, rather than getting it for free by just updating TypeScript. Not to mention the obscene number of TypeScript edge-case inconsistencies in the warnings, errors, and refactorings.
edroche 2 days ago 2 replies      
I can't wait to use some of the new features in our production apps. Typescript is/was my bridge into javascript development, because IMHO javascript was a broken language for a long time, and I am not sure if I could have ever done as much as I have without its existence.

Non-nullable types, tagged union types, and easy declaration file acquisition are definitely the biggest wins for me with this release.
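The tagged union types mentioned here can be sketched roughly like this (a minimal example with made-up names, not taken from the release notes):

```typescript
// A tagged (discriminated) union: the literal "kind" property lets the
// compiler narrow the union inside each switch branch.
interface Square { kind: "square"; size: number; }
interface Circle { kind: "circle"; radius: number; }
type Shape = Square | Circle;

function area(s: Shape): number {
  switch (s.kind) {
    case "square": return s.size * s.size;               // s is Square here
    case "circle": return Math.PI * s.radius * s.radius; // s is Circle here
  }
}
```

The compiler knows the switch is exhaustive, so adding a new variant to Shape turns every unhandled call site into a type error.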

HeyImAlex 2 days ago 1 reply      
Typescript is such a neat project. The js ecosystem is vast and diverse and the typescript team has the unique job of figuring out how to make common dynamic patterns type-safe and convenient. Like... that's so cool. Every little pattern is like its own type system puzzle, and there's no _avoiding_ the issue like a ground-up language can do, because their job is literally to type the JavaScript we write today.

Also, how much money is MS pumping into TS? A lot of OSS has one or two super-contributors that carry the project on their backs, but typescript has a small army of smart people with significant contributions.

bsimpson 2 days ago 1 reply      
Control-flow analysis? I think that was Flow's differentiating feature.

Obviously, there are still differentiators between the projects (like TypeScript including a known set of transpilers vs. Flow delegating to Babel), but I'm curious to know if they are converging on their core feature (e.g. how to do type-checking/static analysis).

jameslk 2 days ago 1 reply      
One of the things I've been really needing is buried in the wiki:

> Previously flagged as an invalid flag combination, target: es5 and 'module: es6' is now supported. This should facilitate using ES2015-based tree shakers like rollup.

So now I can add rollup to my production build pipeline to remove dead code. Nice!
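A minimal sketch of that compiler-option combination, assuming a standard `tsconfig.json` (the `outDir` value here is illustrative):

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "es6",
    "outDir": "dist"
  }
}
```

Rollup (or another ES2015-aware bundler) can then consume the emitted `import`/`export` statements for tree shaking.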

qwertyuiop924 2 days ago 16 replies      
Man, I feel like the only one here who doesn't really like static types. I like dynamic typing just fine (it's crazy, I know: it must be the lisp influence). And if I want static typing, TS feels a bit intrusive. Flow is much better in this respect.

I also don't think JS is the root of all evil, and I use Emacs rather than an IDE (although we do have really good integration with regular JS, in the form of the famous js2-mode, and flycheck's linter integration). I mean, do you really need your IDE checking your types as you type? It's not that slow, and us Emacs users have M-x compile, so we can run our code and then jump back to the problematic line when an error occurs, and I know IDEs have similar functionality.

Don't get me wrong: static typing can be good at times, and optional static typing and compile-time type analysis are useful tools, and I'm glad TS, Flow, and the like exist. But I always see a flock of comments saying that they couldn't possibly live without static types, and thanking TS for taking them out of the hell of dynamism, and wishing there was something similar in Ruby/Python/whatever.

I don't really get that.

easong 2 days ago 5 replies      
I really wish that MS would release typescript as a collection of plugins for babel that would handle only one thing at a time (eg, the type system). Having my production build, es6 transpiler, type system, JSX compiler and so on (including a bunch of features I would rather didn't exist at all) all in one package feels like a failure of separation of concerns.

I understand that people find Babel's plugin ecosystem confusing and intimidating (it is), but I don't think a separate monolithic typescript that reimplements popular babel functionality is the answer.

smithkl42 2 days ago 1 reply      
Huge TypeScript fan here - been using it since its 0.8x days. And I'm very interested in the new --strictNullChecks compiler flag. But I'm trying to implement that on our current codebase, and I'm coming to the conclusion that it's still a bit premature. There are a lot of very common and harmless JS (and TS) patterns which this breaks, and for which it's been difficult (for me at least) to find a workaround.

Turning on --strictNullChecks flagged about 600+ compiler errors in our 10Kloc codebase. I've addressed about half of those so far, and I can't say that any of them have actually been a real bug that I'm glad got caught. On the contrary, because of the weird hoops it makes you jump through (e.g., encodeAsUriComponent(url || '')), I'd say that our codebase feels even less clean.

sdegutis 2 days ago 4 replies      
> In TypeScript 2.0, null and undefined have their own types which allows developers to explicitly express when null/undefined values are acceptable. Now, when something can be either a number or null, you can describe it with the union type number | null (which reads as number or null).

Great news, but I suspect it's going to be pretty difficult to migrate large codebases to account for this properly, even with the help of `--strictNullChecks`. Sounds like days' worth of tedious work analyzing every function.
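A minimal sketch of what that migration work looks like per function (a hypothetical example, assuming `--strictNullChecks` is on):

```typescript
// Under --strictNullChecks, `string` no longer admits null or undefined;
// a nullable parameter must be declared as a union and narrowed before use.
function firstLine(text: string | null): string {
  if (text === null) {
    return ""; // handle the null case explicitly
  }
  return text.split("\n")[0]; // text is narrowed to plain string here
}
```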

oblio 2 days ago 0 replies      

* npm replaces typings/tsd

* non-nullable types (has to be switched on)

* better control flow analysis (à la Facebook Flow)

* read-only properties

Arnavion 2 days ago 1 reply      

  $ npm view typescript 'dist-tags'
  { latest: '2.0.3',
    next: '2.1.0-dev.20160922',
    beta: '2.0.0',
    rc: '2.0.2' }
Yet https://www.npmjs.com/package/typescript says

> typescript published 5 months ago
> 1.8.10 is the latest of 447 releases

leeoniya 2 days ago 3 replies      

  class Person {
    readonly name: string;
couldn't they have just reused `const`?
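They cover different things: `const` prevents rebinding a variable, while `readonly` constrains a property of an object type. A quick sketch (the class is hypothetical):

```typescript
// `readonly` allows assignment only at declaration or in the constructor;
// `const` applies to variable bindings, not to properties.
class Person {
  readonly name: string;
  constructor(name: string) {
    this.name = name; // allowed: constructor assignment
  }
}

const p = new Person("Ada");
// p.name = "Bob";        // compile error: name is readonly
// p = new Person("Bob"); // compile error: p is a const binding
```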

Roboprog 1 day ago 4 replies      
Does Typescript have a facility to support partial function application?

Say I have a function of arity 4, and want to bind / partially apply (some might say "inject") 2 arguments to it to create a function of arity 2. Can TS infer the types of the remaining arguments, or that the result is a function at all?

I use partial function application MUCH more than classes in the JS code that I write. There just seems to be less need for all that "taxonomy"-related refactoring.

"Stop Writing Classes", "Executioner" in "The Kingdom of Nouns" (not!), and all that sort of thing :-)
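As a partial answer to the question above: TypeScript has no dedicated partial-application primitive, but it can fully infer the types of a hand-rolled helper; a sketch (the function names are made up):

```typescript
// Bind the first two of four arguments; TypeScript infers that the
// result of partial2 is (c: number, d: number) => number.
function add4(a: number, b: number, c: number, d: number): number {
  return a + b + c + d;
}

function partial2<A, B, C, D, R>(
  fn: (a: A, b: B, c: C, d: D) => R,
  a: A,
  b: B
): (c: C, d: D) => R {
  return (c, d) => fn(a, b, c, d);
}

const add2 = partial2(add4, 1, 2); // inferred: (c: number, d: number) => number
console.log(add2(3, 4)); // 10
```

A fully generic variadic `partial` (arbitrary arity) is harder to type in TS 2.0, so per-arity helpers like this are the usual workaround.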

netcraft 2 days ago 0 replies      
I've got to find the time to play with TypeScript and get its setup figured out. I've had several false starts, always running into duplicate type definition issues and similar problems - looking forward to seeing if the new @types from npm help things.
equasar 2 days ago 2 replies      
Awesome release! Yet, I'm still waiting for 2.1 to finally bring async/await for ES5 targets.
mhd 2 days ago 2 replies      
Seems the spread operator (a roadblock for a lot of JS interop) will have to wait until 2.1.
zem 2 days ago 0 replies      
typescript is a nicely conservative set of extensions to javascript, but if you're willing to venture a bit further afield, redhat's ceylon [https://ceylon-lang.org/] is well worth a look. it has an excellent type system, and seems to be under active development, particularly in terms of improving the runtime and code generation.

i played with it for a bit before deciding that for my personal stuff i prefer more strongly ML-based languages, but if i ever have to develop and maintain a large team project i'll definitely give ceylon a very serious look.

macspoofing 2 days ago 1 reply      
Why choose "readonly" as the modifier for immutable properties? A little long, no?
ihsw 2 days ago 1 reply      
Still no async/await for ES5 build targets, but other than that a plethora of excellent new features.
grimmdude 2 days ago 2 replies      
I gave TypeScript a try but honestly thought it was more trouble than it's worth in most JavaScript applications/libraries. Maybe that's just the lazy person inside me, or maybe my projects aren't big enough to make use of its features.
ggregoire 2 days ago 2 replies      
TypeScript is so strange for me. Each time I read something about it, I want to give it another try... then I read some code and I just can't go further. I like JavaScript as it is, without types on every line of code.

Unlike most people on HN, I like JavaScript. I've been building web apps since 2011. I liked working with jQuery, then Backbone and Grunt, then Angular and Gulp. Now I'm working with React, Webpack and Babel (ES6/ES7), and writing web apps has never been so much of a pleasure. JavaScript in 2016 is really fine for me. And the common thread in my JS experience from 2011 to 2016 is that dynamic typing has never been a problem. I also worked with strongly typed languages like Java and C# for years, and I still prefer the flexibility of JavaScript.

So it's strange, because I admire TypeScript. The work accomplished by its team is really amazing. And it's so nice to see a single library reconcile developers with JavaScript. But on the other hand, I prefer keeping my JS without types, because it just works fine for me and the teams with whom I've worked.

velmu 2 days ago 0 replies      
It Costs $30 to Make a DIY EpiPen technologyreview.com
501 points by Halienja  3 days ago   384 comments top 45
googamooga 2 days ago 12 replies      
Medicine is probably the second best place, after the military, where we can observe how greed and corruption literally kill people.

I'm living in Russia and have recently been involved in the medical devices market here. The local market for cardiology stents (little springs they insert non-surgically into your heart to remove artery clogging and prevent a heart attack or stroke) has long been occupied by three US companies. The Russian company I invested in made their own stent design and launched a production factory in Western Siberia. Our prices are three to four times lower than prices for the same class of stents from the US competitors, and the quality is the same or higher. We have fought out a 15% share of the market over the last two years.

I have to say that almost 99% of all stents in Russia are installed at the cost of the state medical insurance - every person in Russia is covered by this insurance, which is simply sponsored by the state or local budget. The budget allocated to this kind of medical support is fixed, so if the yearly budget is 100M rubles (our local currency) and the cost of a manipulation and a stent is 100K rubles, then you can install stents in 1000 patients in one year. If the price goes down four-fold, then it will be 4000 patients. And this stent manipulation is a life saver in the true sense of the word. So, basically, with our stents we can save four times more people's lives, which on the scale of Russia would be tens of thousands of people.

Here enters the greed and corruption. One of the US companies approached one of the most powerful Russian oligarchs, with good ties in the government. He lobbied for a government decree stating that this US company will be the single supplier for cardiology stents starting Jan 2017. So, all hospitals and clinics are obliged to buy stents only from them, at the price they set. Tens of thousands of Russian people will die each year because of this greed and corruption - and we can't do much about it.

suprgeek 2 days ago 7 replies      
The hacker groups are doing what they can to expose the greed of Mylan (the EpiPen maker), which is laudable. What is really needed is also an explainer on why a bit of govt. leverage (socialism if you will) is good in medicine pricing as well.

Mylan is really great at buying legislation. They leveraged their 90% ownership of the epinephrine auto-injector market such that:

[1] It lobbied hard to ensure that all parents of school-going kids (or taxpayers) paid for EpiPens by making it into a bill that politicians could easily justify.

Once the bill passed and schools all over the country purchased these by the boatloads, then they just kept raising the price over and over and milking the profits.

When it got too much and they could not ignore the patient backlash they have turned again to purchasing legislation..

[2] Now they want to make it so that the patients do not see the copays - instead every one suffers by paying more for health insurance.

[1] https://www.opensecrets.org/lobby/billsum.php?id=hr2094-113

[2] http://www.nytimes.com/2016/09/16/business/epipen-maker-myla...

With scumbags like these, is it any wonder that the USA has the most expensive healthcare system in the world?

SteveGregory 2 days ago 5 replies      
Probably too late to really contribute, but either way -

I feel like health is a degenerate case of free markets. In any free market, the price is set by the consumers assessing their utility for the goods or services purchased. In cases of pencils, productivity software, energy, raw materials, etc, consumers compare the methods of resolving the need, or at baseline the cost of not addressing the need.

In healthcare, there are lots of situations where the cost is X dollars vs literal death. Of course, death is not an acceptable alternative, so an acceptable X ends up being very, very high for the treatment. Most people would pay their life savings to treat themselves of any life-threatening ailment.

I honestly believe that free markets setting prices is good for most industries, but I cannot see it working in situations where the benefit categorically supersedes any amount of money.

It seems like we need to either rethink IP law surrounding healthcare, or have a monopsony (single payer or something else) setting prices.

This is a hard thing for me to resolve, as somebody who normally likes a libertarian approach.

anaolykarpov 2 days ago 4 replies      
The title could be rephrased as "Cheap guys risk the lives of thousands of people by promising savings of a few bucks".

The problem is not with the "greedy corporations", but with the poorly designed legislation regarding intellectual property rights.

The state created the protectionist environment in which companies can become bullies and be sure that they won't be exposed to any economic competition.

Of course, the complete lack of IP laws would deter companies from investing in research, but overly strong IP laws have the same effect. Why would a company risk its money and do research once it has found a cash cow which can be milked for a long time, with the state guaranteeing it?

mikekij 2 days ago 2 replies      
(preparing for an onslaught of down votes, but here we go.)

It's awesome to see a 'hacker' building a $30 EpiPen. But looking only at the materials cost for a medical device ignores the millions (sometimes billions?) of dollars spent on R&D, IP licensing, and (perhaps most significantly) regulatory compliance.

The pricing system for devices and drugs is definitely screwed up in the US, but Mylan's 36% gross margin on the devices doesn't seem criminal.

Perhaps they're padding their cost numbers. And perhaps there are IP shenanigans at play that I'm not aware of. But one needs a thorough understanding of the total costs to invent, develop, achieve regulatory clearance for, and market a medical device in order to assess the morality of the pricing.

fernly 3 days ago 1 reply      
TIL that an "autoject" is an inexpensive self-injection tool commonly used by diabetics[1] that can be carried safely and used easily. It can be loaded with insulin, or with any drug whatsoever. The OP article describes using it to inject epinephrine, stating that

> A 1mL vial of epinephrine costs about $2.50... Doses range from 0.01mL for babies, to 0.1mL for children, to 0.3mL for adults.

In other words, if your doctor will give you a prescription for the drug itself, you could assemble three epipen equivalents for less than $100.

[1] https://smile.amazon.com/AJ-1311-Autoject-Injection-Removabl...
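The arithmetic behind that "less than $100" figure can be sketched as follows (the ~$30 per-device price is an assumption taken from the article's headline, not stated in this comment):

```typescript
// Three DIY kits: an autoinjector device plus a 1 mL epinephrine vial each.
const devicePrice = 30;  // assumed, from the article's "$30 DIY EpiPen" figure
const vialPrice = 2.5;   // 1 mL epinephrine vial, as quoted in the comment
const kits = 3;

const total = kits * (devicePrice + vialPrice);
console.log(total); // 97.5 -> under $100 for three EpiPen equivalents
```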

sp527 3 days ago 9 replies      
This doesn't feel like the right platform for DIY. When someone needs an EpiPen, it's because they might be dying. Presumably, a large and well-capitalized organization will have tested their device extensively and can offer better guarantees about it actually working (I should stress presumably). There are a lot of ways in which the hacker mindset can be beneficial to society, but this particular application feels like an ethical gray area.
zaroth 2 days ago 1 reply      
So "Four Thieves Vinegar" says their DIY auto-injector works probably almost as well as the EpiPen. Sign me up! </s>

Are we really complaining about an "onerous regulatory process" for a device which untrained laymen need to be able to use in a high stress emergency situation?

I'd like to see Four Thieves Vinegar fund the necessary trials to prove their device is safe, gain FDA approval, bring the device to market, and defend themselves against the inevitable lawsuits, and then tell us how they can sell the device with less than 80% gross margins. The marginal cost of making one more pill or one more device is almost entirely irrelevant, and any article that tries to make a case for a medical product being overpriced based on COGS isn't worth reading IMO.

The price for EpiPens went up because no one else was able to make a competing product that didn't malfunction or deliver the wrong dose of epinephrine.

justinlardinois 3 days ago 5 replies      
I feel like a lot of the commenters here didn't read the article.

> Four Thieves Vinegar have created and uploaded the plans for the simple version, called the Epipencil. Also spring loaded, the parts are gathered over the counter. The epinephrine will still need to be acquired with a prescription.

This still involves an FDA-approved drug obtained through normal channels; the DIY part is the injector.

Creating DIY medical drugs would certainly be something to be concerned about, but I don't see the problem with DIY medical tools.

ncavet 2 days ago 3 replies      
"Hacking" medicine doesn't really work. See Theranos.

First, epinephrine degrades when exposed to light, so your epipencil may be ineffective from the start.

Second, when measured, parents took 2.5 minutes to fill a syringe with epinephrine, which is not fast enough in an emergency.

The price gouging is terrible. There are cheaper alternatives but Epi-pen has the most well known brand.

People buy Tylenol and Advil, not aspirin and ibuprofen.

Doctors have appealed to ban drug advertising. Medicine should have no place in capitalism.

Source: http://www.aaaai.org/ask-the-expert/effect-of-light-on-epine...

agotterer 3 days ago 1 reply      
Michael gave a fantastic talk at HOPE this year titled "How to Torrent a Pharmaceutical", where he made Daraprim on stage for only a few cents. It's definitely worth watching: http://livestream.com/internetsociety3/hopeconf/videos/13073...
CodeWriter23 2 days ago 0 replies      
If you are persistent, you can get CVS to order the "generic" epinephrine injector for $5.


contingencies 3 days ago 2 replies      
I just checked and here in China you can buy 10x1mg doses of epinephrine (which is actually adrenalin) over the counter/online for ¥4 ($0.60 USD) ... https://world.taobao.com/search/search.htm?sort=price&_ksTS=...

That means the epinephrine (adrenalin) itself is essentially free. What do they charge in the US/Europe/Australia?

enoch_r 3 days ago 1 reply      
The hack here is simple: this group did not get FDA approval for their device. Greedy corporations have repeatedly tried to make money by competing with Mylan with cheaper Epipens, but they've been prohibited from doing so by the US government (not so in Europe, where the unfortunate Europeans suffer eight greedy corporations trying to drive prices down).


bayesian_horse 2 days ago 0 replies      
It may be better than having no epinephrine at hand. But other than that, there are a lot of problems: How sure are you that it will work when you need it? Can you fill the syringe cleanly enough? Will the epinephrine in the syringe degrade, or worse, develop a bacterial growth?

It may be a better idea to keep a syringe+vial combination on hand, prescribed by a doctor. Less convenient, and you need to learn how to use it (and preferably teach those close to you), but this may be a whole lot safer. The downside of course is the problem of self-administration when in anaphylactic shock.

snow_mac 2 days ago 1 reply      
I've had to use an EpiPen twice in my life. Oh my gosh, the terror in your heart when you're self administering it is real. I will never forget the experience for the rest of my life. I don't want to trust some hack with no FDA approval in that moment.

I don't give a damn if the product is $50 or $500. I will buy it; it's saved my life many times. It's not awesome to see a hacker point out that the materials are cheap.

jpalomaki 2 days ago 0 replies      
Wouldn't it be better to focus on the reasons why there is no viable competition for this company even though the business seems to be extremely profitable?

The article links to another that lists some of the issues: https://www.statnews.com/2016/09/09/epipen-lack-of-innovatio...

This points out (among other things) that the design is patent-protected and FDA rules make it difficult to come up with other designs that don't violate the patent. It is also mentioned that the devices need to go through a long and expensive regulatory process.

chillacy 3 days ago 4 replies      
Regarding the original price hike which motivated this project, I saw an interesting perspective on the matter: https://www.youtube.com/watch?v=RoMlxVimwiU

Now granted Shkreli is a controversial figure, but basically drug companies are businesses, and if you sort of detach yourself and look from a business perspective and value-based pricing, Epipen competes with the ER, and $600 is a bargain vs an ER visit.

And of course his ultimate conclusion is that maybe life saving drugs are more like water and power than cell phones and wine? Maybe the government should get involved in making generic drugs available.

jMyles 2 days ago 1 reply      
Somewhat tangential: I surmise that this title will be subject to editing by HN staff, but I think that "Hacker group creates $30 DIY Epipen to expose corporate greed and save lives" is an exemplary post title for HN and want to see more like it.
a-no-n 2 days ago 1 reply      
Just saw more Epipen Congressional testimony. The actual unit cost of the Epipen (whether branded or "generic") is around $67 USD. Assuming that this cost were not overly inflated beyond actual overhead and unit costs, in order to be sustainable, a reasonable retail price without distributors would be $134 USD... with distributors $200-238.

That said, the more downward pressure from competitors (commercial or nonprofit projects), the better for customers; especially where a monopoly existed, it's rational for customers to band together and attack excessive hegemony.

Enterprising folks need to jump on this and sell it as a kit (w/ or w/o the medication).
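The parent's numbers can be laid out explicitly (the $67 unit cost, the 2x sustainability multiple, and the $200-238 range are all the commenter's figures, taken at face value):

```python
# Sketch of the pricing arithmetic in the parent comment.
# All inputs are the commenter's figures, not verified data.
unit_cost = 67.0                           # claimed unit cost, USD
retail_no_distributors = 2 * unit_cost     # "reasonable" retail without distributors
retail_with_distributors = (200.0, 238.0)  # quoted range with distributors

# Implied total markup over unit cost once distributors take a cut:
low = retail_with_distributors[0] / unit_cost
high = retail_with_distributors[1] / unit_cost

print(retail_no_distributors)         # 134.0
print(round(low, 2), round(high, 2))  # roughly 2.99x to 3.55x
```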

iamflimflam1 2 days ago 0 replies      
"3.1.24 The health economics model assumes that people who receive adrenaline auto-injectors will be allocated two epinephrine pens (EpiPens) with an average shelf-life of 6 months. Each auto-inject EpiPen costs the NHS £28.77 (British national formulary 60). This equates to £115.08 per person per year."


From 2011 - but I can't imagine the cost has gone up by that much.
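The quoted NHS figure checks out: two pens per person, each with a 6-month average shelf life (so replaced twice a year), at £28.77 apiece:

```python
# Verify the NICE/NHS arithmetic quoted above.
pens_per_person = 2
replacements_per_year = 12 / 6   # 6-month average shelf life
cost_per_pen_gbp = 28.77         # BNF 60 price

annual_cost = pens_per_person * replacements_per_year * cost_per_pen_gbp
print(round(annual_cost, 2))     # 115.08 GBP per person per year
```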

smsm42 2 days ago 0 replies      
$30 is way too much; the production cost of an EpiPen is probably in the single-digit dollars, maybe even less. But that's not the point; nobody thinks an EpiPen costs $300 to produce.

The system is built in a very specific and deliberate way in the US: there are patented drugs that are expensive by design, and pharma is supposed to finance R&D and FDA testing and so on from that money, instead of financing it through taxation, or venture investing, or other means. Now, one can claim maybe Mylan is abusing the system and the money that was supposed to finance R&D is instead financing lavish salaries or whatever. And one can claim the system should not be built like this at all but should be built some other way. Maybe.

But completely ignoring the whole design and saying "ah, we've discovered it costs $30!" is useless. Yes, it actually costs even less than that to manufacture, way less. It's obvious. The reason Mylan charges more is not that it costs a lot to manufacture; the reason is that this is how the patented-drug market in the US works. If one wants to change it, one needs to understand how it works. It's not corruption, it's the design of the system.

wodenokoto 2 days ago 0 replies      
There are 3 epipens in this article.

The non-generic at ~$350

The generic at ~$150

And the homemade at ~$30

The homemade is equivalent to the generic, and the difference between generic and non-generic is not clearly explained, so let's talk about the price of the generic EpiPen.

According to the article there is difficult bureaucracy to navigate and very large liability should an EpiPen fail. On top of that there are distribution and offices that need a cut or have to be paid for.

Is 5x markup that horrible?

robomartin 2 days ago 0 replies      
There's an abysmal difference between hacking something together and manufacturing a reliable product at scale that people can bet their lives on. Everything, from R&D to the cost of lawsuits, FDA trials and regulatory frameworks, makes these comparisons dumb and ridiculous.

I've been manufacturing products for thirty years. It's never simple for good products, not even a cup of coffee at Starbucks.

throw2016 2 days ago 0 replies      
The expected market response should have been a flood of alternatives at 1/10 or even 1/100 the price, since the base ingredient costs pennies. But these 'ideal' market scenarios that are in the public interest rarely come about.

What we often get instead are completely self serving and crafty efforts in collusion with 'ngos' and lobbyists to leech tax payer subsidies and 'force' it onto institutions via legislation.

This pattern is repeated so often and so widely it's predictable. Also predictable is framing it as a capitalism vs. socialism issue to trigger and distract while the corruption continues unabated.

The problem is that healthcare is critical. If your checks and balances and idealised system do not work, you risk letting people feed on others' desperation and create demons. And these sociopaths then multiply within your society, killing it from within. This is the biggest argument for socialized healthcare.

eggy 2 days ago 2 replies      
I am a hacker at heart, and I believe there are definitely some shady dealings with government and industry lobbyists, however, I like to look at things on both sides, since there is always another side.

Truth is, if it was more than one hacker in this collective making the 'Epipencil', they must have designed it, procured the materials, and fabricated it all in under an hour to claim $30; any longer and they wouldn't even have covered minimum wage.

This does NOT speak to QA/QC, testing, insurance, FDA approval, legal costs or even their hacker lab overhead in equipment and energy to make one, let alone hundreds of thousands of these potentially life-saving products.

My guess is that the $150 per EpiPen is close to what you need to fulfill all of the above requirements and then some. Far from the $300-plus in pen price hikes, so it was good they did this as an exercise in putting Mylan and the government in the spotlight. Bravo, really!

My belief is that it is not solely big bad corporations, but big bad government AND big bad corporations. Just look at the moral integrity of our two current POTUS candidates.

I am trying to become more financially literate in my old age, and I am trying to teach my children likewise, since financial illiteracy is a deterrent to poor people improving their lives, or hackers making a worthwhile dollar in conjunction with learning and exploration.

I tell my kids to think twice when they reactively say or answer:

"ASAP" - when is that? Point to a date on the calendar;

"It will take 5 min." - It never takes just a minute or five;

"It only cost $8 for the materials." - How much is your time worth? Learning is a benefit that cannot be quantified too easily, but for other matters, you need to value your time.

heironimus 2 days ago 2 replies      
There are so many reasons the EpiPen costs $318, corporate greed being one of them. One of the huge reasons that no one talks about is that it rarely actually sells for $318. It's priced at that, but insurance companies negotiate a lower (unpublished) price in most/all insured purchases. It's only those with no insurance, or who are buying it without insurance, that pay the full price.

This is true for nearly all drugs, medical equipment, or medical procedures in the US. This is one of the huge problems with our system. Everyone puts a huge price tag on their stuff knowing that insurance will negotiate it down.

To me, this seems like the biggest problem here.

pingec 2 days ago 0 replies      
amalcon 2 days ago 0 replies      
This is an interesting approach. I've been wondering about refilling the things -- once the injectors I have expire, I may disassemble one to see if I can work that out.

As long as the needle hasn't been used, and the refill is the same dosage as it came with, I'd expect it to be just as effective as a new injector.

(Disclaimer: I am not a doctor, even if I were you're not paying me, this is not medical advice)

This may be legal to do commercially as well, since you're not manufacturing new devices that could infringe the patent. Sorting out FDA issues would be the only hard part (though likely very hard).

(Disclaimer: Nor am I a lawyer, and you are still not paying me, this is not legal advice)

wyager 3 days ago 1 reply      
> corporate greed

Can we please give blame where it's due? http://slatestarcodex.com/2016/08/29/reverse-voxsplaining-dr...

bonoboTP 2 days ago 1 reply      
Is EpiPen that well known among Americans? I (non-American) never heard of it until all the news about its price in the US.

Does it get prescribed more often in the US than in other countries? Why didn't I know about the existence of this thing?

a3n 2 days ago 0 replies      
Other big grownup companies have tried to make a precise injector, and not done nearly as well as Epipen. It's not just a needle in front of a spring. (I haven't read the article, it won't load atm.)
KaiserPro 2 days ago 0 replies      
It does cost $30 to make an EpiPen.


It needs to be proved to work, which is rightly arduous. Unlike in (most) software, you can't just fix it later. Defects kill. There needs to be a high bar of evidence to prove that:

A) the drug works

B) It doesn't cause your face to melt off

C) it's reliable.

All of this is costly. Now, you have two choices: nationalise your drug R&D and charge a uniform cost spread over all drug classes or fund it through general taxation, or sweep away all your regulations on drug prices and start again. (Like, why the fuck is Medicaid not allowed to collectively bargain on price? That's taxpayer subsidy right there...)

In the UK there is a thing called NICE, which is semi-autonomous and run by people who understand stats (i.e. not politicians). Its job is to evaluate the cost and, crucially, the effectiveness of all drugs prescribed within the NHS.

Is the drug actually effective? (Sure, it's 50% more powerful, but it costs 190% as much; just double up the old one, etc.)

does it provide value for money?

is it safe?

are all the questions they ask. If a drug fails the tests, it's either written out of the guidelines or, more unusually, banned.

ChuckMcM 3 days ago 0 replies      
Pretty neat. I wondered why you couldn't just use an autoinjector like diabetics use (answer: you need a larger-diameter needle). Still easily doable, and it's all off-the-shelf parts made by medical device manufacturers and drug makers, so not so much "DIY" as "repurposing existing medical gear to be more versatile".
KKKKkkkk1 2 days ago 0 replies      
How much does it cost to get and maintain FDA approval for marketing the EpiPen? What are the financial costs of the legal risks you are taking by selling it to patients? In other words, if it's so lucrative, why isn't anyone else doing it?
repiret 3 days ago 3 replies      
That's like saying pirated software exposes the greed of software companies. I don't think anybody believes that EpiPens themselves are very expensive at all - just like software, the cost comes from the cost of development, which in this market consists mostly of regulatory compliance and approval. If it were easy to bring an epinephrine injector to market, Teva would have already done so and Sanofi wouldn't have had to recall theirs. If there were more auto-injectors on the market, the prices would go down.

The outrageous price of EpiPens is not a result of corporate greed so much as a failure of the FDA and Congress - but mostly Congress; the FDA is their subordinate. They failed to promulgate rules that maintained a competitive market for epinephrine auto-injectors.

JustSomeNobody 2 days ago 1 reply      
OK, so Mylan can get them made for $30 and sells them (now) for $150. Is a 5x sale price not acceptable? If not, why aren't people going after every single product manufactured and sold?

Don't get me wrong, I'm not defending anyone here; that whole industry needs some fixing. I'm just tossing out the question.

bahmboo 3 days ago 0 replies      
He should be wearing gloves and preserving a sterile field when making something injectable.
dang 2 days ago 0 replies      
Url changed from https://www.minds.com/blog/view/625077755582623755, which points to this.
red_blobs 3 days ago 3 replies      
tn13 2 days ago 2 replies      
If it costs $30 to make an EpiPen at home, why don't you create a company, sell it for $50, and solve the problem you're all complaining about?

Mylan deserves our appreciation for inventing the EpiPen when none of the other smarty-pants who claim to make it for $30 bothered to help the needy.

endgame 2 days ago 0 replies      

I'm sure this is an interesting article but the only way we will stop this practice is if we stop giving user-hostile publications our eyeballs.

pweissbrod 3 days ago 0 replies      
Watch the video. All that's described is loading epinephrine into an autoinjector. This is great because it suggests the barrier to competition is relatively low-hanging fruit for those already in the drug-delivery market.

Also: screw mylan

Mao_Zedang 2 days ago 1 reply      
If the product is so expensive, and someone could viably make a competing product for less, I find it hard to believe that it hasn't been done. A more fair comparison would be "medical aid which wasn't subjected to the same regulations and testing is cheaper to make and distribute", aka "corporate greed".
crazy1van 2 days ago 3 replies      
If someone knows how to make a product for $30 that the competition charges $300 for, why not go into business and undercut their price by a huge margin? Millions of users' lives would be instantly improved with dramatically cheaper epipens. That will do far more to combat greed than a blog post.

However, I think that if someone were to try this, they'd find there are many more costs involved than the raw ingredients, and it might not be quite so simple to massively undercut the competition. But still, they should go for it! Competition is the best medicine for overpriced goods.

An Important Message About Yahoo User Security yahoo.tumblr.com
501 points by runesoerensen  2 days ago   340 comments top 54
nodesocket 2 days ago 22 replies      
You'd think this would affect the stock price, but currently YHOO is only trading down 8 cents (-0.18%). I honestly see this all the time: what sounds like really horrible news for a company does not affect the price. However, some random analyst or reporter who works at the Mercury Star Sun Inquirer writes a negative article or a downgrade and the stock tanks. Doesn't make much sense.
supergirl 2 days ago 8 replies      
"State-sponsored actor". I wonder how they decided that. Did the hackers plant a flag inside Yahoo's data center? Or is any attack originating from outside the US now considered state-sponsored? Of course, we will never see any proof of this.

Also, did it take them 2 years to discover this breach? That's bad. Or did they just announce it now? That's worse.

nostromo 2 days ago 8 replies      
"The data stolen may have included names, email addresses, telephone numbers, dates of birth and hashed passwords but may not have included unprotected passwords, payment card data or bank account information, the company said."

What's the difference between "may have" and "may not have" in this context?

It seems like they're saying anything could have been stolen.

newscracker 2 days ago 13 replies      
Moving email addresses off one provider and setting up with another is more difficult than moving phone numbers (in the latter case, number portability could help, if available).

What exactly can an average/common end user do about such incidents, even just to avoid them in the future? I use different passwords across accounts, with all of them being somewhat complex or very complex.

I have looked at a few different paid service providers before, but they're all very expensive. Expensive for me is anything that charges more than $20 per year, or worse, charges that amount or higher for every single email address/alias on a domain. For personal purposes I write only about a handful of emails in an entire year, but on the receiving side I get a lot of emails - most of them somewhat commercial in nature (like online orders, bank statement notifications, marketing newsletters I've explicitly signed up for, etc.). I also have several email addresses, each one used for a different purpose and with some overlap across them.

It seems like web hosting has become extremely cheap over time whereas email hosting has stagnated on the price front for a long time.

throwawayReply 2 days ago 1 reply      
Is there some kind of "statute of limitations" thing that means we're suddenly finding out about a string of breaches from 2012 now?

Or is there some group that is trading breach data privately that have themselves been compromised so that data coming from them is finally leaking out?

I'm now more worried about the 4 year delay in these things coming to light than the effect of the breaches themselves given how many times I now show up on haveibeenpwned.

jonbarker 2 days ago 1 reply      
Yahoo has recommended that users "check their accounts". What exactly would they be checking? Doesn't a compromised account look the same as an uncompromised account from a user perspective?
jap 2 days ago 1 reply      
I wonder if "500M" is a silly way of saying all user account details were stolen.
munk-a 2 days ago 1 reply      
Does the incredible delay of this announcement count as being grossly negligent?

Maybe they're trying to devalue their stock prior to the merger? Similar to what Caris did: http://www.law360.com/articles/684195/caris-employees-get-16...

mey 2 days ago 1 reply      
I found it rather perverse that the login and account recovery screens of Yahoo! have 3rd party ads running. Doesn't give me any confidence in their security (in addition to the breach).
soci 2 days ago 1 reply      
Wait, Yahoo believes the data was stolen by a "state-sponsored actor"!

If they have such evidence, why don't they explain it? To me it looks like a tactic to put the focus on the "naughty" government instead of themselves.

Anyway, it will be an interesting read (if ever written) how Yahoo discovered they had been breached, and by whom (which state?).

Also, if "the state" is finally behind this, who will they prosecute till death? I bet it's the hacker :(

sambe 2 days ago 0 replies      
One of the more convoluted announcements I've seen. I have to be aware that yahoo officially communicates via tumblr.com, check two different announcement pages which may not yet be up (converting time zones). When I clicked one of them I had to find the notice "in my region" which had only one option (not my region) and linked to another (non-yahoo?) site with an image of a document. I can't imagine all 500M users will jump through these hoops and remember when they last changed their password.
St-Clock 2 days ago 1 reply      
"encrypted or unencrypted security questions and answers"

This is bad right? Like, worse than your hashed password and your mailing address.

The only good thing is that if I ever implement security questions, I'll remember Yahoo! and how it could end up in the wrong hands.

leesalminen 2 days ago 1 reply      
I wonder how many dummy accounts from the mid-2000s of mine were included in that.

I was born in 1990, and my insecure online behavior from 2000-2005 scares me. Hopefully HaveIBeenPwned gets their hands on this so I can scan for my teenage usernames.

geraldcombs 2 days ago 0 replies      
I wonder what percentage of those 500 million accounts correspond to real human beings. Much of the spam on the sites I run comes from what appear to be fake @yahoo.com accounts.
mschoebel 2 days ago 0 replies      
FWIW... I just logged in to my Yahoo Account and removed the security questions. Just to be sure. I had already changed my password a few months ago when first rumors of this came up. I'm pretty sure that the option to remove the security questions wasn't there back then.
pavornyoh 2 days ago 2 replies      
Can anyone elaborate as to why this is being announced two years later? Why now and not when it happened?
zymhan 2 days ago 1 reply      
Does this mean Verizon would assume liability if the purchase closes before a lawsuit/fine is brought?
Spydar007 2 days ago 0 replies      
>Yahoo believes that information associated with at least 500 million user accounts was stolen

That tops the HIBP list for the most stolen.[1]

[1] https://haveibeenpwned.com/PwnedWebsites

mtgx 2 days ago 0 replies      
> We have confirmed that a copy of certain user account information was stolen from the company's network in late 2014 by what it believes is a state-sponsored actor.

GCHQ? Although GCHQ seems to have hacked them even earlier than that.


Fuzzwah 2 days ago 0 replies      
"We are recommending that all users who haven't changed their passwords since 2014 do so."

And then don't include an easy link to where users can do that? Great work yahoo.

I found my way to http://profile.yahoo.com but apparently from my machine at an AU university: "profile.yahoo.com's server DNS address could not be found"

slicktux 2 days ago 0 replies      
Interestingly, my account is not part of the compromise and my friends' are; I can confirm this because they received a message about the compromise and I did not... I asked them how long they've had their accounts and they said about a year, whereas I have had mine for about 5 years. Interesting, no?
heroiccyber 2 days ago 7 replies      
You can verify if your credentials have been compromised at https://heroic.com
yAnonymous 1 day ago 0 replies      
Is it possible that certain companies leave their user data open for attacks to illegally share it with third parties?

At this point, it should at least be considered. There's obviously quite a bit of incompetence at Yahoo, but still...

norea-armozel 2 days ago 0 replies      
I think my only concern is what data I had attached to my Yahoo account (for Flickr) which I think they required me to tie to a phone number. So I guess that means I can expect people trying to abuse that phone number as a point of identification in identity theft attempts. Oh joy.
elorant 2 days ago 2 replies      
There's one thing I don't understand with this state-sponsored actor. Say you are an oppressive regime and you target activists who use Yahoo Mail to publish your dirty laundry. Why on earth would you hack half a billion accounts just to get access to a few dozen? Doesn't make sense. You attract too much attention. A thing like that would never go unnoticed. If on the other hand you've found some exploit and target specific accounts which are numbered in the tens, say hundreds, you can easily get away with it.

BTW, I don't know if it's coincidental, but just yesterday I received a notification from Yahoo to disable access to Mail from third-party apps.

Alex3917 2 days ago 0 replies      
This is where the phrase "adverse material fact" comes into play.
bpyne 1 day ago 0 replies      
Oddly, I changed the password for 2 Yahoo accounts only a month ago. I have to wonder if Yahoo filtered for people who recently changed passwords before designating me as a person who might be affected.
sofaofthedamned 2 days ago 1 reply      
Yahoo will survive this regardless of their 'state sponsored' hand waving or not.

The day the same happens to Google or Facebook will be very different.

itsnotlupus 2 days ago 0 replies      
The timeline seems close to that HN item: https://news.ycombinator.com/item?id=8416393 (dead link, cache at http://archive.is/PpCth )

It could be entirely unrelated.

inestyne 2 days ago 0 replies      
The only reason they announced it was to avoid being guilty of an actual crime after being acquired and not announcing it beforehand.
aaronkrolik 2 days ago 2 replies      
How is it that 200M user accounts are worth only $1800?
Fej 2 days ago 0 replies      

More relevant than ever.

xorgar831 2 days ago 0 replies      
Yahoo won't let you enable two-factor auth with a Google Voice phone number. Oh well, time to delete my account.

Here's the magic link: https://edit.yahoo.com/config/delete_user

badthingfactory 2 days ago 0 replies      
I have an account that was definitely compromised. I had completely forgotten this account existed and never used it to sign up for anything else. I was rather surprised when I realized someone had that email and password.
finid 2 days ago 0 replies      
The system was hacked in "late 2014", but we only found out about it now, in 2016. That's almost 2 years.

Whoever the "state-sponsored" hacker is has probably lost interest in that access by now.

halayli 2 days ago 0 replies      
What evidence do they have that it's a state sponsored attack?
esaym 2 days ago 0 replies      
Long live yahoo...oh well. I've never used them for email, I only had an account for yahoo IM, which they just killed. I have no use for them at all now.
ashitlerferad 2 days ago 0 replies      
Interesting that they are moving beyond passphrase authentication towards an "Account Key". I wonder how that works...
chris_wot 2 days ago 0 replies      
Will this be reported to the Australian Privacy Commisioner? I'm assuming it affected Yahoo Australia.
ianai 2 days ago 1 reply      
Sometimes I play with the notion of a future without asymmetric information. If it is to be known it is known by all.
nradov 2 days ago 3 replies      
It seems bizarre that Yahoo would use a post on tumblr.com to make such an important announcement. From what I've seen Tumblr has become mostly a wasteland of worthless garbage in the past few years and no one takes it seriously any more. Isn't this the sort of thing that ought to be on the yahoo.com home page from a PR crisis management standpoint?
thereisnogadi 2 days ago 0 replies      
Wow. If you scroll up and look at the header, that's some very sexy UX. Good job, Yahoo!
willow9886 2 days ago 2 replies      
Yahoo's login experience has been horrible lately. This must be a contributing factor.
therealmarv 2 days ago 0 replies      
MD5 for password hashing? Seriously? That was waaaaaaay outdated even in 2014.
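For context on why a single MD5 digest is considered inadequate: it is extremely fast and (as typically deployed) unsalted, so leaked hashes are cheap to crack with precomputed tables. A minimal stdlib sketch of the contrast (the iteration count here is illustrative, not a recommendation from anything in this thread):

```python
import hashlib
import os

password = b"correct horse battery staple"

# Fast, unsalted MD5: identical for every user with this password,
# and billions of guesses per second are feasible on commodity GPUs.
weak = hashlib.md5(password).hexdigest()

# Salted, deliberately slow PBKDF2 (also stdlib): the random salt makes
# precomputed tables useless, and the iterations slow down each guess.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

print(weak)          # 32 hex chars, same on every run
print(strong.hex())  # 64 hex chars, different per user thanks to the salt
```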
tzakrajs 2 days ago 0 replies      
Yet another "state actor hit us with 0days" statement.
realraghavgupta 2 days ago 1 reply      
Using 2-factor authentication comes in handy in situations like this.
smaili 2 days ago 1 reply      
Anyone happen to know if this was the largest hack in history?
beezle 2 days ago 3 replies      
Anybody know what hash they use at yahoo for account passwords?
backtoyoujim 2 days ago 0 replies      
I should have never created that rocketmail account.
eternalban 2 days ago 0 replies      
local area askHN:

what to do if one had an ancient account that was abandoned but has one's name on it?

[p.s. forgot password, etc.]

shruubi 2 days ago 0 replies      
It's obviously so important that they posted it to Tumblr instead of on the yahoo website itself...
luckydata 2 days ago 0 replies      
why don't they die already?
justinv 2 days ago 2 replies      
"by what it believed was a "state-sponsored actor.""
mapletree 2 days ago 0 replies      
I purchased my own credentials from the hackers just to make sure nobody else gets them. So much easier than coming up with another password.
What every coder should know about gamma johnnovak.net
554 points by johnnovak  3 days ago   180 comments top 42
jacobolus 3 days ago 3 replies      
One thing I hate is that essentially all vector graphics and text rendering (Cairo, Quartz, MS Windows, Adobe apps, ...) is done with gamma-oblivious antialiasing, which means that apparent stroke width / text color changes as you scale text up or down.

This is why if you render vector graphics to a raster image at high resolution and then scale the image down (using high quality resampling), you get something that looks substantially thinner/lighter than a vector render.

This causes all kinds of problems with accurately rendering very detailed vector images full of fine lines and detailed patterns (e.g. zoomed-out maps). It also breaks WYSIWYG between high-resolution printing and screen renders. (It doesn't help that the antialiasing in common vector graphics / text renderers is also fairly inaccurate in general for detailed shapes, leading to weird seams etc.)

But nobody can afford to fix their gamma handling code for on-screen rendering, because all the screen fonts we use were designed with the assumption of wrong gamma treatment, which means most text will look too thin after the change.

* * *

To see a prototype of a better vector graphics implementation than anything in current production, and some nice demo images of how broken current implementations are when they hit complicated graphics, check this 2014 paper: http://w3.impa.br/~diego/projects/GanEtAl14/

cscheid 3 days ago 3 replies      
Hey, so gamma is not a logarithmic response. You claim that the delta you use in Figure 2 is a ratio, but your code, https://github.com/johnnovak/johnnovak.site/blob/master/blog... uses a fixed power. These are not the same thing.

f(x+eps)/f(x) ~= eps f'(x)/f(x) + 1

f(x) = x^2.2, f'(x) = 2.2 x^1.2

f(x+eps)/f(x) ~= 2.2 eps/x + 1

Human response to light is not particularly well-modeled by a logarithmic response. It's --- no big surprise --- better modeled by a power law.

This stuff is confusing because there are two perceptual "laws" that people like to cite: Fechner-Weber, and Stevens's. Fechner-Weber is logarithmic; Stevens's is a generalized power-law response.
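Numerically, for f(x) = x^2.2 the ratio f(x+eps)/f(x) tracks 1 + 2.2·eps/x, since f'(x)/f(x) = 2.2/x. A quick sanity check of that first-order approximation:

```python
# Check the first-order approximation of the ratio for a pure gamma
# curve f(x) = x**2.2 (a power law, not a logarithm).
def f(x):
    return x ** 2.2

x, eps = 0.5, 1e-4
exact = f(x + eps) / f(x)
approx = 1 + 2.2 * eps / x  # eps * f'(x)/f(x) + 1

# The error is second order in eps, so the two agree very closely.
assert abs(exact - approx) < 1e-6
```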

inopinatus 2 days ago 5 replies      
Um. Curiously, that first example didn't work for me. Figures 1 & 2, under "Light emission vs perceptual brightness", are compared thus: "On which image does the gradation appear more even? It's the second one!"

Except that for me it isn't. The first one, graded by emission rather than perception, appears more evenly graded to me. There is no setting I can find using the Apple calibration tool (even in expert mode) that does anything but strengthen this perception.

This raises only questions. Is this discrepancy caused by my Apple Thunderbolt Display? By my mild myopia? The natural lighting? My high-protein diet? The jazz on the stereo? The NSA? Or do I really have a different perception of light intensity?

And is anyone else getting the same?

Note: I have always had trouble with gamma correction during game setup; there has never been a setting I liked. Typically there'll be a request to adjust gamma until a character disappears, but however I fiddle things it never does.

Negitivefrags 3 days ago 1 reply      
Something that is important to note is that in Photoshop the default is gamma-incorrect blending.

If you work on game textures, and especially for effects like particles, it's important that you change the Photoshop option to use gamma-correct alpha blending. If you don't, you will get inconsistent results between your game engine and what you author in Photoshop.

This isn't as important for normal image editing because the resulting image is just being viewed directly and you just edit until it looks right.
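The difference between the two blending modes is easy to demonstrate. A minimal sketch, using a plain 2.2-power approximation of the sRGB curve (the real transfer function is piecewise, so the exact numbers are illustrative):

```python
# Naive vs gamma-correct 50/50 blend of two 8-bit sRGB values.
# Assumes a simple gamma-2.2 approximation of the sRGB transfer curve.
GAMMA = 2.2

def srgb_to_linear(v):
    return (v / 255.0) ** GAMMA

def linear_to_srgb(v):
    return round(255.0 * v ** (1.0 / GAMMA))

def blend_naive(a, b, t=0.5):
    # Blending the encoded values directly: "gamma-incorrect" blending.
    return round(a * (1 - t) + b * t)

def blend_linear(a, b, t=0.5):
    # Decode to linear light, blend, then re-encode.
    la, lb = srgb_to_linear(a), srgb_to_linear(b)
    return linear_to_srgb(la * (1 - t) + lb * t)

print(blend_naive(0, 255))   # 128 -- the midpoint in encoded space
print(blend_linear(0, 255))  # 186 -- the gamma-correct mid-gray
```

The gap between 128 and 186 is exactly the kind of inconsistency you see between an engine that blends in linear light and an authoring tool that doesn't.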

ansgri 3 days ago 2 replies      
Enough has been said about incorrect gamma (this and [0]), now I think it's high time to bash the software of the world for incorrect downscaling (e.g. [1]). It has much more visible effects, and has real consequences for computer vision algorithms.

In the course on computer vision at my university (which I help teach) we teach this stuff to make students understand the physics, but at the end of the lecture I'd always note that for vision it's largely irrelevant and isn't worth the cycles to convert images to linear scale.

[0] http://www.4p8.com/eric.brasseur/gamma.html

[1] http://photo.stackexchange.com/questions/53820/why-do-photos...

Alexey_Nigin 2 days ago 2 replies      
I tried viewing the article on 4 different monitors. All monitors had default settings except for brightness. Monitors A & B were on new laptops, monitor C was on a very old laptop, and monitor D was on a smartphone. Here are the results:

FIGURES 1 & 2. On monitor A, all bands of color in figure 1 were easily discernible. The first four bands of color in figure 2 looked identical. Figure 1 looked more evenly spaced than figure 2. On monitor B, all bands of color in figure 1 were easily discernible. The first five bands of color in figure 2 looked identical. Figure 1 looked more evenly spaced than figure 2. On monitor C, all bands of color except the last two in figure 1 were easily discernible. The first three bands of color in figure 2 looked identical. Figure 1 looked about as evenly spaced as figure 2. The result from monitor D was the same as the result from monitor A.

FIGURE 12. On monitors A and B, the color of (A) was closer to (B) than to (C). On monitor C, (A) appeared equally close in color to (B) and (C). On monitor D, the color of (A) was exactly identical to (B).

CONCLUSION: On monitor C, gamma correction had a neutral effect. On all other monitors, the effects were negative. Unfortunately, I was unable to find a standalone PC monitor for my comparison. It is entirely possible that a PC monitor would give a different result. However, since most people use laptops and tablets nowadays, I doubt the article's premise that "every coder should know about gamma".

kazinator 3 days ago 3 replies      
I was going to comment snarkily: "Really? Every coder? What if you program toasters?"

Then it immediately occurred to me that a toaster has some binary enumeration of the blackness level of the toast, like from 0 to 15, and this corresponds in a non-linear way to the actual darkness: i.e. yep, you have to know something about gamma.

crazygringo 3 days ago 3 replies      
This is one of the most fascinating articles I've come across on HN, and so well explained, so thank you.

But I wonder about what the "right" way to blend gradients really is -- the article shows how linear blending of bright hues results in an arguably more natural transition.

Yet a linear blending from black to white would actually, perceptually, feel too light -- exactly what Fig. 1 looks like -- the whole point is that a black-to-white gradient looks more even if calculated in sRGB, and not linearly.

So for gradients intended to look good to human eyes, or more specifically that change at a perceptually constant rate, what is the right algorithm when color is taken into account?

I wonder if relying just on gamma (which maps only brightness) is not enough, but whether there are equivalent curves for hue and saturation? For example, looking at any circular HSV color picker, we're very sensitive to changes around blue, and much less so around green -- is there an equivalent perceptual "gamma" for hue? Should we take that into account for even better gradients, and calculate gradients as linear transitions in HSV rather than RGB?

datenwolf 2 days ago 4 replies      
I think the deep underlying problem is not just handling gamma, but that to this day the graphics systems we use make programs produce their output in the color space of the connected display device. If graphics system coders in the late 1980s and early 1990s had bothered to just think for a moment and look at the existing research, then the APIs we're using today would expect colors in a linear contact color space.

Practically all the problems described in the article (which BTW has a few factual inaccuracies regarding the technical details on the how and why of gamma) vanish if graphics operations are performed in a linear contact color space. The most robust choice would have been CIE1931 (aka XYZ1931).

Doing linear operations in CIE Lab also avoids the gamma problems (the L component is linear as well), however the chroma transformation between XYZ and the ab component of Lab is nonlinear. However from a image processing and manipulation point of view doing linear operations also on the ab components of Lab will actually yield the "expected" results.

The biggest drawback with contact color spaces is that 8 bits of dynamic range are insufficient for the L channel; 10 bits is sufficient, but in general one wants at least 12 bits. In terms of 32 bits per pixel, a practical distribution is 12L 10a 10b. Unfortunately current GPUs experience a performance penalty with this kind of alignment. So in practice one is going to use a 16-bits-per-channel format.

One must be aware that aside from the linear XYZ and Lab color spaces, even if a contact color space is used, images are often stored with a nonlinear mapping. For example, DCI-compliant digital cinema package video essence encoding is specified to be stored as CIE1931 XYZ with D65 whitepoint and a gamma=2.6 mapping applied, using 12 bits per channel.

skierscott 3 days ago 0 replies      
I work on algorithms that can be applied to images, and was equally surprised when I saw a video called "Computer color is broken."

I investigated and wrote a post called "Computer color is only kinda broken"[1].

This post includes visuals and investigates mixing two colors together in different colorspaces.


tomjakubowski 3 days ago 1 reply      
Hi John!

If you're reading comments, I just thought you should know that the link to w3.org in the (color) "Gradients" section is broken.

It should point to https://lists.w3.org/Archives/Public/www-style/2012Jan/0607.... but there's an extra "o" at the end of the URL in your page's link.

Glyptodon 3 days ago 2 replies      
The thing that seems a bit weird to me is that the constant light intensity graduation (fig 1) appears much more even/linearly monotonic to me than the perceptual one (fig 2) which seems really off at the ends, kind of sticking to really really dark black for too long at the left end, shifting to white too fast at the right end.
elihu 3 days ago 2 replies      
This is very good and useful; I'll have to update my ray-tracer accordingly.

One thing not discussed, though, is what to do about values that don't fit in the zero-to-one range. In 3-D rendering, there is no maximum intensity of light, so what's the ideal strategy to truncate to the needed range?
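One common answer (among several) is a tone-mapping operator that compresses unbounded linear radiance into [0, 1) before gamma encoding; Reinhard's simple operator is the classic sketch:

```python
# Reinhard tone mapping: compresses any non-negative linear radiance
# into [0, 1) before the final gamma encode. A simple, well-known
# choice -- not the only strategy (clamping and filmic curves exist).
def reinhard(l):
    return l / (1.0 + l)

print([round(reinhard(l), 3) for l in (0.0, 0.5, 1.0, 4.0, 100.0)])
# [0.0, 0.333, 0.5, 0.8, 0.99]
```

Values near zero pass through almost unchanged, while arbitrarily bright values approach but never reach 1.0, which is exactly what a ray tracer with unbounded light intensities needs before quantizing to 8 bits.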

panic 3 days ago 1 reply      
Nowadays GPUs are able to convert between sRGB and linear automatically when reading and writing textures. There's no more excuse for incorrect rendering on modern hardware!
jadbox 3 days ago 2 replies      
Interesting that Nim (lang) is used in the examples; good readable code too.
mxfh 3 days ago 0 replies      
Good reminder about these persisting blending issues in the linear interpretation of RGB values, which was well explained to non-coders as well in this quite popular MinutePhysics video: https://www.youtube.com/watch?v=LKnqECcg6Gw

As others commented the gamma scaling issues seem even more relevant.

Just please, don't use the RGB color space for generating gradients. In fact, it's ill-suited for most operations concerning the perception of colors as is.

chroma.js: https://vis4.net/blog/posts/mastering-multi-hued-color-scale...

D3: https://bl.ocks.org/mbostock/3014589

Interesting excursion: historically the default viewing gammas seem to have lowered, because broadcasting defaulted to dimly lit rooms, while today's ubiquitous displays are usually in brighter environments.


mixmastamyk 3 days ago 0 replies      
The conclusion reminded me of the "unicode sandwich," i.e. decode on data load, process in a pure form, then encode before writing to disk.
kristofferR 2 days ago 1 reply      
I get that this is an article about gamma, but it should have mentioned that sRGB is on the way out. People who need to think about gamma also need to think about wider color spaces like DCI-P3, which the Apple ecosystem is moving to pretty quickly (and others would be dumb to not follow).
qwertyuiop924 3 days ago 0 replies      
I'd already seen most of this in a video (courtesy of Henry, aka MinutePhysics, https://m.youtube.com/watch?v=LKnqECcg6Gw), but it was nice to see a programmer-oriented explanation, nonetheless.
spunker540 3 days ago 1 reply      
I don't think this is something every coder should know about-- maybe every graphics coder
kevin_thibedeau 3 days ago 1 reply      
What is usually little mentioned is that the transfer function for LCDs is a sigmoid rather than an exponential. The latter is simulated for desktop displays to maintain compatibility with CRTs. Embedded LCDs don't usually have this luxury.
willvarfar 3 days ago 4 replies      
I'm divided; I really want the article to be true, and for everyone to realise what a big mistake we've been making all along... but, as the legions of us who don't adjust for gamma demonstrate, ignoring it doesn't make the world end?!
emcq 2 days ago 0 replies      
Meh, gamma is a simplistic nonlinearity to model the world; if you care about perception, use a CIE colorspace; if you care about gaming, they have developed more sophisticated nonlinearities for HDR.
chmike 2 days ago 2 replies      
Does it mean that when doing a conversion from sRGB encoding to physical intensity encoding, we have to extend the number of bits used to encode the physical intensity values, to avoid rounding errors from the sRGB encoding?

I guess that the required number of bits to encode physical intensity values depends on the operations performed. The author suggests using floats, but this means 3x4 bytes, and 4x4 bytes with the alpha channel. Would a 16-bit unsigned integer be enough? Floats are OK when using graphics cards, but not OK when using the processor.
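One way to sanity-check the 16-bit question is a round-trip test. This sketch uses the standard piecewise sRGB transfer function (a plain 2.2 power behaves worse near black) and is illustrative, not authoritative:

```python
# Round-trip every 8-bit sRGB value through a 16-bit unsigned linear
# representation and check that nothing is lost. Uses the standard
# piecewise sRGB transfer function.

def srgb8_to_linear16(v8):
    c = v8 / 255.0
    l = c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return round(65535 * l)

def linear16_to_srgb8(v16):
    l = v16 / 65535.0
    c = 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055
    return round(255 * c)

# 16 bits are enough for a lossless round trip of 8-bit sRGB values:
assert all(linear16_to_srgb8(srgb8_to_linear16(v)) == v for v in range(256))

# 8 bits of linear light are not: the darkest sRGB step collapses to zero.
assert round(255 * (1 / 255.0) / 12.92) == 0
```

So for pure storage, 16-bit unsigned integers suffice; the case for floats is about headroom during arithmetic (intermediate values above 1.0, accumulated rounding), not the final encoding.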

nichochar 3 days ago 1 reply      
The design of your website, and its readability, is great! Good job
reduxive 3 days ago 0 replies      
This article could really benefit from an image DIFF widget. Even animated flashing GIF images would be an improvement.

It needs something that not only permits comparable overlays, but (perhaps with a third diff layer) also highlights the ugly/wrong pixels with a high-contrast paint.

A handful of images are only somewhat obviously problematic, but for most of the images, I really had to struggle to find undesirable artifacts.

If it's that difficult to discern inconsistent image artifacts, one can understand why so little attention is often paid to this situation.

slacka 3 days ago 1 reply      
> The graphics libraries of my operating system handle gamma correctly. (Only if your operating system is Mac OS X 10.6 or higher)

Not just OS X. The majority of Linux games from the past 2 decades, including all SDL and id Tech 1-3 games, relied on the X server's gamma function. An X.Org Server update broke it about 6 years ago. It was fixed a few weeks ago.


daredevildave 2 days ago 0 replies      
And if you want to use a WebGL engine with gamma correct rendering... https://playcanvas.com ;-)


amelius 2 days ago 0 replies      
> sRGB is a colour space that is the de-facto standard for consumer electronic devices nowadays, including monitors, digital cameras, scanners, printers and handheld devices. It is also the standard colour space for images on the Internet.

Ok, does that mean that the device performs the gamma-transformation for me, and I don't need to worry about gamma?

(and if not, why not?)

j2kun 3 days ago 1 reply      
Is this why my computer screen's brightness controls always seem to have a huge jump between the lowest two settings (off and dimmest-before-off)?
sriku 2 days ago 1 reply      
When viewing this on a macbook air, the discussion around the two images in the section "Light emission vs perceptual brightness" appears weird. To me, the first image appears linearly spaced and in the second image I can hardly make out the difference between the first few bars of black.
anotheryou 2 days ago 0 replies      
Can anyone recommend a tool/library to output at least full HD image fades to a video with correct gamma? Preferably even with dithering for finer steps when fading slowly.

My main problem is that I'm not good at on-the-fly encoding and outputting frame by frame feels a bit excessive.

kevinwang 3 days ago 4 replies      
On my iPhone, for the checkerboard resizing, the sRGB-space resizing (B) is almost an exact match, while (C) appears much whiter.
notlisted 2 days ago 2 replies      
Beautiful description and great examples. One thing confuses me. I'm actually using PS CS5 (supposedly the last 'correct' one?) and resizing figure 11 to 50% actually results in B, not C. Is there an option/setting I can use to fix this?
Retric 3 days ago 1 reply      
Did anyone else think the first set of bars was linear not the second? I could not notice any difference between the leftmost three bars on the bottom section. Or does this relate to how iPad renders images or something? ed: Same issue on PC.
catpolice 2 days ago 0 replies      
As a pedantic writer, it annoys me that the article starts by mentioning a quiz and making a big deal about answering yes or no to the questions... but there aren't actually any questions. The "quiz" is a list of statements. Each one can be understood by context to imply a question about whether you agree with the statement, but it's distracting because you can't answer yes to something that isn't a question.
wfunction 3 days ago 0 replies      
Can anyone get IrfanView to output the correct image? I'm trying the latest version I can find and it still gives me full gray.
pilooch 2 days ago 0 replies      
I do AI, I let my CNN eat the gammas :)
platz 3 days ago 2 replies      
please don't use condescending titles barking what all coders should or shouldn't know (invariably the topic is a niche that the author wants to cajole others into caring about too)
optimuspaul 3 days ago 2 replies      
Lost me when he said the fig 2 appeared to have a more even gradation than fig 1, and not just because of the spelling error. The fig 1 looked more even to me, but I am colorblind.
twothamendment 3 days ago 0 replies      
I know and love my gamma. She makes the best cookies!
Ripgrep A new command line search tool burntsushi.net
666 points by dikaiosune  1 day ago   179 comments top 33
losvedir 1 day ago 2 replies      
Meh, yet another grep tool.... wait, by burntsushi! Whenever I hear of someone wanting to improve grep I think of the classic ridiculous fish piece[0]. But when I saw that this one was by the author of rust's regex tools, which I know from a previous post on here, are quite sophisticated, I perked up.

Also, the tool aside, this blog post should be held up as the gold standard of what gets posted to hacker news: detailed, technical, interesting.

Thanks for your hard work! Looking forward to taking this for a spin.

[0] http://ridiculousfish.com/blog/posts/old-age-and-treachery.h...

ggreer 1 day ago 7 replies      
I'm the author of ag. That was a really good comparison of the different code searching tools. The author did a great job of showing how each tool misbehaved or performed poorly in certain circumstances. He's also totally right about defaults mattering.

It looks like ripgrep gets most of its speedup on ag by:

1. Only supporting DFA-able Rust regexes. I'd love to use a lighter-weight regex library in ag, but users are accustomed to full PCRE support. Switching would cause me to receive a lot of angry emails. Maybe I'll do it anyway. PCRE has some annoying limitations. (For example, it can only search up to 2GB at a time.)

2. Not counting line numbers by default. The blog post addresses this, but I think results without line numbers are far less useful; so much so that I've traded away performance in ag. (Note that even if you tell ag not to print line numbers, it still wastes time counting them. The printing code is the result of me merging a lot of PRs that I really shouldn't have.)

3. Not using mmap(). This is a big one, and I'm not sure what the deal is here. I just added a --nommap option to ag in master.[1] It's a naive implementation, but it benchmarks comparably to the default mmap() behavior. I'm really hoping there's a flag I can pass to mmap() or madvise() that says, "Don't worry about all that synchronization stuff. I just want to read these bytes sequentially. I'm OK with undefined behavior if something else changes the file while I'm reading it."

The author also points out correctness issues with ag. Ag doesn't fully support .gitignore. It doesn't support Unicode. Inverse matching (-v) can be crazy slow. These shortcomings are mostly because I originally wrote ag for myself. If I didn't use certain gitignore rules or non-ASCII encodings, I didn't write the code to support them.

Some expectation management: If you try out ripgrep, don't get your hopes up. Unless you're searching some really big codebases, you won't notice the speed difference. What you will notice, however, are the feature differences. Take a look at https://github.com/BurntSushi/ripgrep/issues to get a taste of what's missing or broken. It will be some time before all those little details are ironed-out.

That said, may the best code searching tool win. :)

1. https://github.com/ggreer/the_silver_searcher/commit/bd65e26...
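The line-counting trade-off in point 2 can be sketched roughly like this (a toy illustration of the general idea, not ripgrep's or ag's actual implementation): instead of tracking line numbers for every byte scanned, count newlines lazily, and only in the gaps between matches:

```python
# Lazy line numbering: search a buffer first, then derive line numbers
# by counting newlines only up to each match offset. bytes.count is a
# tight memchr-style loop, so the counting cost is paid only on the
# spans between matches, not per byte of per-line bookkeeping.
import re

def search_with_lines(data: bytes, pattern: bytes):
    out = []
    last_off, line = 0, 1
    for m in re.finditer(pattern, data):
        # Count newlines only in the gap since the previous match.
        line += data.count(b"\n", last_off, m.start())
        last_off = m.start()
        out.append((line, m.group()))
    return out

data = b"foo\nbar\nbaz foo\nqux\n"
print(search_with_lines(data, b"foo"))  # [(1, b'foo'), (3, b'foo')]
```

When matches are sparse (the common case), almost no time is spent on line numbers; a tool that counts lines unconditionally pays that cost on every file.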

minimax 1 day ago 1 reply      
> In contrast, GNU grep uses libc's memchr, which is standard C code with no explicit use of SIMD instructions. However, that C code will be autovectorized to use xmm registers and SIMD instructions, which are half the size of ymm registers.

I don't think this is correct. glibc has architecture specific hand rolled (or unrolled if you will lol) assembly for x64 memchr. See here: https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86...

cwillu 1 day ago 0 replies      
I wish more people actually took steps to optimize disk io though; my current source tree may be in cache, but my logs certainly aren't. Nor are my /usr/share/docs/, /usr/includes/, or my old projects.

Chris Mason of btrfs fame did some proof of concept work for walking and reading trees in on-disk order, showing some pretty spectacular potential gains: https://oss.oracle.com/~mason/acp/

Tooling to do your own testing: https://oss.oracle.com/~mason/seekwatcher/

jonstewart 1 day ago 2 replies      
Nice! Lightgrep[1] uses libicu et al to look up code points for a user-specified encoding and encode them as bytes, then just searches for the bytes. Since ripgrep is presumably looking just for bytes, too, and compiling UTF-8 multibyte code points to a sequence of bytes, perhaps you can do likewise with ICU and support other encodings. ICU is a bear to build against when cross-compiling, but it knows hundreds of encodings, all of the proper code point names, character classes, named properties, etc., and the surface area of its API that's required for such usage is still pretty small.

[1]: http://strozfriedberg.github.io/liblightgrep

_audakel 1 day ago 0 replies      

"I'd like to try to convince you why you shouldn't use ripgrep. Often, this is far more revealing than reasons why I think you should use ripgrep."

Love that he added this

lobster_johnson 1 day ago 1 reply      
Very nice. Not only fast, but feels modern.

Tried it out on a 3.5GB JSON file:

 # rg
 rg erzg4 k.json > /dev/null  1.80s user 2.54s system 53% cpu 8.053 total

 # rg with 4 threads
 rg -j4 erzg4 k.json > /dev/null  1.76s user 1.29s system 99% cpu 3.059 total

 # OS X grep
 grep erzg4 k.json > /dev/null  60.62s user 0.96s system 99% cpu 1:01.75 total

 # GNU grep
 ggrep erzg4 k.json > /dev/null  1.96s user 1.43s system 88% cpu 2.691 total
GNU Grep wins, but it's pretty crusty, especially with regards to its output (even with colourization).

bodyfour 1 day ago 2 replies      
It would be interesting to benchmark how much mmap hurts when operating in a non-parallel mode.

I think a lot of the residual love for mmap is because it actually did give decent results back when single core machines were the norm. However, once your program becomes multithreaded it imposes a lot of hidden synchronization costs, especially on munmap().

The fastest option might well be to use mmap sometimes but have a collection of single-thread processes instead of a single multi-threaded one so that their VM maps aren't shared. However, this significantly complicates the work-sharing and output-merging stages. If you want to keep all the benefits you'd need a shared-memory area and do manual allocation inside it for all common data which would be a lot of work.

It might also be that mmap is a loss these days even for single-threaded... I don't know.

Side note: when I last looked at this problem (on Solaris, 20ish years ago) one trick I used when mmap'ing was to skip the "madvise(MADV_SEQUENTIAL)" if the file size was below some threshold. If the file was small enough to be completely be prefetched from disk it had no effect and was just a wasted syscall. On larger files it seemed to help, though.

cm3 1 day ago 1 reply      
To build a static Linux binary with SIMD support, run this:

 RUSTFLAGS="-C target-cpu=native" rustup run nightly cargo build --target x86_64-unknown-linux-musl --release --features simd-accel

dikaiosune 1 day ago 0 replies      
Compiling it to try right now...

Some discussion over on /r/rust: https://www.reddit.com/r/rust/comments/544hnk/ripgrep_is_fas...

EDIT: The machine I'm on is much less beefy than the benchmark machines, which means that the speed difference is quite noticeable for me.

echelon 1 day ago 1 reply      
Rust is really staring to be seen in the wild now.
Tim61 1 day ago 0 replies      
I love the layout of this article. Especially the pitch and anti-pitch. I wish more more tools/libraries/things would make note of their downsides.

I'm convinced to give it a try.

chx 5 hours ago 0 replies      
I am not sure how excited I am ... I readily accept this to be faster than ag -- but ag already scans 5M lines in a second for a string literal on my machine. Not having to switch tools when I need a recursive regexp is win enough to tolerate a potential 0.4s vs 0.32s on an everyday search.
krylon 1 day ago 2 replies      
When I use grep (which is fairly regularly), the bottleneck is nearly always the disk or the network (in case of NFS/SMB volumes).

Just out of curiosity, what kind of use case makes grep and prospective replacements scream? The most "hardcore" I got with grep was digging through a few gigabytes of ShamePoint logs looking for those correlation IDs, and that apparently was completely I/O-bound, the CPUs on that machine stayed nearly idle.

pixelbeat 1 day ago 1 reply      
Thanks for the detailed comparisons and writeup.

I find this simple wrapper around grep(1) very fast and useful:


h1d 1 day ago 0 replies      
"if you like speed, saner defaults, fewer bugs and Unicode"

Warning - Conditional always returns true.

fsiefken 1 day ago 2 replies      
nice, but does it compile and run on armhf? I don't see any binaries
visarga 16 hours ago 1 reply      
Great tool. Does a faster implementation of sort exist as well? I once implemented quicksort in C and it was faster than Unix sort by a lot: seconds instead of minutes for 1 million lines of text.
xuhu 1 day ago 1 reply      
Why not make --with-filename default even for e.g. "rg somestring" ? That seems like it could hinder adoption since grep does it and it's useful when piping to other commands.

Is it enabled when you specify a directory (rg somestring .) ?

qwertyuiop924 1 day ago 0 replies      
That is really cool. Although I think this is a case where Good Enough will beat amazing, at least for me (especially given how much I use backrefs).
wamatt 1 day ago 1 reply      
On a somewhat related note.

There does not appear to be a popular indexed full-text search tool in existence.

Think cross-platform version of Spotlight's mdfind. Could there be something fundamental that makes this approach unsuitable for code search?

Alternatively, something like locate, but realtime and fulltext, instead of filename only.

justinmayer 1 day ago 2 replies      
Anyone have any suggestions regarding how to best use Ripgrep within Vim? Specifically, how best to use it to recursively search the current directory (or specified directory) and have the results appear in a quickfix window that allows for easily opening the file(s) that contain the searched term.
petre 1 day ago 2 replies      
Does it use PCRE (not the lib, the regex style)? If not, ack is just fine. My main concern with grep is POSIX regular expressions.
pmontra 1 day ago 1 reply      
It looks very good and I'd like to try it. However I'm lazy and I don't want to install all the Rust dev environment to compile it. Did anybody build a .deb for Ubuntu 16?
AlisdairO 1 day ago 0 replies      
Superb work, and a superb writeup. It's really great to see such an honest and thorough evaluation.
hxn 1 day ago 2 replies      
Looks like every tool has its upsides and downsides. This one lacks full PCRE syntax support. Does one have to install Rust to use it?
spicyj 1 day ago 5 replies      
rg is harder to type with one hand because it uses the same finger twice. :)
serge2k 1 day ago 1 reply      
> We will attempt to do the impossible

Oh well. Waste of time then.

libman 1 day ago 1 reply      
Tragically the news that LLVM is switching to a non-Copyfree license (see copyfree.org/standard/rejected) has ruined everything... Nothing written in Rust can be called Free Software. :(
chalana 1 day ago 2 replies      
I'm never sure whether or not I should adopt these fancy new command line tools that come out. I get them on my muscle memory and then all of a sudden I ssh into a machine that doesn't have any of these and I'm screwed...
kozikow 1 day ago 2 replies      
1. Ag has nice editor integration. I would miss emacs helm-projectile-ag

2. PCRE is a good regexp flavor to master. It has a good balance of speed, power and popularity. In addition to ag, there are accessible libraries in many languages, including Python.

I think it would be good if everyone settled on PCRE, rather than each language thinking it will do regexps better.

zatkin 1 day ago 0 replies      
>It is not, strictly speaking, an interface compatible drop-in replacement for both, but the feature sets are far more similar than different.
wruza 1 day ago 1 reply      

 ...
 $ rg -uu foobar   # similar to `grep -r`
 $ rg -uuu foobar  # similar to `grep -a -r`
I knew it. The name is absolutely ironic. I cannot just drop it in and make all my scripts, and whatever scripts I download, work immediately faster (nor is it compatible with my shell typing reflexes). New, shiny, fast tool, doomed from birth.

The MIT License, Line by Line kemitchell.com
505 points by monocasa  2 days ago   129 comments top 18
richardfontana 2 days ago 3 replies      
Despite the assumption of some newer open-source developers that sending a pull request on GitHub automatically licenses the contribution for distribution on the terms of the project's existing license (what Richard Fontana of Red Hat calls "inbound=outbound"), United States law doesn't recognize any such rule. Strong copyright protection, not permissive licensing, is the default.

That isn't quite what I mean by "inbound=outbound". Rather, inbound=outbound is a contribution governance rule under which inbound contributions, say a pull request for a GitHub-hosted project, are deemed to be licensed under the applicable outbound license of the project. This is, in fact, the rule under which most open source projects have operated since time immemorial. The DCO is one way of making inbound=outbound more explicit, and I increasingly think one that should be encouraged (if only to combat the practice of using CLAs and the like). But under the right circumstances it works even where the contribution is not explicitly licensed (I think this is what Kyle may be questioning). There are other ways besides the DCO of creating greater certainty, or the appearance of greater certainty, around the inbound licensing act, such as PieterH's suggestion of using a copyleft license like the MPL, or the suggestion of using the Apache License 2.0 (whose section 5 states an inbound=outbound rule as a kind of condition of the outbound license grant).

PieterH 2 days ago 10 replies      
This is a really good article. There's one part in particular that struck me:

"Despite the assumption of some newer open-source developers that sending a pull request on GitHub automatically licenses the contribution for distribution on the terms of the projects existing licensewhat Richard Fontana of Red Hat calls inbound=outboundUnited States law doesnt recognize any such rule. Strong copyright protection, not permissive licensing, is the default."

In other words the fork + pull request + merge flow does not work on a project unless you have an explicit step like a CLA, or an alternative solution.

We faced this problem early on in ZeroMQ. Asking contributors to take this extra step increased the work for maintainers (checking: is this the first time person X has contributed, and have they signed a CLA?). It also scared off contributors from businesses, where this often required approval (which took time and was often denied).

Our first solution in ZeroMQ was to ask contributors to explicitly state, "I hereby license this patch under MIT," which let us safely merge it into our LGPL codebase. Yet, again, another extra step and again, needs corporate approval.

Our current solution is I think more elegant and is one of the arguments I've used in favor of a share-alike license (xGPL originally and MPLv2 more these days) in our projects.

That works as follows:

* When you fork a project ABC that uses, say, MPLv2, the fork is also licensed under MPLv2.

* When you modify the fork, with your patch, your derived work is now also always licensed under MPLv2. This is due to the share-alike aspect. If you use MIT, at this stage the derived work is (or rather, can be) standard copyright. Admittedly if you leave the license header in the source file, it remains MIT. Yet how many maintainers check the header of the inbound source file? Not many IMO.

* When you then send a patch from that inbound project, the patch is also licensed under MPLv2.

* Ergo there is no need for an explicit grant or transfer of copyright.

I wonder if other people have come to the same conclusion, or if there are flaws in my reasoning.

SamBam 2 days ago 2 replies      
> The phrase "arising from, out of or in connection with" is a recurring tic symptomatic of the legal draftsman's inherent, anxious insecurity.

Indeed. I'm trying to imagine a court saying "Well... there were damages, but they arose out of the software and not from the software, so therefore... oh, wait! The license actually includes damages arising 'out of' the software as well as 'from' the software, so I guess the limitation of liability stands. Case dismissed!"

gakada 2 days ago 1 reply      
It's funny that the MIT license has the reputation of being "The license you choose when you don't care about attribution, or it would be unreasonable to require attribution".

As the article points out, copies of MIT licensed code must include not only the copyright line but the whole damn license.

AstroJetson 2 days ago 1 reply      
Crushed under load, to get the cache


I've always wondered about the nuances around the different licenses, it's nice to get a non-lawyer guide to the MIT one.

tbirdz 2 days ago 2 replies      
Does anyone know if this will be part of an ongoing series, covering many open source licenses?
tunnuz 2 days ago 1 reply      
Also relevant, this free and very accessible book from O'Reilly http://www.oreilly.com/openbook/osfreesoft/book that explains the history and caveats of most open source licenses.
twhb 1 day ago 9 replies      
A bit off-topic, but I would be very interested in somebody making a case for why an OS license is better than a simple line like "This code is free for everybody to use as they wish." I've read about it plenty, but remain unconvinced.

The OS software I write is for the good of everybody, not just its own popularity or the OS community. I'm fine with all uses of it, in whole or in part, whether or not I'm credited. The license reproduction requirement therefore feels like unnecessary noise, and I'd like to think that courts are sane enough that the warranty disclaimer is unnecessary too - is there any real court case where somebody has been sued for a defect in free, OS software, without an explicit warranty, and lost?

kazinator 2 days ago 3 replies      
These licenses have little flaws upon closer examination. One day I was reading the BSD license closely, in the context of its use in the TXR project, and was astonished to find that it was buggy and required tweaking to make it more internally consistent and true to its intent. I added a METALICENSE document where the changes are detailed:



The main problem is that the original says that both the use and redistribution of the software are permitted provided that the "following conditions are met", which is followed by two numbered conditions, 1 and 2. But the two conditions do not govern use at all; they are purely about redistribution! Rather, the intended legal situation is that use of the software means that the user agrees to the liability and warranty disclaimer (which is not a condition). But the BSD license neglects to say that at all; it says use is subject to the conditions (just that 1 and 2), not to the disclaimer.

reitanqild 1 day ago 1 reply      
This is fantastic IMO. More of this.

I also think there could be room for something similar about code (but I haven't had time to read aosabook.org yet, so maybe that is where I'll find it).

A note on the Crockford joke:

I think it is on IBM, not the lawyers: I believe he describes somewhere the fun of getting a payment from IBM, followed by his sending an additional license entitling IBM to use the software for evil.

vonnik 2 days ago 0 replies      
We've worked with Kyle Mitchell. He's a smart guy.
breakingcups 1 day ago 2 replies      
A very lovely article, I enjoyed it very much since it gives insight into the "syntax" of legal documents in the US.

This article brought up a point I find very interesting. The MIT license (and a bunch of other licenses as well) is very US-oriented in its writing, provisions, etc. I'd love to read a similar article exploring licenses like these from a, say, European point of view. Would the same constructs hold up in a German court, for example? What language is missing or superfluous?

branchly2 2 days ago 0 replies      
This looks excellent, and I'm going to curl up with it and a cup of tea tonight to read it more carefully. Thanks!

Would love to see the author do one for the GPL. I realize the result would be quite a bit longer.

MindTwister 1 day ago 1 reply      
Huh? I submitted this 23 hours ago with the exact same URL. Glad to see some discussion, though.
cpdean 2 days ago 3 replies      
Would someone care to elaborate why the "Good, not Evil" clause in the JSON license is bad?
nickpsecurity 1 day ago 0 replies      
Licenses like MIT and BSD should be avoided due to the patent risk in favor of licenses that explicitly grant patent protection like Apache 2.0. The patent troll risk is just way too high. As rando said, companies like Microsoft are even open-sourcing code while raking in hundreds of millions from patent suits against open-source software. That this is even working for them shows the permissive licenses need to eliminate that tactic entirely.
rando832 2 days ago 2 replies      
cyphar 2 days ago 2 replies      
> The MIT License is the most popular open-source software license.

I'm fairly certain the GPL is still more popular.

Heavy SSD Writes from Firefox servethehome.com
456 points by kungfudoi  1 day ago   326 comments top 45
lighttower 1 day ago 13 replies      
Chrome, on my system, is even more abusive. Watch the size of the .config/google-chrome directory and you'll see that it grows to multi-GB in the profile file.

There is a Linux utility that takes care of all browsers' abuse of your SSD, called profile-sync-daemon (PSD). It's available in the Debian repo, or [1] for Ubuntu, or [2] for source. It uses the `overlay` filesystem to direct all writes to RAM, and only syncs the deltas back to disk every n minutes using rsync. I've been using this for years. You can also manually alleviate some of this by setting up a tmpfs and symlinking .cache to it.

[1] https://launchpad.net/~graysky/+archive/ubuntu/utils[2] https://github.com/graysky2/profile-sync-daemon

EDIT: Add link, grammar

EDIT2: Add link to source
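For readers who want the manual tmpfs-plus-symlink approach mentioned above without installing PSD, here is a minimal sketch. All paths are illustrative, and it relies on /tmp being tmpfs (which it already is on many distros):

```shell
# Manual variant of what profile-sync-daemon automates: point the
# browser's cache directory at a RAM-backed location so cache writes
# never touch the SSD. Paths below are purely illustrative.
CACHE_RAM=/tmp/demo-browser-cache            # RAM-backed location
CACHE_DIR=/tmp/demo-home/.cache/chromium     # where the browser would look

mkdir -p "$CACHE_RAM" "$(dirname "$CACHE_DIR")"
rm -rf "$CACHE_DIR"                 # remove any existing on-disk cache
ln -sfn "$CACHE_RAM" "$CACHE_DIR"   # cache writes now land in RAM

readlink "$CACHE_DIR"               # -> /tmp/demo-browser-cache
```

Unlike PSD, this sketch does not sync anything back to disk, so the cache is simply lost on reboot; for a browser cache that is usually acceptable.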

Yoric 1 day ago 12 replies      
Hi, I'm one of the Firefox developers who was in charge of Session Restore, so I'm one of the culprits behind this heavy SSD I/O. To make a long story short: we are aware of the problem, but fixing it for real requires completely re-architecting Session Restore. That's something we haven't done yet, as Session Restore is rather safety-critical for many users, so this would need to be done very carefully, and with plenty of manpower.

I hope we can get around to doing it someday. Of course, as usual in an open-source project, contributors welcome :)

zbuf 1 day ago 4 replies      
I have been running Firefox for a long time with an LD_PRELOAD wrapper which turns fsync() and sync() into a no-op.

I feel it's a little antisocial for regular desktop apps to assume it's their place to do this.

Chrome is also a culprit; similar syncing caused us problems at my employer, inflating pressure on an NFS server where /home directories are network mounts, even though we had already moved the cache to a local disk.

At the bottom of these sorts of cases I have on more than one occasion found an SQLite database. I can see its benefit as a file format, but I don't think we need full database-like synchronisation on things like cookie updates; I would prefer to lose a few seconds (or minutes) of cookie updates on power loss than over-inflate the I/O requirements.
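For the curious, a shim like the one described above can be just a few lines of C. The file name, build command, and exact set of intercepted calls here are illustrative assumptions, not the commenter's actual wrapper:

```c
/* nosync.c -- sketch of an LD_PRELOAD shim that turns sync calls into
 * no-ops. Because the preloaded object is searched before libc, these
 * definitions shadow the real ones.
 *
 * Build: gcc -shared -fPIC -o nosync.so nosync.c
 * Use:   LD_PRELOAD=$PWD/nosync.so firefox
 */
#define _GNU_SOURCE
#include <unistd.h>

int fsync(int fd)     { (void)fd; return 0; }  /* pretend success */
int fdatasync(int fd) { (void)fd; return 0; }  /* pretend success */
void sync(void)       { /* intentionally do nothing */ }
```

The obvious trade-off: if the machine loses power, anything the application believed was durably on disk may be gone, which is exactly why applications call fsync() in the first place.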

RussianCow 1 day ago 6 replies      
Serious question: Is 12GB a day really going to make a dent in your SSD's lifespan? I was under the impression that, with modern SSDs, you basically didn't have to worry about this stuff.
rayiner 1 day ago 2 replies      
Doing all this work is also probably burning battery life. An SSD can use several watts while writing, versus as low as 30-50 milliwatts at idle (with proper power management).
blinkingled 1 day ago 11 replies      
Even better, just disable session restore entirely via browser.sessionstore.enabled. Since Firefox 3.5 this preference has been superseded by setting browser.sessionstore.max_tabs_undo and browser.sessionstore.max_windows_undo to 0.

As I understand it, this feature is there so that if the browser crashes it can restore your windows and tabs - I don't remember a browser crashing on me since the demise of Flash.
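The prefs mentioned above can also be set from a user.js file in the profile directory. This is a sketch; verify the pref names in about:config for your Firefox version (the sessionstore write interval added at the end is a milder alternative to disabling the feature outright):

```javascript
// user.js sketch -- session-restore prefs discussed above.
// Disable the undo buffers that session restore keeps writing:
user_pref("browser.sessionstore.max_tabs_undo", 0);
user_pref("browser.sessionstore.max_windows_undo", 0);

// Milder alternative: keep session restore, but flush to disk every
// 5 minutes instead of the default 15 seconds (values in milliseconds).
user_pref("browser.sessionstore.interval", 300000);
```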

robin_reala 1 day ago 2 replies      
It's always annoying when an issue like this is reported yet no Bugzilla reports are mentioned. Has anyone else filed this already, or shall I?
Someone 1 day ago 0 replies      
12GB/day is about 140kB/second, or one Apple II floppy disk every second.

It is also about single CD speed (yes, you could almost record uncompressed stereo CD-quality audio around the clock within that amount of data).

All to give you back your session if your web browser crashes or is crashed.

Moore's law at its best.
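A quick back-of-the-envelope check of the comparisons above:

```python
# Sanity-check the 12 GB/day comparisons: sustained rate, Apple II
# floppies per second, and uncompressed CD-audio data rate.
GB = 1000 ** 3
bytes_per_day = 12 * GB
seconds_per_day = 86_400

per_second = bytes_per_day / seconds_per_day   # sustained write rate
apple_ii_floppy = 140 * 1000                   # a 5.25" Disk II disk holds ~140 kB
cd_audio_rate = 44_100 * 2 * 2                 # 44.1 kHz, 16-bit, stereo -> bytes/s

print(round(per_second))                        # ~138,889 bytes/s, i.e. ~140 kB/s
print(round(per_second / apple_ii_floppy, 2))   # ~0.99 floppies per second
print(cd_audio_rate)                            # 176,400 bytes/s -- same ballpark
```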

vesinisa 1 day ago 0 replies      
I've already moved all my browser profiles to `/tmp` and set up boot scripts to persist them across boot/shutdown. E.g. for Arch Linux see https://wiki.archlinux.org/index.php/profile-sync-daemon

This is a far superior solution to fiddling with configuration options in each individual product to avoid wearing down your SSD with constant writes. Murphy's law has it that such hacks will only be frustrated by the next version upgrade.

And no, using Chrome does not help. All browsers that keep a disk cache or complex state on disk are fundamentally heavy on SSD writes. The amount of traffic itself is not even a particularly good measure of SSD wear, since writing a single kilobyte cannot be done at the hardware level without programming a whole flash page, and rewriting data in place eventually forces an erase of a whole erase block, which can be megabytes in size. So changing a single byte in a file can be nearly as taxing as a much larger write.
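As a rough illustration of the worst-case amplification described above (the 1 KB write and 4 MB erase-block size are assumptions for illustration; real drives mitigate this with write caching, wear leveling and TRIM):

```python
# Worst-case write amplification: a tiny logical write forcing a
# rewrite of an entire erase block. Figures are illustrative.
KB = 1024
MB = 1024 * KB

logical_write = 1 * KB        # the application changes 1 KB
erase_block = 4 * MB          # worst case: a whole erase block is rewritten
amplification = erase_block // logical_write

print(amplification)          # 4096x in this worst case
```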

raverbashing 1 day ago 1 reply      
Are these writes being sync'd to disk?

Because FF may die but the OS will save it later. That's fine

Not every write to a file means a write to disk

CoryG89 1 day ago 1 reply      
Maybe I'm not understanding this right, but is this saying that Firefox will continually keep writing to the disk while idle? Does anyone know more about this? Why would this be needed to restore the session/tabs? It seems like it should only write after a user action, or when the open page writes to storage. Even if it were necessary to write continually while idle, how could it possibly produce so much data in such a short period of time?
weatherlight 1 day ago 0 replies      
Spotify does some pretty evil I/O as well. https://community.spotify.com/t5/Desktop-Linux-Windows-Web-P...
towerbabbel 1 day ago 0 replies      
I observed something similar several years ago: http://www.overclockers.com/forums/showthread.php/697061-Whe...

I still think the worry about it wearing out an SSD is overblown. A 20GB-per-day write rating is extremely conservative and mostly there to guard against more pathological use cases: like taking a consumer SSD, using it for some write-heavy database load with 10x+ write amplification, and then demanding a new one under warranty when you wear it out.

Backing up the session is still sequential writes so write amplification is minimal. After discovering the issue I did nothing and just left Firefox there wearing on my SSD. I'll still die of old age before Firefox can wear it out.

Falkon1313 1 day ago 1 reply      
I checked my system - Firefox wasn't writing much, and what it was writing went to my user directory on the hard drive instead of the program directory on the SSD, so that's nice. But still, I don't want my browser cluttering up my drive with unnecessary junk - history, persistent caching from previous sessions, old tracking cookies, never mind a constant backup of the state of everything. I try to turn all that off, but there's always one more hidden thing like this.

If I want to save something, I'll download it. If I want to come back, I'll bookmark it. Other than those two cases and settings changes, all of which are triggered by my explicit choice and action, it really shouldn't be writing/saving/storing anything. It would be nice if there were a lightweight/portable/'clean' option or version.

When I tried Spotify, it was pretty bad about that too - created many gigabytes of junk in the background and never cleaned up after itself. I made a scheduled task to delete it all daily, but eventually just stopped using spotify.

zx2c4 1 day ago 2 replies      
I have fixed this issue forever. I got a Thinkpad P50 with 64 gigs of ram. So, I just mount a tmpfs over ~/.cache.

I actually use a tmpfs for a few things:

  $ grep tmpfs /etc/fstab
  tmpfs  /tmp                tmpfs  nodev,nosuid,mode=1777,noatime                    0 0
  tmpfs  /var/tmp/portage    tmpfs  noatime                                           0 0
  tmpfs  /home/zx2c4/.cache  tmpfs  noatime,nosuid,nodev,uid=1000,gid=1000,mode=0755  0 0

alyandon 1 day ago 0 replies      
Yep, I have a brand new SSD drive that over the course of a few months accumulated several TERAbytes (yes - TERA) of writes directly attributable to the default FF browser session sync interval coupled with the fact I leave it open 24/7 with tons of open tabs.

Once I noticed that excessive writes were occurring, it was easy for me to identify FF as the culprit in Process Hacker but it took much longer to figure out why FF was doing it.

tsukikage 1 day ago 1 reply      
The interesting question here is, why is the browser writing data to disk at this rate?

If it's genuinely receiving new data at this rate, that's kind of concerning for those of us on capped/metered mobile connections. The original article mentions that cookies accounted for the bulk of the writes, which is distressing.

If it's not, using incremental deltas is surely a no-brainer here?

nashashmi 1 day ago 0 replies      
On a related note: also see http://windows7themes.net/en-us/firefox-memory-cache-ssd/

Just another firefox ssd optimization.

Edit: And see bernaerts.dyndns.org/linux/74-ubuntu/212-ubuntu-firefox-tweaks-ssd

It talks about sessionstore.

justinrstout 1 day ago 0 replies      
Theodore Ts'o wrote about a similar Firefox issue back in 2009: https://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/
joosters 1 day ago 2 replies      
Does firefox sync() the data? If not, these continuous overwrites of the same file may not even hit the disk at all, as it could all end up being cached.

Even if some data is being written, it could still be orders of magnitude lower than the writes executed by the program.

There are legitimate pros and cons to using sync() or not. Leaving it out could mean that the file data is lost if your computer crashes, but if Firefox crashes by itself, the data will be safe.

vamur 1 day ago 0 replies      
Using private mode and a RAM disk is a quick solution for this issue. Easy to setup on Linux and there is a free RAM disk utility on Windows as well.
leeoniya 1 day ago 2 replies      
i'm not seeing these numbers, using I/O columns in Process Explorer. i'm running Nightly Portable with maybe 80 tabs open/restored.
Nursie 1 day ago 1 reply      
Firefox has been terrible for disk access for many years. I remember a post-install checklist (which I never actually automated) that I would run through on my Linux boxes back in about 2003; it would cut down on this and speed up the whole system.

Basically chattr +i on a whole bunch of its files and databases, and everything was fine again...

digi_owl 1 day ago 1 reply      
I do wonder if their mobile version has a similar problem. I have noticed it chugs badly when opened for the first time in a while on Android, meaning I have to leave it sitting for a while so it can get things done before I can actually browse anything.
gcb0 1 day ago 0 replies      
> goes to the point of installing weird programs to be "pro" about their ssd life

> failed to read the very first recommendation on every single guide for ssd life: use ram disk cache for browser temp files.

yeah, let's upvote this

waldbeere 1 day ago 0 replies      
Simple solution: change the save interval to 30 s.

Windows file compression: cookies.sqlite => 1 MB => 472 KB; sessionstore-backups => 421 KB => 204 KB.

Move the TMP cache folder to a RAM drive, e.g. ImDisk.

iask 1 day ago 1 reply      
So Firefox is also expensive to run in terms of energy consumption. No wonder the fans on my MacBook Pro always sound like a jet engine whenever I have several tabs open. Seriously!

Disclaimer: I dual boot (camp) windows 7 on my mac.

caiob 1 day ago 0 replies      
That goes to show how space- and memory-hungry and bloated browsers have become.
tesla23 1 day ago 0 replies      
I'm sorry if I'm off base, but it seems not many people know about cache poisoning. I have always kept the suggested settings since the early days of JavaScript.
rsync 1 day ago 0 replies      
I continue to be impressed with the content and community at servethehome - it's slowly migrated its way into my daily browsing list.
Animats 1 day ago 2 replies      
Firefox is relying too much on session restore to deal with bugs in their code. Firefox needs to crash less. With all the effort going into multiprocess Firefox, Rust, and Servo, it should be possible to have one page abort without taking down the whole browser. About half the time, session restore can't restore the page that crashed Firefox anyway.
Freestyler_3 1 day ago 0 replies      
I use Opera on windows, No idea how to check or change the session storage interval.

Anyone got ideas on that?

HorizonXP 1 day ago 5 replies      
Wow, that's really unfortunate.

I just built a new PC with SSDs, and switched back to Firefox. Even with 16GB of RAM on an i3-2120, Firefox still hiccups and lags when I open new tabs or try to scroll.

This new issue of it prematurely wearing out my SSDs will just push me to Chrome. Hopefully it doesn't have the same issues.

Sami_Lehtinen 1 day ago 1 reply      
uBlock also keeps writing "hit counts" to disk all the time; and for some strange reason they've chosen a database page size of 32k, so each update writes at least 32 kB.
yashafromrussia 1 day ago 0 replies      
Sounds sweet, I'll try it out. How does it compare to ack (ack-grep)?
bikamonki 1 day ago 0 replies      
Can this be avoided in FF and Chrome with private tabs?
falsedan 1 day ago 0 replies      
Using a display font for body text--
aylons 1 day ago 2 replies      
On Linux, where does this get written? Inside the home folder?

Maybe moving that folder to an HDD would suffice.

known 21 hours ago 0 replies      
rasz_pl 1 day ago 0 replies      
For comparison ancient Opera Presto stores about 500 bytes per tab in Session file.
amq 1 day ago 0 replies      
Observed similar behavior with Skype.
PaulHoule 1 day ago 0 replies      
The whole "restore your session" thing is the one of the most user hostile behaviors there is.
kordless 1 day ago 1 reply      
I seriously dislike Firefox, but must use it at work due to browser incompatibility issues with Chrome and sites I use heavily. Anything that makes the experience better is much appreciated.
rackforms 1 day ago 3 replies      
Putting aside that this may not be all that bad for most SSDs, does anyone know when this behavior started?

Firefox really started to annoy me with its constant and needless updates a few months back; the tipping point was breaking almost all legacy extensions (in 46, I believe). This totally broke the Zend Debugger extension, and the only way forward would be to totally change my development environment. I'm 38 now, and apparently well beyond the days when the "new and shiny" holds value. These days I just want stability and reliability.

Firefox keeps charging forward and, as far as I can tell, has brought nothing to the table except new security issues and breaking that which once worked.

I haven't updated since 41 and you know what, it's nearly perfect. It's fast, does what I need it to do, and just plain old works.

Firefox appears to have become a perfect example of developing for the sake of it.

The GitHub Load Balancer githubengineering.com
430 points by logicalstack  2 days ago   121 comments top 20
NicoJuicy 2 days ago 0 replies      
I notice a lot of negativity around here. Don't know why that is, but I'll give my five cents on it.

NIH - Not invented here and redoing an opensource project.

- GitHub said they used HAProxy before; I think GitHub's use case could very well be unique, so they created something that works best for them rather than re-engineering an entire code base. When you work on small projects, you can send a merge request to make changes; I think this is something bigger than just a small bugfix ;). I totally understand them creating something new here.

- They built on a number of open-source projects, including haproxy, iptables, FoU and pf_ring. That is what open source is: use open source to create what suits you best. Every company has some edge cases, and I have no doubt that GitHub has a lot of them ;)


Thanks, GitHub, for sharing; I'll follow up on your posts and hope to learn a couple of new things ;)

otoburb 2 days ago 2 replies      
Given this is based on HAProxy and seems to improve the director tier of a typical L4/L7 split design, I'm led to believe GLB is an improved TCP-only load balancer.

But they also talk about DNS queries, which are still mainly UDP53, so I'm hoping GLB will have UDP load-balancing capability as gravy on top. I excluded zone transfers, DNSSEC traffic or (growing) IPv6 DNS requests on TCP53 because, at least in carrier networks, we're still seeing a tonne of DNS traffic that still fits within plain old 512-byte UDP packets.

Looking forward to seeing how this develops.

EDIT: Terrible wording on my part to imply that GLB is based off of HAProxy code. I meant to convey that GLB seems to have been designed with deep experience working with HAProxy as evidenced by the quote: "Traditionally we scaled this vertically, running a small set of very large machines running haproxy [...]".

jimjag 2 days ago 13 replies      
I am increasingly bothered by the "not invented here" syndrome where instead of taking existing projects and enhancing them, in true open source fashion, people instead re-create from scratch.

It is then justified that their creation is needed because "no one else has these kinds of problems", but then they open source it as if lots of other people could benefit from it. Why open source something if it has an expected user base of 1?

Again, I am not surprised by this. The whole push of GitHub is not to create a community which works together on a single project in a collaborative, consensus-based way, but rather lots of people doing their own thing and only occasionally sharing code. It is no wonder that they follow this meme internally.

Scaevolus 2 days ago 0 replies      
gwright 2 days ago 0 replies      
While I understand that NIH syndrome is a real thing, it is very disappointing to read many of the comments here.

I think very few HN readers are really in a position to have an informed opinion regarding GitHub's decision to build a new piece of software rather than using an existing system.

Personally I find this area quite interesting to read about because it is very difficult to build highly available, scalable, and resilient network service endpoints. Plain old TCP/IP isn't really up to the job. Dealing with this without any cooperation from the client side of the connection adds to the difficulty.

I look forward to hearing more about GLB.

Ianvdl 2 days ago 4 replies      
Given the title and the length of the post I was expecting a lot more detail.

> Over the last year weve developed our new load balancer, called GLB (GitHub Load Balancer). Today, and over the next few weeks, we will be sharing the design and releasing its components as open source software.

Is it common practice to do this? Most recent software/framework/service announcements I've read were just a single, longer post with all the details and (where applicable) source code. The only exception I can think of is the Windows Subsystem for Linux (WSL) which was discussed over multiple posts.

gumby 2 days ago 3 replies      
They talk about running on "bare metal" but when I followed that link it looked like they were simply running under Ubuntu. Is it so much a given that everything is going to be virtualized?

When I think of "bare metal" I think of a single image with disk management, network stack, and what few services they want all running in supervisory mode. Basically the architecture of an embedded system.

p1mrx 2 days ago 0 replies      
GitHub only speaks IPv4, so I would be extra-skeptical about using any of their networking code to support a modern service.
NatW 2 days ago 1 reply      
I'm curious if they looked into pf / CARP as part of their research into allowing horizontal scalability for an ip. See: https://www.openbsd.org/faq/pf/carp.html
yladiz 1 day ago 0 replies      
I'm of two minds about this. Part of me agrees with many of the commenters here, in that Not Invented Here syndrome was probably in effect during the development of this. I don't really know Github's specific use case, and I don't know the various open source load balancers outside of Haproxy and Nginx, but I would be surprised if their use case hasn't been seen before and can be handled with the current software (with some modification, pull requests, etc.). On the other hand, I would guess Github would research into all of this, contact knowledgeable people in the business, and explore their options before spending resources on making an entirely new load balancer. Maybe it really is difficult to horizontally scale load balancing, or load balance on "commodity hardware".

That being said, why introduce a new piece of technology without actually releasing it if you're planning to release it, without giving a firm deadline? This isn't a press release, this is a blog post describing the technical details of the load balancer that is apparently already in production and working, so why not release the source when the technology is introduced?

jedberg 2 days ago 0 replies      
Awesome. The whole time I was reading I was thinking "they need Rendezvous hashing". And then bam, last paragraph mentions that is in fact what they are using.
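For readers unfamiliar with it, here is a minimal sketch of rendezvous (highest-random-weight) hashing. The server names and hash choice are illustrative; GitHub's actual implementation is surely more involved:

```python
# Rendezvous hashing: each key goes to the server with the highest
# hash(key, server) score. Removing one server only remaps the keys
# that were assigned to it -- every other key's top-scoring server
# is unchanged.
import hashlib

def rendezvous_pick(key, servers):
    def score(server):
        digest = hashlib.md5(f"{key}:{server}".encode()).hexdigest()
        return int(digest, 16)
    return max(servers, key=score)

servers = ["proxy-a", "proxy-b", "proxy-c"]
print(rendezvous_pick("client-203.0.113.7", servers))  # deterministic pick
```

The stability property falls out of the max: if a key's best server is still in the set, it is still the max, so draining or losing one director only disturbs that director's share of connections.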
treve 2 days ago 1 reply      
I half expect a comment here explaining why Gitlab does it better ;)
lamontcg 1 day ago 0 replies      
Why not just use DNS load balancing over VIPs served by HA pairs of load balancers?

Back in the day we did this with Netscalers doing L7 load balancing in clusters, and then Cisco Distributed Directors doing DNS load balancing across those clusters.

It can take days/weeks to bleed off connections from a VIP that is in the DNS load balancing, but since you've got an H/A pair of load balancers on every VIP you can fail over and fail back across each pair to do routine maintenance.

That worked acceptably for a company with a $10B stock valuation at the time.

contingencies 1 day ago 0 replies      
I am intrigued by their opening statement of multiple POPs, but the lack of multi-POP discussion further in the system description.

My understanding is that the likes of, for example, Cloudflare or EC2 have a pretty solid system in place for issuing geoDNS responses (historical latency/bandwidth, ASN or geolocation based DNS responses) to direct random internet clients to a nearby POP. Building such a system is not that difficult, I am fairly confident many of us could do so given some time and hardware funding.

Observation #1: No geoDNS strategy.

Observation #2: Limited global POPs.

Given that the inherently distributed nature of git probably makes providing a multi-pop experience easier than for other companies, I wonder why Github's architecture does not appear to have this licked. Is this a case of missing the forest for the trees?

madmulita 1 day ago 0 replies      
We are in the process of moving all of our infrastructure to OpenStack, OpenShift, Ansible, DevOps, Microservices, Docker, Agile, SDN and what not.

There are some brainiacs pushing these magic solutions on us and one of the promises is load balancing is not an issue, even better, it's not even being talked about.

Please, please, tell me there's something I'm missing.

squiguy7 2 days ago 0 replies      
I know they mentioned their SYN flood tool but I recently saw a similar project from a hosting provider and thought it was neat [1]. It seems like everyone wants their own solution to this when it is a very common and non-trivial problem.

[1]: https://github.com/LTD-Beget/syncookied

lifeisstillgood 2 days ago 6 replies      
I love using GitHub and appreciate the impact it has had. But this post is what is wrong with the web today. They have taken a technology that is distributed at its plumbing, and centralised it so much that now we need to innovate new load-balancing mechanisms.

Years ago I worked at Demon Internet and we tried to give every dial up user a piece of webspace - just a disk always connected. Almost no one ever used them. But it is what the web is for. Storing your Facebook posts and your git pushes and everything else.

No load balancing needed because almost no one reads each repo.

The problem is it is easier to drain each of my different things into globally centralised locations, easier for me to just load it up on GitHub than keep my own repo on my cloud server. Easier to post on Facebook than publish myself.

But it is beginning to creak. GitHub faces scaling challenges. I am frustrated that some people are on WhatsApp and some on Slack and some on Telegram, and I cannot track who is talking to me.

The web is not meant to be used like this. And it is beginning to show.

bogomipz 2 days ago 1 reply      
Do the Directors use Anycast then? That wasn't clear to me.
tadelle 2 days ago 1 reply      
alsadi 2 days ago 0 replies      
I never liked GitHub's approach; they always use larger hammers
Google's lawyers are asking to find Oracle's lawyers in contempt of court vice.com
383 points by ivank  1 day ago   135 comments top 19
grellas 1 day ago 6 replies      
It is huge that a lawyer would disclose in a public setting such important confidential numbers. I even have trouble seeing how something like that could be "accidental". It is basically a force of habit among experienced litigators to think and to say, in any number of contexts, "I know this may be relevant but I can't discuss it because it is the subject of a protective order" or "I know the attorneys know this information but it was disclosed under the protective order as being marked for 'attorneys' eyes only'". In all my years of litigating, I don't believe I have ever heard a casual slip on such information, even in otherwise private contexts (e.g., attorneys are discussing with their own client what an adverse party disclosed and are very careful not to disclose something marked for "attorneys' eyes only"). Certainly willful disclosures of this type can even get you disbarred.

But the significance of this breach is not the only thing that caught my eye.

These litigants have been entrenched in scorched-earth litigation for years now in which the working M.O. for both sides is to concede nothing and make everything the subject of endless dispute. Big firm litigators will often do this. It is a great way to rack up bills. Clients in these contexts do not oppose it and very often demand it. And so a lot of wasteful lawyering happens just because everyone understands that this is an all-out war.

To me, then, it seems that the big problem here (in addition to the improper disclosures of highly important confidential information in a public court hearing) was the resistance by the lawyers who did this to simply acknowledging that a big problem existed that required them to stipulate to getting the transcript sealed immediately. Had they done so, it seems the information would never have made the headlines. Instead (and I am sure because it had become the pattern in the case), they could not reach this simple agreement with the other lawyers to deal with the problem but had to find grounds to resist and fight over it.

I know that we as outside observers have limited information upon which to make an assessment here and so the only thing we can truly say from our perspective is "who knows". Yet, if the surface facts reflect the reality, then it is scarcely believable that the lawyers could have so lost perspective as to take this issue to the mat, resulting in such damage to a party. Assuming the facts are as they appear on the surface, this would be very serious misconduct and I can see why Judge Alsup is really mad that it happened.

mmastrac 1 day ago 3 replies      
While this is a good story, the headline entirely misses the point that the body makes - the only reason this is an open secret is because an Oracle lawyer revealed it in public.

A better title might be:

"Google is trying to get Oracle in trouble for revealing confidential figures"

nkurz 1 day ago 4 replies      
As background, this opinion piece by the lawyer in question may be useful in understanding the mindset of the players. Hurst argues that because APIs are not copyrightable, the GPL is dead and Oracle's valiant attempts to defend free software have been foiled:

The Death of "Free" Software . . . or How Google Killed GPL, by Annette Hurst (@divaesq)

The developer community may be celebrating today what it perceives as a victory in Oracle v. Google. Google won a verdict that an unauthorized, commercial, competitive, harmful use of software in billions of products is fair use. No copyright expert would have ever predicted such a use would be considered fair. Before celebrating, developers should take a closer look. Not only will creators everywhere suffer from this decision if it remains intact, but the free software movement itself now faces substantial jeopardy.



This wasn't an accidental "slip" by a poorly trained intern. This was a conscious disclosure made by one of Oracle's lead attorneys. She is one of the top IP lawyers in the nation: https://www.orrick.com/People/2/6/2/Annette-Hurst. It is in keeping with the "scorched earth" strategy that has been followed for this case. She knew what she was doing, and she (and her firm) should pay the consequences. If there are no consequences, it will legitimize and reward this strategy.

nikic 1 day ago 1 reply      
This article reads very weirdly to me. Are they arguing that disclosing confidential information, and subsequently opposing steps to contain the disclosed information, is perfectly fine because ... it can be found on the internet, precisely because of this disclosure? This makes absolutely no sense to me.
segmondy 1 day ago 2 replies      
Oracle should pay; they knew exactly what they were doing. If it were them, they would be suing too. Live by the sword, die by the sword.
balabaster 1 day ago 1 reply      
Having read this article it reminds me somewhat of tactics in movies where lawyers deliberately ask an inflammatory question in front of a jury purely for the purpose of planting a seed, and before anyone can yell objection they immediately retract knowing that the damage has been done. The judge may strike it from the record, the judge may tell the jury to disregard it, but you can't unthink or unhear something that's been said. The bell has already been rung.

I don't (or can't, I'm unsure) believe that lawyers of this caliber make mistakes like this. So what was her play by doing this? Did it pay off?

yongjik 1 day ago 3 replies      
Off-topic, but I find it strange that money in the order of $1B can change hands between two mega-corporations without anyone outside having an inkling, while I could find websites saying exactly how much a low-level government worker earns in a social services center in my county. (Spoiler: much less than I used to earn as developer.)

Shouldn't the structure of accountability be in the other direction?

edgesrazor 1 day ago 2 replies      
Off topic: I may be old and cranky, but I simply can't stand articles with animated gifs - it just seems ridiculously unprofessional.
bitmapbrother 1 day ago 1 reply      
Regardless of the outcome her career in litigating high profile cases is pretty much over. You simply do not utter highly confidential company information accidentally. It was intentional and it was done to paint a picture to the jury about how much money Google was making from Android and what it was paying Apple.
b1daly 18 hours ago 1 reply      
Slightly off topic, but I've always had a hard time wrapping my head around the stance that somehow an API is distinct from code. I understand that it's an abstraction in programming, and that industry practice has been that it's acceptable to take an existing API that you didn't create and write a new implementation.

But since the API is "implemented" in code, it seems like for the purpose of copyright consideration that the distinction is simply one of custom.

It's a programming abstraction; to create your own "implementation" of the API you still have to use code that is identical to the original.

Alsup's original, overturned, ruling was that as a matter of law APIs couldn't be copyrighted because they express an idea that can only be expressed exactly that way, and traditionally this would not be allowed (you can't copyright an idea). As I understood it, his concept implied that to get IP protection over an API would require something more like patent protection. (I might be totally wrong on this.)

wfunction 1 day ago 1 reply      
As someone who knows zilch about business, I don't quite understand why people knowing these numbers is so devastating. What will another company do with these two numbers that it otherwise wouldn't do?
1024core 1 day ago 1 reply      
> Oracle attorney Melinda Haag

God I hate that woman. When she was a US Attorney for SF, she went around and threatened to seize buildings where medical cannabis dispensaries were located, in full compliance with local laws. Because she couldn't do anything to the dispensaries directly, she threatened their landlords. This was after Obama had said that the DoJ would not interfere with dispensaries which were operating within state laws.

AceJohnny2 1 day ago 0 replies      
If a lawsuit of this scale can be considered the corporate equivalent of war, contempt of court is equivalent to being declared a war criminal.
JadeNB 1 day ago 2 replies      
The judge tried to convey the gravity of this revelation by comparing it to that of the most secret thing he could imagine:

> If she had had the recipe for Coca-Cola she could have blurted it out in this court right now.


EDIT: I wasn't trying to be snarky or silly, just pointing out an aspect of the story that struck me as funny. Serious request: if that is inappropriate, please let me know rather than just silently downvoting. In that case, I apologise and will delete the post.

joering2 1 day ago 0 replies      
"... or Robin Thicke being forced to plunge his own toilet."

Can someone explain this one to me?

c3534l 1 day ago 0 replies      
How can a public corporation keep those two numbers secret? Those are basic cost and revenue numbers that should be disclosed in their annual financial statements. The fact that it's legal to keep those numbers secret means there's something very wrong with how we do financial disclosure in America.
suyash 1 day ago 1 reply      
swehner 1 day ago 0 replies      
Why now? The blurting happened in January.
ocdtrekkie 1 day ago 3 replies      
If anything, my only sadness is that more of Google's dirty laundry wasn't aired. That Google search is winning because people prefer it, and that Google doesn't make money on Android, are both claims I'm happy to see debunked. Google's anti-monopoly claims fundamentally hinge on concepts like these.

And if a lawyer did break the law by doing it, I say she belongs on the same high pedestal people put Snowden on.

A Digital Rumor Should Never Lead to a Police Raid eff.org
344 points by dwaxe  2 days ago   163 comments top 20
danso 2 days ago 8 replies      
FWIW, the prospect of being suspected and questioned (but not necessarily raided) because of your IP location is one of the best metaphors to relate what it's like as a minority to be searched just because you are of the same race as a suspect in a nearby active case.

It is perfectly logical to say that if there was an assault on a college campus and that the victim said the perp is an "Asian male", for the police to not prioritize the questioning of all non-Asians in the area. And if the report was made within minutes of the incident and the suspect is on foot, it may be justifiable to target the 5 Asian males loitering around rather than the 95 people of other demographics. What logical person would argue otherwise?

But the problem creep comes in the many, many cases when police don't have a threshold for how long and wide that demographic descriptor should be used. Within 1000 feet of the reported attack? A mile? Why not 2 miles? And why not 2 days or even 2 weeks after the incident, just to be safe?

The main difference in the ISP/IP metaphor is that in the digital world, it's possible to imagine search-and-question tactics that aren't time-consuming for the police or for the suspect. Hell, the suspect might not even know their internet-records were under any suspicion. OTOH, there are definitely real-world places in which for the police (and their community and most specifically the politicians), hand-cuffing and patting someone down has been so streamlined and accepted by the powers-that-be that it isn't a bother for them (the police) either.

edit: To clarify, I don't mean to get in the very wide debate on racial profiling, etc. But when I worked at a newspaper, we had a policy to not mention race unless the police could provide 4 or 5 other identifiers. That led to readers cussing us out because, they'd argue, knowing that the suspect was black is better than nothing. My point here is that sometimes, nothing is not always better than something, and that is most explicitly clear when it comes to broad IP range searches.

soylentcola 2 days ago 3 replies      
A similar example, while not a raid, hit me closer to home a bit over a year ago.

I'm sure that if you follow US news at all, you heard about the looting and arson in Baltimore in the Spring of 2015. While the city was on edge in the wake of a citizen's death in police custody, there had already been some minor demonstrations and a brawl between protesters, baseball fans, and provocateurs downtown earlier in the month.

Then, on the day of the funeral held for the man killed in custody, word started to spread of plans for some sort of riot or mass havoc being planned later in the day. Later, authorities pointed to a digital "flyer" being passed around yet nobody investigating this outside of the police has found any source or initial copy of this flyer that dates before this was published in the media. Trust me, we looked.

In response to this alleged threat to public order, cops with riot gear and a freaking mini-tank showed up at a major public transit hub right as school let out. Transit was shut down, and everyone was corralled into a small area next to a busy street, without a way home for hours.

Eventually, tensions got high enough that when the first pissed off teenager or whoever chucked a bottle or a rock, it didn't take long for others to join in. In the ensuing vandalism and arson, hundreds of thousands in damage was caused, people got hurt, the city was put under curfew for a week, and to this day, businesses and residents have suffered from the reputation gained (worsened?) that day.

Looking back, the part that really sticks out to me is how the whole thing was triggered (assuming you don't think it was a deliberate provocation) by some "social media flyer" that claimed some teens were planning to run around starting shit after school. This rumor summoned riot police, shut down transit, and left loads of adults and teens stranded alongside the road, facing down a phalanx of police plus one armored tactical vehicle.

Would those shops and homes still been damaged or those stores been looted and burned in a wave of unrest without this rumor-inspired flashpoint? No idea. But it sure didn't help.

dtnewman 2 days ago 2 replies      
It starts off saying:

> If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional... Yet EFF has found that police and courts are regularly conducting and approving raids based on the similar type of unreliable digital evidence: Internet Protocol (IP) address information.

I'm not sure that these two are equivalent. A better example would be the police raiding my home based on an illegal phone call that came from my phone number. Sure, the fact that it comes from my phone number doesn't mean I did it, but it's certainly evidence that points to me, just as an IP address can be.

In general, the summary linked to above makes it sound like police should never use IP addresses. To be fair, if you read the whitepaper itself, it doesn't say this, but rather that police should be _careful_ in how they use IP addresses. Specifically, it recommends that police "conduct additional investigation to verify and corroborate the physical location of a particular device connected to the Internet whenever police have information about an IP address physical location, and providing that information to the court with the warrant application".

pmoriarty 2 days ago 1 reply      
In the 1980's, some powerful senator's cell phone was snooped on, resulting in a major scandal when the contents of his phone calls were revealed in the press.

This resulted in Congress passing laws that made it illegal for radios to be capable of listening in on cell phone frequencies or being easily modified to allow them to do so.

It is likely that only similar widely publicized embarrassments and privacy violations of the rich and powerful will result in any meaningful legislative attempts to curtail the growth of the police state in the United States.

They clearly don't intend to do much about it unless they themselves are the victims of such abuses of power. As long as it's just "nobodies" or social or political outcasts who are the victims of the police and surveillance apparatus, it's doubtful that much will change.

eth0up 2 days ago 1 reply      
A few more examples of botched attempts at IP-based raids:


The one I'm familiar with is the Sarasota, FL incident, where a married couple was raided in the middle of the night in response to alleged child pornography. Their unit was in a condominium, practically on the edge of Sarasota bay, where various boats moor and dock. After further investigation, it was discovered that the traffic had originated from some guy in a boat using a high gain antenna. If I remember correctly, he had cracked their WEP key and illegally accessed their network to obtain nasty images, lots of them. The insecurity of WEP has been known about for a long time, presumably by LE too.

It is conjecture on my part, but a few things come to mind regarding alternative methods of investigation that may have avoided this. 1. Contact the ISP first (in this case I think it may have been Verizon). I remember Verizon having the ability to remotely reset router passwords, which possibly suggests the ability to remotely view associated client data, e.g. MAC addresses and hostnames and maybe even OS. This may have provided valuable clues. 2. Note the protocol used by the wireless router. 3. Wardrive a bit. 4. Maybe check for logs of any accounts the boat guy logged into while on their network.

Regardless, the raid was botched and pretty traumatic for the couple, considering they were operating a legal AP probably secured with what they thought was adequate encryption. At the time of this event, WEP was standard default, straight from the ISP. They'd done nothing wrong.

More info: http://www.heraldtribune.com/news/20110131/wireless-router-h...

rayiner 2 days ago 4 replies      
Not great to start an article off with sloppy reasoning:

> If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional.

> Yet EFF has found that police and courts are regularly conducting and approving raids based on the similar type of unreliable digital evidence: Internet Protocol (IP) address information.

When police go after an IP address, it happens after there is evidence linking it to some crime. That makes the situation wholly unlike an anonymous phone call, where there is no evidence a crime has even been committed, and where the identifying information itself is trivial to falsify.

Also, IP addresses give a lot more information than the article implies. Especially these days now that everyone has a home router that probably keeps the same IP address for weeks at a time if not months. Not enough to trigger a police raid, of course (if we want to argue that the police have too low a standard of evidence for initiating a raid, I agree) but it's probably a good lead to go on in the common case.

EDIT: I don't disagree with the rest of the article.

eth0up 2 days ago 1 reply      
pjc50 2 days ago 1 reply      
"If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional"

I thought that was how SWATting worked - anonymous denunciation by untraceable phone call?

s_q_b 2 days ago 0 replies      
If the use of IP addresses in this manner disturbs you, you should look into the proposed changes to Federal Rule of Criminal Procedure 41.

This is the EFF's article, which is either highly overzealous or highly prescient: https://www.eff.org/deeplinks/2016/04/rule-41-little-known-c...

stronglikedan 2 days ago 1 reply      
> If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional.

But they do this all the time, especially in low income areas. They just don't call it a raid. They call it a "welfare check".

xienze 2 days ago 3 replies      
> Put simply: there is no uniform way to systematically map physical locations based on IP addresses or create a phone book to lookup users of particular IP addresses.

Maybe today, but when we have wide deployment of IPv6 (heh), won't ISPs do away with NATing and give everyone their own block of IPs? Then I would think you could reliably tie a person to an IP address as long as the ISP cooperates.

vorotato 2 days ago 0 replies      
Otherwise the police become the weapons of criminals, which is, of course, backwards.
coldcode 2 days ago 0 replies      
(1) It's unreliable. (2) It's unconstitutional, assuming judges agree. (3) It's expensive if you screw it up: people die, lawsuits, embarrassment. All of which is unlikely to change behavior unless everyone agrees.
bootload 2 days ago 0 replies      
"A call is an unknown source, talking about unreliable information, about a location. It is NEVER to be trusted NEVER...." -- Michael A. Wood Jr

An unverified call can never be trusted. Read the whole twitter thread by ex-BPD, USMC (Retd.) Michael A. Wood Jr. [0] to understand why.

[0] https://twitter.com/MichaelAWoodJr/status/778813281376931840

nv-vn 2 days ago 0 replies      
>If police raided a home based only on an anonymous phone call claiming residents broke the law, it would be clearly unconstitutional.

Isn't that exactly what happens when you get SWATted?

throwaway92314 2 days ago 0 replies      
I'll just point this out here. Reena Virk started as a rumour going around in schools, until her body was found eight days later. A little bit of prudence is necessary, but don't discount rumours out of hand.


PaulHoule 2 days ago 0 replies      
It's as much a "law and order" issue as it is a civil rights issue.

Cops have limited resources to deal with a number of problems and if they don't have the training and procedures to use internet evidence they are going to waste those resources tracking down stolen cars, child porn and whatever in the wrong places.

rocky1138 2 days ago 3 replies      
Why don't we just regulate any Internet-connected device? When you purchase one, you register your name and address and are given the IP address in return.

Then, we can simply look up the physical address of the IP address holder.

marcoperaza 2 days ago 2 replies      
>Law enforcement's over-reliance on the technology is a product of police and courts not understanding the limitations of both IP addresses and the tools used to link the IP address with a person or a physical location.

You can most certainly narrow down an IP address to a particular ISP customer. Is it possible that they have an open wifi? Yes. Is it possible to narrow it down to a single member of the household? Depends! Is it possible that a computer at the destination is being used as a proxy by the real attacker? Yes! But it's certainly not the black box that the EFF is trying to portray it as.

It's totally appropriate to execute a search warrant based on IP logs. A search warrant doesn't mean that any particular person is guilty, just that there is probable cause that there is information about a crime at a certain location.

matt_wulfeck 2 days ago 1 reply      
> IP address information was designed to route traffic on the Internet, not serve as an identifier for other purposes.

I think you're going to have a hard time here convincing a jury or judge with this argument. In general, LE isn't concerned with the intention behind what an IP address was meant for. At least with today's ISPs, an IP address can be a reasonable approximation of a person or persons.

Upgrade your SSH keys g3rt.nl
391 points by mariusavram  1 day ago   131 comments top 26
developer2 22 hours ago 4 replies      
Seriously, the default options to ssh-keygen should be all anybody needs. If you need to pass arguments to increase the security of the generated key, then the software has completely failed its purpose. Passing arguments should only be for falling back on less secure options, if there is some limiting factor for a particular deployment.

There is absolutely no reason to pass arguments to ssh-keygen. If it is actually deemed necessary to do so, then that package's installation is inexcusably broken.
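For reference, the explicit invocation people pass arguments for is short anyway; a minimal sketch (file name and comment are illustrative, and -N '' skips the passphrase only so the demo is non-interactive -- use a real passphrase in practice):

```shell
# Generate an Ed25519 key; -a sets the number of KDF rounds that
# protect the private key file at rest against offline cracking.
ssh-keygen -t ed25519 -a 100 -f ./demo_ed25519 -N '' -C 'demo key'

# Show the fingerprint and key type of what was produced.
ssh-keygen -l -f ./demo_ed25519.pub
```

With ed25519 there is no key-size knob to get wrong, which is part of the argument in the article.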

tete 18 hours ago 2 replies      
Something I don't understand is the "hate" that RSA gets. Yeah, Elliptic Curves are promising, have benefits (smaller/faster).

But RSA isn't broken, it is well understood, is "boring" (a plus for security, usually), has bigger bit sizes (according to people that know a lot more than me, that's a plus point, regardless of EC requiring smaller ones, because of certain attacks), isn't hyped and sponsored by the NSA, and isn't considered a bad choice by experts.

Not too many years ago Bruce Schneier was skeptical about EC, because of the NSA pushing for it. Now, I also trust djb and I am sure that ed25519 is a good cipher, and there are many projects, like Tor, that actually benefit from it, increasing throughput, etc., but for most use cases of SSH that might not be the issue, nor the bottleneck.

So from my naive, inexperienced point of view RSA might seem the more conservative option. And if I was worried about security I'd increase the bit size.

Am I going wrong here?

matt_wulfeck 1 day ago 3 replies      
I disagree with the author. Before you go upgrading to ed25519, beware that the NSA/NIST is moving away from elliptic curve cryptography because it's very vulnerable to cracking with quantum attacks[0].

"So let me spell this out: despite the fact that quantum computers seem to be a long ways off and reasonable quantum-resistant replacement algorithms are nowhere to be seen, NSA decided to make this announcement publicly and not quietly behind the scenes. Weirder still, if you havent yet upgraded to Suite B, you are now being urged not to. In practice, that means some firms will stay with algorithms like RSA rather than transitioning to ECC at all. And RSA is also vulnerable to quantum attacks."

Stick with the battle-tested RSA keys, which are susceptible but not as much as ECC crypto. Use 4096 or even better 8192-bit lengths.

There's no perceptible user benefits to using ed25519 and it's not even supported everywhere. Also you won't have to rotate all of your keys when workable quantum computers start crackin' everything.

[0] https://blog.cryptographyengineering.com/2015/10/22/a-riddle...

Achshar 1 day ago 4 replies      
Noob question here, why move just one step ahead. Why not 8192 or hell 16,384? I can see it can lead to higher CPU consumption on often used keys but for keys that are not accessed more than a couple of times a day, why is it such a bad idea to overdo it?
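One concrete answer: RSA private-key operations scale roughly with the cube of the modulus size, so each doubling costs several times more CPU per signature and per handshake. You can measure this yourself (an illustrative benchmark; absolute numbers vary by machine):

```shell
# Benchmark RSA sign/verify throughput at two key sizes; compare the
# signs-per-second columns to see the cost of the larger modulus.
openssl speed rsa2048 rsa4096
```

Key generation time also blows up: an 8192- or 16384-bit key can take minutes to generate, for a security margin that is mostly theoretical.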
brandmeyer 1 day ago 2 replies      
If you have servers too old to work with the latest keys, you can easily modify your ~/.ssh/config to automatically use a per-machine private key file:

  Host foo.example.com
      IdentityFile ~/.ssh/my_obsolete_private_keyfile

LeoPanthera 1 day ago 4 replies      
Can someone explain to me why RSA 2048 is "recommended to change"? It's still the default for gpg keys and as far as I know is widely thought to be secure for at least few hundred years!
jkirsteins 17 hours ago 1 reply      
Can anybody elaborate on the idea that for RSA <=2048 is potentially unsafe? Is it true? It seems that even 1024 bit keys haven't been factored yet, much less 2048, so why use anything else currently?


loeg 1 day ago 4 replies      
RSA 2048 is still the openssh default, i.e., best current advice from the openssh authors. The fact that this article's author labels that as "yellow" is a red flag.
morecoffee 1 day ago 0 replies      
Ed25519 is fast, but I don't think the speed is significantly faster to be an argument for using it. Running the boringssl speed tool on a Skylake mobile processor:

  Did 1083 RSA 2048 signing operations in 1017532us (1064.3 ops/sec)
  Did 29000 RSA 2048 verify operations in 1016092us (28540.7 ops/sec)
  Did 1440 RSA 2048 (3 prime, e=3) signing operations in 1016334us (1416.9 ops/sec)
  Did 50000 RSA 2048 (3 prime, e=3) verify operations in 1014778us (49271.9 ops/sec)
  Did 152 RSA 4096 signing operations in 1000271us (152.0 ops/sec)
  Did 8974 RSA 4096 verify operations in 1076287us (8337.9 ops/sec)
  ...
  Did 6720 Ed25519 key generation operations in 1029483us (6527.5 ops/sec)
  Did 6832 Ed25519 signing operations in 1058007us (6457.4 ops/sec)
  Did 3120 Ed25519 verify operations in 1053982us (2960.2 ops/sec)
RSA key verification is still extremely fast.

(also don't look at these numbers purely as speed, but as CPU time spent)

eatbitseveryday 8 hours ago 0 replies      
Maybe a more technical/comprehensive read is this[1] writeup, which I see some others have linked to. Prior HN[2].

[1] https://stribika.github.io/2015/01/04/secure-secure-shell.ht...

[2] https://news.ycombinator.com/item?id=8843994

katzgrau 1 day ago 0 replies      
Security is not my specialty, but I obviously wade in this field, being a developer. Having read this article I will say this to OP and the author:

Thank you; I am sufficiently paranoid to change my keys now.

jlgaddis 1 day ago 1 reply      
If you have any RHEL machines, you might wanna keep an RSA (or ECDSA) key around. RHEL doesn't support Ed25519.

I haven't checked, but I presume this also goes for CentOS, Scientific Linux, and other derivatives.
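A quick way to check whether a given box's OpenSSH build supports it (the -Q query flag is available in reasonably recent OpenSSH, 6.3+):

```shell
# List the key types this OpenSSH client was built with;
# ssh-ed25519 will be absent on builds without Ed25519 support.
ssh -Q key
```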

perlgeek 17 hours ago 0 replies      
So I once read somewhere that RSA is simpler to implement than most other algorithms, and hence it's a safer choice, because weaknesses typically come from suboptimal implementations more than from the cryptographic algorithm itself (unless you use known-broken things like MD5 or 3DES).

And I think that was in the context of some DSA or ECDSA weakness, possibly a side channel attack or something similar. I forgot the details :(

What are your thoughts on this? Should we focus more on simplicity and robustness of the implementation, rather than just the strength of the algorithm itself?

tarellel 11 hours ago 0 replies      
Something I found helpful while setting up SSH on a recent server is Mozilla's SSH Guidelines - https://wiki.mozilla.org/Security/Guidelines/OpenSSH
tw04 1 day ago 3 replies      
This is my standard on new server setup (which is admittedly overkill, but I'd rather have it slightly slower and safer):

sources.list (if you're on an older version of Debian):

  deb http://http.debian.net/debian wheezy-backports main

  apt-get -t wheezy-backports install --reinstall ssh

Regenerate the host keys:

  cd /etc/ssh
  rm ssh_host_key
  ssh-keygen -t ed25519 -f ssh_host_ed25519_key -a 256 < /dev/null
  ssh-keygen -t rsa -b 4096 -f ssh_host_rsa_key < /dev/null

(do not password protect server side keys)

sshd_config:

  Protocol 2
  HostKey /etc/ssh/ssh_host_ed25519_key
  HostKey /etc/ssh/ssh_host_rsa_key
  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

Client-side ssh_config:

  Host *
  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

Client key generation:

  ssh-keygen -t ed25519 -a 256 -f yourkey.key -C whateveryouwant
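A quick way to sanity-check a hardened config like the one above (a sketch, assuming openssh is installed and stock Debian paths) is to ask your OpenSSH build which algorithms it actually supports, and to validate the config before restarting the daemon:

```shell
# List the algorithms this OpenSSH build supports, so you can prune
# entries from the Ciphers/MACs/KexAlgorithms lines it would reject
# (e.g. hmac-ripemd160 was removed in later OpenSSH releases).
ssh -Q cipher
ssh -Q mac
ssh -Q kex

# Validate the edited server config before restarting sshd
# (needs root to read the host keys):
# sudo sshd -t -f /etc/ssh/sshd_config
```

If `sshd -t` prints nothing and exits 0, the config parses cleanly.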

Locke1689 1 day ago 0 replies      
What's the problem with ECDSA?
qzervaas 1 day ago 0 replies      
For those who have just updated to macOS Sierra, the default SSH client configuration is to not allow ssh-dss keys any longer.

Follow these instructions to update your keys.
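For context, the underlying change is that OpenSSH 7.0+ (which Sierra ships) disables ssh-dss by default. A commonly cited temporary client-side workaround, while you generate replacement keys, is a per-host override in ~/.ssh/config (the host name below is a placeholder):

```
Host legacy.example.com
    PubkeyAcceptedKeyTypes +ssh-dss
    HostKeyAlgorithms +ssh-dss
```

This only postpones the problem; the durable fix is replacing the DSS keypair.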

ComodoHacker 18 hours ago 0 replies      
>RSA 2048: yellow recommended to change

Could someone provide a link with a decent explanation why? Is it solely out of fear that it will soon be cracked on a quantum computer?

franciscop 19 hours ago 0 replies      
Why not use the current year as the name for the SSH key? Then when you are using 2014.pub or 2013.pub you know it's time to upgrade.
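That naming convention is easy to script. A minimal sketch (it assumes ssh-keygen is available, and writes to a temp directory with an empty passphrase purely to stay self-contained; in practice you would target ~/.ssh and set a real passphrase):

```shell
# Name the keypair after the generation year, so a stale filename
# like 2013.pub is its own reminder to rotate.
keydir=$(mktemp -d)
year=$(date +%Y)
ssh-keygen -q -t ed25519 -a 100 -N "" -f "$keydir/$year" -C "generated-$year"
ls "$keydir"
```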
otabdeveloper 20 hours ago 0 replies      
Nobody is going to brute-force my git keys, especially when it's so trivial to gain access to the repos via social engineering.
stock_toaster 1 day ago 3 replies      
github doesn't support ed25519 keys does it?
jamiesonbecker 1 day ago 1 reply      
In Userify (ssh key manager that only distributes sudo roles and public keys -- you keep your private keys[1]) we're going to be disallowing DSS keys soon.

I like this post - it's good advice overall. Keys are easy to handle and in some ways more secure than certificate management (which relies on extra unnecessary infrastructure).

1. https://userify.com

aluhut 18 hours ago 1 reply      
I wish this whole SSH business would be less complicated...
wyclif 22 hours ago 0 replies      
the need to generate fresh ones to protect your privates much better

Um, I'm pretty sure he meant privacy, not "privates." Time for an edit.

sztwiorok 16 hours ago 0 replies      
very good post about security!

Many people are still using RSA/DSA keys :/ Some people are doing even worse things. Last week I saw a man who had shared his private key by email!

QWERTY people have to grow up!

Microsoft aren't forcing Lenovo to block free operating systems mjg59.dreamwidth.org
364 points by robin_reala  3 days ago   235 comments top 33
Hydraulix989 3 days ago 4 replies      
Their spin that it is "our super advanced Intel RAID chipset" really plays in their favor, given that their BIOS uses a single goto statement to intentionally block access to the ability to set this chipset into the AHCI compatible mode that the hardware so readily supports, as evidenced by the REing work and the fact that other OSes detect the drive after the AHCI fix using the custom-flashed BIOS.

So, why are they reluctant to just issue their band-aid patch to the BIOS -- after all, it's really the path of least resistance here?

Yes, there has been some deflection of blame here. The argument that every single OS except Windows 10 is at fault for not supporting this CRAZY new super advanced hardware doesn't make much sense.

"Linux (and all other operating systems) don't support X on Z because of Y" doesn't really apply when "Z modified Y in a way that does not allow support for X."

To state it more plainly, this "CRAZY new super advanced hardware" has a trivial backwards compatible mode that works with everything just fine, but it is blocked by Lenovo's BIOS.

raesene9 3 days ago 8 replies      
Also worth noting Lenovo's official statement on the matter http://www.techrepublic.com/article/lenovo-denies-deliberate... confirming that they have not blocked the installation of alternate operating systems.

It was a shame to see the initial posts this morning hit the top of the page with no more evidence than a single customer support rep, who was unlikely to have inside knowledge of some "secret conspiracy" by Microsoft to block Linux installs.

pdkl95 3 days ago 2 replies      
There has been a disturbing level of contempt for the people that were concerned about the future of Free Software. There has been a major shift towards more locked down platforms for years ever since iOS was accepted by the developer community. With Microsoft locking down Secure Boot on ARM and requiring it for Windows 10, it is prudent to be extra vigilant about anything strange that happens in the boot process. The alternative is to ignore potential problems until they grow into much larger problems that are harder to deal with.

Obviously vigilance implies some amount of false positives. It is easy to dismiss a problem once better information is available. It's great that this Lenovo situation is simply a misunderstanding about drivers, but that doesn't invalidate the initial concern about a suspicious situation.

AdmiralAsshat 3 days ago 2 replies      
The moral of the story is that you shouldn't trust a low-level support engineer as a source for official company policy.
WhitneyLand 3 days ago 0 replies      
There was way too much rush to judgement here. Suspicion and skepticism are great, let those fires burn. But let's not condemn or blame until the issue has been aired out from all parties.

- MS shouldn't be blamed based on what the CEO of Lenovo says, let alone what a tech or BB rep says.

- MS shouldn't be blamed for new crimes based on past behavior

Why care about MS or any other megacorp? Because this salem witch trial shit is toxic and should not be condoned against anyone.

Rush to suspicion and demanding answers is great. There is no downside to saving blame for after the facts are in.

rbanffy 3 days ago 1 reply      
Wasn't Lenovo the company that shipped unremovable malware with laptops? Considering the almost impossible to disable Intel management stuff is also there, I can only imagine the kind of parasite living on these machines.

Why would anyone buy their stuff?

hermitdev 3 days ago 1 reply      
For what it's worth, I've had issues with Intel RST under Windows as well in mixed-mode configs. My boot device is an SSD configured for AHCI and I've a 3 drive RAID array. On a soft reset of my PC, the BIOS won't see the SSD. The completely nonobvious solution? Make the SSD hot swappable. Not a Lenovo PC, either. Been going on for years. Had to do a hard reset every time I had to restart for years before I found a solution to this.
facorreia 3 days ago 2 replies      
> Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.
NikolaeVarius 3 days ago 0 replies      
Standard culture of outrage before actually taking more than 5 seconds to think about something and consider other possibilities.
rburhum 3 days ago 2 replies      
What is crazy to me is that Lenovo is usually the brand that people recommend for Linux laptops. They are shooting themselves in the foot here. They may think that the number of people on Linux is too small, but I bet it is bigger than they think. It is just that there is no easy way to accurately count the number of Linux users on their HW.
guelo 3 days ago 1 reply      
> Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot

The modder that flashed the custom BIOS was able to boot linux on his first try.

guelo 3 days ago 2 replies      
Without any comment from Lenovo or Microsoft this guy is speculating the same as everybody else.
seba_dos1 3 days ago 0 replies      
Pushing Intel to provide the drivers or at least documentation would be the best solution - the BIOS lock would become irrelevant.

However, I don't agree with conclusion that Lenovo isn't to blame. They went out of their way to ensure that even power users playing with EFI shell won't be able to switch to AHCI mode.

I don't care about Microsoft here. Lenovo showed its bad side and I probably won't be buying their devices anymore - which is a pity, as I'm writing this on my Yoga 2 Pro, with my company's Yoga 900 (fortunately older, unblocked revision) nearby and I liked those devices.

rukuu001 3 days ago 0 replies      
I'm surprised at the incredulity expressed here, given MS's history of dealing with OEMs. See https://en.m.wikipedia.org/wiki/Bundling_of_Microsoft_Window...
StreamBright 3 days ago 0 replies      
Somebody should notify the guys who went really deep condemning Microsoft for cutting shady deals.


huhtenberg 3 days ago 3 replies      
Yeah, sure, Microsoft is now all white and fluffy. Best friends forever.

How about we pay some attention to the second part of:

 Lenovo's firmware defaults to "RAID" mode and ** doesn't allow you to change that **
Power savings or not, locking down the storage controller to a mode that just happens to be supported by exactly one OS has NO obvious rational explanation. Either Lenovo does that or Windows does. This has nothing to do with Intel.

fenomas 2 days ago 0 replies      
Meta: It seems really odd that this has been relegated to page two, considering that "MS and Lenovo secret agreement" headlines sat on the top page most of yesterday, largely unsubstantiated.

I could be crazy, but HN's algos seem much too aggressive about hiding articles due to flags. It often feels like the most interesting articles are to be found 2-3 spots into the second page.

youdontknowtho 3 days ago 0 replies      
It's amazing that Linux can so thoroughly have won in the device world and yet MS is still every fanboy's favorite boogeyman. This is such a non-event.
gnode 2 days ago 0 replies      
It sounds to me like it would be quite trivial to run Linux on this laptop, just by treating the "RAID" mode PCI ID like AHCI and employing the regular driver. I believe Linux supports forcing the use of a driver for a PCI device.
sqldba 3 days ago 0 replies      
Click bait. It's one interpretation masquerading as the truth while decrying the other interpretation.

Until Lenovo issue a proper, detailed, official statement we need to keep the pressure on.

Self aggrandising posts like this don't help.

savagej 3 days ago 1 reply      
Why would anyone ever buy Lenovo? It's malware, spyware, and harmful to users. I buy HP or Samsung laptops to run Fedora. Just accept that Lenovo is not IBM hardware, and that it is lost to us.
aruggirello 2 days ago 0 replies      
I repost here the 39th comment, which gives a possible explanation of the issue:

 Storm in a teacup Date: 2016-09-22 09:17 am (UTC) From: [personal profile] cowbutt
"Intel have not submitted any patches to Linux to support the "RAID" mode."

Such patches are unnecessary, as mdadm already supports Intel Rapid Storage Technology (RST - http://www.intel.co.uk/content/www/uk/en/architecture-and-te... ) for simple RAID (e.g. levels 0, 1, 10) arrays, allowing them to be assembled as md or dmraid devices under Linux.

However, it would appear that the version of mdadm in shipping versions of Ubuntu (at least - maybe other distros too) doesn't support the Smart Response Technology (SRT - http://www.intel.com/content/www/us/en/architecture-and-tech... ) feature that's a part of RST and is used by Lenovo to build a hybrid one-stripe RAID0 device from the HDD with a cache on the SSD (I'm sure Lenovo have a good reason for not using a SSHD). Dan Williams of Intel submitted a series of patches to mdadm to support SRT back in April 2014: https://marc.info/?l=linux-raid&r=1&b=201404&w=2 . Perhaps now there's shipping hardware that requires them, there'll be the impetus for distro vendors to get them integrated into mdadm, and their auto-detection in their installers to use the functionality provided sanely.


I should add that mdadm is not present in Ubuntu live images by default - one has to pull it in by issuing "sudo apt[-get] install mdadm". BTW, I don't know if mdadm would detect the RAID controller/disk immediately upon installation, or it would require a reboot. In the latter case you may wish to use a USB key with enough spare room to save the system status and reboot. I'd use UNetBootin to prepare such a USB key.

The main issue here is that a user who doesn't even see a disk probably wouldn't know to go as far as installing mdadm. IMHO, given the broadening adoption of NVMe and RAID devices, Debian, Canonical, Red Hat, Fedora etc. might wish to make mdadm part of their live images by default (and eventually strip it from the installed system if it's unnecessary).

Edit: clarified

youdontknowtho 3 days ago 0 replies      
Of course they aren't but how can I feel morally superior with that fact?
bsder 3 days ago 0 replies      
The setting is almost certainly because of Microsoft. It is almost certainly part of their license agreement to block installation of anything older than Windows 10.

The fact that Linux got caught in it is just collateral damage.

hetfeld 3 days ago 1 reply      
So why i can't install Ubuntu on my Lenovo laptop?
lspears 3 days ago 0 replies      
farcical_tinpot 3 days ago 1 reply      
Seeing a manufacturer use fake RAID by default on a single-disk system, then unfathomably hardwire this into the firmware so it can't be changed, then have a Lenovo rep actually admit the reason (with the forum thread censored), and then see this kind of defence is downright hilarious.

Garrett should be condemning Lenovo for not making a perfectly configurable chipset feature....configurable and defending Linux and freedom of choice on hardware that has always traditionally been that way. But, no, he doesn't. He defends stupidity as he always does.

colemickens 3 days ago 1 reply      
Oh it's funny to see the comments in this thread talking down about people on reddit when the misplaced outrage was just as loud here. In fact, I got buried here for pointing out that the claim was BS and unrelated to SecureBoot where at least Reddit took it thoughtfully and realized it was probably just a bullshit statement from a nobody rep that got blown out of proportion.

Sorry to be that guy, but the elitism is pretty misplaced these days...

johansch 3 days ago 1 reply      
It's so sad to see this. (This entire thread, and its comments are down-voted.)

Let me try again. New Microsoft is awesome! Old Microsoft never happened. Double plus good!

farcical_tinpot 3 days ago 2 replies      
simbalion 3 days ago 4 replies      
throw2016 3 days ago 0 replies      
Some commentators seem to be more keen on labelling others conspiracy theorists than consider the possibility that MS and Lenovo could be up to no good.

The only way to convince these folks it seems would be a smoking gun or even better a signed confession from satya and lenovo admitting to shady behavior.

Since that's not how shady behavior works in the real world presumably many here are supporters of the camel in the sand approach with a zero tolerance policy towards non conforming camels.

intopieces 3 days ago 1 reply      

"For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is miniscule."

This is a really poor argument, and slightly disingenuous. Sometimes, people change their use for a device. Maybe they want to explore linux in the future, maybe they want to sell the laptop to someone who wants to use it for linux...

That the blame is being possibly misdirected ought not to detract from the fact that blame is necessary. If users don't vocally oppose measures like this, the industry will assume that this kind of restriction is reasonable. It's not. Yes, power management is important, but anyone who puts linux on their laptop will quickly learn there are limitations to the features of that device that were originally tailored to the OS the device shipped with. That's a good lesson, and a good opportunity for a community to develop around the device (if it's good enough) to mitigate those deficiencies and adapt them for the particular linux distro.

In short, Lenovo is at fault for not being up front about this limitation, for not explaining it, and for not devoting at least some resources to mitigating for their potential linux-inclined users.

Then again, perhaps a linux-inclined user might also be one of the many that don't trust Lenovo after their self-signed certificate scandal.

Bike manufacturer sees huge reduction in delivery damage by printing TV on box medium.com
369 points by Someone  7 hours ago   142 comments top 19
charlieegan3 4 hours ago 1 reply      
Related: https://www.atheistberlin.com/study - Shoe company finds relationship between lost packages and package branding.
analog31 31 minutes ago 0 replies      
I should paint a TV on myself for when I'm riding my bike in traffic.
aluhut 5 hours ago 8 replies      
It seems like the people who are responsible just don't care anymore. Maybe it's the wages, the pressure, or whatever. It looks like it's about time to remove even more humans from the equation.
delinka 6 hours ago 2 replies      
For Science: Let's see if LG's willing to have some TV boxes printed with bicycles...
has2k1 5 hours ago 1 reply      
This is analogous to Batesian mimicry [1].

[1] https://en.wikipedia.org/wiki/Batesian_mimicry

WalterBright 6 hours ago 3 replies      
Unfortunately, the boy who cried wolf will apply if this is more widely adopted, and then pity the poor folks who order TVs.
xir78 3 hours ago 2 replies      
Boeing puts a picture of a Lamborghini on their first-class seats while in the factory in Everett to convey the cost of them -- amazingly, they do cost about as much as one, too.
massysett 6 hours ago 2 replies      
I wonder if the number of stolen boxes (either while in shipment or when left on porches) went up?
userbinator 5 hours ago 4 replies      
I wonder what sort of damage these bikes are receiving, because they're designed to be ridden by a person... a TV is definitely far more fragile.
williwu 1 hour ago 0 replies      
Genius idea. A similar idea applies to the iPhone's anonymous shipping packaging and the plain envelopes used for credit cards -> reduced theft.
hanoz 6 hours ago 2 replies      
Printing a wolf on the box would get them some careful handling too, for a while...
satysin 7 hours ago 1 reply      
Wonderful (part) solution. I love things like this that tap into the mind so subtly.
santoshalper 3 hours ago 0 replies      
What a great idea, but this really feels like the kind of thing they should have kept quiet about.
seesomesense 1 hour ago 0 replies      
Time to replace the humans in the logistics chain with robots.
slovette 7 hours ago 1 reply      
This does not surprise me. To effect change, you don't need to control the person; you just need to control their perception of reality.
logicallee 5 hours ago 2 replies      
True, but they could reduce damage even more by putting a picture of a stained glass window and giant letters "HIGHLY FRAGILE DELICATE STAINED GLASS WINDOW! HANDLE WITH EXTREME CARE!!" on it. That would certainly reduce damages further.

The problem is that it isn't one (a TV). Why would someone feel mortified if they accidentally drop a packaged bicycle from 2-3 feet (typical carrying height) when a fully assembled bike can be dropped from 2-3 feet, and this is packaged, so it should be even safer. On the other hand no one would feel free to drop a packaged LCD TV from even half a foot because people know it includes a giant pane of essentially glass, and they know that there are limits to what packaging can do.

So, yeah, by failing to meet expectations when it comes to packaging a bicycle, they can reduce damages by writing on it that it's a TV instead. All right.

But isn't this still them not meeting expectations exactly? If they write on it that it's a delicate stained-glass window, that would still be not meeting expectations. If the handler is the one with unreasonable expectations or behavior (if 2-3 feet isn't a reasonable drop height and should be considered a failure), then maybe educate the handler with some writing or warnings on the packaging.

Isn't the real issue here that handlers' expectations of bike packaging don't match the packaging's actual characteristics? So you could tackle it head-on by writing care instructions.

Alternatively, the article says only a 70-80% reduction in damages was achieved. Maybe by lying and saying it is a delicate stained-glass window, handle with extreme care, they could push that to a 95% reduction. I guess I've just saved them 15% of their former damages (an even higher percentage of their remaining damages) with this one neat trick.

Theodores 6 hours ago 1 reply      
Most things arrive fully assembled. With that TV you just plug it in and that is it. You don't have to adjust the HDMI sockets with a screwdriver or double check the earth lead is correctly bolted on. You don't have to get a spanner out to adjust that five degree tilt to one side in the base.

But with a bicycle, it is an entirely different story. The seat is not centered on the rails, nice and level. Much has to be assembled and that is understandable, however, the brakes and the gears rarely work as well as Shimano intended. The bike is part assembled and the consumer is left to do the rest. Rarely is the finished result as polished as the fit and finish that the TV arrives with.

If a bicycle manufacturer just got that final assembly together so that only seat-height adjustment was needed, with nothing else needing a double check, then they might be able to sell to the end customer properly. As it is, there is no quality in the final delivery; bikes sent to the customer will be far from expertly 'tuned'.

orblivion 7 hours ago 3 replies      
Clever, but seems ethically questionable.

Why do the shippers care about breaking a TV? Presumably there are repercussions, such as an insurance plan. So why don't those repercussions just apply to bicycles? If they're fined for enough bikes being broken, they should probably learn that they need be more careful than they thought, right?

EDIT: Toning down my choice of words.

Ask HN: What are the must-read books about economics/finance?
429 points by curiousgal  2 days ago   262 comments top 111
reqres 1 day ago 3 replies      
Please do not look upon popular economics best sellers as a good way to get a rounded economics education. While many have value in critical insight and entertainment, they often offer only a narrow perspective on economics. Novice economists typically lack the ability to critically appraise them without a wider economic framework to work from.

An academic reading list (i.e. university course texts) will provide you a good theoretical foundation as to how economists interpret and model real economic issues. It's important to grasp the plethora of important economic concepts like diminishing returns, comparative advantage and concepts of market efficiency (among many others things) and how they apply within micro or macro economic issues.

With some foundational knowledge in place, a good economist then goes on to relax the underlying assumptions and look for analogues in the real world. This is where the popular reading list come in, often they take a deep dive in specific areas i.e. where traditional economic assumptions break down.

In short, the academic reading list gives you a framework to understand economics. The best seller list tempers that framework with real world exceptions, paradoxes and open questions.

It's a bit disappointing to see a real academic reading list so far down this comment page (I strongly recommend looking at oli5679 suggestions). I doubt HNers would suggest reading up on javascript as a good foundation for a computer science education. Yes, you can become a well rounded computer scientist by starting on javascript. But it's more important to have a grasp on core computer science ideas like algorithm design & analysis and automata.

davidivadavid 1 day ago 2 replies      
One approach is to go to the MIT OpenCourseWare website, look for the economics department, and look at their reading lists.

Of course, that's going to be mostly academic reading (textbooks, etc.). But if you want to learn the basics, it's probably safer to start there than the pop econ books (and I would dispense with most heterodox reading before you're able to assess them within a larger framework).

Two good books that haven't been mentioned here:

Economic Theory in Retrospect, by Mark Blaug. Very useful to get a good historical grounding in the main ideas that compose today's orthodox economics.

The Applied Theory of Price, by McCloskey. Your usual microeconomics textbook, but far more thorough, insisting a lot on grasping the intuition behind the concepts. Available for free from the author's website here: http://www.deirdremccloskey.com/docs/price.pdf

ohthehugemanate 1 day ago 2 replies      
Top of my list would be "The Ascent of Money", by Harvard Prof Niall Ferguson. It explains what money and financial instruments are by telling the stories of their history. He's a great storyteller, and for each aspect of finance that he explains, there's a story of a famous piece of history which it caused. For example, the application of oriental maths to finance caused a huge boom for Italian bankers, especially one family, the Medici. That financial boom was responsible for the artistic boom we call Renaissance art. Or how the Dutch republic triumphed over the enormous Hapsburg empire, because the world's largest silver mine couldn't compete with the world's first stock market.

Fantastic read, and a great way to gain financial literacy.

kevinburke 2 days ago 2 replies      
(Economics major and longtime econ book/paper reader here) I very much enjoyed The Cartoon Introduction to Economics as an introduction to microeconomic concepts: http://standupeconomist.com/cartoon-intro-microeconomics/

It's extremely readable and funny and covers most of the situations in real life where you can apply economic concepts to understand why something is the way it is.

Understanding why countries and economies grow (and why some grow faster than others!) doesn't always fall under the "economics" umbrella but is really useful for informing policy (and a useful reminder these days, when both US presidential candidates rail against trade agreements). "From Poverty to Prosperity" lays out a very readable and convincing argument for how countries have grown and become rich. https://www.amazon.com/Poverty-Prosperity-Intangible-Liabili...

For finance I very much enjoyed The Intelligent Investor, which also (apparently) inspired Warren Buffett's investing philosophy. https://www.amazon.com/Intelligent-Investor-Definitive-Inves...

soVeryTired 1 day ago 0 replies      
I work in a quant hedge fund - I'll give you my take. The first thing I would point out is that there is a massive difference between academic theory and practice. I don't want to turn this into an anti-academic rant, but I do want to emphasise that we value very different things. For this reason alone, most of what you read in most textbooks won't do you much good.

Personally I wouldn't place too much emphasis on outside knowledge. Basic knowledge of economics wouldn't hurt, but don't go nuts. Khan academy will give you more than enough theory. You don't want to spend all your energy developing a skill that a trained economist applicant will crush you at. Neither should you focus too much on e.g. stochastic analysis. In the real world, no-one cares whether a stochastic process is previsible or progressively measurable. But knowing how to derive Black-Scholes couldn't hurt.

So far I've mostly talked about what you shouldn't read. I'll try to talk a little bit about what you should. Read the financial press: the FT or the Wall Street Journal, depending on where you're based. Read finance blogs. Frances Coppola is good. So is the Bank of England's blog. Check out Alphaville at the FT too. You'll be expected to know what's going on in the world right now. Could you explain what QE is? For a finance job, that's more important than knowing what the IS/LM model says. What's been going on in China recently? What do you think about their currency outflows?

Know how to code. At least one of Python, Matlab or R for the buy side, one of Java or C++ for the sell side.

Most importantly, though, you should be able to demonstrate enthusiasm. Any given junior quant role will get hundreds of applications, and some demonstrable interest will put you head and shoulders above the pack. A link to some decent analysis on github would do (none of the hundred or so applicants to the last position we advertised did that). Play with some financial data. Quantopian is apparently a good resource.

I've talked about how to prepare for a general finance job. The specific reading you should do will depend on exactly what job you want. Do you want to be a quant? If so, buy side or sell side? Read up on the difference. Go check out efinancialcareers, have a look at the skills they're asking for within each sector, and take it from there.

AndrewKemendo 2 days ago 8 replies      
The following list will introduce you to Western Economic Philosophy as it relates to modern history specifically. This list is weighted heavily toward neo-classical economics and does not get into computational model based economics - specifically microeconomics, which comprises the bulk of economics education today:

Schumpeter - History of economic analysis

Adam Smith - Theory of Moral Sentiments

Keynes - The General Theory of Employment, Interest and Money

Marx - Capital

Benjamin Graham - The Intelligent Investor

Galbraith - The Affluent Society

Galbraith - The Great Crash

Milton Friedman - Capitalism and Freedom

Nassim Taleb - Black Swan

Ron Suskind - Confidence Men

Scott Patterson - Dark Pools

If you want to delve into heterodox economics afterward, start with the following:

Hayek - Individualism and Economic Order

Mises - Human Action

Rothbard - Man, Economy, State

RockyMcNuts 1 day ago 2 replies      
As a start, take some economics courses, intro Micro and Macro. (Check https://www.coursetalk.com/https://www.class-central.com/ )

Actually the first book I'd recommend would be The Worldly Philosophers, a readable history of economics


A couple of more right-leaning books:

Hayek, The Road to Serfdom: https://www.amazon.com/Road-Serfdom-Fiftieth-Anniversary/dp/...

Friedman, Capitalism and Freedom: https://www.amazon.com/Capitalism-Freedom-Anniversary-Milton...

Less right-leaning

The Marx-Engels Reader: https://www.amazon.com/Marx-Engels-Reader-Second-Karl-Marx/d...

oli5679 1 day ago 3 replies      
I'd recommend textbooks or Coursera rather than pop econ books.

Mostly Harmless Econometrics - Angrist and Pischke

Principles of microeconomics - Mankiw (beginner)

Intermediate microeconomics - Varian (intermediate)

You also want to cover finance and time series - I don't know what would be good there.

bbayles 2 days ago 3 replies      
I read Dubner and Levitt's Freakonomics in 2005. It's lame to say that a pop-science book changed my life, but since then I've thought about economics every day.

I would recommend some pop-econ to become familiar with a stylized version of how economists think. I'd recommend Tim Harford's The Undercover Economist Strikes Back and The Logic of Life and Robert Frank's The Economic Naturalist. (Dubner's and Levitt's books are entertaining, but I wouldn't try to learn much about economics from them)

The world of professional economists has been fascinating to watch over the last 10 years, as academic economist blogs are very active and very high quality. Watching debates and commentary about the global financial crises unfold on the blogs in real time was really something. Economist bloggers have a real influence on policy now, and whole schools of thought have coalesced out of blogs (e.g. market monetarism).

There are some excellent economics podcasts out there now. EconTalk (with Russ Roberts) has been going since 2006. I'd recommend listening to some of his interviews with academic economists. Macro Musings (with David Beckworth) just started this year, and the policy discussions have been quite informative.

The Marginal Revolution University website has a fantastic series of videos on economics topics. I would strongly recommend the "Development Economics" course - I wish I'd been taught the Solow Model in school.
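The Solow model mentioned above boils down to a single law of motion for capital per worker. A minimal sketch, assuming a Cobb-Douglas production function and made-up parameter values (the function name and numbers are mine, not from the course):

```python
# Solow growth model: capital per worker evolves as
#   k' = s * k**alpha + (1 - delta) * k
# where s is the savings rate, alpha the capital share,
# and delta the depreciation rate.
def solow_steady_state(s=0.3, alpha=0.3, delta=0.1, k=1.0, iters=1000):
    """Iterate the law of motion until capital per worker converges."""
    for _ in range(iters):
        k = s * k ** alpha + (1 - delta) * k
    return k

# The iteration converges to the analytic steady state (s/delta)**(1/(1-alpha))
print(round(solow_steady_state(), 3))
```

Raising s raises steady-state capital and raising delta lowers it, which is the model's central comparative static.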

Economics is a very interesting discipline to study from the outside. Learning a bit about it puts policy debates in a new light - I've become much more liberal on some topics and much less confident on a lot of topics. I find that reporting about economics issues is generally pretty terrible, so beware that if you get into economics you'll want to stop reading a lot of news analysis.

jnordwick 2 days ago 1 reply      
This one was recommended by the former head of NYMEX to me when I started my career in trading. Written about Jesse Livermore who made and lost his fortune multiple times. He was often blamed for rigging the market, but his lesson is simple: you basically can't rig the market; it will destroy you way more easily. Take what the market gives you and be happy it even decided to give you that:


And you'll see a lot of recommendations for everything from Hazlitt to Piketty, but my favorite you never see recommended for macro is The Way the World Works by Jude Wanniski. He was a lifelong Democrat who became a Reagan advisor (and basically turned back into a Dem before passing away about ten years ago):


Besides that, this is a really broad question. There is stuff like John Hull for derivatives (this is what I survive on):


This is the game theory book I and many others have survived on in college and many years past. Haven't really found a better one yet:


vegancap 1 day ago 4 replies      
Henry Hazlitt - 'Economics in One Lesson'

Mises - 'The Theory of Money and Credit'

Adam Smith - 'The Wealth of Nations'

Milton Friedman - 'Capitalism and Freedom'

Murray Rothbard - 'For a New Liberty'

Have been my personal, but somewhat one-sided favourites.

ddebernardy 1 day ago 1 reply      
IMO start with a recent book that spells out useful pointers to give the classics a critical read:

"Debunking Economics", by Steve Keen.

Keen gave a talk at Google a few years back that was a pretty good summary of what's in the book's first version.

If you're into stats and finance also check out the author's finance classes on youtube. Besides a bunch of videos that cover what's in his book, there are quite a few on financial modeling, and at least one video in there that delves into power laws and financial markets.

Also, try to throw in a few history books to your mix: history of the world, of science, and of ideas. History helps contextualize and make sense of what was going on in the mind of contemporaries as economic theories matured.

lujim 1 day ago 2 replies      
If you're looking for the nuts and bolts on how capital markets around the world work this book is hands down the best there is.


Equities, Futures, Rate Swaps, Options, Credit, Treasury, Corporate, Municipal, Mortgage and Agency Bonds. Then the technology that supports it all.

It is not only a fantastic high-level view, but it gets granular enough to explain things like how US Treasury prices are quoted in 32nds of a dollar, how fixed income securities are identified by something called a CUSIP, or what a strike price is for an option. Granular enough to explain practical day-to-day concepts that would help you at your first job in a financial firm.
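For a concrete sense of the 32nds convention: a quote like "99-16" means 99 and 16/32 dollars per 100 of face value. A toy converter (the function name is illustrative; real quotes can also carry "+" or fractions of a tick, which this ignores):

```python
# Convert a simple US Treasury price quote in 32nds to a decimal price.
# "99-16" -> 99 + 16/32 = 99.5 dollars per 100 of face value.
def quote_to_decimal(quote: str) -> float:
    handle, ticks = quote.split("-")
    return int(handle) + int(ticks) / 32

print(quote_to_decimal("99-16"))  # 99.5
```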

marmot777 1 day ago 1 reply      
These are important books but obviously not a comprehensive list.

* John Locke's Two Treatises of Government - It's political philosophy but it's hard to understand Classical Liberalism without having read some Locke.

* Adam Smith's Wealth of Nations - He and Locke are the two main guys to read for a solid start on Classical Liberalism, which is completely different than modern political liberalism. It's like having two features in an app with nearly the same name. Confusing as fuck.

* E. F. Schumacher's Small Is Beautiful: Economics as if People Mattered - This book will shift your perspective, useful for avoiding becoming a mindless advocate for one school of thought or another.

* Marx is a tough one as Capital is massive and unreadable and The Communist Manifesto is a propaganda pamphlet but I think you need to at least find some articles that summarize the basics.

* Keynes and Hayek - This hip hop battle is a decent start: https://www.youtube.com/watch?v=d0nERTFo-Sk - then read Keynes' The General Theory of Employment, Interest and Money and Hayek's The Road to Serfdom.

* Milton Friedman - Yes, read Capitalism and Freedom. I hesitated to include it as the guy's so good at making the case that it can turn you into a market advocate bot. Please resist that.

Can someone help me on this: is there a book to balance Hayek and a book to balance Friedman? I'm sorry, but Keynes doesn't do it for me. Look at the difference in titles between Hayek and Keynes. It's hard to get motivated to read the Keynes book, but nobody ever has trouble reading Hayek.

I see a lot of these ideas come up on HN a lot. What I don't like so much is when someone becomes an advocate for a particular ism. To me, all isms are rubbish. All of them. Understand but do not become a shill for an ideology.

bennesvig 2 days ago 2 replies      
Basic Economics by Thomas Sowell is the book that got me interested in economics. It's a large but easy to understand read.
tom_b 2 days ago 1 reply      

I found Larry Harris' Trading and Exchanges: Market Microstructure for Practitioners a solid introduction to market making and trading. Terms and concepts are easy to pick up from the text. I was comfortable enough after reading it to skim stats journal papers talking about market making models. The Stockfighter team had mentioned it in older threads here. It's expensive, but I just borrowed it from the library at my university instead of buying.

I also like The Elements of Statistical Learning which is free from the authors (http://statweb.stanford.edu/~tibs/ElemStatLearn/download.htm...). Although it isn't specifically about economics or markets, you should at least read it.

I'm at a loss on general economics books.

scott00 1 day ago 0 replies      
My first read of your request made me think you were looking for books mainly for personal intellectual growth. There are a lot of answers in that vein, as well as a few that seem suitable replacements for an undergrad econ degree. A second read made me wonder if you're actually asking for practical advice about what you should read in order to get a job in finance, given you won't take many econ or finance courses. I'll answer in the second vein, as it seems to be somewhat underrepresented.

Investment Banking/Private Equity/Investment Analysis

McKinsey & Co, Koller, Goedhart, Wessels: Valuation

Damodaran on Valuation

Trading or Quant

Hull: Options, Futures, and Other Derivatives

Joshi: Introduction to Mathematical Finance

Harris: Trading and Exchanges: Market Microstructure for Practitioners

There should probably also be a category for what I think of as quantitative fundamental investing. For an idea of what I mean, look at what the investment firm AQR does. I'm not sure of good books in this area though.

loeber 2 days ago 1 reply      
Debt: the First 5,000 Years by David Graeber is a controversial but rather important recent publication. I haven't seen it mentioned yet, so I wanted to recommend it.
geff82 1 day ago 2 replies      
"Economics in One Lesson" is a classic worth reading and thinking about. While you don't necessarily have to follow the libertarian way of thinking it guides you to, it still shapes your critical thinking about economic policies a lot.
gtrubetskoy 2 days ago 0 replies      
To better understand our monetary system I highly recommend watching the "Money as Debt" movie. It's on youtube as well as http://www.moneyasdebt.net/ (which I think links to y/t anyway). It provides a pretty good explanation of gold-backed vs credit-backed money and is fun to watch.
tmaly 2 days ago 3 replies      
Economics in One Lesson by Henry Hazlitt

Human Action by Ludwig von Mises

malloryerik 1 day ago 0 replies      
Aside from strict econ, finance and trading books, I'd heartily suggest economic history.

One of my personal favorites:

Global Capitalism, Its Fall and Rise in the Twentieth Century by Jeffry Frieden.


From the Journal of International Economics' review:

Perhaps the greatest merit of Frieden's book is that it allows the reader to see the themes of winners and losers, risk and uncertainty, integration, economic growth and technological change emerge clearly from the deep forest of contemporary history. One gains a greater appreciation for the timelessness of these phenomena and how to begin to get a grip on the bigger picture of policy making and the global economy.

I found that quote on the author's site. http://scholar.harvard.edu/jfrieden/pages/global-capitalism-...

longsangstan 1 day ago 3 replies      
If you know Chinese, there is a must-read: Economic Explanation by Steven Cheung.

If you don't, you can read: Economic Explanation: Selected Papers of Steven N. S. Cheung. (Same book name but different content - a collection of essays vs a book on theories)

Why Steven Cheung? As a close friend of Ronald Coase, he too focuses on empirical research (the real world) rather than blackboard economics (the imaginary world); hates the use of math for its own sake; and emphasizes testable implications (positive economics).

His classic paper The Fable of the Bees is a great example of how empirical work destroys blackboard economics.

jmcgough 2 days ago 0 replies      
A Random Walk Down Wall Street.

Great introductory book on investing, especially if you're interested in personal finance.

pmilot 1 day ago 0 replies      
Surprisingly, a lot of people in this thread hesitate to recommend Thomas Piketty's "Capital in the Twenty-First Century". I'm not sure why this book is somehow surrounded with overblown controversy.

I think it is an excellent book on historical economics. His conclusions are drawn from an extremely large dataset that is publicly available and downloadable here: https://www.quandl.com/data/PIKETTY

It's by no means an Economics 101 book, but it should definitely be part of any economist's personal library in my opinion.

gawry 2 days ago 0 replies      
A nice place to start might be the CFA study guide


orthoganol 1 day ago 0 replies      
"Global Capitalism: Its Fall and Rise in the Twentieth Century" by Jeffry Frieden is a masterpiece. It will give you a thorough, expansive view of the global financial world - the major events and trends - as they unfolded over the last century. This book is regularly assigned as a text book in Ivy League economic history classes, so even though it's short on math/ econometrics, it's a serious work.
branchless 2 days ago 0 replies      
Progress and Poverty:


Why is there so much poverty amongst all our progress? Georgism and land value tax. Essential reading IMHO and an enjoyable read also.

randcraw 2 days ago 0 replies      
For readable bios of the major economic thinkers, I like:

New Ideas from Dead Economists: An Introduction to Modern Economic Thought, by Buchholz and Feldstein

The Worldly Philosophers: The Lives, Times And Ideas Of The Great Economic Thinkers, by Heilbroner

And for a readable sample of the economists' thought in their own words, there's:

Teachings from the Worldly Philosophy, by Heilbroner

stephenbez 1 day ago 0 replies      
I found Milton Friedman's "Free To Choose" fundamental and very readable.


shmulkey18 1 day ago 0 replies      
A brief but profound paper: The Use of Knowledge in Society by Hayek (http://home.uchicago.edu/~vlima/courses/econ200/spring01/hay...).

As others have said, the EconTalk podcast is excellent.

hkmurakami 2 days ago 2 replies      
A Random Walk Down Wall Street

Reminiscences of a Stock Operator

When genius failed

Unconventional success

And generally just read financial news and follow markets until you develop a sense for spotting BS.

n00b101 2 days ago 1 reply      
Options, Futures, and Other Derivatives by John Hull

Principles of Corporate Finance by Richard Brealey, Stewart Myers, Franklin Allen

Traders, Guns and Money: Knowns and unknowns in the dazzling world of derivatives by Satyajit Das

meigwilym 1 day ago 1 reply      
After a few pop-sci economics books (Freakonomics, The Undercover Economist...) I progressed to Ha-Joon Chang's Economics: The User's Guide.

It covers all the major schools of thought, along with their pros and cons. I highly recommend it.

edge17 1 day ago 0 replies      
Everyone seems to be addressing the finance part of it without the "growing intellectually" part of it. I've been fortunate to be surrounded by economists my whole life. Economists are also tremendous historians; reading a lot of history and recasting what you know about history into economic frameworks will greatly sharpen your intellectual abilities. As with most things involving learning, having and seeking out intellectual peers is a valuable way to challenge all your ideas.
2T1Qka0rEiPr 1 day ago 0 replies      
For a truly fun read I'd suggest Dan Ariely's "Predictably Irrational". It's less academic than "Thinking fast and slow" by Daniel Kahneman (which is also great), but I found that refreshing.
qubex 1 day ago 0 replies      
Mathematically trained economist here.

Why Stock Markets Crash by Didier Sornette

The complete oeuvre of Paul Wilmott

(The Computational Beauty of Nature by Gary W. Flake, because it's wonderful and puts you in the right frame of mind)

baristaGeek 1 day ago 0 replies      
People have mentioned different authors across different schools of economic thought, such as Mankiw, Rothbard, Friedman, Hayek, Smith, Keynes, etc. One that's also been mentioned, but which I would particularly avoid recommending, is Piketty.

Those are the best recommendations.

I would like to give a recommendation that might be a little bit different: 'Why Nations Fail' by Acemoglu.

pjc50 1 day ago 0 replies      
Note that "finance" and "economics" are separate disciplines, roughly corresponding to applied vs theoretical.

The book which changed my thinking the most was "The Other Path" https://www.amazon.co.uk/Other-Path-Economic-Answer-Terroris...

It would be easy to give it the traditional libertarian gloss of "reducing regulation to improve the economy", but it's much more subtle than that. It looks at the costs of being outside the "system", and the benefits of simplifying the system so as to include more people and businesses. Along with land reform to reflect the actual reality of buildings.

Also, short and entertaining, but with lots of insights into principal-agent problems and bubble mentality: "Where Are the Customers' Yachts?" https://www.amazon.co.uk/Where-Are-Customers-Yachts-Investme...

misiti3780 1 day ago 0 replies      
All must-reads in my opinion:

Fooled By Randomness - Taleb

The Black Swan - Taleb

Antifragile - Taleb

When Genius Failed - Lowenstein

Liars Poker - Lewis

The Big Short - Lewis

Flash Boys - Lewis

Too Big To Fail - Sorkin

Against the Gods - Bernstein

One Up on Wall Street - Lynch

The Intelligent Investor - Graham

ElonsMosque 1 day ago 0 replies      
This might sound unconventional but in terms of Economics I would recommend a comic book called "Economix" by Michael Goodwin. According to financial advisor David Bach:

"You could read 10 books on the subject and not glean as much information."

Personally, I believe that's because the subject and history of economics is presented in such an accessible and fun way in this book, without compromising quality or historical accuracy.

yomritoyj 1 day ago 0 replies      
Since you are already in a quantitative field I think it would be good to quickly get to the heart of what economists actually do. I would suggest

Varian, 'Intermediate Microeconomics'

Luenberger, 'Investment Science'

Wooldridge, 'Introductory Econometrics'

for the undergraduate background, and then at the graduate level Jehle and Reny for microeconomics, Duffie for asset pricing theory, Tirole for corporate finance, and Campbell, Lo and MacKinlay for econometrics.

unixhero 1 day ago 0 replies      
End-to-end exploration and explanation of how and why the global economy works - Peter Dicken, Global Shift, https://uk.sagepub.com/en-gb/eur/global-shift/book242137

Any corporate finance textbook, probably; Brealey & Myers, Principles of Corporate Finance, https://www.amazon.com/Principles-Corporate-Finance-Richard-...

Watch the Yale open courseware lectures on Financial Markets with Shiller: http://oyc.yale.edu/economics/econ-252-11

Nassim Taleb, The Black Swan; https://www.amazon.com/Black-Swan-Improbable-Robustness-Frag...

Harry Markopolos, No One Would Listen, https://www.amazon.com/No-One-Would-Listen-Financial/dp/0470...

Michael Lewis, Liar's Poker, https://www.amazon.com/Liars-Poker-Norton-Paperback-Michael/...

"Leveraged Sellout", Damn It Feels Good To Be A Banker, https://www.amazon.com/Damn-Feels-Good-Be-Banker/dp/14013096...

mempko 1 day ago 0 replies      
The best book you can read first is "Debt: The First 5,000 Years" by David Graeber. He is an anthropologist, and the book outlines many economic topics, giving you historical context.

Then go and read all the standard literature and you will be surprised how terrible and unscientific it all is.

Osiris30 1 day ago 0 replies      
On topic article on economist Paul Romer's view on the current state of macro: http://www.economist.com/news/finance-and-economics/21707529...
marginalcodex 2 days ago 1 reply      
There are no must-read books for economics (or almost any other field of study). Non-fiction economics books are meant to teach the reader something new. As economics represents a set of ideas owned by no one individual, the best overview of economics will contain all of the important, integral ideas of the subject.

Any summary of economics that introduces the core concepts will be great and serve its purpose.

chiliap2 2 days ago 0 replies      
One I don't see recommended very often is Fortune's Formula. It describes the lives of Claude Shannon and Ed Thorp (author of Beat the Dealer) and how they used the Kelly formula in both gambling and investing. The Kelly formula, as the book explains, determines the optimum amount to bet on a wager (or investment) if you know the edge you have over the house.
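For a simple binary bet with win probability p and net odds b, the Kelly formula reduces to f* = p - (1 - p)/b, the fraction of bankroll to stake. A small sketch (function name and numbers are mine, not from the book):

```python
# Kelly fraction for a binary bet: f* = p - (1 - p) / b
# p: probability of winning; b: net odds received on a win (1.0 = even money).
# A negative result means the bet has no edge and should be skipped.
def kelly_fraction(p: float, b: float) -> float:
    return p - (1 - p) / b

# 60% win probability at even odds -> stake 20% of bankroll
print(round(kelly_fraction(0.6, 1.0), 4))  # 0.2
```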
crdoconnor 2 days ago 0 replies      
Traders, Guns and Money - Satyajit Das

Debunking Economics - Steve Keen

The Volatility Machine - Michael Pettis

fitchjo 1 day ago 1 reply      
It is not a book, but Matt Levine is the way I get my daily finance news. He is fantastic.
D_Alex 2 days ago 0 replies      
"Where are the customers' yachts?" by Fred Schwed is a must read for anyone interested in investing in stocks.

An absolute must.

dash2 1 day ago 0 replies      
Academic economist(ish) speaking here. Be aware of the distinction between books of economics (the discipline) and books about economics by non-economists. Both can be great - I loved The Big Short.

Strongly recommend Keynes, and surprised nobody has mentioned Minsky or Kindleberger - outsiders now receiving recognition.

yodsanklai 1 day ago 1 reply      
As far as economics is concerned, I recommend Mankiw's Principles of Economics. It's widely used as a textbook for economics undergraduates. It's very well written and entertaining. In my opinion, it's better than general-audience popularizations.
eth0up 2 days ago 2 replies      
Road to Serfdom, by F.A. Hayek?
justifier 1 day ago 0 replies      
at the moment i'm uninterested in the arc of economic studies in academia so the opencourseware reading lists seem the wrong place to start for me

can anyone suggest reading to understand how contemporary banks function, where can i get an understanding of a bank or credit union from a software engineer's perspective: dependencies, steps to start, challenges of running, protections from common problems, interesting emerging disruptions;

dxbydt 2 days ago 0 replies      
David Ruppert's "Statistics and Finance" is the classic you are looking for. It is a standard textbook in most finance curriculums in the US. Roughly 50% of the book is plain statistics as applicable to finance. The rest is finance with a statistical flavor.
Dowwie 1 day ago 2 replies      
Why finance? Finance was the major viable option for people with your background but that's not the case anymore. Every industry needs people with your background -- more than ever!

However, if you're hell-bent on going into finance and you're going to read one economics book, read Thomas Piketty's "Capital in the Twenty-First Century".

These are the classic economics tomes that you'd read over a lifetime:

Karl Marx, "Capital"

Adam Smith, "The Theory of Moral Sentiments"

Adam Smith, "The Wealth of Nations"

FA Hayek, "The Road to Serfdom"

fitzwatermellow 2 days ago 1 reply      
Alternatively, don't learn from books, but the markets themselves. Open a Paper Trading account via Think or Swim. Begin a steady diet of Bloomberg / WSJ / CNBC every day. Whenever a word or idea is mentioned that you don't understand, Google it or consult Investopedia. Figure out what the Fed actually does. How debt and credit markets work. The microstructure of physical and electronic commodities trading. Maybe skim an online "Stochastic Calculus" class. Join Quantopian and master every algorithmic strategy known to humankind. Dive deep into cryptocurrency and blockchain technologies.

And who knows, perhaps one day you'll invent something that obviates the need for a global system of monetary trust ;)

cubey17 2 days ago 0 replies      
Can't recommend this book highly enough: A Concise Guide to Macroeconomics, Second Edition: What Managers, Executives, and Students Need to Know http://a.co/jicSNc9
fiatjaf 2 days ago 1 reply      
I have news for you: statistics is totally unrelated to economics. Better rethink everything you think you know.

For people wanting recommendations, I suggest Carl Menger's Principles of Economics, which saved me from stupidification in college.

jaynos 1 day ago 0 replies      
Derivatives Markets by Robert McDonald is a great textbook. I would not suggest reading it cover to cover, but it's a great reference for truly understanding bonds, options, etc.

I'd also recommend anything by Matt Taibbi, but only if reading about the shadiness of Wall Street interests you. His books are well written and fact checked, but definitely have a bias that you may not care for.

xapata 1 day ago 0 replies      
"The Great Transformation" by Karl Polanyi. It's a tough read, because it was translated from Hungarian. It's an important read, because it provides an alternative analysis to both Smith and Marx. Polanyi was informed by recent developments in anthropology which contradicted the major theories of how modern economies had formed.
hellogoodbyeeee 1 day ago 0 replies      
There are some really good suggestions here about economics and finance in general. I think having a solid understanding of the financial crisis is valuable in today's world. I recommend "All the Devils Are Here" by Bethany McLean. It offers a well-rounded, facts-first approach to explaining the crisis. It does not point fingers or assess blame, which is a valuable perspective.
frankyo 22 hours ago 0 replies      
Economics in One Lesson by Henry Hazlitt changed my life. If you want to read one book only, read this one. It's short, easy to understand, and ruthlessly logical.
danvesma 1 day ago 0 replies      
A scholarly work on how ethics and CSR can be a positive influence on the economic model:


crispytx 2 days ago 1 reply      
The Intelligent Investor by Benjamin Graham

One Up on Wall Street by Peter Lynch

chromaton 1 day ago 0 replies      
Understanding Wall Street by Jeffrey Little gives a good overview of many kinds of financial instruments, including stocks, bonds, and options. It's NOT an investment flavor of the week book and is now on its 5th edition, the first having come out over 30 years ago.
kresimirus 1 day ago 0 replies      
How an Economy Grows and Why It Crashes by Peter Schiff: https://www.amazon.com/How-Economy-Grows-Why-Crashes/dp/0470...

Very short read - economics basics from a libertarian view.

robojamison 1 day ago 0 replies      
Whatever Happened to Penny Candy [1] is a fascinating book and a short read.

[1]: https://www.amazon.com/Whatever-Happened-Explanation-Economi...

joshuathomas096 1 day ago 0 replies      
Thinking, Fast and Slow by Daniel Kahneman. He won a Nobel Prize in Economics in 2002 for his work in behavioral economics. I truly believe understanding human behavior and decision making is a key foundation for anything else you read in economics.

This book changed my life, I highly recommend it.

JSeymourATL 2 days ago 0 replies      
> from a philosophical point of view; positive vs normative economics...

Thought provoking on a variety of levels - Seeking Wisdom: From Darwin to Munger by Peter Bevelin > http://www.goodreads.com/book/show/1995421.Seeking_Wisdom

kgwgk 1 day ago 0 replies      
I was going to recommend Malkiel's book but of course it has been already mentioned several times. So I'll add to the list Zweig's "The Devil's Financial Dictionary" (funny but also educational) and Sharpe's "Investors and Markets" (more academic).
karanbhangui 1 day ago 0 replies      
Haven't seen this one posted yet: http://www.mcafee.cc/Introecon/

I prefer the 2007 version, it's more mathy.

waleedsaud 1 day ago 0 replies      
Economics in One Lesson by Henry Hazlitt - https://mises.org/library/economics-one-lesson
sonabinu 1 day ago 0 replies      
Adam Smith - Wealth of Nations

Keynes - The General Theory of Employment, Interest and Money

Ben Bernanke - Essays on the Great Depression

Robert Shiller - Irrational Exuberance

Levitt and Dubner - Freakonomics: A Rogue Economist Explores the Hidden Side of Everything

Daniel Kahneman - Thinking, fast and slow

p4wnc6 2 days ago 0 replies      
In addition to the many fine recommendations already on the thread, I enjoyed Winner's Curse and Irrational Exuberance.
SRasch 1 day ago 0 replies      
Capitalism and freedom by Milton Friedman
bronlund 1 day ago 0 replies      
tezza 1 day ago 0 replies      
When I entered Financial Services in London I was recommended this book as the bible:

"How to Read the Financial Pages"

This book really breaks down the finance industry from a component and historical point of view: stocks, dividends, bonds, T-Bills, Eurobonds.

saganus 2 days ago 0 replies      
Not sure how popular this take on finance is, here in HN (really I have no idea), but I found these two very interesting.

"The New Depression: The Breakdown of the Paper Money Economy"

and "The Dollar Crisis: Causes, Consequences, Cures" both by Richard Duncan

bgilroy26 1 day ago 0 replies      
Capital Ideas: The Improbable Rise of Modern Finance takes a historical approach to the development of finance.

It was striking to me how recent many developments are!

dilemma 1 day ago 0 replies      
The Ownership of Enterprise talks about different types of organizational forms (corporations, cooperatives, etc.) and how the form affects its function, and vice versa.
rubyn00bie 1 day ago 0 replies      
Preface: For a bit of I suppose... uhh, qualification, I took nearly every single upper division Economics class my university offered (~25). I did so because I LOVE Econ. Also, sorry for the rambling nature of this.

First things first, finance is only sort of economics; it's really just finance. I'd highly recommend taking an accounting class (or book) and grabbing an intro finance book. Accounting will really help with jargon and some really basic things (like balance sheets). Also, "Security Analysis" [0] is the "only" book you'll ever need; Warren Buffett recommended it to Bill Gates, and now Bill Gates recommends it to everyone.

Back to Economics... There are two primary "groups" of thought... sort of like twins separated at birth who grow to hate each other.

----------------------------------
The First: Neoclassical Economics
----------------------------------

Focuses primarily on microeconomics and is largely mathematical. Its birth is largely due to economists wanting to make econ a "true science" like the physical sciences (biology, chemistry, physics). It starts around the late 1800s and really picks up steam around the time of Einstein. Math was hot and being applied everywhere.

A really interesting period to research and study is right after Black Tuesday (and before the Great Depression) and what the central bank didn't do (before central bank intervention in markets). While I really detest the bastard, Milton Friedman's work on monetary policy is pretty solid science and generally good here. [1],[2]

I'm a Keynesian (I suppose - econ gets deep fast), so you'd be nowhere without reading some of what Keynes did to get our asses out of the Great Depression (i.e. government spending). It's also more or less the birth of macroeconomics... You'll know you're good when you laugh at forgetting: Y = C + I + G + (X - M). Some good things to get started with are the IS-LM [3] model and the AS-AD [4] model.
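That identity is just expenditure-side accounting; a toy calculation (numbers hypothetical, in billions):

```python
# Expenditure-side GDP: Y = C + I + G + (X - M)
# C: consumption, I: investment, G: government spending,
# X: exports, M: imports.
def gdp(c, i, g, x, m):
    return c + i + g + (x - m)

# A trade deficit (M > X) subtracts from measured output.
print(gdp(c=700, i=200, g=150, x=100, m=120))  # 1030
```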

That gets you into the 60s-70s. Tall Paul Volcker is the unsung hero of the 80s; read about him (he ran the Federal Reserve). After that, microeconomics starts to fragment into things involving game theory and behavioral economics (Daniel Kahneman is the man).

Econometric analysis, mathematically speaking, is just multivariate regression analysis for time series or cross-sectional data. More "modern" analysis probably uses panel data [5] (a combination of cross-sectional and time series). Calculus, linear algebra, and differential equations should prepare you plenty for everything but panel data analysis. The real "econ" part is applying solid econ theory to the mathematics you're using; a textbook will help [6]. For finance, this is your bread and butter.
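As a sketch of what "multivariate regression" means in practice, here's ordinary least squares on noiseless toy data (my own illustration, not from any of the cited texts; numpy assumed available):

```python
import numpy as np

# Ordinary least squares: estimate beta in y = X @ beta + error.
def ols(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    X = np.column_stack([np.ones(len(x)), x])  # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Noiseless data generated from y = 1 + 2x, so OLS recovers [1, 2]
x = np.arange(10, dtype=float)
print(ols(x, 1 + 2 * x))
```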

Game theory will apply a lot of different mathematical tools. You will need to love pure math. To really get into it requires pain or love. I like a healthy amount of both.

----------------------------------
The Second: Heterodox Economics
----------------------------------

So as it turns out, neoclassical economics is at most half of economics. It's really where the "philosophy" comes into play. You're gonna need a quick history lesson to sort of see its subject matter. Economics really didn't exist before... the 1500s. You can try to apply economics to earlier times, but you could also just make shit up and post it to twitter. Both would be equally likely to contain truth.

Economics came into existence around the time the Dutch began developing trade routes (1550s). A byproduct of all this trade is tons of cash and goods - currency (silver, metals, whatever) starts to actually be used in society (before that it was mostly just a status symbol). It pisses off a lot of _institutions_, most of all "the church" and monarchies, because money is allowing people to gain power. It's usurping power from them. This is the rise of the "merchant class," and now, thanks to money (trade really, but whatever, it's complicated), people are liberating themselves from the social status they're born into. Eventually modern republics appear, and governments form. Nations trading globally becomes more common (Dutch, English, Spanish) and we get to Adam Smith, David Ricardo [7], et al.

Now it's the 1800s. People are seeing the birth and growth of capitalism, industry, corporations, and the tumultuous death of agrarian life. The way the "common person" lives their day is dramatically changing; for a few it was better, for most it was worse. Some economists begin to ask why we are replacing these now-defunct _institutions_ with equally shitty, or possibly shittier, ones. This more or less becomes the birth of heterodox economics, which largely studies more abstract ideas like "institutions"; by its very nature the content tends to be philosophical.

By the 1920s heterodox economics is falling by the wayside. The content is less able to be tested like a physical science (i.e. no math/stats), so it's treated like a misbegotten child... By the 1950s heterodox content was marginal at best; the Cold War and fear of communism made (makes) people insane. Economists pretty much had to be pro-capitalism or face being called "commies" and thrown in jail, or worse, becoming a narc in a witch hunt. This was more or less the nail in the coffin for mainstream heterodox economics (at least for research in the Occident). After the Cold War ended the nail got pulled out, but I wouldn't say it's really outta the coffin yet.

This book [8] isn't great but it's quickly digestible and will point you in the appropriate directions.


Some Rambling to Finish

I'd highly recommend learning not just how to use the tools, but why we have them and where they came from. Economics is vastly deeper than the average person will ever know. That depth is greatly empowering and guiding when using its lenses to see and solve problems. One last thing: know there's no going back; you will see the world differently.

[0] https://www.amazon.com/Security-Analysis-Foreword-Buffett-Ed...

[1] "The Role of Monetary Policy." American Economic Review, Vol. 58, No. 1 (Mar., 1968), pp. 117 JSTOR presidential address to American Economics Association

[2] "Inflation and Unemployment: Nobel lecture", 1977, Journal of Political Economy. Vol. 85, pp. 45172. JSTOR

[3] https://en.wikipedia.org/wiki/IS%E2%80%93LM_model

[4] https://en.wikipedia.org/wiki/AD%E2%80%93AS_model

[5] The course I took on panel data, http://web.pdx.edu/%7Ecrkl/ec510/ec510-PD.htm

[6] https://www.amazon.com/Using-Econometrics-Practical-Addison-...

[7] He more or less invented trade theory (comparative advantage) https://en.wikipedia.org/wiki/David_Ricardo

[8] https://www.amazon.com/Age-Economist-9th-Daniel-Fusfeld/dp/0...

Edit: for formatting.

hendzen 1 day ago 0 replies      
If you want to learn about quantitiative trading:

1) Active Portfolio Management: A Quantitative Approach for Producing Superior Returns and Controlling Risk

2) Quantitative Equity Portfolio Management: Modern Techniques and Applications

TheSpiceIsLife 1 day ago 0 replies      
Bourgeois Dignity: Why Economics Can't Explain the Modern World by Deirdre McCloskey, and presumably the other books in the series[1]

1. Bourgeois Dignity: Why Economics Can't Explain the Modern World

damptowel 1 day ago 0 replies      
Debunking Economics by Steve Keen. Though be warned, you might not quite appreciate economic textbooks afterwards.
mitchelldeacon9 2 days ago 3 replies      
Here is my short list of favorite books on finance and economics:


Bruck, Connie (1988) Predator's Ball: Inside Story of Drexel Burnham and Rise of Junk Bond Raiders

Draper, William (2011) Startup Game

Graham, Benjamin and Jason Zweig (2006) Intelligent Investor, revised ed.

_________ and David Dodd (2008) Security Analysis, 6E

Greenblatt, Joel (1999) You Can Be a Stock Market Genius

Greenwald, Kahn, Sonkin, Biema (2001) Value Investing: From Graham to Buffett and Beyond

Henwood, Doug (1997) Wall Street: How It Works and for Whom

Levitt, Arthur (2003) Take on the Street: How to Fight for Your Financial Future

Lewis, Michael (1989) Liar's Poker: Rising Through the Wreckage on Wall Street

_________ (2010) Big Short: Inside the Doomsday Machine


Ayres, Ian (2007) Super Crunchers: Why Thinking by Numbers is the New Way to Be Smart

Bernstein, Peter (1996) Against the Gods: Remarkable Story of Risk

Kahneman, Daniel (2011) Thinking, Fast and Slow

Silver, Nate (2012) Signal and the Noise: Why So Many Predictions Fail, but Some Don't

Taleb, Nassim Nicholas (2005) Fooled by Randomness, 2E

_________ (2010) Black Swan: Impact of the Highly Improbable, 2E


Christensen, Clayton (1997) Innovator's Dilemma

Stone, Brad (2013) Everything Store: Jeff Bezos and the Age of Amazon

Wallace, James and Jim Erickson (1992) Hard Drive: Bill Gates and Making of the Microsoft Empire

Walton, Sam with John Huey (1992) Sam Walton: Made in America

Wilson, Mike (1996) Difference between God and Larry Ellison: Inside Oracle Corp


Arrighi, Giovanni (1994) Long Twentieth Century

Braudel, Fernand (1979) Civilization & Capitalism 15th-18th Century, vol. 3: Perspective of the World, trans. Siân Reynolds

Brechin, Gray (2006) Imperial San Francisco: Urban Power, Earthly Ruin

Heilbroner, Robert (1999) Worldly Philosophers: Lives, Times & Ideas of Great Economic Thinkers, 7E

Marx, Karl (1867) Capital, vol. 1

Stiglitz, Joseph (2003) Roaring Nineties: A New History of the World's Most Prosperous Decade

_________ (2010) Freefall: America, Free Markets and the Sinking of the World Economy

Vallianatos, E.G. (2014) Poison Spring: Secret History of Pollution and EPA

Vilar, Pierre (1976) A History of Gold and Money: 1450-1920

Yergin, Daniel (1992) Prize: Epic Quest for Money, Oil and Power

Would enjoy email correspondence with anyone interested in these subjects: mitchelldeacon9@gmail.com

All the best

kejaed 2 days ago 1 reply      
I'm curious, what exactly is an engineering degree in statistics?
anacleto 1 day ago 0 replies      
+1 Mostly Harmless Econometrics - Angrist and Krueger
fatdog 2 days ago 0 replies      
Mark Joshi's "The Concepts and Practice of Mathematical Finance" came recommended to me by some people in the field as a foundation. I found it quite readable.
itscharlieb 2 days ago 0 replies      
The Alchemists: Three Central Bankers and a World on Fire - Neil Irwin

Great account of central banking in general, and of central banking policy during the '08-'09 crisis in particular!

brudgers 2 days ago 0 replies      

Wealth of Nations

zallarak 2 days ago 0 replies      
Books about economics- read Keynes and Friedman.

Economic history- lords of finance, too big to fail.

Most quant finance books are low quality and I'd suggest avoiding them.

ob 2 days ago 1 reply      
I still think one of the best textbooks on economics is Paul Samuelson's Economics. Assuming you mean macro-economics that is.
kesor 1 day ago 0 replies      
Eliyahu Goldratt books, especially ones that include his explanation about Throughput Accounting.
JustUhThought 1 day ago 0 replies      
I suggest building reading lists based on a list of Nobel Prize winners for the subject.
trader 1 day ago 0 replies      
Read 10-Ks and 10-Qs and build operating models in excel.
mkempe 1 day ago 0 replies      
For a thorough understanding of free markets and the laws of economics, Capitalism: A Treatise on Economics by George Reisman. Economic Sophisms by Frédéric Bastiat. Socialism by Ludwig von Mises.
haney 2 days ago 0 replies      
I'd highly recommend The Intelligent Investor.
aminorex 1 day ago 0 replies      
Kuznetsov, A. - The Complete Guide to Capital Markets for the Quantitative Professional (2006)
dmfdmf 1 day ago 0 replies      
One of the best articles on Economics is Ayn Rand's "Egalitarianism and Inflation" in her anthology "Philosophy: Who Needs it".
astazangasta 1 day ago 0 replies      
I will resist the urge to tell you what NOT to read and merely recommend a few favorites:

1. I am a big fan of John Kenneth Galbraith, who writes very clearly about a few things. I recommend both "The New Industrial State" and especially "The Affluent Society", where he argues that economics is insufficient to deal with post-scarcity.

2. Deirdre McCloskey's "If You're So Smart" is a great skewering of the blinkered nature of economic inquiry. Much of what is wrong with economics is what is wrong with scientific inquiry generally (being stuck in a formalism, confusing their models with reality); this is an excellent criticism.

3. Anything by Ha-Joon Chang. He writes intelligently about development and globalization; he is unorthodox in his economic practice, and his arguments are simple and drawn from history. There are a lot of "My god, it's full of stars!" moments in his work.

4. Still looking...

colinmegill 1 day ago 0 replies      
Econned is essential reading
logfromblammo 1 day ago 0 replies      
Try the Society of Actuaries / Casualty Actuarial Society study resources [0] for exams P (probability) [1], FM (financial mathematics) [2], MFE (models for financial economics) [3] or S (statistics and probabilistic models) [4]. Look at the PDF syllabus documents, and there will be a section on "suggested texts".

Looking up the suggested texts for previous test years (or for obsolete tests) may also reveal texts that may be cheaper now or available as used copies.

You could probably get something like Price Theory and Applications (Landsburg) or Principles of Corporate Finance (Brealey, Myers, Allen) for cheap.

[0] http://beanactuary.org/exams/preliminary/?fa=preliminary-com...

[1] https://www.soa.org/education/exam-req/edu-exam-p-detail.asp...

[2] https://www.soa.org/education/exam-req/edu-exam-fm-detail.as...

[3] https://www.soa.org/education/exam-req/edu-exam-mfe-detail.a...

[4] https://www.casact.org/admissions/syllabus/index.cfm?fa=Ssyl...

kingmanaz 1 day ago 0 replies      
"Fail Safe Investing" by Harry Browne and "The Intelligent Investor" by Benjamin Graham.

Discussion of the former here:


tiatia 1 day ago 1 reply      
Niederhoffer did his PhD in statistics. He is nuts, but he basically invented quantitative trading. Maybe read his book "The Education of a Speculator" and the New Yorker article about him ("The Blow-Up Artist").
known 1 day ago 0 replies      
Financial Intelligence for Entrepreneurs - Karen Berman & Joe Knight

Simple Numbers, Straight Talk, Big Profits - Greg Crabtree

The 1% Windfall - Rafi Mohammed

Accounting Made Simple - Mike Piper

How to Read a Financial Report - John A. Tracy

Venture Deals - Brad Feld & Jason Mendelson

And http://www.bloomberg.com/news/features/2016-05-30/the-untold...

How to build a robot that sees with $100 and TensorFlow oreilly.com
359 points by nogaleviner  2 days ago   61 comments top 10
bernardopires 2 days ago 3 replies      
Just a nit, but the author keeps talking about object recognition while what he was actually doing is image classification. Object recognition actually consists of two tasks: one is classifying the object (this is a beer bottle) and the other is localization, i.e. saying where in the image the object is. Additionally, it can/should detect multiple objects in the image. This is more complex than classification, which only associates one category with the image.
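To make the distinction concrete, here's a minimal sketch; the labels and box coordinates are made up for illustration, not taken from the article:

```python
# Toy illustration of classification vs. detection (all values hypothetical).

def classify(image):
    """Image classification: one label for the whole image."""
    return "beer_bottle"

def detect(image):
    """Object detection: every object found, each with a bounding box (x, y, w, h)."""
    return [
        {"label": "beer_bottle", "box": (40, 10, 32, 96)},
        {"label": "glass", "box": (120, 30, 28, 80)},
    ]

print(classify(None))     # one category for the whole image
print(len(detect(None)))  # multiple objects, each localized
```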
rbanffy 2 days ago 3 replies      
> recognizing arbitrary objects within a larger image has been the Holy Grail of artificial intelligence

The Holy Grail is general AI. Recognizing objects is a side quest, perhaps a required step, but, by no means, the end goal.

icemelt8 2 days ago 1 reply      
This was amazing, I am amazed at your command of both hardware and software technology. Even as a Software Engineer, I have a hard time trying to make TensorFlow do something for me.
urvader 2 days ago 1 reply      
I would like to know how long it "thinks"; it is clear the camera is paused for a while while the robot parses the image.
salex89 2 days ago 3 replies      
My biggest current question is which keyboard is this, on the image in the article?!


visarga 2 days ago 1 reply      
Great project. Locomotion and vision are pretty advanced compared to grasping and complex handling of objects. If we could have a workable arm, it would be much more interesting in applications.
nojvek 1 day ago 0 replies      
Oh my god. You are trying to build the exact thing I am trying to build. Albeit you've made much more progress.

I'm still soldering wires into the motors. You should take off the paper from acrylic. The transparent effect makes it look awesome.

My goal is to make a raspberry pi bot that plays indoor fetch. I would love to have a chat with you.

dharma1 2 days ago 1 reply      
Did the author publish a repo for this? It's easy getting TensorFlow going for basic image classification, but the hard part is actually making the robot move in a way that makes sense: using the camera and the sonar data to make decisions and then driving the motors. Or is this not autonomous?
criddell 2 days ago 1 reply      
This reminds me of a low res vision system I read about 20 years ago:

I've always been kind of intrigued by what is possible with very simple hardware.

forgotAgain 2 days ago 0 replies      
Sorry for the off topic, but is anyone else getting very high CPU usage from O'Reilly websites? Any known resolution or workaround?

With Chrome developer tools I see one error: "Uncaught SecurityError: Failed to read the 'localStorage' property from 'Window': Access is denied for this document."

Original bulletin board thread in which :-) was proposed cmu.edu
345 points by ZeljkoS  1 day ago   145 comments top 31
kelvich 1 day ago 1 reply      
Nabokov's interview. The New York Times [1969]

-- How do you rank yourself among writers (living) and of the immediate past?

-- I often think there should exist a special typographical sign for a smile -- some sort of concave mark, a supine round bracket, which I would now like to trace in reply to your question.

jgw 1 day ago 5 replies      
It makes me a bit of a luddite (and a heck of a curmudgeon), but it always makes me a little sad when good ol' ASCII smileys are rendered all fancy-like. There's something charming and hackerish about showing it as a 7-bit glyph.

I think the Internet fundamentally changed when that happened.

Tangentially-related, I can't fathom why someone would post YouTube videos of `telnet towel.blinkenlights.nl`.

benbreen 1 day ago 1 reply      
Apropos is this debate about whether an intentional :) shows up in a 1648 poem:


Here's the verse:

Tumble me down, and I will sit

Upon my ruines (smiling yet :)

I think that the article does a fairly convincing job of showing that this is just weird 17th century typography, but then again, there was enough experimentation with printing at the time that it also wouldn't surprise me if it was intentional, at least at some point in the typesetting process.

artbikes 1 day ago 1 reply      
Like most of the cultural inventions of virtual communities there was prior art on PLATO.


kjhughes 1 day ago 2 replies      
I vividly remember having the following conversation with a fellow CMU undergrad around this time:

Me: What's with all the :-) in the posts?

Friend: It indicates joking.

Me: Why?

Friend: What's it look like?

Me: A pinball plunger.

Friend: Rotate 90 degrees.

Me: Ohhhhhh.


ZeljkoS 1 day ago 6 replies      
Interesting thing to note is that before Fahlman suggested ":-)" symbol, Leonard Hamey suggested "{#}" (see 17-Sep-82 17:42 post). After that, someone suggested "\__/" (see 20-Sep-82 17:56 post). But only ":-)" gained popularity.

It is funny to imagine how emoticons (https://en.wikipedia.org/wiki/List_of_emoticons) would look today if one of the alternative symbols had been accepted.

milesf 1 day ago 0 replies      
Ah bulletin boards :)

For years I have been searching for a copy of Blue Board (https://en.wikipedia.org/wiki/Blue_Board_(software)), a popular BBS program in the Vancouver, BC, Canada area written by the late Martin Sikes http://www.penmachine.com/martinsikes/

I even talked with the owner of Sota Software, the publisher, but I never heard anything back.

If anyone has a copy, PLEASE let me know! I've been wanting to setup a memorial telnet Blue Board site for decades now.

hvass 1 day ago 0 replies      
This is gold:

"Since Scott's original proposal, many further symbols have been proposed here:

(:-) for messages dealing with bicycle helmets

@= for messages dealing with nuclear war"

minivan 1 day ago 6 replies      
"o>-<|= for messages of interest to women"

I'm glad we are past that.

p333347 1 day ago 1 reply      
I see one Guy Steele in that thread. Is he the Guy Steele? Glancing wikipedia suggests he was asst prof at CMU around that time. Just curious.
wmccullough 1 day ago 0 replies      
I love how different the conversations were on the internet then.

Nowadays, if a thread came about to propose the ':-)', people would devolve into a debate about the proper use of the parenthesis, and at least one user would claim that '(-:' was a better choice, though it is the dark horse option for the community.

emmet 1 day ago 1 reply      
| I have a picture of ET holding a chainsaw in .press file format. The file exists in /usr/wah/public/etchainsaw.press on the IUS.


xyzzy4 1 day ago 2 replies      
I'm sure :-) has been independently invented a million times.
chiph 1 day ago 0 replies      
Interesting that there are both left-handed and right-handed smileys in the thread. :-) (-:
yitchelle 1 day ago 2 replies      
Interestingly, before I read this post and the comments, I had always thought that :-) means a smiling face, i.e. conveying a sense of a smile after writing a message. Not an "I am joking" message.

Well, I learned something today.

soneca 1 day ago 1 reply      
And the proposal to have a separate channel for jokes is as old as the smiley. There is always that guy.

Has anyone thought about creating a separate HN for jokes?

danvoell 1 day ago 1 reply      
I wonder at what point the nose was removed :)
backtoyoujim 1 day ago 0 replies      
I wonder how many times the initial turn head, grok, smile -- mirroring back to the pareidolia itself, has happened.
Imagenuity 1 day ago 0 replies      
Monday Sept 19th would've been the 34th "smilaversary".
dugluak 1 day ago 1 reply      
love birds

 (@>   <@)
 ( _) (_ )
  /\   /\

_audakel 1 day ago 0 replies      
"Read it sideways." Hahaha, love this!
f_allwein 1 day ago 1 reply      
19-Sep-82 11:44, Scott E Fahlman invents the ':-)'.

Nice. :-)

hammock 1 day ago 1 reply      
Reading these BBS archives always makes me think how much nerdier computer people were back then than they are now. Or am I off base?
pcunite 1 day ago 0 replies      

I see you

david-given 1 day ago 0 replies      
I... now find myself morbidly curious as to whether you could use Unicode diacritic abuse to draw actual pictures.

Pasted in example stolen from Glitchr, mainly to see how well HN renders them:

- ...
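For what it's worth, combining marks do stack, which is the mechanism Glitchr-style art exploits. A small stdlib-only sketch of the idea:

```python
# Stacking combining diacritics on one base character ("Zalgo"-style text).
import unicodedata

base = "o"
marks = "\u0302\u0303\u0304\u0306"  # circumflex, tilde, macron, breve
stacked = base + marks

print(stacked)       # typically renders as a single 'o' with marks piled on it
print(len(stacked))  # 5 code points, one visual cluster (font-dependent)

# Normalization can compose some base+mark pairs into a single code point:
print(len(unicodedata.normalize("NFC", "e\u0301")))  # 1 (e + acute -> é)
```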

anjc 1 day ago 0 replies      
Wow that's interesting


equivocates 1 day ago 0 replies      
guessmyname 1 day ago 0 replies      
Here is a list of popular emoticons: https://textfac.es/
chalana 1 day ago 1 reply      
Usenet archives are also a treasure trove for this kind of thing. Searching old posts on Usenet feels like modern-day archaeology.
artursapek 1 day ago 0 replies      
This is creepy. I just opened a PR on GitHub and set the description to ":-)". Then I opened HN and saw this.
How Norway spends its $882B global fund economist.com
281 points by punnerud  3 days ago   157 comments top 11
kristofferR 3 days ago 5 replies      
"It is run frugally and transparently" is a dubious claim, at least according to claims made on NRK's Folkeopplysningen (a show like Penn and Teller: Bullshit, just better).

The fund spends a lot on being actively managed, one manager received ~$60 million in bonuses in 2010. However, they won't reply when people ask if bonuses are actually financially beneficial.

https://tv.nrk.no/serie/folkeopplysningen/KMTE50009215/seson... @ 28:30

cs702 3 days ago 2 replies      
A little over decade ago, when Norway's fund was called "the Petroleum Fund" and had "only" $147B, an article in Slate magazine explained what was special about it:

"Norway has pursued a classically Scandinavian solution. It has viewed oil revenues as a temporary, collectively owned windfall that, instead of spurring consumption today, can be used to insulate the country from the storms of the global economy and provide a thick, goose-down cushion for the distant day when the oil wells run dry."[1]

Since then, the fund has grown six-fold.

[1] http://www.slate.com/articles/business/moneybox/2004/10/avoi...

atheg33 2 days ago 1 reply      
As a Canadian I feel so cheated learning about Norway's Oil Fund.

Our government has hardly saved a dime of our oil income.

We have been taking a small cut of the hundreds of thousands of barrels of oil we have been producing daily for the past 100+ years and spending it as fast as we possibly can.

>Most of the oil companies exploring for oil in Alberta were of U.S. origin, and at its peak in 1973, over 78 per cent of Canadian oil and gas production was under foreign ownership and over 90 per cent of oil and gas production companies were under foreign control, mostly American. [0]

[0] https://en.wikipedia.org/wiki/Petroleum_production_in_Canada...

harryh 3 days ago 3 replies      
882 B / 5.2 Million ~= $170k for every citizen of Norway.

At 4% a year that's $6,800 each in annual income. Not bad!
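The back-of-the-envelope numbers check out:

```python
fund = 882e9        # fund size in USD
population = 5.2e6  # approximate population of Norway

per_citizen = fund / population
annual_income = per_citizen * 0.04  # assuming a 4% yearly return

print(round(per_citizen))    # 169615 -> roughly $170k per citizen
print(round(annual_income))  # 6785   -> roughly $6,800 per year
```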

netcan 3 days ago 4 replies      
Norway's oil money story is one of the weirdest. Are there any examples in history where a country has saved up such a big stash? Are they planning to retire young, as a nation?
Lythimus 3 days ago 1 reply      
Is there an index or ETF which follows this pension fund's investments?
terda12 3 days ago 7 replies      
Visiting Norway, I always thought it is kind of a weird country. On one hand it's one of the richest countries in the world. On the other hand, I've seen so many young Norwegian women working hard cleaning toilets and hotel rooms. Such jobs would be considered "low rung" in the US, but in Norway they treat their low-rung jobs as something to be proud of.
shardinator 2 days ago 0 replies      
Related to a lot of the discussion comments, I highly recommend "A Random Walk Down Wall Street" by Burton Malkiel. https://www.amazon.com/Random-Walk-Down-Wall-Street/dp/03933...
rer 3 days ago 2 replies      
In the "Top of the World" graph in the article, there's a dip for Saudi Arabia. Does anyone know why?
rogaha 2 days ago 0 replies      
I see lots of comments talking about the return on investment (~4% YoY) and the ~$60M in bonuses, etc. But I don't see anyone questioning why there is so much money invested in other companies outside Norway.

I'm curious to know: 1) Why do we have a savings fund with double the annual GDP? Should we have a limit? Why is the excess not invested locally? 2) Is there an existing plan to define when the money will be directed into the Norwegian economy? The current GDP per capita is around $68K, which doesn't seem that much compared to the amount of money in the country's savings account. Why not invest in education and/or technology? 3) Why are a few people earning so much money (e.g. ~$60M bonus) to manage the country's assets? Is the real purpose to make money or to save the money for future generations?

Super Mario 64 1996 Developer Interviews shmuplations.com
332 points by Impossible  2 days ago   96 comments top 14
dmbaggett 2 days ago 8 replies      
>The N64 hardware has something called a Z-Buffer, and thanks to that, we were able to design the terrain and visuals however we wanted.

This was a huge advantage for them. In contrast, for Crash Bandicoot -- which came out for the PS1 at the same time -- we had to use over an hour of pre-computation distributed across a dozen SGI workstations for each level to get a high poly count on hardware lacking a Z-buffer.

A Z-buffer is critical, because sorting polygons is O(n^2), not O(n lg n). This is because cyclic overlap breaks the transitive property required for an O(n lg n) sorting algorithm.

The PS2 got Sony to parity; at that point both Nintendo and Sony had shipped hardware with Z-buffers.
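A toy sketch of why a Z-buffer sidesteps the sorting problem entirely: depth is resolved per pixel at draw time, so polygons can be submitted in any order. The resolution and the "fragments" below are made up for illustration:

```python
# Minimal z-buffer: per-pixel depth test, no polygon sorting required.
W, H = 4, 4
depth = [[float("inf")] * W for _ in range(H)]  # nearest depth seen per pixel
frame = [[None] * W for _ in range(H)]          # color per pixel

def draw(fragments):
    """Each fragment is (x, y, z, color); smaller z means closer to the camera."""
    for x, y, z, color in fragments:
        if z < depth[y][x]:  # closer than what's already there? overwrite it
            depth[y][x] = z
            frame[y][x] = color

# Two overlapping "polygons", submitted in arbitrary order:
draw([(1, 1, 5.0, "red"), (2, 1, 2.0, "red")])
draw([(1, 1, 3.0, "blue"), (2, 1, 9.0, "blue")])

print(frame[1][1], frame[1][2])  # blue red -- nearest fragment wins per pixel
```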

intsunny 2 days ago 6 replies      
I'll always remember the first time I saw Super Mario 64 in front of my very eyes in ToysRUS. It was as if every other 3D game in history suddenly didn't matter anymore. Here was the future of 3D gaming. Here was a game with unbelievably fluid controls in really large levels clearly designed to be explored.

Unlike most previous Mario games, there was no timer either. This only further encouraged players to really explore the 3D environment, collect the side-quest coins, and not be stressed out.

corysama 2 days ago 2 replies      
Shameless plug: We collect this kind of material over in https://www.reddit.com/r/TheMakingOfGames/

There's also https://www.reddit.com/r/VideoGameScience for more technical material.

BTW: The linked site has a whole lot more articles like this one http://shmuplations.com/games/

johnm1019 1 day ago 0 replies      
The opening quote totally blows my mind as a humble non-game dev. I never thought of it this way.

> Miyamoto: Ever since Donkey Kong, it's been our thinking that for a game to sell, it has to excite the people who are watching the player -- it has to make you want to say, hey, gimme the controller next! ...

The simple approach is to say, "to make a great game, it should be fun for the person playing it." But they've already taken a step back and approached it from the perspective that great gaming happens socially. Maybe this is one reason I cherished all the Nintendo games as much as I did. It's because the memories of playing them are always with other people and we're all having fun. It wasn't a solo act.

baconomatic 2 days ago 3 replies      
If you're at all interested in speedruns, this is a great video that takes it to the extreme for Super Mario 64: https://www.youtube.com/watch?v=kpk2tdsPh0A
mr_pink 2 days ago 3 replies      
Holy crap, Super Mario 64 is 20 years old now. I've never felt my age more than I do right now...
TazeTSchnitzel 2 days ago 1 reply      
> The way Mario's face moves is really great too. Like in the opening scene.

> Miyamoto: That actually came from a prototype for Mario Paint 3D (that we're still going to release).

I wonder, was Miyamoto referring to Mario Artist?

wodenokoto 2 days ago 0 replies      
I really loved this game and have always been sad that they didn't do a proper sequel.

Reading this interview now, it sounds like they had plenty of ideas for new stuff.

russellbeattie 2 days ago 0 replies      
I should probably make an effort to finish that game someday. Not that I've finished many Super Mario games. I think I've purchased every one, but have completed maybe two of them. So many levels incomplete... I wonder if game devs feel bad working on higher levels, knowing only a tiny portion of players will actually ever see them?
Insanity 1 day ago 0 replies      
I have fond memories of this game, and a lot of what they said in the interview about what gamers enjoyed rang true for me. The movement of Mario did feel great, and I had a lot of fun exploring the environment, jumping in the water to swim, or seeing how Mario's movement was different in different environments. (I did notice his centre of gravity as well, and it seems like a great fit.)

It is great to read that they actually had players like me in mind when they created the game. This article actually makes me want to dig up the game and play it through again.

spdustin 2 days ago 1 reply      
Awesome interview.

The other comments reminded me of this fan-made video of an Unreal Engine-powered Super Mario 64. It's stunning.


Note: keep playing past 0:50 - it's not just the non-Mario environment.

ravenstine 1 day ago 0 replies      
I would love to see something similar for Goldeneye/Perfect Dark. I've been slowly but surely working on building a demo FPS engine using a very minimalist implementation to learn about game dynamics, and I'd love to hear what sort of technical challenges were faced at Rare and how they developed their (albeit simplistic) enemy AIs with pathfinding.
pcunite 2 days ago 1 reply      
I think the first time I played this game was with the Nemu64 emulator using a good computer and LCD monitor. The monitor alone made for a better experience than the scaly TV sets typical of the day. Also, being able to pause, save, and replay an area was nice.
racl101 2 days ago 0 replies      
Really cool.

Super Mario 64 was such an amazing game back in the day. It totally changed my life.

House Passes Employee Stock Options Bill Aimed at Startups morningconsult.com
337 points by endswapper  1 day ago   222 comments top 25
grellas 1 day ago 8 replies      
The original point of ISOs was to offer to employees the opportunity to take an economic risk with stock options (by exercising and paying for the stock at the bargain price) while avoiding the tax risk (by generally not recognizing ordinary income from that exercise and being taxed only at the time the stock was sold, and then only as a capital gains tax).

AMT has since emerged to devour the value of this benefit. By having to include the value of the spread (difference between exercise price and fair market value of the stock on date of exercise) as AMT income and pay tax on it at 28%-type rates, an employee can incur great tax risk in exercising options - especially for a venture that is in advanced rounds of funding but for which there is still no public market for trading of the shares. Even secondary markets for closely held stock are much restricted given the restrictions on transfer routinely written into the stock option documentation these days.

So why not just pass a law saying that the value of the spread is exempt from AMT? Of course, that would do exactly what is needed.

The problem is that AMT, which began in the late 60s as a "millionaire's tax", has since grown to be an integral part of how the federal government finances its affairs and is thus, in its perverse sort of way, a sacred cow untouchable without seriously disturbing the current political balance that is extant today.

And so this half-measure that helps a bit, not by eliminating the tax risk but only by deferring it and also for only some but not all potentially affected employees.

So, if you incur a several hundred thousand dollar tax hit because you choose to exercise your options under this measure, and then your venture goes bust for some reason, it appears you still will have to pay the tax down the road - thus, tax disasters are still possible with this measure. Of course, in optimum cases (and likely even in most cases), employees can benefit from this measure because they don't have to pay tax up front but only after enough time lapses by which they can realize the economic value of the stock.

This "tax breather" is a positive step and will make this helpful for a great many people. Not a complete answer but perhaps the best the politicians can do in today's political climate. It would be good if it passes.

Edit: text of the bill is here: https://www.congress.gov/bill/114th-congress/house-bill/5719... (Note: it is a deferral only - if the value evaporates, you still owe the tax).

matt_wulfeck 1 day ago 8 replies      
> Only startups offering stock options to at least 80 percent of their workforce would be eligible for tax deferrals, and a companys highest-paid executives would not be able to defer taxes on their stock under the legislation.

I understand the desire to avoid a regressive taxation system, but why is it that every tax rule we create comes with 2x the amount of caveats and rules? Our tax system is becoming a mess.

At this rate soon nobody will be able to file their own taxes without an accountant to sort through the muck. And complicated systems tend to benefit the wealthy.

djrogers 1 day ago 1 reply      
This is good news, but it may not go anywhere -

"the Administration strongly opposes H.R. 5719 because it would increase the Federal deficit by $1 billion over the next ten years." [1]

So a really bad tax rule is in place, but since it happens to bring in ~$100M/yr, we shouldn't fix the rule?


calcsam 1 day ago 4 replies      
This is amazing news. Some context:

It's quite common to owe taxes today for gains on the value of your stock -- which is an illiquid asset you can't sell. This puts employees in the position of shelling out cash to keep something that rightfully belongs to them, or simply abandoning it (failing to exercise) when they leave the company. This bill would defer taxes on gains up to 7 years, or until the company goes public.

If you are awarded stock options, and you exercise them, you have to file an 83(b) election within 90 days or else you are liable on all paper gains in the value of your stock.

Even if you file an 83(b) election, you are still liable for paper gains between the value of your options when you were granted them and the value when you exercised.

For example, if you were awarded options with a strike price of $5 and the company raised a new round of funding and the 409A valuation (& strike price of the new options) has risen to $15 per share, the IRS considers that you now owe taxes on $10 of income / share. In other words, it costs you not $5 / share to exercise but ~$8.50 including taxes.
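A rough version of this per-share calculation (a sketch only; the 35% combined tax rate is the assumption from the example above, and actual treatment depends on option type and AMT):

```python
def cost_to_exercise_per_share(strike, fmv_409a, tax_rate=0.35):
    """Per-share cash cost to exercise an option: the strike price
    plus tax on the paper gain (current 409A fair market value
    minus strike)."""
    paper_gain = max(fmv_409a - strike, 0)
    return strike + paper_gain * tax_rate

# $5 strike, $15 current 409A valuation: roughly $8.50/share all-in
cost = cost_to_exercise_per_share(5, 15)
```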

So the tricky part about options is that they require money to exercise, money that you often don't have ready, in order to obtain an asset that (a) is not liquid, (b) may decline in value, and (c) you often can't sell due to transfer restrictions.

For example: one early engineer at Zenefits had to pay $100,000 in taxes for exercising his stock....and then all the crap hit the fan, and he likely paid more in taxes than his shares will end up being worth. Ouch.

As a result of this problem with options, many startups -- especially later-stage ones like Uber -- choose instead to offer RSUs, which are basically stock grants as opposed to stock options. You don't have to pay any money to "get" them like you do for options.

However, the IRS considers stock grants, unlike options, immediately taxable income. If you get 10,000 RSUs per year, and the stock is valued at $5/share by an auditor, you now have to pay taxes on $50,000 of additional income, for an asset that you likely have no way of selling.

Some startups allow "net" grants -- which basically means they keep ~35% of your stock in lieu of taxes. That solves the liquidity problem, but offering this is completely at the discretion of the startup and some don't, which leaves employees at the mercy of the IRS, again having to pay cash on paper gains of an illiquid asset.

asah 1 day ago 2 replies      
Can someone explain: if you exercise and hold the shares (e.g. after leaving the company), do you owe tax after year seven, even if the shares remain illiquid?

That's the core issue: the IRS is taxing individuals on truly illiquid assets.

jnordwick 1 day ago 2 replies      
Most employees get hit by the AMT and the step-up in basis when exercising their incentive stock options, and from just skimming the bill, I don't see how that is prevented.
martin_ 1 day ago 2 replies      
This sounds great, though requiring "offering 80% of the workforce stock" and excluding the highest-paid executives seems vague - is this at time of hiring, when stock is issued, when fully vested, when taxes are due, or somewhere in between? I parted ways with a startup in the valley last year and exercised some shares on January 13th. If I had exercised just two weeks earlier, I'm told I would've been hit with north of $50k in AMT. I have until next year to figure it out now, but I wonder if I'm eligible. Also curious how long it typically takes for a bill to get through the House, then the Senate, and be passed.
gtrubetskoy 1 day ago 2 replies      
I still don't understand why taxes are owed. If an option at the time of grant is worth $0 (which is how it's typically done or is that not the case?), then you don't owe anything to the IRS until you exercise the option, i.e. buy shares at the option price and sell them at presumably higher valuation and make some money, at which point you will need to part with some of it because it's income.

But if you never exercise the options, then you never owe any tax. What am I missing here?

revo13 1 day ago 7 replies      
More evidence as to why the income tax should be replaced with a consumption tax. Just let people make their damned money already and apply a simple tax when they spend it. Windfalls wouldn't be "dangerous" or punitive in that model, and savers would be rewarded.

--Of course I oversimplify the consumption tax, and safeguards would need to be in place to ensure it is not regressive with respect to necessities...

zkhalique 1 day ago 0 replies      
Meanwhile, the USA actively encourages companies to offshore their money with their tax code:


adanto6840 1 day ago 1 reply      
The bill text is here and is pretty easy to decipher: https://www.congress.gov/bill/114th-congress/house-bill/5719
nullc 1 day ago 0 replies      
Perhaps I'm misreading the law, but it looks like it solves the wrong problem: It addresses a cash-flow issue rather than the tax liability issue.

Say you have options at FooCorp and you leave. FooCorp is illiquid and you have 90 days to exercise your 10,000 options. Your FooCorp options have a $5 strike, but the company currently has a 409a valuation of $100/share.

To exercise the options you would need to pay $50,000 to FooCorp, and you would then have a "realized gain" of $950k (($100 - $5) * 10,000), of which you would owe 28% in taxes that year, or $266k. So you would need access to $316k in total in order to exercise these options.

Two issues arise: (1) You may not have $316k just kicking around. (2) THE SHARES ARE ILLIQUID AND MAY BE WORTH $0 WHEN YOU CAN ACTUALLY DO ANYTHING WITH THEM.

The bill appears to help with (1) by letting you pay that $266k not now, but later, when the company shares become liquid or after 7 years (whichever comes first). But it does nothing about (2): you might exercise and then the company goes bust, and seven years later you owe $266k while your position is worth -$50k... and because the taxes are AMT, you can't meaningfully write your losses off against the taxes you owe.

This kind of failure doesn't require FooCorp to fail outright. You could have options at $5, exercise at $100, and have things go liquid at $7. Ignoring taxes this would have been a $20k gain, but with the taxes you're still $246k in the hole.
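The arithmetic in this scenario can be sketched as follows (a sketch only: the flat 28% rate and the absence of any loss offset are the commenter's simplifying assumptions; real AMT treatment is more involved):

```python
def exercise_outcome(n_shares, strike, fmv_at_exercise, price_at_liquidity,
                     tax_rate=0.28):
    """Cash needed to exercise, tax owed on the paper gain, and the net
    position once the shares finally become liquid (no loss offsets)."""
    exercise_cost = n_shares * strike
    tax = n_shares * (fmv_at_exercise - strike) * tax_rate
    proceeds = n_shares * price_at_liquidity
    return exercise_cost, tax, proceeds - exercise_cost - tax

# 10,000 options, $5 strike, $100 409a valuation at exercise,
# eventually liquid at only $7/share
cost, tax, net = exercise_outcome(10_000, 5, 100, 7)
# cost: $50k up front, tax: ~$266k, net: roughly -$246k in the hole
```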

The issue all along wasn't that someone needed extra money. The issue was the potential huge losses. If it weren't risky you could find a lender to cover the execution price and taxes in exchange for a return when the asset becomes liquid. (E.g. having to pay the $266k up front but getting it returned later when the asset becomes worthless and you write it off)

If anything this makes the situation worse by encouraging more people to commit financial suicide by making it less obviously a bad idea while being just as risky as it always was.

mrfusion 1 day ago 2 replies      
Does anyone have experience buying stock options from employees? I really want to own shares in a few companies that would never hire me :-(
jkern 1 day ago 0 replies      
How does this relate to the push for startups to change from a 90-day to a 10-year exercise window? It seems like that's a better option than this bill, since it gives employees a larger time window to make an exercise decision, during which the likelihood of options actually resulting in something liquid is much higher.
koolba 1 day ago 3 replies      
> Only startups offering stock options to at least 80 percent of their workforce would be eligible for tax deferrals, and a companys highest-paid executives would not be able to defer taxes on their stock under the legislation.

Is this why I keep seeing nominal $1 salaries?

cdbattags 1 day ago 0 replies      
How would this affect the concept of phantom stock options? I worked at a startup that handed out ghost options instead of normal options, with "no taxes owed" as the main selling point.

"Phantom stock can, but usually does not, pay dividends. When the grant is initially made or the phantom shares vest, there is no tax impact. When the payout is made, however, it is taxed as ordinary income to the grantee and is deductible to the employer."


AdamN 1 day ago 0 replies      
I wish this was retroactive :-(
stevenae 1 day ago 0 replies      
The article appears to get the "seven years" qualification wrong. The bill states that tax must be paid at:

>> the date that is 7 years after the first date the rights of the employee in such stock are transferable or are not subject to a substantial risk of forfeiture, whichever occurs earlier

Which implies that transfer-restricted stock grants do not start this clock ticking.

tmaly 1 day ago 0 replies      
I am wondering if there will be additional complexity added to the rule making phase of this if it becomes law.

While this amendment is short in length, it seems to add additional complexity to an already complex tax code. I would have liked to have seen an even simpler proposal.

ap22213 1 day ago 0 replies      
What the House needs to do is regulate startups' shady options agreements. I see way too many developers getting burned out striving for that big payout that may never come. It's the classic con game.
throwaway6497 1 day ago 1 reply      
Dumb question: does this mean it is law now?
ulkram 1 day ago 1 reply      
When does this go into effect?
k2xl 1 day ago 1 reply      
I'm confused. I bought shares this year and would be hit with a $50K tax bill from AMT next year.

Does this mean I don't owe AMT addition next year?

chillydawg 1 day ago 4 replies      
Nice to see tax laws for the rich can get passed, but substantive changes to do with criminal justice, healthcare, etc. go nowhere.
How to Get a Job in Deep Learning deepgram.com
301 points by stephensonsco  2 days ago   85 comments top 15
csantini 2 days ago 5 replies      
TL;DR: Deep Learning will become a commodity. Software will eat Deep Learning too.

I'd like to clear the air a bit of the hype fog:

DL is giving amazing results only when you have big sets of labelled data. Hence it will be much cheaper for companies to buy Google/Microsoft Vision/Audio REST APIs rather than paying the costs of cloud + finding data + deep learning experts. So I don't think we will see massive growth in DL gigs.

e.g. Google Vision API: https://cloud.google.com/vision/

Except in those areas where your own CNN implementation is needed (automotive, industrial automation), Deep Learning will be another "library" in the ever-increasing Software Engineering mess of gluing many open source libraries and REST APIs together to get something useful done. You need 1 guy training a neural network for every 100 software monkeys maintaining the infrastructure complexity. There are now many Software Engineering jobs because it's hard to glue and maintain publicly-available code to solve some specific business problem.

I think the same applies to many Data Scientist jobs, which are these days more about fetching/cleaning/visualizing data than doing machine learning on it.

tom_b 2 days ago 5 replies      
I am curious about demand for this skill in the market.

But I just don't see it - machine/statistical/deep learning gigs just seem really rare.

I know this isn't a great metric, but searches on Indeed.com:

 "deep learning" - 873 "machine learning" - 9,762 "statistical learning" - 65 java - 72,802 javascript - 43,785
Same searches on LinkedIn:

 "deep learning" - 646 "machine learning" - 6,952 "statistical learning" - 34 java - 43,845 javascript - 30,818
Even the "machine learning" search on Indeed, with 9K+ results has 1300+ from Amazon, followed by a much smaller number (in low hundreds each) from Microsoft, Google, others (including some that look like staffing companies).

Even on HN's Who's Hiring thread for Sept 2016, the phrase counts are: 14 for "deep learning", 79 for "machine learning".

I completely agree with the idea that being able to use some deep/machine/statistical learning is going to be a toolset that data hackers need to have. I even think that there is a bit of the "build it and they will come" magic waiting out there.

But I think the best way forward is to be working in data and figure out how to generate value with deep learning - this will be much more productive than trying to seek out a deep learning gig in terms of promoting deep learning in the workplace. Heck, that's a suggestion I would be wise to take myself . . .

orthoganol 2 days ago 3 replies      
It feels like machine learning is reaching its "Rails" stage: you can implement the latest bi-directional NN or LSTM-RNN using a high-level API that already sits on top of another high-level framework. Even beyond the core setup it will do the peripherals - smart initializations, anti-overfitting, splitting up your data, etc.

Do people who implement (albeit real, useful) deep learning systems, but who have no formal machine learning background, who don't really know much or care about implementing derivatives or softmax functions because the frameworks abstract all that away - are these people getting offered jobs?

protomikron 2 days ago 0 replies      
My advice:

Do not label yourself as a data scientist or machine learning expert. Go for the domain, i.e. become comfortable with the actual data and the methods used there:

- predict land use in aerial imagery - become comfortable with photogrammetry, geography, etc.

- predict biological tissue(s) - become comfortable with specific branches of biology or medicine

- predict $something_relevant

I actually stole this advice from the epilogue of some text about programming, and it really stuck with me. Otherwise your expertise is just too generic and you compete with a big pool of people who call themselves machine learning experts, because they can write a for loop in Bash.

xor1 1 day ago 5 replies      
>Speaking of math, you should have some familiarity with calculus, probability and linear algebra

Curious to know if anyone has had success learning/re-learning these as a mid-20s or older adult who works full-time, and if you could potentially provide a list of books/courses to go through. I personally never learned anything past geometry (in high school). The most advanced math class I took in college was College Algebra. That means I never learned trig or anything past it (so no calc, linear algebra, or probability), and I'm sure most people on HN surpassed me math-wise sometime in high school :)

I've been able to skate by with my embarrassing lack of math knowledge/skills as a developer, but I feel like it's only a matter of time until the mathematical steamroller becomes a serious threat career-wise and I get crushed.

partycoder 2 days ago 2 replies      
"A job in deep learning".

It is highly unlikely that you will get a job in which you exclusively use deep learning alone, and not any other ML/AI technique.

Once you learn DL, then, "congratulations... here are 100 other topics you might need to know about before getting a job". http://scikit-learn.org/stable/tutorial/machine_learning_map...

thefastlane 2 days ago 3 replies      
i just want to be a software engineer without having to continually burn away evenings and weekends studying the latest shiny, continually for the next two decades, just to keep my career afloat. is that even an option anymore?
imron 2 days ago 0 replies      
> I built a twitter analysis DNN from scratch using Theano and can predict the number of retweets a tweet will get with good accuracy

I imagine a product like this could actually charge a fair bit of money helping companies and people improve the 'virality' of their tweets.

cbgb 2 days ago 1 reply      
This is just a nit, but Andrej Karpathy was never a professor at Stanford; he received his PhD from Stanford and now works at OpenAI.
FT_intern 2 days ago 1 reply      
This should be titled "How to Learn Deep Learning".

"How to get a job in deep learning" would include:

- What specific topics will be asked during interviews

- What the interview question format is like

- How to prepare for the interviews

- How to get interviews without a PhD. What do you need to show competence in your self learned skills?

bbctol 2 days ago 1 reply      
You may not need a PhD or tons of experience to learn Deep Learning, but what about the gap between that and getting a job?
Xcelerate 2 days ago 1 reply      
This is applied deep learning. There's a ton of jobs available for taking someone's library from GitHub and applying it to a bunch of data. But other than DeepMind, FAIR, Google Brain, OpenAI, Vicarious, and Microsoft Research, who is hiring for theoretical machine learning? That's what I'm interested in: developing better algorithms that eventually approach AGI.
max_ 1 day ago 1 reply      
Why is the Machine Learning subreddit so toxic?
stephensonsco 2 days ago 2 replies      
just wrote a blog post that I think a lot of folks will like if they are looking for a job in ML/DL.

Would love to hear if I missed something!

jamisteven 2 days ago 2 replies      
1st sentence should be: 1. Be a fuckin math whiz.

Had it been, i would have clicked the back button.

Twitter may receive formal bid, suitors said to include Salesforce and Google cnbc.com
261 points by kgwgk  1 day ago   337 comments top 42
imagist 1 day ago 18 replies      
IMO, Twitter is the poster child for the tech bubble. They have users, which is their only claim to viability, but notably, they have never made a profit. Currently valued at around $10 billion with 350 million active users, that's about $29 per user. You'd be hard-pressed to find an investor so foolish that they would invest $29 in each of their users and hope to make it back if it were stated in those terms, but people have rushed to invest in a company which only has users, and whose attempts to monetize users through advertising have correlated strongly with loss of users. There can be little argument that Twitter's price has become entirely detached from its value.

That doesn't, of course, mean you couldn't make money by investing in Twitter. You can make money by investing in overvalued companies as long as you don't hold onto your shares until the bust. One profitable route would be if Twitter does get bought by a larger company. The market as a whole will lose on Twitter, but local maxima can be more profitable than the whole.

But at a personal level, don't be naive about this. A lot of people are investing, not just money, but time and energy, in Twitter or startups like Twitter. If you find yourself thinking that Twitter is a company with any real value, you should take a step back and evaluate whether you're being wise, or whether you've fallen prey to the unbridled optimism of the tech bubble. Twitter's position as poster child for the tech bubble makes it a good litmus test for people's understanding of the industry, and I suspect it will correlate very strongly with who loses everything when the tech bubble collapses.

owenwil 1 day ago 10 replies      
Google acquiring Twitter is actually the best end result here. Salesforce is probably the worst. Lots of people hate the idea of a Google acquisition, but I think it's well suited because:

- Google learnt from its mistakes with Google+ and is eager to not repeat them

- The company is a very different one now from years ago

- Google doesn't want to mess up identity again, so that wouldn't be an issue

- Google mostly just wants a social graph

- Twitter is a bad public company that makes irrational decisions

- Merging Google engineering/leadership with Twitter might actually give direction and ease the financial pressure that seems to drive the company's poor engineering decisions

thr0waway1239 1 day ago 4 replies      
Can someone actually explain to me how the situation came to the point where it practically looks as if Twitter's fate is being decided and played out in the media via endless speculation? It is not like Twitter is a tiny company with an unknown brand, few users, and no possibility of improving its profit margins. I am not aware of what they are trying to do, but at the same time it is not as if they could have exhausted all the possibilities. Remember Facebook's Beacon? That failed, but FB still managed to repackage the same crap into something more lucrative, didn't it? Is this just impatience from stockholders?

For example, let us just say, hypothetically, something really damaging comes out about FB (e.g. the news about the fake video view metrics) and advertisers start fleeing from it. Wouldn't Twitter be the beneficiary of at least some of that exodus? Do they really have no option of an end game?

the_duke 1 day ago 7 replies      
Unclear is what kind of "deal" they are talking about.

A considerable buy-in? A full acquisition?

And, assuming a full acquisition... what would be the gain?

Google has a bad track record with attempts at social media, apart from YouTube. (Bought Orkut, killed it, tried Google Plus, went nowhere). Twitter is hard to make profitable without alienating the users with too many ads.

For Google, it would probably be an acquisition like YouTube. With the knowledge that it might never be profitable, but intended to get control over a significant asset. But sharing Google infrastructure and resources could probably bring down operating costs in the medium term.

We'll see.

aresant 1 day ago 2 replies      
Salesforce feels like a bizarre choice, although I agree with their digital chief that "I love Twitter" personally.

I use twitter every day as my primary method of content discovery.

So at their core the BUSINESS should revolve around monetizing my eyeballs, eg advertising.

So to me it's Facebook or Google that should grab it, w/FB at the lead considering their relatively smooth / unhurried / and successful takeovers of whatsapp / instagram

deepfriedbits 1 day ago 0 replies      
Twitter's real value to Google is real-time search, in my opinion. They already license results from Twitter for search results, but having access to all of that sweet, sweet real-time data is nice.

The social graph is nice, but between Chrome and Gmail, Google already knows quite a bit about everyone.

majani 1 day ago 0 replies      
Bad idea. I think the social networks whose main purpose is to feed people's vanity will not stand the test of time, since they're not solving a real problem and are merely novelties.
erickhill 1 day ago 1 reply      
I think Twitter's recent foray into becoming a content streaming source (see: NFL) is very interesting and a natural next step, albeit a late one. The user base is already there to essentially compete with Twitch and other streaming providers.
mattjaynes 1 day ago 0 replies      
OH: "Twitter is a Friendster whose Facebook hasn't appeared yet."

From Jessica Livingston yesterday: https://twitter.com/jesslivingston/status/778948962724315136

cpsempek 1 day ago 0 replies      
Is Google the suitor, or, is Alphabet the suitor? The article says Google, but this could be out of habit. I am not sure that answering my question changes much about the news. But it might say something about how Alphabet views Twitter based on who they decide owns the acquisition and where Twitter would fit within the company (as subsidiary or under Google).
hornbaker 1 day ago 1 reply      
My bet is GOOG. They need a streaming newsfeed product in which to insert ads, especially on mobile where FB is killing it.
encoderer 1 day ago 0 replies      
Anecdotally I use twitter to advertise my SaaS monitoring product, Cronitor, with far more success than we found with AdWords. The ad platform feels easier to use, and promoting content on Twitter is less of a time investment vs selecting, culling, and optimizing sets of keywords.
sp527 1 day ago 0 replies      
Seeing a lot of arguments about profitability that don't make sense. At Twitter's scale, profit sensitivity to even minor tweaks in ad rate/targeting/placement is massive. They could also go into 'maintenance mode' tomorrow and turn a massive profit (it would just be stupid).

Active users is a poor metric for Twitter. It's much more about the views. A relatively smaller number of people on Twitter can command an outsized influence. It's a fundamentally different kind of network.

Twitter's future will probably be more about monetizing its viewership. It's definitely not going to disappear anytime soon.

MollyR 1 day ago 4 replies      
I really can't see salesforce buying twitter. I think a social media giant would gain something from twitter, but not much else.
fideloper 1 day ago 0 replies      
I wouldn't blame Jack one bit for wanting to get back to Square full-time (not that it's necessarily fully his decision to sell).
mark_l_watson 1 day ago 1 reply      
Would it be a bad idea for Twitter to charge a small yearly fee for use? The reason I think this might work is that some users are very loyal and might not mind spending $20/year for an advertisement-free Twitter service. They have about 350 million users, according to a comment here, and if 50 million users would stay, that would be $1 billion in revenue per year. With many fewer users, their cost of doing business would be reduced, though with reduced network effects the service would not be as valuable for users. I like Twitter and I would pay $20/year in return for no promoted tweets.
fabiandesimone 1 day ago 0 replies      
To me Twitter has always been real time news.

Why haven't they worked on a way to 'validate' tweets around a story? I can't understand it (from a biz perspective, not technically).

With validation they become, instantly, the #1 news agency in the world.

yalogin 1 day ago 0 replies      
It doesn't matter; it would still not cover my losses in the stock. Talk about crappy decisions.
samlevine 1 day ago 0 replies      
I would pay $10 a year to use Twitter. It's just that good for live news.
WA 1 day ago 1 reply      
Stock is through the roof right now. About +19%
dcgudeman 1 day ago 1 reply      
By Salesforce?? RIP twitter.
sorenjan 1 day ago 0 replies      
At the end of 2015 Twitter had 3,900 employees [0]. Why would they need anywhere close to that many? How many could a new owner fire without noticeably affecting day-to-day operations?

[0] https://www.statista.com/statistics/272140/employees-of-twit...

bsparker 1 day ago 0 replies      
I wish Slack would somehow take over Twitter. Expertly adding in custom channels would save the social platform.
edbaskerville 1 day ago 3 replies      
By any normal metric, Twitter is a huge success. 300 million people find their service useful.

But it was already a huge success when it had no business model. Moreover, what is fundamentally valuable about Twitter to its users--sharing and discovering little bits of textual expression over a publicly visible social network--is not very expensive.

From the perspective of the users who find it valuable, why does it need a for-profit model at all? Why can't we just subsidize it as a non-profit via grants and donations, a la Wikipedia? I'm pretty sure you could do the important thing that Twitter does--ignoring all the extras devoted to figuring out how to extract more money from the data--at a small fraction of its $2 billion in revenue.

I'm not being naive here--it's quite obvious why things are the way they are. But there are many examples out there of making a big impact while making a decent living (just without anyone trying to become a billionaire). Social networking is ripe for more of this approach. The attempts so far have failed not because of their business model, but because of the usual reason: poor execution.

happy-go-lucky 1 day ago 0 replies      
If you try to make people want what you make, you end up making something disposable, a short-lived romance.
aikah 1 day ago 1 reply      
Amazon or Facebook . It would make sense for the latter as it is its only real competitor.
randomestname 1 day ago 0 replies      
They aren't investing in the present value, they are investing in the future value. Is Twitter going to be more or less relevant in 5 years? 10 years?
eddiecalzone 1 day ago 0 replies      
I had to come here to read some saner comments; the comment section on the linked CNBC article is, uh, deplorable.
rch 1 day ago 1 reply      
Why no mention of Oracle? That's a more likely acquirer than Salesforce.

Other Oracle acquisitions: Datastax, push.io, Collective Intellect, etc.

wslh 1 day ago 1 reply      
Is it logical to buy Twitter at $16B? I think it is too expensive, considering a lot of their actual users are bots.
nvk 1 day ago 4 replies      
Twitter should be a public utility.
_kyran 1 day ago 1 reply      
Just going to leave this here (from last month):


"Twitter will be sold in six months - Kara Swisher"

susan_hall 10 hours ago 0 replies      
Perhaps off-topic, but does anyone know why Twitter gave up on its effort to monetize its API? There was a moment, circa 2010, when that seemed like the obvious move. When Twitter first began to shut down access to its full firehose, it seemed clear that there were businesses willing to pay for its information. But Twitter suddenly turned away from that idea, and focused on advertising. Considering how many sites compete for advertising dollars, it seems crazy that Twitter felt that was the right way to go.

But then Facebook went down the same path, first promoting its API, then largely giving up on any attempt to monetize it.

And before that, way back in 2006, I tried to build a business that would rely on Technorati's API, which they briefly promoted, then gave up on.

There are a lot of companies that make money by selling information via an API. And there is tremendous competition for ad dollars. These 2 facts would lead me to expect more companies might try to make money from their APIs. But what happened in Twitter's case?

plg 1 day ago 0 replies      
If google buys twitter, I'm out
ben_jones 1 day ago 0 replies      
Anyone else see S20E02 of southpark? All I'm going to say is it put Twitter in a very interesting perspective.
mandeepj 1 day ago 0 replies      
Any idea why advertising is working on Facebook and not on Twitter?
smegel 1 day ago 0 replies      
If there is anyone who could mess up Twitter worse than Twitter it is Google.
k2xl 1 day ago 1 reply      
Why are Twitter's expenses so high? Don't get me wrong, scaling is hard... But at the same time, they don't have the issues of scaling photos or videos (like Facebook).
lcnmrn 1 day ago 2 replies      
There are better alternatives to Twitter out there. It's time for everybody to move on.
sidcool 1 day ago 1 reply      
Salesforce would be an interesting prospect. Not sure how it would fit in their business plan.
alex_hitchins 1 day ago 1 reply      
Might sound strange, but I think this would be a great purchase for Apple. They have the cash, and they certainly have the engineers and UI skills Twitter desperately needs. iMessage works brilliantly, but closer integration with a Twitter-style feed makes real sense to me.
Getting Press for Your Startup themacro.com
282 points by endswapper  2 days ago   32 comments top 14
Briel 2 days ago 0 replies      
Here's the process that works (with some persistence) even if you don't have a revolutionary product:

1. Go to Google, toggle to news and enter the name of similar startups

2. Go through each recent article and add the journalist to a spreadsheet

3. Go on Email Hunter or Email Format to find how the publication formats their email addresses to guess the journalist's. Journalists also tend to use firstnamelastname@gmail.com for their personal emails.

4. Email your pitch in 3-5 sentences max. Don't just describe what your startup does; use an interesting angle or story to show its impact.

Instead of: "We do delivery logistic optimization."

Story: "Why the heck is your technician always 5hrs late? Cause it took forever to fix the issues of the guy before you.

We're helping our customers like Comcast and Oracle smart schedule all their appointments based on data like a) how long issue x typically takes to fix and b) real-time traffic conditions.

In high school, I worked as a taxi dispatcher, seeing firsthand the inefficiencies in coordinating drivers."

Journalists don't want to advertise your startup for free; they care about writing a story that entertains and educates their readers. Feed one to them.

Good roundup of pitch templates using different angles: http://www.artofemails.com/pitch-press

andreasklinger 2 days ago 1 reply      
Small tips I learned when doing interviews (as a journalist and as a tech founder):

Ask the journalist what kind of story s/he wants to do and what role you play in this story.

It doesn't matter if it's about you or a general topic - understand what basic arc/message/POV s/he wants. In 99% of cases their job is not to be "investigators" but to tell entertaining stories, and they are usually happy to share their idea for the article.

- Help them get to the content s/he needs. Share the right info, the right contacts, industry insights. Essentially, help as much as you can in creating the ideal article.

- Also make sure to create soundbites that work as quotes. Quotes tend to be highlighted in articles.

- Make sure to connect the journalist with more useful contacts and help finding new ideas for articles.

- Last but too often forgotten: have good press images ready of you and your product. Some that work in portrait, some that work in landscape. Some that show a business look, some that show a more personal look (depending on the story the journalist goes for).

hth - good luck!

bestmomproducts 2 days ago 0 replies      
There is a lot of great advice here. I started as Larry Ellison's handler almost 20 years ago, wrote a book on publicity (Barbara Corcoran endorsed) and teach an online PR class.

Media has changed a lot in the last 5 years. It is important to familiarize yourself with the outlet and how the journalist writes. For example, is the outlet known for listicles (an article that opens with a hook and then "7 ways to crush it on a start-up budget")? Or do they prefer pitches based on their editorial calendar?

Identify your target customer and find out what they read, listen to and watch.

If you have a tech product, focus on where a more technical audience might be like podcasts. You'd be surprised but it's not always the most well known outlet like Mashable that will drive sales or users. While that is great credibility and exciting, there are many opportunities out there.

RESOURCE: HARO (www.helpareporterout.com) is a good resource to sign up for - journalists post free opportunities 3x a day. Since you will have the "lead" already, keep your response short and to the point.

NEWSWORTHY: To make something newsworthy, look at what is trending in the media (e.g. Angelina Jolie and Brad Pitt getting divorced) and then think about all the relevant angles.

Angles could include being a divorce attorney and contacting your local news to talk about the issues they each face or if you have created a divorce app that helps with custody sharing, etc... you could discuss how that would work.

We are in an era of high content consumption online and outlets like Forbes, Entrepreneur and Huffington Post rely on contributors. Forbes, for example, turns out 300 articles a day ... a DAY! There are more opportunities for your company to be featured now than in the past so that is good news for you.

Try not to get discouraged. It can take time to figure out what works and come back to the journalist with different angles. I've found consistent follow up works.

I know that I was rambling a bit but hope that helps.

I have some free info on my site www.rachelaolsen.com if you're interested including audio interviews with a Forbes contributor and a writer for US Weekly, Men's Health and Rolling Stone.

mwseibel 2 days ago 0 replies      
Man - I wish I could be in here answering questions about the post but I'm on the YC World Tour and currently meeting startups in Lagos - I'm glad you all liked the post
sssparkkk 2 days ago 1 reply      
Any advice on how to get some traction when entering a foreign market?

In my experience the media mostly write about startups that already have significant traction, or about new products released by the big boys (Facebook, Google etc).

Acquiring users through Facebook/Twitter advertising might be an option for some, but could also easily mean you'll be spending tens of dollars per active user.

So what's left? I think it basically boils down to Hacker news & Product Hunt to get the ball rolling.

ozgune 2 days ago 0 replies      
Good read! I also found the following post useful and practical for getting press for startups: http://www.craigkerstiens.com/2015/07/21/An-intro-PR-guide-f...
hamhamed 2 days ago 0 replies      
Warm intros work better, but I'd also wager that good cold emails work just as well. For example, if you can connect with the writer about one of his articles and transition into your startup, that's a good 25% chance of getting covered (assuming your story is good).

Don't just submit a "tip" or go to the contact page; actually find the person who has written similar stuff and find their email.

devindotcom 2 days ago 0 replies      
I just participated in a panel on this very topic. The takeaways were to know who you're pitching, build a relationship, and be honest and succinct. If you have a good product relevant to that publication's readers, a good news editor or writer will pick up on that.
danieltillett 2 days ago 0 replies      
I have often wondered if it would be better to just get straight to the point and outright bribe journalists. Most are making a pittance, and a few thousand dollars in cash handed under the table should make any startup story come out like it is the new Uber.
jldugger 2 days ago 1 reply      
> i.e. wait until you're 25% past the milestone to announce - so that you're that much closer to announcing the next milestone.

I wouldn't be surprised to hear this violates a securities law.

hrgeek 2 days ago 3 replies      
Does it really work?
untilHellbanned 2 days ago 2 replies      
Lol b/c other YCers say don't waste time getting press.
amirhirsch 2 days ago 0 replies      
Flybrix is having a very successful launch today driven by a PR strategy that goes against Steps 2 and 3 of this advice. We hired a great PR firm to manage contacts and we got blanket coverage everywhere because of a press embargo.

I don't think I could have managed this on my own and on the timeline we did it.

trjordan 2 days ago 3 replies      
It's common knowledge around here that PR is wasted effort for startups. So why do this?

For startups and for any company, brand makes things nebulously easier. Sales require one fewer call, hiring pipelines are slightly more full, fundraising intros are easier. The point about creating News is really at the core: if you can make News one of the consistent outputs of your company, and you can see the results of News on your actual work, then you should do it.

Like everything else at a startup, brand is one tool. Don't use that tool unless the founders are strong with it and there's a well-defined path between that brand and traction.

Machine Learning: Models with Learned Parameters indico.io
305 points by madisonmay  2 days ago   31 comments top 6
antirez 2 days ago 4 replies      
I strongly advise everybody with one free day (and nothing much better to do) to implement a basic fully connected feedforward neural network (the classical stuff, basically) and try it against the MNIST handwritten digits database. It's a relatively simple project that teaches you the basics, and once you have those, the more complex stuff becomes much more approachable. To me this is the parallel of implementing a basic interpreter in order to understand how higher-level languages and compilers work. You don't normally need to write compilers, just as you don't need to write your own AI stack, but it's the only path to fully understanding the basics.

You'll see it learn to recognize the digits; you can print the digits that it misses, and you'll see they are sometimes actually hard even for humans, or sometimes you'll see why it can't understand a digit that's trivial for you (for instance, it's an 8, but the lower circle is very small).

Also, backpropagation is an algorithm that is easy to develop an intuition about. Even if you forget the details N years later, the idea is one of those things you'll never forget.
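To make the parent's suggestion concrete, here's a minimal sketch of a fully connected feedforward network with backprop, in pure Python with no dependencies. It trains on XOR rather than MNIST so it stays self-contained; the structure (forward pass, delta computation, weight updates) is the same thing you'd scale up for the digits database.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: the classic minimal problem that needs a hidden layer
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

HIDDEN = 4
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
W2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0

def predict(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(HIDDEN)]
    return sigmoid(sum(W2[j] * h[j] for j in range(HIDDEN)) + b2)

def total_error():
    return sum((predict(x) - t) ** 2 for x, t in data)

err_before = total_error()

lr = 0.5
for _ in range(5000):
    for x, t in data:
        # forward pass, keeping the hidden activations for backprop
        h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(HIDDEN)]
        y = sigmoid(sum(W2[j] * h[j] for j in range(HIDDEN)) + b2)
        # backward pass: chain rule through the sigmoids (squared-error loss)
        dy = (y - t) * y * (1 - y)
        for j in range(HIDDEN):
            dh = dy * W2[j] * h[j] * (1 - h[j])
            W2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh
            for i in range(2):
                W1[j][i] -= lr * dh * x[i]
        b2 -= lr * dy

err_after = total_error()
print(err_before, "->", err_after)
```

Swapping the XOR rows for 784-pixel MNIST vectors and ten output units is essentially the whole difference between this toy and the exercise described above.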

nkozyra 2 days ago 2 replies      
This is well-written and I applaud any step toward demystifying the sometimes scary sounding concepts that drive much of the ML algorithms.

Knowing you can pretty quickly whip up a KNN or ANN in a few hundred lines of code or fewer is one of the more eye-opening parts of delving in. For the most part, supervised learning follows a pretty reliable path, and each algorithm obviously varies in approach, but I know I originally thought "deep learning? ugh, sounds abstract and complicated" before realizing it was all just a deep ANN.

Long story short: dig in. It's unlikely to be as complex as you think. And if you've ever had an algorithms class (or worked as a professional software dev) none of it should be too daunting. Your only problem will be keeping up the charade if people around you think ML/AI is some sort of magic.
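For scale: the KNN the parent mentions really can fit in a few lines. A hypothetical minimal version (squared Euclidean distance, majority vote among the k nearest training points):

```python
from collections import Counter

def knn_predict(train, x, k=3):
    """train: list of (point, label) pairs; returns the majority label
    among the k nearest neighbors by squared Euclidean distance."""
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# two well-separated toy clusters
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.2, 0.3)))  # → a
```

That's the entire algorithm; everything beyond this (kd-trees, distance weighting) is optimization on top of the same idea.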

djkust 2 days ago 1 reply      
Hi folks, authors here in case you have questions.

This is actually part 3 in a series. For developers who are still getting oriented around machine learning, you might enjoy the first two articles, too. Part 1 shows how the machine learning process is fundamentally the same as the scientific thinking process. Part 2 explains why MNIST is a good benchmark task. Future parts will show how to extend the simple model into the more sophisticated stuff we see in research papers.

We intend to continue as long as there are useful things to show & tell. If there are particular topics you'd like to see sooner rather than later, please leave a note!

yodsanklai 2 days ago 1 reply      
I took Andrew Ng's ML class on Coursera. It was certainly interesting to see how ML works, but I'm not sure what to do with it. In particular, I'm still unsure how to tell beforehand if a problem is too complex to be considered, how much data it'll require, or what computing power is needed.

Are there a lot of problems that fall between the very hard and the very easy ones? and for which enough data can be found?

throwaway13048u 2 days ago 3 replies      
So this may be as good a place as any -- I've got a decent math background, and am teaching myself ML while waiting for work to come in.

I'm working on understanding CNNs, and I can't seem to find the answer (read: don't know what terms to look for) to how you train the convolutional weights.

For instance, a blur might be

[[ 0 0.125 0 ] , [ 0.125 0.5 0.125 ] , [0 0.125 0]]

But in practice, I assume you would want to have these actual weights themselves trained, no?

But, in CNNs, the same convolutional step is executed on the entire input to the convolutional step, you just move around where you take your "inputs".

How do you do the training, then? Do you just do backprop on each weight of the convolution kernel from its output, with a really small learning rate, then repeat after shifting over to the next output?

Sorry if this seems like a poorly thought out question, I'm definitely not phrasing this perfectly.
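The intuition in the question above is close: because the kernel is shared across positions, you don't train each position separately. Backprop gives one gradient contribution per output position, and the contributions for the same kernel weight are simply summed before a single update. A 1D sketch with a finite-difference check (toy numbers; loss taken as the sum of the outputs, so the math stays simple):

```python
def conv1d(x, w):
    """Valid 1D cross-correlation: out[i] = sum_k w[k] * x[i+k]."""
    n = len(x) - len(w) + 1
    return [sum(w[k] * x[i + k] for k in range(len(w))) for i in range(n)]

def conv1d_grad_w(x, w, dout):
    """Gradient of the loss wrt each shared weight: every output position i
    that used w[k] contributes dout[i] * x[i+k], and the contributions sum."""
    n = len(x) - len(w) + 1
    return [sum(dout[i] * x[i + k] for i in range(n)) for k in range(len(w))]

x = [0.5, -1.0, 2.0, 0.25, 1.5]
w = [0.2, 0.5, 0.3]
dout = [1.0] * (len(x) - len(w) + 1)  # dLoss/dout for loss = sum(out)

analytic = conv1d_grad_w(x, w, dout)

# finite-difference check of each kernel weight
eps = 1e-6
numeric = []
for k in range(len(w)):
    wp, wm = list(w), list(w)
    wp[k] += eps
    wm[k] -= eps
    numeric.append((sum(conv1d(x, wp)) - sum(conv1d(x, wm))) / (2 * eps))

print(analytic, numeric)
```

So: no per-position learning-rate trick. One gradient step updates the kernel once, with all positions' contributions accumulated, and the same idea extends to 2D kernels like the blur in the question.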

aantix 2 days ago 1 reply      
There's been a couple of times where I needed to classify a large set of web pages and used a Bayes classifier.

I would start to get misclassified pages and it was so difficult to diagnose as to why these misclassifications were occurring. Bad examples? Bad counter examples? Wrong algorithm for the job? Ugh.

I ended up writing a set of rules. It wasn't fancy but at the end of the day, I understood the exact criteria for each classification and they were easily adjustable.
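Part of why Bayes misclassifications are hard to diagnose is that the "reasoning" is smeared across per-word counts. A minimal Naive Bayes like this hypothetical sketch (word counts, Laplace smoothing, log probabilities) at least keeps those counts inspectable, so you can see which words tipped a page the wrong way:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns the counts needed for prediction."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return label_counts, word_counts, vocab

def predict_nb(model, tokens):
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = math.log(n / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)  # Laplace smoothing
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# hypothetical toy corpus
docs = [
    ("buy cheap pills".split(), "spam"),
    ("cheap pills online".split(), "spam"),
    ("meeting schedule today".split(), "ham"),
    ("project meeting notes".split(), "ham"),
]
model = train_nb(docs)
print(predict_nb(model, "cheap pills".split()))  # → spam
```

When a page is misclassified, printing the per-label word counts for its tokens usually shows immediately whether the problem was bad examples or missing counter-examples.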

Snapchat Releases First Hardware Product, Spectacles wsj.com
315 points by Doubleguitars  1 day ago   293 comments top 58
keithwhor 21 hours ago 11 replies      
I think this is brilliant. Even the press details seem perfectly crafted, with one article referencing Evan's "supermodel girlfriend."

Snapchat can win here based on brand alone. The hardware features are a plus, but they're going to sell a lifestyle. Think GoPro + Versace. Commenters here are caught up in the tech. It's not the tech. Get a few celebrities in these, people will buy them and barely use the recording features. They're cheaper than Ray-Bans and I bet you and half of your friends own a pair of those.

Snapchat can assemble an AR powerhouse from the ground up with brand goodwill. Evan and his team have figured out the best market strategy to do so. Google is not "cool" and could never attempt to pull this off.

I have tremendous respect for Evan Spiegel right now. Bold move. Amazingly positioned. I wish them the best of luck. Dare I say, it has the scent of Jobs to it - the vision, the risk ("we make sunglasses now!") and definitely the "cool-factor." Don't misinterpret - this isn't the iPhone, not yet anyway, but I think they're on to something very big.

primigenus 19 hours ago 4 replies      
This fixes everything broken about Google Glass. It's almost disturbing how much more on point this is:

Of _course_ they're sunglasses.

Of _course_ it's focused completely on video.

Of _course_ it's marketed as being about sharing your memories as you lived them.

Of _course_ you can only record 10 second videos at a time.

Of _course_ snaps automatically sync to the app.

Of _course_ they're designed to appeal to young fashionable people.

Of _course_ the charge lasts all day.

This is one of those things where once you see it it's just obvious this is what it was supposed to be all along.

fowlerpower 1 day ago 7 replies      
I think this is significantly better than what Google did with Google Glass.

It's better because it focuses on the one thing that is really easy to do well. It does not try to do everything at once. It doesn't try to give you apps in your glasses and everything under the sun. This is the right approach to products. Do one thing but do that well.

Before you criticize me, think back to the original iPhone: it didn't start with an App Store and everything under the sun like the Apple Watch did. And yet the iPhone is an icon and the watch is no big deal.

rdmsr 23 hours ago 0 replies      
This is definitely the result of Snapchat's acquisition of Epiphany Eyewear back in 2013[1], which was a startup that made something very similar.


CodeWriter23 23 hours ago 2 replies      
Hype and grumbles aside, I believe optimizing "I want to record what I'm seeing right now" down to a tap near your temple is pretty compelling. Fumbling to get my camera out of my pocket, or even just grabbing it from the tabletop and swiping to the camera, is often long enough to miss that precious moment with my daughter.
ftrflyr 1 day ago 4 replies      
Why? You need to seriously question the motives behind such a launch. IMHO:

[1] Snapchat is an online multimedia application.

[2] The infrastructure required to move from online to hardware requires significant investment (beyond the $1.8B they recently raised), which I don't believe Snapchat can fund without a serious re-monetization strategy beyond ads. It is only a matter of time before FB moves into Snapchat's market even more than they already have.

[3] This is an unproven market. Google tried it and didn't succeed. A better play: let someone else test the market a bit more, then move in with a solid ad monetization strategy around the Spectacles.

[4] Why hardware?! Seriously? I believe Evan is overplaying his hand with so much VC capital coming his way.

whitecarpet 12 hours ago 2 replies      
Another huge innovation which is more about software than hardware is the new circular video format: you can rotate your phone and the video keeps its orientation.

Quite impressive, you have to see it in action:


WhitneyLand 13 hours ago 1 reply      
Looks like they have learned from the glasshole debacle.

1. The messaging emphasizes it's just "a toy", a low-volume experiment. A more playful, more humble approach makes it a smaller target for ridicule.

2. Pricing at $149 also makes it less pretentious and, more importantly, puts it in the "what the heck, I'll give it a shot" discretionary-income range.

leetrout 1 day ago 2 replies      
Even though I'm not "inb4" Glass comparisons, this really does hit a market that I think is untapped. I used to have a "flipcam". It was before I had a phone that could take HD video, and before a GoPro was a choice for me because of cost (I still don't have a GoPro).

The ability to have cheaper, stylish, handsfree video recording of my POV has a lot of potential. How-to videos, the "capturing memories" as noted in the article, even just easily recording benign life experiences (police stops, for instance) seamlessly and without hassle is huge.

I do hope there is a tattletale light or something so that the average user can't surreptitiously record things and otherwise easy privacy controls... and I hope it's not long before someone hacks this or they unlock the product to do more than 10 second clips...

If I were GoPro I'd be nervous.

Edit: Actually a second thought- this would be a lot better than body cams in a lot of situations (or certainly a good companion) because it would capture the officer's line of sight.

orbitingpluto 23 hours ago 0 replies      
Now everybody can be Spider Jerusalem...

Just like Google Glass users being called Glassholes, SnapChat glasses will probably be called something like SnapChads, because only white rich guys in pastel shorts and rugby shirts named Chad will use them. The aesthetic just isn't there for wide adoption.

josephpmay 21 hours ago 4 replies      
Being someone in the AR space, I find this a smart but risky move. If they're marketed right and become "cool", I'll definitely have to cop a pair (and at $130 they're almost disposable). Spectacles will make it way easier for me to post to Snapchat at parties/concerts/etc. without having to break out of the moment by taking my phone out. Strategy-wise, this is a Trojan horse into the AR hardware space, which Evan has wanted to get into for years. However, these fit way better into Snap's image of being a media company than directly launching an AR headset would.
technofiend 1 day ago 1 reply      
If this means I can go to a public performance and no longer have to try to look past the sea of upthrust arms and glare of 1000 brightly lit screens to see what I came to see then it can't come quickly enough!

Particularly since I feel it will inspire the next product: an IR floodlight that renders all digital cameras useless, since so many people are oblivious to the fact that by trying to capture the experience for themselves they're detracting from the experience for everyone else.

Letting people who need a digital memento silently get one without intruding on the experience of those of us just there to enjoy and be in the moment is a great compromise.

nitrogen 18 hours ago 0 replies      
I'm amazed the top-rated top-level comments are all so positive. We have enough people shoving cameras into devices and situations where they don't belong. At least we know what they look like now so we can ostracize anyone wearing them.
arcticfox 1 day ago 4 replies      
> (Spiegel argues that rectangles are an unnecessary vestige of printing photos on sheets of paper.)

It's also the shape of nearly all screens in the world. Perhaps I'm not visionary enough, but I don't foresee a circular computer or phone screen really improving the current situation...

slackoverflower 1 day ago 1 reply      
Snapchat has a huge opportunity on its hands which it has yet to take full advantage of: starting a revenue-share program with influencers on the platform. Facebook has yet to do it, and Snapchat, which is flush with VC dollars, can attract a lot more influencers to its platform. I think the companies on Discover are already in some sort of revenue-sharing agreement with Snapchat, but bringing this to the massive number of young influencers unlocks huge opportunities for Snapchat.
bunkydoo 17 hours ago 0 replies      
Well, I'll be completely straight and say this isn't anything new (you've been able to buy similar video glasses from China for about 5 years now), but if it can properly integrate with the app and slim down a LOT more - to the point the camera is unnoticeable - they could finally start making some money. Well, until the Chinese knockoffs start rolling in.
robbles 9 hours ago 0 replies      
This article mentions Snapchat's hundreds of employees and multiple offices. This is one of the most obvious examples of the "what are they all doing?" question for me. I know it must take quite a few people to run operations at that scale, and of course they have an advertising business too, which likely explains the need for multiple offices. But it seems like Snapchat is still an extremely minimal app with only a couple of extra features being added over the years. Instagram had only 13 employees when it was acquired, so what role are most of these people in?
cobookman 13 hours ago 2 replies      
For those wondering wtf are these, I don't like the styling, why do these exist...etc, well, i don't think the target market for these is hacker news viewers. I will say that they do look awesome. Way easier to use these than a go pro or hold a camera/phone. Hopefully it's not just locked down to Snapchat.
nvr219 1 day ago 0 replies      
I'm into this! I think selling it as a toy is the right approach.
k_sh 22 hours ago 0 replies      
> Why make this product, with its attendant risks, and why now? Because its fun.

The way they framed this product is _so_ refreshing.

bobsil1 10 hours ago 0 replies      
>he was the best product visionary Id met in my entire life.

This person has never seen the Snapchat interface.

xeniak 22 hours ago 0 replies      
> initially appears to be a normal pair of sunglasses

While it's less offensive than Google Glass, these don't look like "normal" glasses.

NTDF9 19 hours ago 0 replies      
This is genius. Really. You want to know what kind of crowd will drop $129 on this?



TeMPOraL 19 hours ago 2 replies      
I like it. Seriously, "creepy" is just a word that means "I can't accept the reality doesn't work the way I'd like it".

That said, I worry about the implementation. My guess is that it's going to be directly and permanently tied to Snapchat itself, which significantly reduces the potential usefulness of this product - not everything you record is something you only want sent to Snapchat. Personally, I want files. Plain, old files. Is that so hard to understand for all those cloud-first companies?

anonbiocoward 13 hours ago 0 replies      
They really should have consulted with the Warby Parker folks, or pretty much anyone who actually designs glasses.
nappy 19 hours ago 0 replies      
If this leads to fewer people holding out their phones at concerts... Then I'm especially excited ;)
mkagenius 13 hours ago 0 replies      
When deciding about products, try to think if cute Minions (from Despicable Me) would like that? - Evan Spiegel
jondubois 12 hours ago 0 replies      
Maybe Snapchat will sell some of their users' videos to porn companies (for VR porn)... There are two cameras - Obviously for VR; and given Snapchat's history as a sexting app, I think it's clear where things are heading here.
p4mk 13 hours ago 0 replies      
I love the execution of "circular videos", surprising that no one has implemented this before!


ajamesm 12 hours ago 1 reply      
Great product for people who want to film women in public, but not be noticed. Game changer
mathewsanders 12 hours ago 1 reply      
There's been an empty store on exchange place in NYC financial district (near Tiffany's) that for a couple weeks has had a huge Snapchat logo taking up the entire window. I wonder if they're also gonna explore retail along with hardware.
hellogoodbyeeee 1 day ago 2 replies      
I don't understand why all these software companies are in a rush to make hardware. With the lone exception of Apple, all hardware seems to result in a race-to-the-bottom commoditization with paper-thin margins.
listic 14 hours ago 0 replies      
How do I read the full story? I tried signing in with Facebook, but it redirects to http://www.wsj.com/europe.
pmontra 18 hours ago 1 reply      
Where are those 10-second videos stored? At Snapchat, on the phone, in the glasses? That changes the privacy implications of both the glasses and Snapchat dramatically. Remember what he said: he watched videos from one year ago. Snapchat has until now been all about deleting everything.
BinaryIdiot 22 hours ago 2 replies      
I can't be the only one who thinks this is going to eat GoPro's lunch, can I? Sure, the initial version may not be as high quality as a GoPro and the time limit isn't as good, but those are easy things to fix, and they have a monstrous social network (something GoPro is sort of trying to break into).

If anything kills GoPro it's something like this.

hackerews 13 hours ago 0 replies      
I love the difference between this and Glass - 'capture life's moments in style' (spectacles.com) vs. 'join the future' (http://marketingland.com/wp-content/ml-loads/2014/05/glass-h...)
bradleybuda 1 day ago 1 reply      
Friday night media release? Surprising.
idlemind 10 hours ago 0 replies      
Glasshole meet Snaptwat.
Dwolb 13 hours ago 1 reply      
On the design end, I don't like the look of the camera lense.

Are they able to darken the lense glass to hide the camera a bit? Maybe they could match the black of the camera sensor to the black of the glass a little more. Otherwise it looks a lot like two cameras on your face.

mankash666 23 hours ago 1 reply      
2015 revenues of $59M. Assuming an above-average fully loaded cost of roughly $250k per head, 1,000 employees cost about $250M a year. If they were a public company, they'd get slaughtered on the stock markets.
rabboRubble 21 hours ago 0 replies      
I reserve judgement until I see a pair in color. And better yet, in person. And see more detail about the power situation.
tomkinstinch 13 hours ago 0 replies      
What about people who wear prescription glasses, but can't wear or dislike contact lenses? Is it possible to replace the lenses with prescription ones?
adamnemecek 1 day ago 0 replies      
I wonder what is the intended use case of this. The response will be lackluster which will make creating a V2 harder.
mrharrison 10 hours ago 0 replies      
I think Apple needs to acquihire Snapchat and promote Evan as the new Apple CEO. I have zero hate for Cook and think he is a great CEO. But Evan is shaping up to have some of the most modern product prowess out there. I don't know if these Spectacles will be a hit, but I think his choices are in the right direction.
clydethefrog 17 hours ago 0 replies      
Reminds me of SeeChange from Dave Eggers' The Circle. I wonder if Clinton will wear them during this election.
oliv__ 16 hours ago 0 replies      
I'm just going to wait until this "Spectacle" self deletes after a few months...
Multiplayer 21 hours ago 0 replies      
The most interesting part of the article to me is how useless the WSJ comment section is.

I cannot believe this is still an issue for major publications.

JustSomeNobody 11 hours ago 0 replies      
These don't look comfortable at all.
vasanthagneshk 21 hours ago 0 replies      
Is it only me that does not want to read the article because I cannot read it anonymously?
dmritard96 17 hours ago 0 replies      
I'll wait for the generic model that posts to any social network...
superJimmy64 21 hours ago 0 replies      
This is a ridiculous product... reminds me of the classic upper-management/CEO "ideas". You know the kind: obsolete, neglects societal concerns (security???), nobody around to tell them it's a bad idea.

> (Why make this product, with its attendant risks, and why now? Because its fun, he says with another laugh.)

Sometimes you can look at something and just KNOW that there is not a chance that pile of junk is gonna gain traction.

drivingmenuts 13 hours ago 0 replies      
Nice design.

How do they solve the personal privacy issues that arose with Google Glass? Or have they even bothered?

PercussusVII 11 hours ago 0 replies      
Fuck off, Snapchat
smegel 20 hours ago 0 replies      
Goofy but clever. The kind of thing that might be a hit with a certain youthful demographic. And you need to be "cool" to pull something like this off - i.e. not Google.
amingilani 21 hours ago 2 replies      
How do I get around the paywall?
Bud 22 hours ago 1 reply      
throwaway28123 23 hours ago 1 reply      
I'd just like give everyone a reminder,

>The most important principle on HN, though, is to make thoughtful comments. Thoughtful in both senses: civil and substantial.

"Google Glass 2.0" and similar cheap bashing isn't just against the rules, it's boring and petty.

Take it to 4chan, you'll get the attention you're after.

nefitty 13 hours ago 0 replies      
This is exciting for the wearable headset market. If even a fraction of Snapchat's users get this, it will normalize the space much more than Google Glass was able to, especially considering the young demographic Snapchat caters to, which I assume is more open to new technologies.
Sublime Text 3 Build 3124 sublimetext.com
321 points by tiagocorrea  3 days ago   223 comments top 36
guessmyname 2 days ago 8 replies      
> With these latest changes, Sublime Text 3 is almost

> ready to graduate out of beta, and into a 3.0 version.

Wow, finally! I have been using ST3 for several years (wow, years) and always wondered what was keeping the developer from labeling that version as stable. Of all the issues reported here [1], I have never encountered one while using the editor for pretty much all my work. That $70 is definitely worth every penny. Sometimes I cringe at videos featuring ST with an unregistered license; this week it happened with a course from Google engineers via Udacity. Google engineers!!! As if they don't have a miserable $70 to buy a license. I assumed they were in a rush and didn't have time to set the license, which I hope they bought.

Anyway, thanks for all the hard work Jon, and recently Will.

[1] https://github.com/SublimeTextIssues/Core/issues

spdustin 3 days ago 1 reply      
From the release notes [0]:

> Minor improvements to file load times

I didn't even realize there was room to squeeze out more performance here. Sublime Text is wicked-fast opening pretty much everything I throw at it.

[0]: https://www.sublimetext.com/3

derefr 3 days ago 4 replies      
> a menu entry to install Package Control

If Sublime is going to acknowledge Package Control, why not just ship with it? I'm sure the Package Control folks would be glad to move their repo upstream.

gravypod 3 days ago 5 replies      
I wish the Sublime Text people would open source their code. I'd buy it from them in that event, and I'd finally have a text editor to recommend. Atom, VS Code, and everything else are completely blown out of the water by ST. There's a reason it's still around: nothing else comes close to doing what it does.

Good work to the people behind it; it's an amazing feat, no doubt. Just please consider making it free software for all of us who care about that. Amazing work nonetheless.

wkirby 3 days ago 3 replies      
Actually, the only thing that keeps me from switching back to ST3 is Atom's first-class support for `.gitignore` and excluding files from the quick-open menu.

I know there's a package that claims to update the file ignore pattern to match the open project, but it really doesn't work well at all.

connorshea 3 days ago 7 replies      
I really, really wish it was open source. I understand why it isn't, but with its main competitors being Atom and VSCode, it's hard to justify using a closed-source text editor, even if it's so much faster and I'm used to it.
statictype 2 days ago 2 replies      
The only thing I really want from Sublime (or VSCode) is an API that lets me display an output panel/sidebar with an html engine embedded in it.

Atom provides this - it also provides arbitrary html in the editor itself which is cool but also what makes it slow.

I just want it for the supplementary panels that show build outputs, documentation or other contextual information.

That's enough to let me customize it for our team's usage.

supergetting 2 days ago 0 replies      
When I first started using Sublime, I disliked the occasional popups, and thought I'd just keep using it without paying $70 for a text editor?!?!

But I HAD to buy the thing! Not because I wanted to avoid the annoying popup, but because of everything we know about Sublime today; performance, simplicity and intuitiveness of the UI, packaging system, etc.

The article mentions that they're coming out of beta in the near future. Nice! And I just noticed they're already mentioning Sublime Text version 4 (under the sales FAQ page).

modeless 3 days ago 3 replies      
I'm surprised so many people here are using Sublime to edit >100 MB files. Yes, it handles them (as long as the lines aren't too long), but it always has to load the entire file before displaying the first line. Aren't there some editors that don't have to do that?

On a related note, large files are often binary. I appreciate that Sublime can display binary files but it's pretty bare bones, and there's no editing support. I'd love to see what Sublime HQ could do if they worked on binary editing support for a couple of milestones. For example, the ability to locate and edit strings in binary files would be cool, as would a basic hex editor.
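An editor that avoids the load-everything behavior described above would read only a window of bytes around what's on screen. A minimal sketch of the idea (not how Sublime or any particular editor actually implements it; `preview_lines` is a hypothetical helper):

```python
def preview_lines(path, n_lines=40, chunk_size=64 * 1024):
    """Return up to n_lines lines from the start of a file,
    reading at most chunk_size bytes (constant memory, even for huge files)."""
    with open(path, "rb") as f:
        chunk = f.read(chunk_size)  # never touches the rest of the file
    text = chunk.decode("utf-8", errors="replace")
    return text.splitlines()[:n_lines]
```

A real editor would pair this with an index of line-start offsets, so "go to line" and scrolling can seek directly instead of re-reading from the beginning.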

Manishearth 3 days ago 1 reply      
> Also new in 3124 is Show Definition, which will show where a symbol is defined when hovering over it with the mouse. This makes use of the new on_hover API, and can be controlled via the show_definitions setting:

Is this just an API hook which a plugin can add a definition resolver to, or does this automatically find definitions for all builtin languages? If the latter, this is super cool!

If the former, I'm going to try and update https://packagecontrol.io/packages/YcmdCompletion for this


Edit: omg works out of the box. Seems to be a simple grep-based thing (so it lists all definitions of the same name), but that's still quite useful!

martanne 2 days ago 3 replies      
A number of people expressed the need to edit large files. For the development of my own editor[0] I would be interested to know what kind of usage patterns most often occur. What are the most important operations? Do you search for some (regex) pattern? Do you go to some specific line n? Do you copy/paste large portions of the file around? Do you often edit binary files? If so what types and what kind of changes do you perform?

[0] https://github.com/martanne/vis

mangeletti 3 days ago 1 reply      
I have one single complaint about Sublime Text:

In order to truly clear your history (files open, last searches, etc.), I have to maintain a script with the following:

 find ~ -name '*.sublime-workspace' -delete
 rm ~/Library/Application\ Support/Sublime\ Text\ 3/Local/Session.sublime_session
Other than that, see https://news.ycombinator.com/item?id=12553515

TsomArp 2 days ago 0 replies      
I'm an UltraEdit user. I have tried Sublime Text because of all the nice comments, but I don't see it. Can somebody that also uses or used to use UE tell me what I'm missing?
sagivo 3 days ago 2 replies      
sublime is by far my favorite editor: fast, with lots of plugins, especially if you work with big files. i sometimes need to work with files larger than 150MB, and it takes a few seconds to open them. atom crashes and can't even open the files.
onetom 2 days ago 0 replies      
Here comes a piece of history. I replied this to the Sublime Text 2 purchase confirmation email I got from Jon Skinner on 2011/08/30:

> hi jon,
>
> my salary was reduced by 30% just yesterday, but when i woke up today,
> the 1st thing i did was purchasing sublime. it's that fucking awesome!
> i wish it would be open source, so people could learn from it...
> but, hey, i doubt many open source developers could contribute quality
> code to it.. :)
>
> if u could implement the elastic tab stop feature (which has some reference
> implementation on the nickgravgaard.com/elastictabstops/ site), then i
> would be happy to pay another 60 bucks for it.
> actually, u could sell a separate license for the version which has this
> feature...
> i know it would be quite elitist, but it worked well with the black macbooks
> back then...

jayflux 2 days ago 0 replies      
The official Sublime Rust package supports this update: https://github.com/rust-lang/sublime-rust/pull/87
putlake 3 days ago 0 replies      
On Mac I use TextWrangler for quick editing and VS Code as the IDE. I never need to open super large files so after reading this discussion I tried to open a 177MB text file in TextWrangler and it opened quickly and was editable. Searching within the file was also super fast.
ricardobeat 2 days ago 0 replies      
The addition of Phantoms [1] is the killer feature in this release for me. This will allow embedding custom HTML [2] inline in the editor, which is something I've been dreaming of - the power of Atom's nice plugin UIs with no compromise in speed!

[1] https://www.sublimetext.com/docs/3/api_reference.html#sublim...

[2] https://www.sublimetext.com/docs/3/minihtml.html

woodruffw 3 days ago 0 replies      
Awesome! I'm especially liking the Phantoms API - there's a ton of potential there for richer plugins and graphical inlining.

I've moved between maybe half a dozen editors over the past half-decade, but I always end up coming back to Sublime.

0xmohit 2 days ago 2 replies      
nonbel 2 days ago 0 replies      
If this issue with the SublimeREPL package could be resolved, it would be perfect: https://stackoverflow.com/questions/27083505/sublime-text-3-...

Anyone have a guess as to why this happens? It causes me headaches using R as well.

jeffijoe 2 days ago 0 replies      
Been using Sublime Text 3 for years (and I do have a license), and been trying out Atom/VSCode lately. Atom can get real slow, but I feel like the extensions for Atom are of higher quality (linters, TypeScript integration). I think it might have something to do with HTML/CSS/JS vs Python for plugin development.
dman 3 days ago 0 replies      
One minor feature request: can you please simplify setting the font size for the tree browser and for the menu entries? (I know the tree browser font size can be set by the theme, but it is a bit non-trivial to patch theme files with PackageResourceViewer to do this.) I still haven't found a way to change the font size of the menu entries.
codepunker 2 days ago 0 replies      
Amazing! ST is the only editor I considered good enough to pay for and I can see it's getting even better!
makapuf 2 days ago 0 replies      
I suspect ST3 still being in beta is a service to ST2 users, who paid for it a relatively short time before the first ST3 betas and whose license keys are valid for the ST3 betas but won't be once ST3 is out. Not that I am in this situation at all.
muktabh 2 days ago 0 replies      
Thanks for the hard work sublime text team for making programming so enjoyable. I already find ST3 so flawless that to think it still can be improved is beyond me. Once again, great going.
barpet 2 days ago 1 reply      
I am an emacs person that converted from ST (on Windows/Linux) and TextMate but I have always preferred ST/Textmate over anything else available.
pbnjay 3 days ago 0 replies      
Whoa very nice new features! Now I need GoSublime to support them!
realraghavgupta 2 days ago 0 replies      
Atom is really good, but after using it and some others, I am back to Sublime Text. They are still working on version 3.0, but even the beta is so stable.
dikaiosune 3 days ago 1 reply      
Very cool to see a screenshot from servo's codebase.
ld00d 2 days ago 0 replies      
My favorite new feature:

> Settings now open in a new window, with the default and user settings side-by-side

brightball 2 days ago 0 replies      
Sublime is slowly making me end my hold out that Textmate will one day take over again.
alexmorenodev 2 days ago 0 replies      
I'm craving a transparent background.
niahmiah 3 days ago 0 replies      
nobody cares. Atom ftw
gotofritz 2 days ago 2 replies      
There are a few annoying things about ST3 which aren't so on Atom

- no engagement with the developers. For $70 I expect to be able to file bug reports and maybe some feature requests. Without being banned.

- multi file search is ridiculously poor. I can't save search patterns, the long text box with all the file patterns is hard to navigate (on OS X if you put the cursor at the end it starts scrolling), but most of all the result pane doesn't stick as it used to. I have to search again every time I click on a file from the results then close it.

- copy and paste is STILL buggy on OS X. Sometimes you paste a string and it puts it in the line above the one where you have your cursor.

- package control is not included. It's just common sense

- the scrollbars are invisible on OS X. I don't want a minimap; it uses too much space and adds too much noise

- I use BracketHighlighter. Every time I want to customise the highlight colour it's a royal pain in the neck because of ST3's crazy architecture

I'd much rather use atom these days.

bcherny 3 days ago 2 replies      
Does Sublime still exist? With all the hubbub about VSCode and Atom, I've sort of forgotten about it.
Facebook Overestimated Key Video Metric for Two Years wsj.com
278 points by tshtf  2 days ago   148 comments top 26
coldtea 1 day ago 7 replies      
>the tech giant vastly overestimated average viewing time for video ads on its platform for two years, according to people familiar with the situation

They didn't "overestimate". They plainly gave false numbers to drive ad sales.

Coldtea's law: Never attribute to incompetence what can be explained by profit.

sean_patel 2 days ago 3 replies      
"How Facebook is Stealing Billions of Views" - Kurzgesagt In a Nutshell exposed this with detailed analysis and evidence. Watch it here => https://www.youtube.com/watch?v=t7tA3NNKF0Q

This is neither a bug nor a "mistake". It's straight-up fraud. Fraud because Facebook uses these false #s to prove to their advertising clients that they are getting a HUGE ROI when they really aren't. The investors and analysts also bump up their ratings, and the stock goes higher and higher on false ad view #s.

By counting a view as legitimate after the video plays for only 3 seconds, their "algorithm" counted billions of views even when the user had not actually seen the video, because autoplay is enabled by default, you have to opt out to disable it, and no one does. It takes a person roughly 4 seconds to scroll it off their feed as they quickly "scan" their friends' posts.
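The 3-second rule described here is just a threshold filter over per-play watch times. A toy sketch (all numbers invented) showing how autoplay inflates the raw play count relative to counted views:

```python
# Hypothetical per-play watch times in seconds; autoplay plus fast
# scrolling produces many sub-3-second plays.
watch_times = [0.5, 1.2, 2.9, 3.4, 15.0, 48.0]

THRESHOLD = 3.0  # the reported cutoff for counting a "view"

total_plays = len(watch_times)
counted_views = [t for t in watch_times if t >= THRESHOLD]
# 6 plays occurred, but only 3 qualify as "views" under the cutoff
```

Whether the short plays are also excluded from every other reported metric is exactly the question raised in the comments below.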

codehusker 2 days ago 4 replies      
On my programmer hand, I can see myself making a similar technical choice. We have auto-playing videos, we shouldn't count the views less than 3 seconds because that wasn't really an intentional view. But that means we should also exclude them from the overall view count. I don't know if FB did that as well.

On my shareholder hand, this seems slightly like fraud.

On my advertiser foot, this seems slightly like fraud.

emcq 2 days ago 6 replies      
This seems slightly overblown. I suspect this was a metric useful for engineering that found its way to external usage. That's because in many recommendation systems in practice you want to filter out the spurious views, and a simple way to do that is with dwell time.

It's not uncommon to require at least 50% of pixels in view for 1 second before you have an impression for static images [0, 1]. AOL defines an impression requiring 2 seconds for images [1]. Facebook likely did some analyses to find that 3 seconds was a good cutoff for their site.

There are more sophisticated ways to estimate dwell time but they seem uncommon in practice; perhaps due to their difficulty communicating to advertisers what the impression metric actually means.

For sites with many bots or inbound marketing you often find users bounce quickly which drives some of this timing. I'm a bit surprised it needs to be that high for Facebook without many bots or users bouncing quickly. Perhaps this is for mobile users scrolling quickly.

[0] http://advertising.aol.com/specs/terms/aol-viewability-terms

[1] http://mediaratingcouncil.org/063014%20Viewable%20Ad%20Impre...

bluetwo 2 days ago 1 reply      
"likely overestimated average time spent watching videos by between 60% and 80%"

Soooooooo... fraud? They say it didn't impact billings/revenue, but I have to imagine they will now be in a position to give discounts, if not refunds.

jtchang 2 days ago 1 reply      
I don't think this is overblown at all.

I expect to see a class action lawsuit because of this. If you are a big advertiser and you can somehow show damages because of this error there may be a case.

There are some adtech companies recently who focus purely on reporting the metrics of your ad purchases. Kind of like acting as a third party. I forgot exactly what they are called.

laurihy 1 day ago 2 replies      
Then again, if a "video view" is defined as "watched over 3 seconds (50% of the video visible in the screen, IIRC)", then it sort of makes sense that "Average Duration of Video Viewed" doesn't include non-views.

For sure Facebook should attempt to name their metrics as descriptively as possible, but also advertisers should make sure they understand how different conversions are measured, what's included and what's not. Another example would be "Clicks" metric, which included all engagement (i.e. likes, shares etc), instead of just "Link clicks".

pducks32 2 days ago 1 reply      
I don't believe for a second that this was unintentional. The amount of press and mind share that Facebook has received for their "rocketing video efforts that rival YouTube" was totally worth pissing off some ad exec. Very few people will see this news compared to "Facebook dominates video" headlines that have been everywhere. This was a smart play on their part.
nostrademons 2 days ago 2 replies      
My data-weenie hat tells me that this is a silly metric anyway. As an advertiser, you really care about the distribution of viewing times, and in particular what fraction of viewers watch the video all or most of the way through. Arithmetic mean is virtually useless when the data has a power-law or other non-linear distribution, and it's highly likely video viewing times exhibits this.
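To illustrate the point about skewed distributions, here is a toy sample (numbers invented) where the arithmetic mean says almost nothing about the typical viewer:

```python
# Invented watch times: most viewers bounce quickly, a few watch for minutes.
watch_times = [1, 1, 1, 2, 2, 3, 5, 8, 60, 300]

mean = sum(watch_times) / len(watch_times)           # dragged up by the tail
median = sorted(watch_times)[len(watch_times) // 2]  # upper-middle value

# mean is 38.3 seconds, median is 3 seconds: an order of magnitude apart.
# A percentile breakdown (share of viewers reaching 25/50/75/100% of the
# video) would tell an advertiser far more than the mean does.
```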
mikejb 1 day ago 0 replies      
I didn't fully understand what the bug was, but I found on a German news page[1] with a slightly more detailed description:

Aufgezeichnet wurde zwar die Gesamtsehdauer aller User, geteilt wurde dieser Wert allerdings nur durch diejenigen User, die das Video länger als drei Sekunden ansahen ("The total viewing time of all users was recorded, but this value was divided only by those users who watched the video for longer than three seconds.")

So the average was not calculated correctly: they accumulated the duration of all video views (including those shorter than 3 seconds) but divided it by the number of 'legitimate' views, i.e. only those longer than 3 seconds. So you get a pretty big offset if you have many views under 3 seconds (which they probably do, thanks to autoplay).

[1] https://www.heise.de/newsticker/meldung/Facebook-Unklare-Vid...
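If that description is right, the bug is a mismatch between numerator and denominator. A sketch with hypothetical numbers:

```python
watch_times = [1, 1, 2, 2, 10, 20, 30]  # hypothetical plays, in seconds

# Plays counted as "views" under the longer-than-3-seconds rule
qualified = [t for t in watch_times if t > 3]

# Reported bug: total time over ALL plays, divided by qualified views only.
buggy_avg = sum(watch_times) / len(qualified)  # 66 / 3 = 22.0
fixed_avg = sum(qualified) / len(qualified)    # 60 / 3 = 20.0
```

With only a few short plays the inflation here is modest; the more sub-3-second autoplay plays there are relative to real views, the larger the gap grows, which would fit the 60-80% overestimate the article reports.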

cft 2 days ago 0 replies      
We are using Facebook Audience Network in random rotation with other ad networks for mobile ads. FB ad earnings are about 3-5% of the other networks', with exactly the same traffic. Other networks disclose the percentage of the revenue split between them and the publishers, but FB does not. What do they overestimate in this case, I wonder?
bonniemuffin 2 days ago 0 replies      
I don't think average watch time is a particularly useful metric, because no matter how they count views, there's some kind of arbitrary cutoff. If they count a view as soon as the player finishes loading, they'll include a whole bunch of people who didn't intend to watch the video, and in fact only "watched" a few milliseconds of it before bouncing. So that will drag the metric down artificially. I bet if they count views as soon as the video starts playing, they could drag down average watch time by improving player load time, because they'd count more unintentional "views" of very short duration.

I hope/assume this is part of a suite of metrics that they provide to advertisers, so you can understand it in the context of other things like video completion rate or counts of views that reach X% through the video.

throwanem 2 days ago 0 replies      
It's like watching a wasp land on a nettle. You know somebody's going to get stung, but you just don't care who.
shostack 1 day ago 0 replies      
While this is concerning, smart advertisers were looking at the distribution of view length percentages anyway as you can easily pull those columns in and their derived metrics.

Likewise, smart advertisers look at lift in conversion metrics when possible, in which case this stat is irrelevant.

That said, FB has not exactly helped things by making metric definitions a little obfuscated in general.

Personally, I see a bigger concern is them giving 100% view through conversion credit with a 1 day window by default as part of any website conversion action tracking. There are very few cases (like some retargeting situations) where you'd ever want to give full weighting for a VT, and while there is likely value in VT's, I probably wouldn't give 100% credit to them by default given the rate at which people scroll on mobile. But advertisers like to see big numbers, agencies like to show big numbers, and so you have platforms like FB aggressively try to push metrics like this and some of their rather loose definitions of "engagement" without great explanation of the nuances or pros and cons. These are largely left up to the advertiser to determine since, to be fair, they are very subjective.

Savvy buyers know this and configure their reporting and tracking settings accordingly because FB and PMDs give you those options. They are sometimes just buried.

Ultimately, IMHO FB and Google's greatest defense towards any of these sorts of claims is better and (more importantly) transparent attribution data and tools. If they can prove their value on the bottom line, other things often don't matter to many advertisers. Attribution is a tough nut to crack, but for advertisers spending large sums, it is critical to be successful in these channels.

aaron695 2 days ago 0 replies      
I don't get it, it makes sense?

Less than 3 seconds is not a view any more than seeing the picture of the video is a view. It doesn't count I'd say?

Or do advertisers have to still pay for a 2 second view?

restlessdesign 2 days ago 1 reply      
It is unreasonable, even for a developer, to count 3 seconds as a play. Anyone working on a video product would know they were inflating the numbers.
Bedon292 1 day ago 1 reply      
Ok, what am I not understanding here?

If they are only counting a view as over 3 seconds. And then only averaging the those views what is the problem? They tell me 1000 people viewed it (over 3 seconds), and then say the average view time was 10 seconds for those 1000 users. How is this an issue? Why do I care that 9,000 people scrolled past it and weren't counted in the viewer numbers? Now if they were reporting this as 10,000 viewers and saying 10 second average, I see the problem. Is that what was happening? Or what am I missing?

smithunsero 1 day ago 2 replies      
In 2012 there were reports that 80% of traffic from Facebook ads was automated bots. So this is not the first time Facebook has used tricks to get more money from clients.
breatheoften 2 days ago 1 reply      
What was the actual bug? Does this mean that no video had an average view time less than 3 seconds?
bruinbread 1 day ago 0 replies      
This isn't anything new. Several YouTubers have "exposed" how Facebook tracks views. It's not accurate, but it's the system that has been in place for a while and they've defended it as such.
Arkaad 1 day ago 0 replies      
I am disappointed by FB.Next we'll learn that they do tax evasion.
supermatt 1 day ago 0 replies      
If Facebook only counts a view as over 3 seconds, and they measure the average using only those views over 3 seconds, what is the problem? Surely now it's less representative?
randomgyatwork 1 day ago 0 replies      
When will Facebook 'content creators'(users) get compensated for the value they create?
yueq 2 days ago 0 replies      
Will there be a class action on this matter?
malloreon 1 day ago 0 replies      
how much of their video income will they be refunding to advertisers?
tn13 1 day ago 0 replies      
It might be inaccurate, but it does not matter much from an advertiser's perspective. Given conversions etc., eCPMs converge to the real expected value.
Zuckerberg and Chan aim to tackle all disease by 2100 bbc.com
255 points by timoth  3 days ago   317 comments top 53
kendallpark 3 days ago 16 replies      
I'm glad they're putting money into medical research, but I kinda roll my eyes when people make big claims about curing X, especially when X is something incredibly broad like "cancer" or in this case, "all diseases." AI/ML has barely scratched the surface of its potential in medicine, however, I find it naive to think that you can throw AI/ML at any random disease and always get a cure. Even after a century. Will we have a cure for trisomy 21? For antisocial personality disorder? For obesity and addiction? These things are far more complicated than just creating the right drug.

But as much as I'm rolling my eyes at their blanket statement, the spirit of "yes we can!" does way more for science and progress than naysay of critics.

Animats 3 days ago 4 replies      
Probably not feasible without some genetic re-engineering to design out vulnerabilities to common diseases. That's how aging will probably be solved. There's a big debug time problem, though. It takes about two generations to be sure you got it right. We'll probably have very long lived mice decades before it works for humans. (Many cancers in mice can be cured now. This doesn't translate to humans.)

Then there will be species conflicts. Merck people won't be able to mate with Novartis people because they'll be too different genetically.

apatters 2 days ago 4 replies      
This is difficult to say without sounding snarky, but will 'all disease' include the mental disorders (both discovered and undiscovered) that Facebook inflicts upon its users? Depression from seeing other people's perfect lives, obsessive compulsive dopamine-fueled update checking to maximize Zuck's ad revenue etc.

I'm glad he's spending his money this way but how about the way he makes it. Facebook is not a particularly benign product and when you throw in the rising privacy, censorship, and "Free Basics" concerns, I would argue it's creeping towards having a net negative impact on the world.

I would have much more respect for Zuckerberg if he took a profit hit to fix his product and how his company interacts with the world. It's worth noting that Bill stepped away from Microsoft before he got deeply into philanthropy, in part to avoid conflict of interest. He knew he had a checkered past and he made a break with it. Zuck has a checkered present. Curing diseases in the developing world while pushing your product on it at the same time is somewhat morally ambiguous in my view.

idlewords 3 days ago 3 replies      
Pharmacology is an interesting example where better technology and scientific understanding have made things worse than earlier, low-tech methods ("inject plant extracts into animals and see what happens").

The number of new drugs discovered per dollar of research has been dropping since 1960, and obvious explanations (like "the easy ones have already been found") turn out not to explain the phenomenon. [http://www.nature.com/nrd/journal/v11/n3/fig_tab/nrd3681_F1....]

This is something we should try to understand better, since it goes against the intuition that technology is an unalloyed good in scientific research.

I applaud the money they're spending, but the level of technophilia in the announcement gives me pause.

mmaunder 3 days ago 2 replies      
ryandrake 3 days ago 4 replies      
Not to take the wind out of anyone's sails, but there is a concern to be raised about relying more and more on charities to fund the public good. When a democratically elected government funds the public good, at least in theory, the public at large has a small say in choosing what counts as "public good". When you leave it to charity, you're relying on the morals of individual wealthy donors to decide what counts as a "public good". I don't claim to know which method is more risky in terms of mis-allocation of resources, but it's something to think about.
bitL 3 days ago 1 reply      
Saw this on Reddit, maybe someone can comment:


From Wikipedia:

The Chan Zuckerberg Initiative is not a charitable trust or a private foundation but a limited liability company which can be for-profit, spend money on lobbying, make political donations, will not have to disclose its pay to its top five executives and have fewer other transparency requirements, compared to a charitable trust. Under this legal structure, as Forbes wrote it, "Zuckerberg will still control the Facebook shares owned by the Chan Zuckerberg Initiative".

calsy 3 days ago 5 replies      
I know it's more of a western problem, but it would be good if someone could find some real solutions to the obesity crisis.

If you ask many GPs in the west they will tell you that a majority of illnesses they address are related to weight and diet. With an aging, increasingly overweight adult population alongside a sharp rise in child obesity come big consequences for health care over the next half century.

I'm not saying it's an easy task you can just throw money at, but if trends continue as they are now, the health and economic impact on society will be huge.

Bar some sort of national catastrophe (e.g. war, famine, disease), is it a crazy idea to think we could see a reduction in obesity levels? Or are we simply resigned to the fact that we will just get bigger and bigger in the future?

a13n 3 days ago 3 replies      
"Mark Zuckerberg and Priscilla Chan announce $3 billion initiative to cure all diseases"


sams99 3 days ago 0 replies      
The absolutely crazy thing is that in this day and age, the vast majority of diagnosed paediatric cancers are not sequenced (no germline, tumor, or RNA seq). It actually makes me feel quite sick that there is so much information out there that is not being mined or analyzed.

I hope this money does not go into some sort of "in 100 years from now moonshot" as opposed to, we have huge urgent needs for money right now.

bwindels 2 days ago 0 replies      
These self-congratulatory billionaire philanthropists and their tax evasion schemes really irk me. Everything won't be peachy in the future. We're well on our way to warming the planet by 4 degrees Celsius by 2100, because of the globalized world in which you could gain your fortune. Stop exerting the power of money that doesn't belong to you; pay your taxes like everyone else, and maybe governments would have more means to address democratically prioritized issues, with more oversight. These flashy single-issue funds with vague goals are mainly there to serve the ego of whoever funded them.

The school of life has a good video on this topic: https://www.youtube.com/watch?v=mTAE5m3ZO2E

BorisVSchmid 2 days ago 0 replies      
Most infectious diseases in humans are known to have originated from wildlife, so eradicating all current diseases will still leave us with new diseases entering the human population. The current rate at which new diseases establish themselves in the human population is about ~1 per year, more so under an unstable climate [Greger, Woolhouse].


Looks like Chan and Zuckerberg's initiative on infectious diseases is focused on a rapid response once a disease establishes itself in humans. There is certainly a lot to win there, but it is still acting after the fact, rather than prevention. Would have liked part of the effort to be focused on monitoring wildlife to understand which diseases are at risk of jumping over.

Greger, M. Crit. Rev. Microbiol. 33, 243-299 (2007). https://www.ncbi.nlm.nih.gov/pubmed/18033595

Woolhouse, M. E. J. Trends Microbiol. 10, s3-s7 (2002). https://www.ncbi.nlm.nih.gov/pubmed/12377561

hmate9 3 days ago 6 replies      
Not sure why there is a lot of negativity here. It doesn't seem that ridiculous of a goal. I think by 2100 we will have extremely powerful AI that will make fighting diseases extremely easy compared to methods available today. To be honest, it seems like an achievable goal.

I wish Zuckerberg and Chan the best in this.

tschellenbach 3 days ago 2 replies      
This is great. Great achievements as a startup founder and now philanthropy.

For those comparing this to pharma companies. Pharma companies invest in drugs that they can make money with. It sounds as though this $3 billion is aiming at more general research and making it publicly available.

stevefeinstein 3 days ago 1 reply      
I would never discourage them, audacious goals are how real change is achieved.

But in the history of the world, every time you conquer one disease, a worse and scarier one seems to fill the void. And if you wanted to get pedantic, it could be said that Humanity is a disease of the Earth. Eliminating that infestation might qualify under the goal. Perhaps AI is not going to solve the problem by eliminating disease, but eliminating that which can be diseased. Kill the prey, and the predator dies.

whybroke 3 days ago 1 reply      
This is lovely and all.

But he is in a much better position to work on the curious problem of ever-increasing political polarization in our new post-factual world.

If I were to guess, over the next century that problem is going to result in vastly more misery than a slight speed-up in medical technology could compensate for.

pkaye 3 days ago 0 replies      
I just feel this is too broad and unbounded. They should have focused on a specific diseases and a shorter timeframe. By 2100 most of us will be long dead by current standards.
vegabook 3 days ago 4 replies      
Very nice. Some context:

Mark Zuckerberg is worth 55 billion dollars. This is 5% of his net worth.

Mark Zuckerberg spent 20bn on WhatsApp. At his 28% shareholding in Facebook, that's a 5.6bn USD personal commitment.

The top 5 global pharma companies spent 42 billion USD on R&D in 2015 alone. Total pharma-sector R&D is circa 200 billion USD. Every single year. They aren't anywhere near "curing all diseases". This initiative would fund them for 5 days.

Very generous, but let's keep some perspective.

karmicthreat 3 days ago 0 replies      
So lets say this were possible. Where would be the best place to throw this money? Just researching X disease one by one isn't going to be successful. There are not enough resources available to make that happen that way. Period.

So what if a big leap in computational biology happened? Making faster machines is relatively easier and largely unregulated.

So you focus on simulating disease and some form of automation that tries to cure it. We have the problem of building these models for the computer to crunch on, so why not build them from people? Continuously monitor everything about someone: DNA, the various omics, self-reports. All the while, machine learning is trying to learn these models, so other automations can change them.

So the first thing we need is a way to collect all this data, itself a major medical breakthrough. How much data do we need to build the models? This seems to be the first breakthrough we need to even approach this.

supergirl 3 days ago 0 replies      
any money spent on research is good but,

1. Governments around the world probably spend hundreds of billions yearly on research in medicine alone, and Zuckerberg wants to solve everything with his 3bn?

2. Our current technology is not even close to good enough to make the kind of major breakthroughs needed to say we 'cured cancer'. For example, the biggest neural networks we have trained are on the order of 10bn parameters, while the human brain has 100bn neurons, each, I'm guessing, having at least 10 parameters; similarly for very small-scale technology. I think we need to tone down the hype in AI and computing a bit.

2pointsomone 3 days ago 0 replies      
Feeling so privileged right now; what a great time to be alive and see such visionary leadership. Thank you Mark, Priscilla, Bill, and the thousands of people whose names I don't know who work tirelessly on these problems.
ravenstine 3 days ago 0 replies      
All disease? I hope they mean specifically pathogens, because there are plenty of diseases that are caused by other things or have unknown causes, and you can throw ten times the net worth of Facebook at them and they probably wouldn't be cured noticeably sooner. If it's just pathogens, I could believe that if we programmed nanoprobes that could target them, making antibiotics/antivirals permanently obsolete.
meitham 2 days ago 0 replies      
To achieve this goal, Mark will need access to the entire world's medical records, selling our data to insurance companies. Sorry Mark, but I refuse to trade my privacy, and the safety of millions who cannot afford medical insurance, for a promise of living in a world without disease.
bunkydoo 3 days ago 0 replies      
Addiction is probably one of the worst diseases, and I am not sure that you can 'cure' it.
helthanatos 3 days ago 0 replies      
New diseases will come to be. Possibly the cures will cause the new diseases, possibly something else, but all disease won't be conquered. Does this ambitious projection include only disease, or does it also include disorder? I think it would be cool to cure disorder before disease.
NicoJuicy 2 days ago 2 replies      
I don't think it's a good thing to make us disease-free in terms of evolution.

I think it's better to just cure the diseases that kill us (long- or short-term), e.g. cancer, AIDS, ... or that immobilize us over the longer term (Parkinson's, ..)

Why? Because I think the best defense is a good immune system. You don't get a good immune system by staying inside, infection-free, and using medication all the time. Let people become sick (with non-lethal illnesses) and let nature fight it off.

danielmorozoff 3 days ago 0 replies      
In this vein, some very interesting work is being done by the Church group at Harvard to encode cells to withstand all viral infections at the genetic level.


subcosmos 2 days ago 0 replies      
I built a visualizer of the top causes of death. We've got a lot of diseases to cure!


yazaddaruvala 3 days ago 0 replies      
Including aging as a disease?
Arkaad 3 days ago 0 replies      
I wonder what they plan to do about the anti-vaccination movement.
mungoid 3 days ago 0 replies      
What about new diseases that are discovered between now and then?
zaro 2 days ago 0 replies      
I don't remember whose quote this is, but it sounds relevant to this article:

We won't have a cure for diseases until we first have a cure for greed.

unknown_apostle 3 days ago 1 reply      
The commitment is very big, my comment is very small.

Whatever exists needs to be challenged continuously to keep existing. Any naive attempt to suppress all adversity forever will backfire.

chris_wot 3 days ago 0 replies      
I'd like to see their efforts in stopping the spread of the most rapidly increasing disease to challenge humanity in any man's lifetime.

It's called Facebook.

troels 3 days ago 0 replies      
How does one even define "disease"?
pokemongoaway 2 days ago 0 replies      
Such a joke. If they were anyone else, we would be calling them out. Is there any evidence that this wealth won't be concentrated in the same industry, whose stock prices would decrease if its drugs actually cured the diseases they treat? A claim like this can be dismissed without evidence.
languagewars 3 days ago 0 replies      
I'm sick of this contrarian disruptive nonsense.

Redd Foxx got to choose his fate, but what am I going to do while forced to sit around the hospital dying of nothing?

Find a socially acceptable alternative to disease before you eliminate it.

(Ok, to put it more clearly: get off my damn lawn and my damn planet you stupid non-exponential function understanding kids.. Please!)

Arkaad 3 days ago 1 reply      
Why does the author keep saying "Ms Chan"? Aren't they married?
wiz21c 2 days ago 0 replies      
Who will own the intellectual property rights on the discoveries ? If it's Zuck, then I'm not interested.
karmicthreat 3 days ago 0 replies      
By 2100 I hope we have patchable structures that just need an electronic update to generate the new immune cell/protein/expression.
grownseed 3 days ago 2 replies      
These acts of apparent philanthropy from ridiculously wealthy people rub me the wrong way. It feels like those rich patrons from older times who would bestow their "generosity" as they pleased, except that in modern times most countries have frameworks in place to make this sort of work happen, and these rich people choose to ignore them, or worse, disparage them.

Companies like Facebook (and people like Mark Zuckerberg) actively avoid paying taxes whenever they can, in a lot of countries that, for example, have public healthcare and other public institutions that would normally benefit from these taxes.

It's a bit like repeatedly stealing some kid's lunch, and then making fun of the kid for her weakness while appearing strong (and stronger in comparison to the weak kid) and compassionate when the kid passes out and you carry her on your back.

caub 2 days ago 0 replies      
As long as smoking is removed from the planet, I'm fine.
dschiptsov 2 days ago 0 replies      
Not bold enough. Let them tackle nondeterminism first.)
partycoder 2 days ago 0 replies      
Most likely, by the year 2100 Mark and Priscilla won't be alive, and the people who read about this claim won't be alive either, so it is not a legally binding claim. I think it's a way to promote their foundation.
drcross 3 days ago 2 replies      
EDIT: JWZ disagrees that this is all rainbows and puppydogs.

Archive.org link because JWZ dislikes HN- https://web.archive.org/web/20160818144913/https://www.jwz.o...

johansch 3 days ago 1 reply      
Hubris, much?

Yes, this is a commendable effort, but I don't think they have the smarts/money for this. Even at an investor/patron level.

snappy173 3 days ago 0 replies      
this is marginally better than the Bluth family's fundraising efforts to cure TBA ...
aestetix 3 days ago 1 reply      
Step 1: shut down Facebook.
limeyy 3 days ago 7 replies      
I wonder why all these billionaires first want to make billions and then do philanthropy. How about making the services/businesses/products they make all this money with more affordable in the first place?

For example: MS Office used to cost 4-500 euro for the average home user a few years ago. That was ridiculous.

If you have a small shop and 2000 Facebook page likes, Facebook rips you off each time you want to reach them.

Maybe the market dictates these prices, but then again, they are in the position to dictate the prices in the first place.

meira 3 days ago 1 reply      
Charity with evaded money is very evil.
M_Grey 3 days ago 1 reply      
I plan to conquer the world and all of its inhabitants... long after all of us here are dead or senescent.

I guess it is easy to make empty statements if you make them apply to a far enough future. Mars colonies from Musk (still working on getting the colonists there in one piece of course), and all disease tackled!*

*With $3Bn

lostmsu 3 days ago 0 replies      
I really wonder why those techy multibillionaires invest in medicine rather than techy stuff. I'd rather like to see fusion and hardware research done. Speaking globally, that might just bump the global economy so significantly that illnesses would go down, just because more people could afford education and medical care easily.
nenadg 3 days ago 4 replies      
Oh that's just great, another billionaire philanthropist curing all diseases. Like Gates did.

I don't know whether their passion loses momentum inside the 'initiative/fund', or whether it was doomed to be the opposite of its cause from the start.

OpenSSL Security Advisory openssl.org
265 points by jgrahamc  2 days ago   102 comments top 12
VeejayRampay 2 days ago 6 replies      
My first reaction: This never ends, does it? Second reaction: Security is notoriously hard; nice that people are looking at the code and being thorough, it's for the collective best.
dijit 2 days ago 4 replies      
Luckily I moved everything to OpenBSD's LibreSSL, which is /mostly/ compatible.

I wonder if this bug affects them; typically the HIGHs haven't [0].

It really feels like every other week there is a bug in OpenSSL, and after following along with the LibreSSL blog I understand why: the code is an absolute mess [1]

[0] http://undeadly.org/cgi?action=article&sid=20150319145126

[1] http://opensslrampage.org/page/49

cperciva 2 days ago 1 reply      
OK, now we can go ahead with FreeBSD 11.0-RELEASE.

In my time as security officer, it was a rare and surprising occurrence when we didn't need to hold an upcoming release due to a pending OpenSSL advisory. It got to the point of the release engineer saying "I think we're ready to start the release builds tomorrow, any news from OpenSSL?" and me replying "nothing yet, but I'm sure it will come" -- their timing was absolutely impeccable.

MichaelMoser123 2 days ago 2 replies      
If I understand correctly, this one is relevant to certificate issuers that publish certificate revocation lists via the OCSP protocol; it could be used for denial of service but not for hacking into the certificate issuer, is that correct?

Also, most bugs in OpenSSL seem to be during renegotiation of protocol zzzz defined by some obscure RFC that nobody really understands how to implement, is that correct? Why can't they simplify these protocols? Do we really need these fancy renegotiation features?

Actually Daniel Bernstein says that over-complicating the protocols is a clever way to make sure that software infrastructure remains insecure.


justinmayer 2 days ago 1 reply      
As far as I can tell, patched OpenSSL packages for affected Debian/Ubuntu releases are not yet available.



_jomo 2 days ago 1 reply      
The relevant commit (fix): https://github.com/openssl/openssl/commit/e408c09bbf7c3057bd...

Edit: This is only the commit for the HIGH severity CVE-2016-6304.

robryk 2 days ago 1 reply      
Why is the severity of the first vulnerability high? It allows a denial of service on the server's side by a client and nothing else. The second vulnerability seems to be very similar in severity, except that it can be used both against clients and servers, and yet it is of just moderate severity. What am I missing?
beezle 2 days ago 1 reply      
Wonderful. Glad I recently added OCSP support to servers. Arg!

So does this affect TLSv1.2 only servers that do NOT support client renegotiation of any type?

nodesocket 2 days ago 0 replies      
Does not appear that SSL Labs (https://www.ssllabs.com/ssltest) tests for this exploit (CVE-2016-6304) yet. Anybody have instruction on how to test your server?
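Not a direct test, but as a first pass you can at least classify an installed version against the fixed releases this advisory names (1.0.1u, 1.0.2i, 1.1.0a). The helper below is hypothetical, only reads version strings, and per the advisory, servers before 1.0.1g are not vulnerable in the default configuration anyway:

```python
# Hypothetical helper: classify an OpenSSL version string against the
# fixed releases named in this advisory (1.0.1u, 1.0.2i, 1.1.0a).
# This is NOT a test for CVE-2016-6304 itself, just a banner check.
import re

FIXED = {'1.0.1': 'u', '1.0.2': 'i', '1.1.0': 'a'}  # branch -> fix letter

def check_ver(version):
    m = re.match(r'(\d+\.\d+\.\d+)([a-z]*)', version)
    if not m:
        return 'unrecognized'
    base, letter = m.groups()
    fix = FIXED.get(base)
    if fix is None:
        return 'outside the affected branches'
    # Patch letters sort lexicographically within a branch.
    return 'patched' if letter >= fix else 'possibly vulnerable'
```

For the server side, `openssl s_client -connect host:443 -status` will at least show whether the server answers OCSP status requests, which is the extension this bug abuses; actually triggering the exhaustion depends on renegotiation being possible.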
pdevr 2 days ago 0 replies      
>"Servers using OpenSSL versions prior to 1.0.1g are not vulnerable in a default configuration [..]"

In case anyone is using older versions, like we do in our production environment.

turingbook 2 days ago 1 reply      
>This issue was reported to OpenSSL by Shi Lei (Gear Team,Qihoo 360 Inc.)

A Chinese hacker

Yuioup 2 days ago 2 replies      
If it does not have a home page and a catchy name like "Heartbleed", then this bug does not exist.
My Most Important Project Was a Bytecode Interpreter gpfault.net
312 points by 10098  3 days ago   148 comments top 22
robertelder 3 days ago 5 replies      
One of the moments where I really started to feel like I was starting to 'see the matrix' was when I was working on a regex engine to try to make my compiler faster (it didn't, but that's another story). The asymptotically fast way to approach regex processing actually involves writing a parser to process the regex, so in order to write a fast compiler, you need to write another fast compiler to process the regexes that will process the actual programs that you write. But, if your regexes get complex, you should really write a parser to parse the regexes that will parse the actual program. This is where you realize that it's parsers all the way down.

When you think more about regexes this way, you realize that a regex is just a tiny description of a virtual machine (or emulator) that can process the simplest of instructions (check for 'a', accept '0-9', etc.). Each step in the regex is just a piece of bytecode that can execute, and if you turn a regex on its side you can visualize it as just a simple assembly program.
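That "regex turned on its side is an assembly program" view can be sketched concretely. Here is a hypothetical Python toy (not the commenter's engine): it compiles a tiny regex subset (literals, `.`, `*`) into bytecode, then runs it Thompson-style by advancing a set of threads, so each instruction really is a little opcode:

```python
def compile_regex(pattern):
    """Compile to a list of (op, arg) bytecode instructions."""
    prog = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        star = i + 1 < len(pattern) and pattern[i + 1] == '*'
        unit = ('any', None) if c == '.' else ('char', c)
        if star:
            split = len(prog)
            prog.append(('split', None))   # patched below: loop or skip
            prog.append(unit)
            prog.append(('jmp', split))    # back to the split
            prog[split] = ('split', len(prog))
            i += 2
        else:
            prog.append(unit)
            i += 1
    prog.append(('match', None))
    return prog

def run(prog, text):
    """Return True if prog matches the whole of text."""
    threads = {0}
    for pos in range(len(text) + 1):
        # Follow epsilon transitions (split/jmp) to find live threads.
        stack, here, matched = list(threads), set(), False
        while stack:
            pc = stack.pop()
            op, arg = prog[pc]
            if op == 'split':
                stack += [pc + 1, arg]
            elif op == 'jmp':
                stack.append(arg)
            elif op == 'match':
                matched = True
            else:
                here.add(pc)
        if pos == len(text):
            return matched
        ch = text[pos]
        threads = {pc + 1 for pc in here
                   if prog[pc][0] == 'any' or prog[pc][1] == ch}
    return False
```

Because the thread set is advanced in lockstep over the input rather than backtracking, the match cost stays proportional to program size times input length, which is the asymptotic win the comment alludes to.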

sillysaurus3 3 days ago 19 replies      
Also: a software rasterizer.

Most people refuse to write one because it's so easy not to. Why bother?

It will make you a better coder for the rest of your life.

Let's make a list of "power projects" like this. A bytecode interpreter, a software rasterizer... What else?

tominous 3 days ago 1 reply      
I love the author's meta-idea of refusing to accept that unfamiliar things are black boxes full of magic that can't be touched.

A great example of this mindset is the guy who bought a mainframe. [1]

Refuse to be placed in a silo. Work your way up and down the stack and you'll be much better placed to solve problems and learn from the patterns that repeat at all levels.

[1] https://news.ycombinator.com/item?id=11376711

wahern 3 days ago 3 replies      
Two approaches are severely underused in the software world:

1) Domain-specific languages (DSLs)

2) Virtual machines (or just explicit state machines more generally)

What I mean is, a lot of problems could be solved cleanly, elegantly, more safely, and more powerfully by using one (or both) of the above. The problem is that when people think DSL or VM, they think big (Scheme or the JVM) instead of thinking small (printf). A DSL or VM doesn't need to be complex; it could be incredibly simple but still be immensely more powerful than coding a solution directly in an existing language using its constructs and APIs.

Case in point: the BSD hexdump(1) utility. POSIX defines the od(1) utility for formatting binary data as text, and it takes a long list of complex command-line arguments. The hexdump utility, by contrast, uses a simple DSL to specify how to format output. hexdump can implement almost every conceivable output format of od and then some using its DSL. The DSL is basically printf format specifiers combined with looping declarations.

I got bored one day and decided to implement hexdump as a library (i.e. "one hexdump to rule them all"), with a thin command-line wrapper that emulates the BSD utility version. Unlike BSD hexdump(1) or POSIX od(1), which implement everything in C in the typical manner, I decided to translate the hexdump DSL into bytecode for a simple virtual machine.

The end result was that my implementation was about the same size as either of those, but 1) could be built as a shared library, command-line utility, or Lua module, 2) is more performant (formats almost 30% faster for the common outputs, thanks to a couple of obvious, easy, single-line optimizations the approach opened up) than either of the others, and 3) is arguably easier to read and hack on.

Granted, my little hexdump utility doesn't have much value. I still tend to rewrite a simple dumper in a couple dozen lines of code for different projects (I'm big on avoiding dependencies), and not many other people use it. But I really liked the experience and the end result. I've used simple DSLs, VMs, and especially explicit state machines many times before and after, but this one was one of the largest and most satisfying.
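As a hypothetical miniature of that idea (far simpler than the real hexdump(1) format language, and not the commenter's library): compile `COUNT/BYTES "FMT"` units into ops, then run them over a buffer. Units containing a `%` conversion consume bytes; literal units just emit text:

```python
import re

def compile_fmt(spec):
    """Compile 'COUNT/BYTES "FMT"' units into (count, nbytes, fmt) ops."""
    units = re.findall(r'(\d+)/(\d+)\s+"([^"]*)"', spec)
    return [(int(c), int(n), f.replace('\\n', '\n')) for c, n, f in units]

def run_dump(ops, data):
    """Execute the compiled ops over data, returning the formatted text."""
    out, pos = [], 0
    while pos < len(data):
        for count, nbytes, fmt in ops:
            for _ in range(count):
                if '%' in fmt:          # converting unit: consumes bytes
                    if pos >= len(data):
                        break
                    out.append(fmt % int.from_bytes(
                        data[pos:pos + nbytes], 'little'))
                    pos += nbytes
                else:                   # literal unit: just emits
                    out.append(fmt)
    return ''.join(out)
```

For example, `compile_fmt('4/1 "%02x " 1/0 "\\n"')` gives a program that prints four bytes in hex per row, then a newline, which is the shape of a classic hexdump format string.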

The only more complex VM I've written was for an asynchronous I/O SPF C library, but that one is more difficult to explain and justify, though I will if pressed.

gopalv 3 days ago 1 reply      
The project that affected my thinking the most was a bytecode interpreter[1].

I've had use for that knowledge, nearly fifteen years later - most of the interesting learnings about building one has been about the inner loop.

The way you build a good interpreter is upside-down in tech - the system which is simpler often works faster than anything more complicated.

Because of working on that, then writing my final paper about the JVM, contributing to Perl6/Parrot and then moving onto working on the PHP bytecode with APC, my career went down a particular funnel (still with the JVM now, but a logical level above it).

Building interpreters makes you an under-techtitect, if that's a word. It creates systems from the inner loop outwards rather than leaving the innards of the system for someone else to build - it produces a sort of double-vision between the details and the actual goals of the user.

[1] - "Design of the Portable.net interpreter"

_RPM 3 days ago 1 reply      
I saw the matrix after I first implemented a virtual machine. I recommend everyone does it because it will teach you a lot about how code is executed and transformed from the syntax to the actual assembly/bytecode. A stack-based virtual machine is so simple, yet it takes a lot of thinking to understand how it works (or maybe I'm just not that smart).

It's interesting that he implemented function calls via a jump. In my VM a function is just mapped to a name (variable), so functions are first class. When the VM gets to a CALL instruction, it loads the bytecode from the hash table (via a lookup of the name).

Since this is a procedural language where statements can be executed outside of a function, implementing the functions as a jump would be difficult because there would need to be multiple jumps between the function definition and statements that aren't in a function.
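A sketch of that lookup-by-name design, with opcodes and names invented for illustration (this is not the commenter's actual VM): the operand stack is shared, and CALL fetches the callee's bytecode from a hash table at run time instead of jumping:

```python
def execute(code, funcs, stack=None):
    """Run a list of (op, arg) instructions on a shared operand stack."""
    stack = [] if stack is None else stack
    pc = 0
    while pc < len(code):
        op, arg = code[pc]
        if op == 'PUSH':
            stack.append(arg)
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'MUL':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == 'CALL':
            # No jump: look the callee's bytecode up by name and run it
            # on the same operand stack.
            execute(funcs[arg], funcs, stack)
        elif op == 'RET':
            return stack
        pc += 1
    return stack

# A function is first-class: bytecode stored under a name.
funcs = {'double': [('PUSH', 2), ('MUL', None), ('RET', None)]}
```

Running `execute([('PUSH', 21), ('CALL', 'double')], funcs)` leaves [42] on the stack; top-level statements and function bodies are just separate code lists, which is why no jumps between them are needed.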

I really wish my CS program had a compilers class, but unfortunately they don't, so I had to learn everything on my own.

briansteffens 3 days ago 0 replies      
Nice post! I really enjoy playing around with things like this. It's amazing how little is needed to make a language/interpreter capable of doing virtually anything, even if not elegantly or safely. As long as you can perform calculations, jump around, and implement some kind of stack your language can do just about anything.

I recently threw something together sort of like this, just for fun (I like your interpreter's name better though): https://github.com/briansteffens/bemu

It's crazy how much these little projects can clarify your understanding of concepts that seem more complicated or magical than they really are.

anaccountwow 2 days ago 2 replies      
This is a required homework assignment in a freshman class at CMU. https://www.cs.cmu.edu/~fp/courses/15122-s11/lectures/23-c0v...

Given it has some parts already written in the interest of time...

oops 2 days ago 0 replies      
Nice read! Reminds me of nand2tetris that was posted not too long ago https://news.ycombinator.com/item?id=12333508

(You basically implement every layer starting with the CPU and finishing with a working Tetris game)

foobarge 2 days ago 0 replies      
I did something similar 21 years ago: a C interpreter targeting a virtual machine. The runtime had a dynamic equivalent of libffi to call into native code and use existing native libraries. I added extensions to run code blocks in threads so that the dining philosophers problem solution was very elegant. Back in the day, not having libffi meant generating assembly on the fly for SPARC, MIPS, PA-RISC, i386. Fun times. That C interpreter was used to extend a CAD package.
reidrac 2 days ago 0 replies      
I wrote a VM for the 6502 for fun and it was one of the most interesting and satisfying projects I've ever made in my free time.

It is very close to a bytecode interpreter, only it comes with a specification that is actually the opcode list for the MOS 6502 (and a few details you need to take into account when implementing that CPU).

Besides, there are cross-compilers that allow you to generate 6502 code from C for your specific VM (see cc65).
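In that spirit, here is a hypothetical toy fetch/decode/execute loop handling just three real 6502 opcodes (LDA #imm = 0xA9, ADC #imm = 0x69, STA zero-page = 0x85) plus BRK as a halt. It is nowhere near an emulator: flags other than carry, and all other addressing modes, are omitted:

```python
def run6502(program):
    """Load program at address 0 and run until BRK; return memory."""
    mem = bytearray(256)
    mem[:len(program)] = program
    a, carry, pc = 0, 0, 0
    while True:
        op = mem[pc]; pc += 1
        if op == 0xA9:                  # LDA #imm: load accumulator
            a = mem[pc]; pc += 1
        elif op == 0x69:                # ADC #imm: add with carry-in
            r = a + mem[pc] + carry
            a, carry = r & 0xFF, r >> 8
            pc += 1
        elif op == 0x85:                # STA zp: store accumulator
            mem[mem[pc]] = a; pc += 1
        elif op == 0x00:                # BRK: halt the toy VM
            return mem
        else:
            raise ValueError('unimplemented opcode 0x%02x' % op)
```

So `run6502(bytes([0xA9, 40, 0x69, 2, 0x85, 0x10, 0x00]))` loads 40, adds 2, and stores 42 at zero-page address $10, exactly the fetch/decode/execute shape a real 6502 core repeats.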

douche 2 days ago 0 replies      
This reminds me a little bit of my computer architecture class. We started at logic gates in a simulator[1], and worked our way up from there to flip-flops and adders, memory chips, a simple ALU, and eventually a whole 8-bit CPU in the simulator. I want to think that we were even writing assembly for it, loading the programs into the simulated memory, and executing it. It was a great way to get a sense of how everything works, and I think it's when C-style pointers really clicked for me.

[1] this one, IIRC https://sourceforge.net/projects/circuit/
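The gate-level bottom of that stack is easy to simulate, too. A hypothetical sketch, building a one-bit full adder out of nothing but NAND, the way such simulators compose it:

```python
# Everything below is derived from nand alone, mirroring how a logic
# simulator lets you build adders from gates.
def nand(a, b): return 1 - (a & b)
def inv(a):     return nand(a, a)
def and_(a, b): return inv(nand(a, b))
def or_(a, b):  return nand(inv(a), inv(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry_out)."""
    s1 = xor_(a, b)
    total = xor_(s1, cin)
    cout = or_(and_(a, b), and_(s1, cin))
    return total, cout
```

Chain eight of these and you have the core of the 8-bit ALU described above; the carry-out of each bit feeds the carry-in of the next.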

pka 2 days ago 1 reply      
I'm thinking a lot of the complexity of writing a compiler stems from the usage of inappropriate tools. I.e. I would rather kill myself than write a lexer in C (without yacc / bison), but using parser combinators it's a rather trivial task.

Similarly, annotating, transforming, folding, pattern matching on, CPS transforming etc. the produced AST is pretty trivial in a language that supports these constructs. And again, a nightmare in C.

That leaves codegen, but using the right abstractions it turns into a very manageable task as well.

Here's a compiler written in Haskell for LLVM [0].

[0] http://www.stephendiehl.com/llvm
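The comment has Haskell in mind; as a hypothetical stand-in, here is a micro parser-combinator kit in Python, where a parser is a function from input to (value, rest) or None. It shows why lexing built this way stays short:

```python
def char(c):
    """Parser for a single literal character."""
    return lambda s: (c, s[1:]) if s[:1] == c else None

def many1(p):
    """One or more repetitions of p."""
    def parse(s):
        vals, r = [], p(s)
        while r:
            v, s = r
            vals.append(v)
            r = p(s)
        return (vals, s) if vals else None
    return parse

def alt(*ps):
    """First parser that succeeds, or None."""
    return lambda s: next((r for p in ps if (r := p(s))), None)

def digit():
    return alt(*[char(d) for d in "0123456789"])

def number():
    """Parse a run of digits into an int."""
    def parse(s):
        r = many1(digit())(s)
        return (int(''.join(r[0])), r[1]) if r else None
    return parse
```

Each combinator is a few lines, and bigger grammars are just compositions of these, which is the "rather trivial task" claim made concrete.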

memsom 2 days ago 0 replies      
I did this in C#. It was a lunch time project at work a couple of years ago. It was fun. I still want to do a V2 and remove all of the shortcuts I put in because I didn't want to write code for the stack and stuff like that. At the end of the day, my solution was spookily similar to this - the 32bit instructions - well, yeah, I was the same! It was just simpler. I did have a few general purpose registers (V1, V2 and V3 I think) and I did have routines to handle bytes, words and such like. So stuff like this (as a random example I pulled from the source):




    ADD_B       ;;value will go back on stack
    SM_B V1     ;;value we use next loop
    SM_B V1     ;;value we compare
    SM_B V1     ;;value we echo to console
    TRP 21      ;;writes to the console
    ST_S '',13,10,$
    TRP 21      ;;writes to the console
    CMP_B 50    ;;compares stack to the constant

    ST_S 'The End',13,10,$
    TRP 21      ;;writes to the console


philippeback 2 days ago 0 replies      
Parsers made easy and pretty much interactive:



This includes the dynamic generation of blocks and arrow-style things...

philippeback 2 days ago 0 replies      
Soulmate of yours here: https://clementbera.wordpress.com

Lots of optimizations going on for OpenVM.


Interesting bit: VM is written in Slang and transformed into C then compiled.

So you can livecode your VM, in the VM simulator.

elcct 2 days ago 0 replies      
I did something similar in the distant past; that is, I wrote a subset-of-C compiler (functions, standard types, pointers) targeting an imaginary assembler, and then a bytecode interpreter. It was awesome fun, but also I got so into it my then-girlfriend started to question my commitment to the relationship. So be careful, this is a really interesting thing to do :)
rosstex 3 days ago 0 replies      
In this same vein, I recommend coding an emulator! It can be an excellent experience.


curtfoo 3 days ago 0 replies      
Yes I wrote a parser/compiler and interpreter for a custom domain specific language and it had a similar effect on my career. Lots of fun!

Okay I guess technically I used a parser generator that I then modified to build an AST and convert it into assembly-like code that fed the interpreter.

reacweb 2 days ago 0 replies      
Bill Gates also started with an interpreter (a BASIC interpreter). Many parts of early Windows applications were developed in p-code, and Visual Basic is an important part of Microsoft's success.
loeg 3 days ago 0 replies      
I like implementing emulators, because the toolchain and architecture specification are all there already. You get to implement what is basically a little embedded CPU.
dpratt 3 days ago 1 reply      
I'd add a driver for a non trivial binary protocol - I ended up implementing a JVM driver for Cassandra a few years ago, and it was a blast.
       cached 25 September 2016 04:11:02 GMT