hacker news with inline top comments (Best, 1 Jun 2017)
SCOTUS: Patent Rights Over a Printer Cartridge Are Exhausted When It Is Sold cornell.edu
862 points by beefman  1 day ago   294 comments top 35
touchofevil 1 day ago 13 replies      
If you want to learn about a really interesting aspect of the "first sale doctrine" and how it applies to software, you should have a look at "Vernor vs Autodesk" in the USA and compare it to "Oracle vs UsedSoft" in the EU. Basically, in the USA the courts determined that if a company sells you software, but in their terms & conditions claim that they are merely granting you a license, then you can't resell the software b/c you aren't considered to own it. In the EU however, if a company sells you a permanent life-time license in exchange for a one-time fee, the courts determined that you aren't merely licensing that software, you own it and you are allowed to resell it.

I think these different rulings haven't been fully appreciated yet. For example, if you buy Apple's Final Cut X for $299, you should be allowed to resell that software if you live in the EU, but there is currently no way to transfer licenses between users, preventing users from reselling it. It seems to me that by preventing users from reselling their software, Apple (and the Google Play Store) are probably violating EU law on this matter.

awjr 1 day ago 3 replies      
In short, the sale of a product does not allow you to control what is done with the product afterwards through patent law.

I'm assuming this is similar to attempts to use copyright law to stop the sale of products on the grey market. http://www.bipc.com/court-rebuffs-attempt-to-use-copyright-l...

I do wonder, though, whether they would get into trouble if they changed the underlying software on the cartridges. I do not see this stopping John Deere's practice of locking up their hardware through copyright law. https://www.wired.com/2015/02/new-high-tech-farm-equipment-n...

roywiggins 1 day ago 2 replies      
When you can cite Lord Coke in your opinion, I guess it's practically mandatory to do so.

> As Lord Coke put it in the 17th century, if an owner restricts the resale or use of an item after selling it, that restriction "is voide, because . . . it is against Trade and Traffique, and bargaining and contracting betweene man and man." 1 E. Coke, Institutes of the Laws of England §360, p. 223 (1628)

jasonlfunk 1 day ago 2 replies      
The Planet Money podcast recently did an episode about this case: http://www.npr.org/sections/money/2017/03/31/522226226/episo...
gameshot911 1 day ago 8 replies      
Can someone clarify one part:

>The other option is to buy a cartridge at a discount through Lexmark's Return Program. In exchange for the lower price, customers who buy through the Return Program must sign a contract agreeing to use the cartridge only once and to refrain from transferring the cartridge to anyone but Lexmark.

>As a result, even if the restrictions in Lexmark's contracts with its customers were clear and enforceable under contract law, they do not entitle Lexmark to retain patent rights in an item that it has elected to sell.

There are two issues at play here, right? One is Lexmark's patent rights, and the other is the contract between Lexmark and the consumer. The Supreme Court held that Lexmark can't use patent rights to prevent refilling the cartridges, but what of the contract? Is that enforceable?

3JPLW 1 day ago 1 reply      
The text is the same, but I find the official PDF better typeset and much more readable: https://www.supremecourt.gov/opinions/16pdf/15-1189_ebfj.pdf
Angostura 1 day ago 0 replies      
That's a fascinating judgement and the hypothetical case that they use to illustrate things seems interestingly chosen:

>But an illustration never hurts. Take a shop that restores and sells used cars. The business works because the shop can rest assured that, so long as those bringing in the cars own them, the shop is free to repair and resell those vehicles. That smooth flow of commerce would sputter if companies that make the thousands of parts that go into a vehicle could keep their patent rights after the first sale. Those companies might, for instance, restrict resale rights and sue the shop owner for patent infringement. And even if they refrained from imposing such restrictions, the very threat of patent liability would force the shop to invest in efforts to protect itself from hidden lawsuits. Either way, extending the patent rights beyond the first sale would clog the channels of commerce, with little benefit from the extra control that the patentees retain. And advances in technology, along with increasingly complex supply chains, magnify the problem. See Brief for Costco Wholesale Corp. et al. as Amici Curiae 7-9; Brief for Intel Corp. et al. as Amici Curiae 17, n. 5 ("[a] generic smartphone assembled from various high-tech components could practice an estimated 250,000 patents").

joshuak 1 day ago 2 replies      
Good, but the bigger issue is licensing. Autodesk[1] has set a precedent that all an organization must do to limit the resale of anything is institute a EULA. Simply by asserting that an offer is for a license, not the thing being licensed, the seller can bypass common law, Lord Coke, and the first sale doctrine.

P.S. This also means that expensive professional software like Autodesk's, or anything with such an EULA, cannot be considered an asset since it has no dollar value after purchase.

1: https://www.wired.com/2010/09/first-sale-doctrine/

ryandamm 1 day ago 2 replies      
This is really similar to the concept of 'first sale' in copyright law, which similarly prevents the copyright holder from using copyright to restrict what a buyer does with the object.

Glad to see it extended to patent law as well.

doodlebugging 1 day ago 0 replies      
This case sounds similar to the ruling back in the 1980's (I think) that came from a suit by the makers of Warn winches against a North Texas trailer maker/seller.

From flawed memory: the trailer sales business offered the Warn product line at prices well below those of other Warn dealers and below Warn's suggested retail prices. The manufacturer sued him in an attempt to get him to raise the prices, and he prevailed, I think based on the first sale doctrine: the court ruled that Warn had already been paid for the product and that ownership and control of the winches had passed to the trailer maker, who was thus free to advertise and sell them at any price he desired, even if it meant he took a loss on each one sold. They belonged to him and he could do as he pleased with them. He had been using them as a kind of loss leader, where one of the incentives of buying a trailer was being allowed to purchase a winch at a large discount.

EDIT: The case was not Warn winches, it was Ramsey winches and the ruling was:


Briefly - I have a bad memory. The case was a lot more involved, since Ramsey tried to terminate the distributor agreement and was then sued by Pierce Sales. Pierce was a high-volume winch dealer, and due to that high volume he was able to buy winches from Ramsey at the lowest price available to dealers. He then used that buying power to advertise the lowest prices for the winches, and even offered other dealers the opportunity to buy hard-to-find winches directly from his stock at prices lower than they could buy directly from Ramsey, if that particular winch was even available from Ramsey stock. Pierce alleged price-fixing by Ramsey and ultimately won the case.

I remembered the court case but almost none of the pertinent details.


ChuckMcM 1 day ago 2 replies      
Yay, hopefully this will translate into a non-crazy ink refill situation, which will translate into much lower prices on ink cartridges. My hope is that the following will occur:

1) People who sell refilled cartridges, or who offer to refill your existing cartridges, will no longer suffer malicious lawsuits from HP & Lexmark.

2) That will increase supply and create price competition between refillers, making it easy to find ink cartridges at 1/2 to 1/3 the price the printer manufacturer charges.

3) The manufacturers will reduce prices on their ink cartridges in order to support their revenue stream.

I also expect more countermeasures, like ink cartridge chips that 'self destruct' when the cartridge is exhausted to prevent refilling, and aggressive prosecution under the DMCA of people who reverse engineer cartridge chips to create work-alike versions.

DannyBee 1 day ago 2 replies      
In practice it means people will still be able to stop you from doing things with stuff you own, just not using patent rights. Lexmark's real problem here is that its enforceable contracts are usually with the resellers, so enforcing against a third-party purchaser is trickier, which is why it tried to use patent law instead. (It has plenty of contracts, including on the wrapping of the printer cartridges; I'm just sticking with the ones that are easy to enforce.)
kevin_b_er 1 day ago 0 replies      
You had better be glad it went this way, or you would have no reasonable property rights. Any object might have carried an unknowable restriction on its use or ownership. Everything could have what is tantamount to an easement, and you wouldn't know what it was unless you found out how it was first sold.

Fortunately, sanity on the basic notion of property rights remains within the SCOTUS.

inputcoffee 1 day ago 2 replies      
Some context for this consequential decision:


optimiz3 1 day ago 1 reply      
Couldn't find the vote without teasing it out of the text -

It was an 8-0 decision, with a minor dissent from Ginsburg on what happens when a product is sold in a foreign territory.

jvandonsel 1 day ago 3 replies      
Could Lexmark argue that they're not really selling the cartridges to you, but are instead leasing them for an indeterminate period of time?
wordsarewind 1 day ago 0 replies      
As opined here, the justification for exhausting the patent upon sale is that the patent holder has, in the sale, received the price it desired, i.e. fair compensation for the patented item, and thus cannot demand further use of the patent with respect to that item. However, that desired price can only be set where the patent grants the holder a monopoly, which is the US. Outside the US, the patent holder has no patent monopoly and faces competition, so it cannot set that price and will not receive the desired compensation; arguably, then, it should still be able to assert the patent.

Surely the Supreme Court shouldn't disqualify the patent holder's right to fair compensation for the invention in sales outside of the US?

rabboRubble 22 hours ago 1 reply      
How does this case compare to Bowman v. Monsanto? If you recall, that case pertained to a farmer who bought seeds from a local farmers association, then applied Roundup, guessing that some or all of the seeds were GMO. Monsanto argued he violated their patent. Bowman argued that the GMO patent was exhausted after the first sale. He lost.

This Lexmark case seems to undermine the Monsanto case. I don't understand this inconsistency between the two cases.

Anybody able to clarify why these are different?

SeanDav 1 day ago 3 replies      
I can't believe that the printer market has not been disrupted by someone offering a decent printer without ridiculous ongoing printer cartridge costs.

I just bought a 99 printer that will cost more than 99 in printer cartridge costs after just a few months of casual use and I did not spot any alternatives.

CalChris 1 day ago 0 replies      
There's a certain obviousness to this decision. Indeed, it's hard to read the rest of the opinion after it explains what it means to exhaust patent rights in the first paragraph. There's plenty of precedent; indeed, when you're quoting Coke you're going back to the 17th century, and his decision doesn't seem different from this opinion.

This all raises the question of how this ham-handed attempt to abuse patent rights even ended up in court, how it got appealed all the way to the Supreme Court, and why the Supremes bothered to hear it. Because it seems obvious, at least to this non-lawyer, and obvious stuff usually isn't on their docket.

AdmiralAsshat 1 day ago 2 replies      
Outside of the narrow case involving printer cartridges, what other implications and precedents can we expect this to set?

Can it stop smartphone makers from being able to blacklist devices that are resold, for instance?

test6554 1 day ago 0 replies      
So a company signs a contract for cheaper ink that requires them to return the cartridges to Lexmark and only Lexmark.

Then companies willfully break the contract and send their cartridges to a 3rd party. Lexmark sues the third party and courts throw out the suit.

Lexmark can either tighten enforcement and restrictions on their own customers (and sue them), or stop the program altogether. I assume they will just stop the program and only sell the more expensive new cartridges going forward.

mmastrac 1 day ago 1 reply      
Wasn't Apple using patents to stop modding of their MagSafe cables?
sova 1 day ago 0 replies      
An excellently subtle line of English text to say "yeah your friend can refill your printer cartridges for you, without facing penalties of patent infringement"
01572 1 day ago 3 replies      
Will companies now save more money on toner?

Is there anyone selling a reverse-engineered, refillable pod for the coffee machines that only accept pre-filled proprietary ones?

Do these coffee machine vendors seek to use patents to protect their sales of coffee?

Edit: I know Keurig and Nespresso are the well-publicised examples, but I was thinking of the others. I assume with reasonable confidence there are others still using non-refillable proprietary pods.

post_break 1 day ago 1 reply      
Apple really pissed me off about this. MagSafe is not licensed. Buy MagSafe bricks, cut the end off, and make a battery that can charge MacBooks? See you in court.


bhhaskin 1 day ago 0 replies      
This is a pretty big deal. It is great to finally start to see patents used for what they were supposed to be: protecting the inventor until they could become established in the market, not serving as a tool to stifle innovation. Very interesting outcome.
pitaa 1 day ago 4 replies      
> Gorsuch, J., took no part in the consideration or decision of the case.

Does anyone know why this is?

limeyx 21 hours ago 0 replies      
So now the companies will start selling "leases" to printer cartridges?
baltimore 1 day ago 2 replies      
Does this mean that printer prices are about to go up since HP/Brother/Lexmark will no longer be able to make as much money on the ink?
conistonwater 1 day ago 3 replies      
Can somebody explain what the word "exhaust" means here? It doesn't seem like it's being used in the plain-English sense.
guelo 1 day ago 0 replies      
This is another in a series of necessary smackdowns of the Federal Circuit.
geofft 1 day ago 6 replies      
I'm a little surprised that Ginsburg dissented here, and also dissented from Kirtsaeng v. John Wiley: naively, she's "liberal", and (at least in my bubble) being "liberal" is associated with wanting less strong IP protection.

But I see also that she wrote the majority opinion in Eldred v. Ashcroft, holding that the 20-year extension to copyright terms was constitutional.

What's the right way to understand her legal thinking here? Is she known as an IP maximalist? Or are there other principles she's using to reach these conclusions? (I don't completely follow her logic that, because US patent law doesn't provide any protection in other countries, US patent rights are preserved across a sale in some other country.)

Steko 1 day ago 1 reply      
Florian Mueller (yes, [1]) says this is very bad news for Qualcomm. He quotes Roberts' decision:

"The problem with the Federal Circuit's logic is that the exhaustion doctrine is not a presumption about the authority that comes along with a sale; it is a limit on the scope of the patentee's rights. The Patent Act gives patentees a limited exclusionary power, and exhaustion extinguishes that power. A purchaser has the right to use, sell or import an item because those are the rights that come along with ownership, not because it purchased authority to engage in those practices from the patentee."

Then recaps FTC v Qualcomm:

Presumably, some people in another Washington DC building are now reading the Supreme Court decision: the lawyers working on the FTC's case against Qualcomm. The FTC argued in its January complaint, under a headline that describes Qualcomm's "no license-no chips" policy as "anomalous among component suppliers," that "when one of Qualcomm's competitors sells a baseband processor to an OEM, the OEM can use or resell the processor without obtaining a separate patent license from the competitor, just as a consumer buying a smartphone does not have to obtain a separate patent license from the seller of the smartphone." The FTC went on to explain that "Qualcomm is unique in requiring an OEM, as a condition of sale, to secure a separate patent license requiring royalty payments for handsets that use a competitor's components." For example, this would apply to a situation in which a device maker is a customer of both Qualcomm and, say, Intel or Samsung's component business.

And Apple v Qualcomm including relevance of overseas sales portion of today's decision:

Count XXIII of Apple's antitrust complaint against Qualcomm is a request for judicial "declaration of unenforceability [of Qualcomm's patents in certain contexts] due to exhaustion." Apple alleged in its January complaint that "Qualcomm has sought, and continues to seek, separate patent license fees from Apple's [contract manufacturers] for patents embodied in the chipsets Qualcomm sells to Apple's CMs, a practice that is prohibited under the patent exhaustion doctrine." ... Apple's complaint already anticipated that Qualcomm would point to its corporate structure: "Qualcomm has attempted to evade the patent exhaustion doctrine by selling baseband processor chipsets to Apple's [contract manufacturers] through QTC, which is operated by QTI, which is in turn a wholly owned subsidiary of Qualcomm." Apple then points to Qualcomm's 2012 restructuring, which I already blogged about back then with a focus on open-source licensing issues. The Supreme Court's broad and inclusive approach to exhaustion simply doesn't allow any kind of end-run around the exhaustion doctrine through a first sale outside the United States as in one of the two issues relevant in the Lexmark case.


[1] Yes, it's FM, but his analysis here seems better than it did 6+ years ago. I don't remember him saying things like this in the Oracle case: "The good news is that the Supreme Court has once again overruled the Federal Circuit in a way that strengthens those defending themselves against attempts to gain excessive leverage and extract overcompensation from patents."

EGreg 1 day ago 0 replies      
What new precedent has been set by this decision? It seems it upholds an existing doctrine.
Goodbye PNaCl, Hello WebAssembly chromium.org
617 points by Ajedi32  1 day ago   334 comments top 21
eeZi 1 day ago 13 replies      
This one I'm fine with since WebAssembly is a worthy replacement, but I'm still annoyed at Google discontinuing Chrome Apps.

Some examples of specialized apps I use all the time that would require a native app otherwise:

- Signal Desktop

- TeamViewer

- Postman

- SSH client

- Cleanflight drone configuration tool

It was one of the best things that happened to Linux desktops in a long time and removing it hurts users and makes them less secure.

Now everyone is moving to Electron, and instead of one Chrome instance I'm now running five, each using more than a GB of RAM. Much less secure, too, since each has its own auto-updater or repository, and instead of being sandboxed by Chrome's sandbox they're all running with full permissions.

It also means I can no longer use Signal Desktop on my work device, since installing native apps is forbidden for good reasons, while Chrome Apps are okay.

It also hurts Chrome OS users, since Chrome Apps are being abandoned in favor of Electron, and it makes creating Chrome Apps less attractive to developers since the market is much smaller.

Since Chrome Apps continue to be available on Chrome OS, I'm considering separating that functionality into a stand-alone runtime or making a custom build for Linux. Anyone want to help with that?

withjive 1 day ago 4 replies      
Looks like Mozilla won this fight. When Mozilla didn't accept PNaCl and the Pepper API proposed by Google, Mozilla went down the asm.js path, which has now led us to WebAssembly as the general way forward.
lwlml 1 day ago 7 replies      
At this point, I really loathe adopting any facet of web-browser technology: there are too many broken APIs in too many browsers to maintain, on both sides of the system. The browser developers face an insane number of feature combinations that need to be made useful, secure, and reliable, and developers targeting browsers are always at the weird disadvantage that they can spend months or years maintaining an application only to find the browser rots out from underneath it.

Should you even be slightly successful in the use of an API, you always have to worry about deprecation when someone is no longer interested in doing the maintenance any more.

I am sure there were more than a few game developers who were livid today about this announcement.

These things go in cycles, and I expect a native-application cycle will come soon, driven by browser-API fatigue.

nimrody 1 day ago 1 reply      
Then perhaps Andreas Gal's "Chrome Won" assertion wasn't entirely accurate. After all, WebAssembly is something that was derived from Mozilla's asm.js.
_wmd 1 day ago 6 replies      
The most material result of this is that Chromebooks won't have a working SSH client starting sometime next year, because WebAssembly can't do real sockets without an external proxy.
gklitt 1 day ago 0 replies      
It seems like the browser vendors are doing a good job coordinating to provide a robust ecosystem around Web Assembly. Clearly much healthier for the future of the web than fragmented browser-specific solutions.
tehabe 16 hours ago 1 reply      
Am I in the minority in thinking it is a positive development when Google discards proprietary ideas in favour of open developments?

Not perfect but the direction looks good so far.

seanwilson 22 hours ago 0 replies      
I'm so confused why Google deprecated Chrome Apps. You'd think they'd take advantage of the abundance of Electron apps by extending the capabilities of Chrome Apps to grow their Chrome ecosystem and attract new developers.
coolmitch 1 day ago 7 replies      
As a web developer working primarily in JS, what should I be learning now to stay relevant/up-to-date once WebAssembly is more common? Are we going to see more web stuff built with c++, like the dsp example in this blog post?
willvarfar 20 hours ago 2 replies      
This is a sad day. NaCl was excellent tech, and it's a shame it didn't get incorporated into LLVM proper, and that it didn't take off. We should be using NaCl/PNaCl for all apps everywhere, and for components within apps... etc.

ZeroVM was a desktop/server sandboxing environment that just didn't get any attention and mindshare. Shame!

I want a ZeroVM-like system that makes all of the Debian user-space available on any other OS, each app in a little sandbox... It ought to just be another compiler target, fully automated.

Oh to what could have been! :(

JacobiX 6 hours ago 0 replies      
We used PNaCl to port some games to a Smart TV. It was a painful experience: debugging Chrome from a custom gdb, breaking changes from release to release, and some very subtle bugs in the Pepper API. But the final results were surprisingly not so awful.
lawthemi 18 hours ago 1 reply      
It's sad; PNaCl is more efficient. Go to lichess.org/analysis, make a few moves, and turn on the engine analysis. With Firefox and WASM, my machine computes 300 knodes/s. With Chrome and PNaCl, it computes 2000 knodes/s. That's a big step backward.
jhpriestley 1 day ago 3 replies      
From the asm.js FAQ:

> Q. Why not NaCl or PNaCl instead? Are you just being stubborn about JavaScript?
>
> A. The principal benefit of asm.js over whole new technologies like NaCl and PNaCl is that it works today
asm.js, however, wasn't good enough to actually be useful, as evidenced by the lack of adoption and the move to wasm. So now we have wasm, which is not backward compatible. We would be further along now if Mozilla/Eich had gotten behind Google's more mature effort; this really was stubbornness IMO.
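(For context on the "works today" point: an asm.js module is just ordinary JavaScript that uses type-coercion idioms like `|0`, so a validating engine could compile it ahead of time while every other engine simply ran it as plain JS. A minimal illustrative sketch, with a made-up module name; note that real validated asm.js imposes extra rules, e.g. on heap size and structure:)

```javascript
// A minimal asm.js-style module: ordinary JavaScript with type
// coercions ("|0" marks int32) that a validating engine could
// compile ahead of time, and any other engine runs as plain JS.
function AsmAdder(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // parameter type annotation: int32
    b = b | 0;
    return (a + b) | 0; // return type annotation: int32
  }
  return { add: add };
}

// Linked like a module: stdlib object, FFI object, heap buffer.
var adder = AsmAdder(globalThis, {}, new ArrayBuffer(0x10000));
console.log(adder.add(2, 3)); // 5
```

Even if the `"use asm"` validation fails, the function still runs correctly as regular JavaScript, which is exactly the backward compatibility wasm gave up.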

xoroshiro 1 day ago 2 replies      
I'm not familiar with what goes in a browser, but for some reason, browsers seem to eat up a lot of resources.

While I am happy that it looks like this will (more or less) be standardized across browsers, I still hope for the day when running a more minimal browser (text, images, maybe video) will become viable. Of course, I'm pessimistic on this, seeing as so many sites are probably not functional without JavaScript and other related technologies, but maybe some web developers care about choice. Who knows.

jepler 1 day ago 2 replies      
OK, so where do I get a mosh client programmed in WebAssembly? Without it, my chromebooks might as well be bricks.
Crontab 1 day ago 3 replies      
Is WebAssembly going to turn into yet another web technology that can be used by websites to track us or to annoy us with advertising?

I'm asking because every time something new is introduced, it feels like it ends up being used to abuse users (Javascript, XSS, Cookies, HTML5 Video).

TekMol 12 hours ago 1 reply      
Can somebody ELI 5 what is the benefit of WebAssembly vs Asm.js?
ap46 18 hours ago 1 reply      
Can someone get opencv running on it?
r0anne 14 hours ago 2 replies      
>We will remove support for PNaCl in the first quarter of 2018

And this while the new Google Earth built on PNaCl was just introduced: a large engineering cost for a technology that lived barely a semester! Too bad.

camus2 1 day ago 1 reply      
Why bother investing time and resources in Google tech when they keep discontinuing it? They don't care about enterprise software; if they did, they wouldn't act like that. AMP? lol, think again before implementing this; it will not be worth the effort, since Chrome will eventually drop that too.

A good reminder not to invest in any Google specific technology.

> We recognize that technology migrations can be challenging.

You recognize you wasted a lot of people's time.

zurn 22 hours ago 1 reply      
Say hello to memory-unsafe languages and resulting vulnerabilities in web apps?
Uber Fires Anthony Levandowski nytimes.com
603 points by coloneltcb  1 day ago   277 comments top 27
tmh79 1 day ago 5 replies      

Here is some context for those who aren't current on the case.

One result of the injunction (all of the hearings up until now) was that Uber needs to use all of its power to compel Levandowski to testify, the extreme limit of which is firing him. Uber followed through, as was legally required.

This is one part of a number of things that came out of the preliminary injunction hearings; other parts are (1) the breadth of the case is much smaller than as originally filed, it's now about trade secrets, not about patent infringement, and (2) Waymo is allowed a bit more in-depth "discovery" to see if they can find evidence of their tech in Uber's documents or in Uber's hardware itself.

Legally, there is no inference that can be drawn from this to imply Uber is guilty; they have willingly carried out a court order.

Uber still has a self-driving car program, staffed by a few hundred engineers.

The case against Levandowski has been referred to a federal prosecutor to review the possibility of criminal charges. If I were his lawyer and he were in this position, I would likely advise him to "plead the Fifth" regardless of his guilt.

The case is most definitely still going to trial

ziszis 1 day ago 2 replies      
Link to formal termination letter: https://www.washingtonpost.com/blogs/the-switch/files/2017/0...

The termination letter indicates that the termination is for cause and may have implications on stock awards and other compensation:

"Under the Stock Award and other agreements, you are entitled to 20 days to cure the events that give rise to this termination for Cause. This letter constitutes the prior written notice triggering the commencement of that 20-day period."

throwaway13234 23 hours ago 1 reply      
Statement from Levandowski's lawyers:


"The bite of the Court's May 11, 2017 Preliminary Injunction Order, as it relates to nonparty Anthony Levandowski, can be summarized quite simply: 'Waive your Fifth Amendment rights... or I will have you fired. The choice is yours, Mr. Levandowski.' But, even when framed as a choice, this command runs counter to nearly a half century of United States Supreme Court precedent, beginning with Garrity v. State of New Jersey, 385 U.S. 493 (1967), in which the Court held that the Fifth Amendment forbids a government entity from threatening an individual with the choice between self-incrimination and job forfeiture. Id. at 497, 500. As the Supreme Court observed in Garrity, the option to 'lose [one's] means of livelihood or pay the penalty of self-incrimination' is 'the antithesis of free choice to speak out or to remain silent.' Id. at 497. As the Supreme Court made clear, whenever a state actor imposes this choice 'between the rock and the whirlpool,' it engages in unlawful constitutional compulsion, which, among other things, operates to immunize any resulting testimonial statements."

Isn't anyone else here bothered by the due-process implications of Judge Alsup's demand forcing Levandowski to give up a Constitutional right or be fired? I'm generally a fan of Alsup, but this sets a terrible precedent.

stevebmark 1 day ago 1 reply      
This is incredible. It looks like Uber is trying to pin this on him. I really hope it's true (and proven in court) that Uber encouraged him to steal secrets and start a company with the intention of Uber buying his half-assed startup. Because this firing totally fits the bill of Uber executives' public behavior, trying to shift the blame and throw people under the bus.
gumby 1 day ago 5 replies      
I was thinking simply, "too little, too late" but then realized this could be quite interesting.

Uber's model in most domains is to push hard over the line in the hopes of moving that line; more charitably, this could be called "ask for forgiveness rather than permission". So canning Levandowski could be seen as simply a case of this.

But Google's suit is against Uber, not Levandowski, who could now spill the beans on all sorts of unsuspected malfeasance. At this point, what has he got to lose? If he decides to cooperate with Google, things could get very interesting.

philip1209 1 day ago 2 replies      
Oh wow, I just noticed under the related stories that Travis's mother was killed and father was seriously injured in a boating accident this week:


dafty4 1 day ago 1 reply      
Are there truly that many lidar-related trade "secrets" that Google knows about that suppliers and competitors at Velodyne, SPIE, ex-military researchers, etc. don't already?

If Levandowski is targeted by federal prosecutors, can he argue inevitable discovery based on existing public domain principles and papers (textbooks, etc.)?

I find it hard to believe that Google is the only innovator in Lidar thus far. It just seems like they are because it's now cool.

mikekij 1 day ago 3 replies      
I would think Google's argument would only need to be "Uber spent $600M on a 9-month old company". That alone is a huge red-flag, no?
7ero 1 day ago 0 replies      
At this point, I think this is a move to slow Uber's entry into the market; the damage is probably irreparable because I would imagine it's hard to prevent their IP from being used.

I wonder how things would have played out if Otto was never acquired.

rmellow 1 day ago 1 reply      
Not mentioned in the article:

Uber is investing in a self-driving research unit in Toronto, Canada led by Raquel Urtasun [1].

Seems like a move to dodge all this Waymo brouhaha and benefit from Toronto's deep learning scene.

[1] https://www.techcrunch.com/2017/05/08/uber-hires-raquel-urta...

ChuckMcM 1 day ago 1 reply      
Not exactly unexpected given the court order earlier, interesting bit about how he has 20 days to decide if he wants to rectify the 'cause'. Presumably that would mean admitting stuff that he previously felt would incriminate himself so doing so would put him into more jeopardy on some axis.

This has got to be painful for all the parties involved, I can only hope that people who are watching are thinking to themselves "hmm, seems like some bad choices." And whether or not they are really bad or just being painted that way, it gives you a sense of how everything will be used to craft a narrative around the sequence of events that serves the purposes of the people doing the crafting, not necessarily the participants about whom the narrative speaks.

Having experienced personally the effects of bad actors trying to create a narrative that differed markedly from 'reality' in order to protect their own vulnerability I know how pulling only certain "facts" out of the history can tell a different story than the truth.

openmosix 1 day ago 3 replies      
Interesting. The situation between Uber and Levandowski always reminded me of the prisoner's dilemma. If one bails on the other, might the other start talking about what really happened? I think that staying together would have been the best course of action for Uber. Very interesting to see how this will develop.
Kiro 1 day ago 2 replies      
What if Uber is actually innocent? I've always just assumed they are guilty and planned this all along.
Overtonwindow 1 day ago 3 replies      
This issue brings to mind the fight over Phillip Shoemaker at Apple. Self-driving cars and the engineering behind them seem highly specialized. Will this case with Levandowski put pressure on engineers to stay put, acting as a stronger, tacit non-compete and no-poaching rule?
tyingq 1 day ago 2 replies      
Of course, whatever salary he was making is far less interesting than the direct benefit of the original Otto acquisition.
jacquesm 1 day ago 1 reply      
Surprised that took this long. Hot potato dropped, now let's see where the damage claims will point.
walshemj 15 hours ago 0 replies      
Isn't anyone concerned that you can be fired for exercising your rights under the Constitution?
wdb 11 hours ago 0 replies      
What stops Uber from hiring him again as a contractor?
aioprisan 1 day ago 1 reply      
That took much longer than expected. What was Uber thinking?
ProfessorLayton 1 day ago 1 reply      
Interesting. Could Levandowski throw Uber under the bus without incriminating himself?

Additionally, I wonder if Uber has clawback agreements in place for their Otto acquisition.

ianamartin 1 day ago 0 replies      
If he stays out of jail re: the criminal prosecution, the hype will die down, people will forget about it, the internet outrage machine will move on, and he'll do something else and be fine.
accountyaccount 23 hours ago 0 replies      
probably a good move
joering2 1 day ago 1 reply      
... comes as a result of his involvement in a legal battle between Uber and Waymo, the self-driving technology unit spun out of Google last year.

Honest question - can they do that to him, or anyone else, before guilt is actually proven? What if the result is that nothing is found?? Can he counter-sue for wrongful termination??

Next time if you are company X and are angry your employee Y left for company Z, just sue Z in hopes they will let Y go.

AngelloPozo 1 day ago 0 replies      
Uber followed through on their public warning. Doesn't look good I must say.
sfdghjkl2345678 1 day ago 0 replies      
It's ON!!!!
yeukhon 1 day ago 10 replies      
I think no one will ever dare to hire Levandowski. His career is over. Maybe it's still too early to draw any conclusions, but I can't come up with any plausible excuse or reason to believe the self-driving programs weren't stolen. Why on earth would someone like him do that? Conceited arrogance?

Now I don't understand taking the Fifth. If everyone takes the Fifth, how do you convict anyone? Find evidence, and have a grand jury indict the person?

Everhusk 21 hours ago 1 reply      
It's sad to see Uber destroy a talented engineer's credibility like this... Moment of silence for the man who once went out and built a self driving motorcycle on his own [1].

[1] https://www.youtube.com/watch?v=6CYGT97i8qU

How to Improve a Legacy Codebase jacquesmattheij.com
618 points by darwhy  1 day ago   271 comments top 46
apeace 1 day ago 6 replies      
> Do not fall into the trap of improving both the maintainability of the code or the platform it runs on at the same time as adding new features or fixing bugs.

I don't disagree at all, but I think the more valuable advice would be to explain how this can be done at a typical company.

In my experience, "feature freeze" is unacceptable to the business stakeholders, even if it only has to last for a few weeks. And for larger-sized codebases, it will usually be months. So the problem becomes explaining why you have to do the freeze, and you usually end up "compromising" and allowing only really important, high-priority changes to be made (i.e. all of them).

I have found that focusing on bugs and performance is a good way to sell a "freeze". So you want feature X added to system Y? Well, system Y has had 20 bugs in the past 6 months, and logging in to that system takes 10+ seconds. So if we implement feature X we can predict it will be slow and full of bugs. What we should do is spend one month refactoring the parts of the system which will surround feature X, and then we can build the feature.

In this way you avoid ever "freezing" anything. Instead you are explicitly elongating project estimates in order to account for refactoring. Refactor the parts around X, implement X. Refactor the parts around Z, implement Z. The only thing the stakeholders notice is that development pace slows down, which you told them would happen and explained the reason for.

And frankly, if you can't point to bugs or performance issues, it's likely you don't need to be refactoring in the first place!

specialist 1 day ago 8 replies      
Sound advice.

re: Write Your Tests

I've never been successful with this. Sure, write (backfill) as many tests as you can.

But the legacy stuff I've adopted / resurrected have been complete unknowns.

My go-to strategy has been blackbox (comparison) testing. Capture as much input & output as I can. Then use automation to diff output.

I wouldn't bother to write unit tests etc for code that is likely to be culled, replaced.

re: Proxy

I've recently started doing shadow testing, where the proxy is a T-split router, sending mirror traffic to both old and new. This can take the place of blackbox (comparison) testing.

re: Build numbers

First step to any project is to add build numbers. Semver is marketing, not engineering. Just enumerate every build attempt, successful or not. Then automate the builds, testing, deploys, etc.

Build numbers can really help defect tracking, differential debugging. Every ticket gets fields for "found" "fixed" and "verified". Caveat: I don't know if my old school QA/test methods still apply in this new "agile" DevOps (aka "winging it") world.
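The blackbox comparison approach above can be sketched in a few lines; both implementations here are hypothetical stand-ins for the captured legacy behaviour and its replacement:

```python
# Blackbox comparison testing: replay captured inputs through both the
# legacy and the rewritten implementation, then diff the outputs.

def compare_outputs(captured_inputs, legacy_fn, new_fn):
    """Return a list of (input, legacy_output, new_output) mismatches."""
    mismatches = []
    for inp in captured_inputs:
        old, new = legacy_fn(inp), new_fn(inp)
        if old != new:
            mismatches.append((inp, old, new))
    return mismatches

# Two hypothetical implementations of the same routine.
legacy = lambda x: x * 2
rewrite = lambda x: x + x if x >= 0 else x  # subtle bug for negatives
diffs = compare_outputs([-2, -1, 0, 1, 2], legacy, rewrite)
print(diffs)  # [(-2, -4, -2), (-1, -2, -1)]
```

In practice the captured inputs would come from production traffic or logs, and the diff report drives the triage of "started as a bug, is now a feature" behaviour.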

cessor 1 day ago 2 replies      
I'd add a prerequisite to the top of this list:

- Get a local build running first.

Often, a complete local build is not possible. There are tons of dependencies, such as databases, websites, services, etc. and every developer has a part of it on their machine. Releases are hard to do.

I once worked for a telco company in the UK where the deployment of the system looked like this: (Context: Java Portal Development) One dev would open a zip file and pack all the .class files he had generated into it, and email it to his colleague, who would then do the same. The last person in the chain would rename the file to .jar and then upload it to the server. Obviously, this process was error prone and deployments happened rarely.

I would argue that getting everything to build on a central system (some sort of CI) is useful as well, but before changing, testing, db freezing, or anything else is possible, you should try to have everything you need on each developer's machine.

This might be obvious to some, but I have seen this ignored every once in a while. When you can't even build the system locally, freezing anything, testing anything, or changing anything will be a tedious and error prone process...

taude 1 day ago 1 reply      
This is a good high-level overview of the process. I highly recommend that engineers working in the weeds, read "Working Effectively with Legacy Code" [1], as it has a ton of patterns in it that you can implement, and more detailed strategies on how to do some of the code changes hinted at in this article.

[1] https://www.safaribooksonline.com/library/view/working-effec...

bmh_ca 1 day ago 7 replies      
I mostly agree with this - bite-sized chunks is really the main ingredient to success with complex code base reformations.

FWIW, if you want to have a look at a reasonably complex code base being broken up into maintainable modules of modernized code, I rewrote Knockout.js with a view to creating version 4.0 with modern tooling. It is now in alpha, maintained as a monorepo of ES6 packages at https://github.com/knockout/tko

You can see the rough transition strategy here: https://github.com/knockout/tko/issues/1

In retrospect it would've been much faster to just rewrite Knockout from scratch. That said, we've kept almost all the unit tests, so there's a reasonable expectation of backwards compatibility with KO 3.x.

_virtu 1 day ago 12 replies      
How does one get better if they only ever work in code bases that are steaming piles of manure? So far I've worked at two places and the code bases have been in this state to an extreme. I feel like I've been in this mode since the very beginning of my career and am worried that my skill growth has been negatively impacted by this.

I work on my own side projects, read lots of other people's code on github and am always looking to improve myself in my craft outside of work, but I worry it's not enough.

kentt 1 day ago 10 replies      
> Do not ever even attempt a big-bang rewrite

I'd love to hear a more balanced view on this. I think this idea is preached as gospel when dealing with legacy systems. I absolutely understand that the big rewrite has many disadvantages, but surely there are codebases whose characteristics make a rewrite the better option. I'm going to go against the common wisdom (and the wisdom I've practiced until now) and rewrite a program I maintain that is:

1. Reasonably small (10k LOC, with large parts duplicated or repeated with only minor variable changes).

2. Barely working. Most users cannot get the program working because of the numerous bugs. I often can't reproduce their bugs, because I get bugs even earlier in the process.

3. No test suite.

4. Plenty of very large security holes.

5. I can deprecate the old version.

I've spent time refactoring this (maybe 50 hours) but that seems crazy because it's still a pile of crap, and at 200 hours I don't think it would look that different. I doubt it would take 150 hours for a full rewrite.

Kindly welcoming dissenting opinions.

korzun 1 day ago 2 replies      
> Before you make any changes at all write as many end-to-end and integration tests as you can.

I don't agree with this. People can't write proper coverage even for a code base they 'fully understand'. You will most likely end up writing tests for very obvious things or low-hanging fruit; the unknowns will still seep through at one point or another.

Forget about refactoring code just to comply with your tests and breaking the rest of the architecture in the process. It will pass your 'test' but will fail in production.

What you should be doing is:

1. Perform architecture discovery and documentation (helps you with remembering things).

2. Look over last N commits/deliverables to understand how things are integrating with each other. It's very helpful to know how code evolved over time.

3. Identify your roadmap and what sort of impact it will have on the legacy code.

4. Commit to the roadmap. Understand the scope of the impact of anything you add/remove. Account for code, integrations, caching, database, and documentation.

5. Don't forget about things like jobs and anything that might be pulling data from your systems.

Identifying what will be changing and adjusting your discovery to accommodate those changes as you go is a better approach from my point of view.

By the time you reach the development phase that touches 5% of the architecture, your knowledge of 95% of the design will be useless, and in six months you will have forgotten it anyway.

You don't cut a tree with a knife to break a branch.

stephenwilcock 4 hours ago 0 replies      
It is great to see more people sharing their strategies for managing legacy codebases. However, I thought it might be worth commenting on the suggestion about incrementing database counters:

> "add a single function to increment these counters based on the name of the event"

While the sentiment is a good one, I would warn against introducing counters in the database like this and incrementing them on every execution of a function. If transaction volumes are high, then depending on the locking strategy in your database, this could lead to blocking and lock contention. Operations that could previously execute independently in parallel now have to compete for a write lock on this shared counter, which could slow down throughput. In the worst case, if there are scenarios where two counters can be incremented inside different transactions, but in different sequences (not inconceivable in legacy code), then you could introduce deadlocks.

Adding database writes to a legacy codebase is not without risk.

If volumes are low you might get away with it for a long time, but a better strategy would probably be just to log the events to a file and aggregate them when you need them.
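A minimal sketch of that log-and-aggregate alternative; the file path and event names are made up for illustration:

```python
# Append one line per event (cheap, no shared row lock to contend for)
# and only count them up when a report is actually needed.
from collections import Counter

LOG_PATH = "events.log"  # hypothetical path

def record_event(name, path=LOG_PATH):
    with open(path, "a") as f:  # file appends don't block each other the way
        f.write(name + "\n")    # a hot shared counter row can

def aggregate(path=LOG_PATH):
    with open(path) as f:
        return Counter(line.strip() for line in f if line.strip())

record_event("order.placed")
record_event("order.placed")
record_event("login.failed")
print(aggregate())  # e.g. Counter({'order.placed': 2, 'login.failed': 1})
```

The aggregation step can run offline, so the hot path pays only for an append.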

maxxxxx 1 day ago 7 replies      
How do people handle this in dynamic languages like JavaScript? I have done a lot of incremental refactoring in C++ and C# and there the compiler usually helped to find problems.

I am now working on a node.js app and I find it really hard to make any changes. Even typos when renaming a variable often go undetected unless you have perfect test coverage.

This is not even a large code base and I find it already hard to manage. Maybe I have been using typed languages for so long that my instincts don't apply to dynamic languages, but I seriously wonder how one could maintain a large JavaScript codebase.

lbill 1 day ago 0 replies      
I used to work on a messy legacy codebase. I managed to clean it, little by little, even though most of my colleagues and the management were a bit afraid of refactoring. It wasn't perfect but things kinda worked, and I had hope for this codebase.

Then the upper management appointed a random guy to do a "Big Bang" refactor: it has been failing miserably (it is still going on, doing way more harm than good). Then it all started to go really bad... and I quit and found a better job!

busterarm 1 day ago 0 replies      
All of this seems to focus on the code, after glossing over the career management implications in the first paragraph.

I've done this sort of work quite a number of times and I've made mistakes and learned what works there.

It's actually the most difficult part to navigate successfully. If you already have management's trust (i.e., you have the political power in your organization to push a deadline or halt work), you're golden and all of the things mentioned in the OP are achievable. If not, you're going to have to make huge compromises. Front-load high-visibility deliverables and make sure they get done. Prove that it's possible.

Scenario 1) I came in as a sub-contractor to help spread the workload (from 2 to 3) building out a very early-stage application for dealing with medical records. I came in and saw the codebase was an absolute wretched mess. DB schema full of junk, wide tables, broken and leaking API routes. I spent the first two weeks just bulletproofing the whole application backend and whipping it into shape before adding new features for a little while and being fired shortly afterwards.

Lesson: Someone else was paying the bills and there wasn't enough visibility/show-off factor for the work I was doing so they couldn't justify continuing to pay me. It doesn't really matter that they couldn't add new features until I fixed things. It only matters that the client couldn't visibly see the work I did.

Scenario 2) I was hired on as a web developer to a company and it immediately came to my attention that a huge, business-critical ETL project was very behind schedule. The development component had a due date three weeks preceding my start date and they didn't have anyone working on it. I asked to take that on, worked like a dog on it and knocked it out of the park. The first three months of my work there immediately saved the company about a half-million dollars. Overall we launched on time and I became point person in the organization for anything related to its data.

Lesson: Come in and kick ass right away and you'll earn a ton of trust in your organization to do the right things the right way.

OutsmartDan 1 day ago 7 replies      
Big bang rewrites are needed in order to move forward faster.

A huge issue with sticking to an old codebase for such a long time is that it gets older and older. You get new talent that doesn't want to manage it and leaves, so you're stuck with the same old people that implemented the codebase in the first place. Sure, they were smart, knowledgeable people in the year 2000, but think of how fast technology changes. Change, adapt, or die.

sz4kerto 1 day ago 8 replies      
The OP has so much reasonable, smart-sounding advice that doesn't work in the real world.

1) "Do not fall into the trap of improving both the maintainability of the code or the platform it runs on at the same time as adding new features or fixing bugs."

Thanks. However, in many situations this is simply not possible because the business is not there yet so you need to keep adding new features and fix bugs. And still, the code base has to be improved. Impossible? Almost, but we're paid for solving hard problems.

2) "Before you make any changes at all write as many end-to-end and integration tests as you can."

Sounds cool, except in many cases you have no idea how the code is supposed to work. Writing tests for new features and bugfixes is good advice (but that goes against other points the OP makes).

3) "A big-bang rewrite is the kind of project that is pretty much guaranteed to fail.

No, it's not. Especially if you're rewriting parts of it at a time as separate modules

My problem with the OP is really that it tells you how to improve a legacy codebase given no business or time pressure.

hinkley 1 day ago 0 replies      
It's my turn to disagree with something in the article.

> Before you make any changes at all write as many end-to-end and integration tests as you can.

I'm beginning to see this as a failure mode in and of itself. Once you give people E2E tests, it's the only kind of test they want to write. It takes about 18 months for the wheels to fall off, so it can look like a successful strategy. What they need to do is learn to write unit tests, but you have to break the code up into little chunks first. It doesn't match their aesthetic sense, so it feels juvenile and contrived. The ego kicks in and you think you're smart enough that you don't have to eat your proverbial vegetables.

The other problem is that E2E tests are slow, they're flaky, and nobody wants to think about how much they cost in the long run because it's too painful to look at. How often have you seen two people huddled over a broken E2E test? Multiply the cost of rework by 2.

user5994461 16 hours ago 0 replies      
Agreed about the prerequisites: adding some tests, reproducible builds, logs, basic instrumentation.

Highly disagree about the order of coding. That guy wants to change the platform, redo the architecture, and refactor everything before he starts to fix bugs. That's a recipe for disaster.

It's not possible to refactor anything while you have no clue about the system. You will change things you don't understand, only to break the features and add new bugs.

You should start by fixing bugs, with a preference for long-standing simple issues, like "adding validation on that form so the app doesn't crash when the user gives a name instead of a number". Check with users for a history of simple issues.

That delivers immediate value. This will quickly earn you credit with the stakeholders and the users. You learn the internals by doing, before you can attempt any refactoring.
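The kind of long-standing simple fix described above often looks like this; the field name and rules here are hypothetical:

```python
# Guard a numeric form field up front instead of letting a non-numeric
# value blow up somewhere deep in the legacy code path.
def parse_quantity(raw):
    try:
        value = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"quantity must be a whole number, got {raw!r}")
    if value < 1:
        raise ValueError("quantity must be at least 1")
    return value

print(parse_quantity("3"))  # 3
try:
    parse_quantity("Bob")   # the crash users kept hitting, now a clear error
except ValueError as e:
    print(e)                # quantity must be a whole number, got 'Bob'
```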

artursapek 1 day ago 3 replies      
Are there businesses building automation and tooling for working with legacy codebases? It seems like a really good "niche" for a startup. The target market grows faster every year :)
SideburnsOfDoom 1 day ago 1 reply      
> add instrumentation. Do this in a completely new database table, add a simple counter for every event that you can think of and add a single function to increment these counters based on the name of the event.

The idea is a good one but the specific suggested implementation .. hasn't he heard of statsd or kibana?
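For what it's worth, the statsd wire format is simple enough that a bare-bones counter client is only a few lines; the host, port, and metric name below are assumptions:

```python
# statsd speaks "name:value|type" over UDP; "|c" marks a counter.
# UDP is fire-and-forget, so instrumentation can't crash the app.
import socket

class StatsdCounter:
    def __init__(self, host="127.0.0.1", port=8125):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def incr(self, name, value=1):
        payload = f"{name}:{value}|c".encode()  # e.g. b"signup.completed:1|c"
        self.sock.sendto(payload, self.addr)
        return payload  # returned only to make the format easy to inspect

StatsdCounter().incr("signup.completed")
```

The point stands either way: counters belong in a metrics pipeline, not in a hot database table.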

mfrisbie 1 day ago 0 replies      
Sometimes your inner desires to rewrite it from scratch can be overwhelming.


yeukhon 1 day ago 0 replies      
Healthcare.gov is a good example, although not a legacy codebase. Anyway, I think fixing small bugs and writing tests are the best way to learn how to work with a legacy system. This allows me to see which components are easier to rewrite/refactor or add more logging and instrumentation to. Business cannot wait months for a bug fix just for the sake of making a better codebase. But I agree that database changes should be kept as minimal as possible. Also, overcommunicate with the downstream customers of your legacy system. They may be using your interface in an unexpected manner.

I have done a number of serious refactorings myself, and god knows tests have done me a huge favor, even though I had to grit my teeth for a few days to a few weeks.

moonbug 1 day ago 0 replies      
This should be one of the first tasks that any aspiring career programmer has. It's an essential experience in making a professional.
weef 1 day ago 2 replies      
Great advice. Writing integration tests or unit tests around existing functionality is extremely important but unfortunately might not always be feasible given the time, budget, or complexity of the code base. I just completed a new feature for an existing and complex code base but was given the time to write an extensive set of end-to-end integration tests covering most scenarios before starting my coding. This proved invaluable once I started adding my features to give me confidence I wasn't breaking anything and helped find a few existing bugs no one had caught before!
deedubaya 1 day ago 1 reply      
Yeah, I've done this. It's frustrating and easy to burn out doing it because progress seems so arbitrary. Legacy upgrades are usually driven by large problems or the desire to add new features. Getting a grip on the code base while deflecting those desires can be hard.

This type of situation is usually a red flag that the company's management doesn't understand the value of maintaining software until they absolutely have to. That, in itself, is an indicator of what they think of their employees.

alexeiz 1 day ago 1 reply      
I was in this situation more than once.

My actions are usually these:

* Fix the build system, automate build process and produce regular builds that get deployed to production. It's incredible that some people still don't understand the value of the repeatable, reliable build. In one project, in order to build the system you had to know which makefiles to patch and disable the parts of the project which were broken at that particular time. And then they deployed it and didn't touch it for months. Next time you needed to build/deploy it was impossible to know what's changed or if you even built the same thing.

* Fix all warnings. Usually there are thousands of them, and they get ignored because "hey, the code builds, what else do you want." The warning fixing step allows to see how fucked up some of the code is.

* Start writing unit tests for things you change, fix or document. Fix existing tests (as they are usually unmaintained and broken).

* Fix the VCS and enforce sensible review process and history maintenance. Otherwise nobody has a way of knowing what changed, when and why. Actually, not even all parts of the project may be in the VCS. The code, configs, scripts can be lying around on individual dev machines, which is impossible to find without the repeatable build process. Also, there are usually a bunch of branches with various degrees of staleness which were used to deploy code to production. The codebase may have diverged significantly. It needs to be merged back into the mainline and the development process needs to be enforced that prevents this from happening in the future.

Worst of all is that in the end very few people would appreciate this work. But at least I get to keep my sanity.

mannykannot 1 day ago 2 replies      
WRT architecture: In my experience, you would be lucky if you are free to change the higher level structure of the code without having to dive deeply into the low-level code. Usually, the low-level code is a tangle of pathological dependencies, and you can't do any architectural refactoring without diving in and rooting them out one at a time (I was pulling up ivy this weekend, so I was primed to make this comment!)
iamNumber4 1 day ago 0 replies      
Sometimes you get an entire septic tank full of...

Code base that is non-existent, as the previous attempts were done with MS BI (SSIS) tools (for all the things SSIS is not for) and/or SQL stored procedures, with no consistency in coding style, no documentation, over 200 databases (sometimes 3 per process that exist only to house a handful of stored procedures), a complete developer turnover about every 2 years, and senior leadership in the organization clueless about technology.

As you look at ~6,000 lines in a single stored procedure, you fight the urge to light the match and give it some TLC (Torch it, Level it, Cart it away) and start over with something new.

Moral of the story: as you build and replace things, stress to everyone to "Concentrate on getting it right instead of getting it done!" so you don't add to the steaming pile.

matt_s 1 day ago 0 replies      
Regarding instrumentation and logging - this can also be used to identify areas of the codebase that can possibly be retired. If it is a legacy application, there are likely areas that aren't used any longer. Don't focus on tests or anything in these areas and possibly deprecate them.
quadcore 1 day ago 1 reply      
From what I've seen, the most common mistake when starting work on a new codebase is not reading it all before making any changes.

I really mean it: a whole lot of programmers simply don't read the codebase before starting a task. Guess the result, especially in terms of frustration.

ransom1538 1 day ago 1 reply      
> Before you make any changes at all write as many end-to-end and integration tests as you can.

^ Yes and no. That might take forever, and the company might be struggling with cash. I would instead consider adding a metrics dashboard. Basically, find the key points: payments sent, payments cleared, new user, returning user, store opened, etc. This isn't as good as a nice integration suite, but if a client is short on cash and needs help, this can be set up in hours. With this in place, after adding/editing code you can calm investors/CEOs. Alternatively, if it's a larger corp it will be time-strapped - then push for the same thing :)

lol768 1 day ago 1 reply      
Any advice on what steps to take when the legacy codebase is incredibly difficult to test?

I completely agree with the sentiment that scoping the existing functionality and writing a comprehensive test suite is important - but how should you proceed when the codebase is structured in such a way that it's almost impossible to test specific units in isolation, or when the system is hardcoded throughout to e.g. connect to a remote database? As far as I can see it'll take a lot of work to get the codebase into a state where you can start doing these tests, and surely there's a risk of breaking stuff in the process?
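One common tactic for exactly this situation is to introduce a "seam": make the hardcoded dependency an optional parameter whose default preserves the old behaviour, so production call sites are untouched while tests substitute a fake. A hypothetical sketch:

```python
# Before: a function that always connects to the remote database.
# After: the connection factory becomes a parameter (a "seam"), so
# production call sites stay unchanged while tests pass a stub.

def _real_connection():
    raise RuntimeError("would connect to the remote DB")  # placeholder

def active_user_count(conn_factory=_real_connection):
    conn = conn_factory()
    return conn.query("SELECT COUNT(*) FROM users WHERE active = 1")

# In a test, substitute a fake that returns canned data:
class FakeConn:
    def query(self, sql):
        return 42

assert active_user_count(lambda: FakeConn()) == 42
```

Extracting one seam at a time keeps the risk small; each extraction is a tiny, mechanical change that can be reviewed in isolation.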

pc86 1 day ago 1 reply      
I've been a part of several successful big-bang rewrites, and several unsuccessful ones, and saying that if you're smart they're not on the table is just flat out wrong.

The key is an engaged business unit, clear requirements, and time on the schedule. Obviously if one or more of these things sounds ridiculous then the odds of success are greatly diminished. It is much easier if you can launch on the new platform a copy of the current system, not a copy + enhancements, but I've been on successful projects where we launched with new functionality.

d--b 1 day ago 1 reply      
I agree with most of this, though I think it doesn't dive into the main problem:

Freezing a whole system is practically impossible. What you usually get is a "piecewise" freeze. As in: you get to have a small portion of the system to not change for a given period.

The real challenge is: how can you split your project into pieces of functionality that are reasonably sized and replaceable independently of each other?

There is definitely no silver bullet for how to do this.

Bahamut 1 day ago 0 replies      
Can't say I agree with the big bang rewrite part necessarily - at my last job, I found myself having to do significant refactors. The reason was that each view had its own concept of a model for interacting with various objects, which resulted in a lot of different bugs from one off implementations. My refactor had some near term pain of having to fix various regressions I created, but ultimately it led to much better long term maintenance.
alexwebb2 1 day ago 0 replies      
> How to Improve a Legacy Codebase When You Have Full Control Over the Project, Infinite Time and Money, and Top-Tier Developers

edit: I'm being a little snarky here, but the assumptions here are just too much. This is all best-case scenario stuff that doesn't translate very well to the vast majority of situations it's ostensibly aimed at.

kevan 1 day ago 0 replies      
>Use proxies to your advantage

At my last gig we used this exact strategy to replace a large ecommerce site piece by piece. Being able to slowly replace small pieces and AB test every change was great. We were able to sort out all of the "started as a bug, is now a feature" issues with low risk to overall sales.

safek 1 day ago 2 replies      
> Do not ever even attempt a big-bang rewrite

Really? Are there no circumstances under which this would be appropriate? It seems to me this makes assumptions about the baseline quality of the existing codebase. Surely sometimes buying a new car makes more sense than trying to fix up an old one?

jhgjklj 1 day ago 0 replies      
The biggest problem in improving a legacy codebase is that the people who have been involved with it for too long are still using old techniques, and as a new developer you cannot change them; they will change you, which makes it hard to improve.
rattray 1 day ago 1 reply      
> Yes, but all this will take too much time!

I'm actually quite curious; how long does this process typically take you?

What are the most relevant factors on which it scales? Messiness of existing code? Number of modules/LOC? Existing test coverage?

macca321 1 day ago 0 replies      
Another thing you can do is start recording all requests that cause changes to the system in an event store (a la event sourcing). Once you have this in place, you can use the event stream to project a new read model (e.g. a new, coherent database structure).
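A toy sketch of the idea, with illustrative names rather than any particular event-sourcing library: state-changing requests append events to a log, and a new read model is just a fold (projection) over the replayed stream, so you can re-project into a new schema whenever you like.

```javascript
// Append-only event log; each state-changing request becomes an event.
const events = [];
function record(type, payload) {
  events.push({ type, payload, at: Date.now() });
}

// A read model is a projection (fold) over the full event stream.
// To build a new, cleaner database structure, write a new projector
// and replay the same events through it.
function projectBalances(stream) {
  return stream.reduce((balances, ev) => {
    if (ev.type === 'deposit') {
      const { account, amount } = ev.payload;
      balances[account] = (balances[account] || 0) + amount;
    }
    return balances;
  }, {});
}

record('deposit', { account: 'a', amount: 10 });
record('deposit', { account: 'a', amount: 5 });
console.log(projectBalances(events)); // { a: 15 }
```

In a real system the log would be durable (Kafka, a database table, etc.), but the replay-into-a-new-model trick is the same.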
btbuildem 1 day ago 0 replies      
Thanks for posting, some excellent high-level advice.
jefurii 1 day ago 0 replies      
Stick around that startup long enough and this is a good set of things to do with your own code.
jlebrech 1 day ago 0 replies      
Do the refactoring you should have done at the time, using the techniques of that era rather than the brand-new-fangled way of doing things; that way each new style fades into the next.
crankyadmin 1 day ago 1 reply      
Delete it...

(Speaking from experience from work)

pinaceae 1 day ago 0 replies      
First and foremost, do not assume that everyone who ever worked on the code before is a bumbling idiot. Assume the opposite.

If it's code that has been running successfully in production for years, be humble.

Bugfixes, shortcuts, constraints - all are real life and prevent perfect code and documentation under pressure.

The team at Salesforce.com is doing a massive re-platforming right now with their switch to Lightning. Should provide a few good stories, switching over millions of paying users, not fucking up billions in revenue.

jofer 1 day ago 1 reply      
I agree with everything said, but I think they assumed a well-maintained and highly functional legacy codebase. In my experience, there are a few steps before any of those.


1. Find out which functionality is still used and which functionality is critical

Management will always say "all of it". The problem is that what they're aware of is usually the tip of the iceberg in terms of what functionality is supported. In most large legacy codebases, you'll have major sections of the application that have sat unused or disabled for a couple of decades. Find out what users and management actually think the application does and why they're looking to resurrect it. The key is to make sure you know what is business critical functionality vs "nice to have". That may happen to be the portions of the application that are currently deliberately disabled.

Next, figure out who the users are. Are there any? Do you have any way to tell? If not, if it's an internal application, find someone who used it in the past. It's often illuminating to find out what people are actually using the application for. It may not be the application's original/primary purpose.


2. Is the project under version control? If not, get something in place before you change anything.

This one is obvious, but you'd be surprised how often it comes up. Particularly at large, non-tech companies, it's common for developers to not use version control. I've inherited multi-million line code bases that did not use version control at all. I know of several others in the wild at big corporations. Hopefully you'll never run into these, but if we're talking about legacy systems, it's important to take a step back.

One other note: If it's under any version control at all, resist the urge to change what it's under. CVS is rudimentary, but it's functional. SVN is a lot nicer than people think it is. Hold off on moving things to git/whatever just because you're more comfortable with it. Whatever history is there is valuable, and you invariably lose more than you think you will when migrating to a new version control system. (This isn't to say don't move, it's just to say put that off until you know the history of the codebase in more detail.)


3. Is there a clear build and deployment process? If not, set one up.

Once again, hopefully this isn't an issue.

I've seen large projects that did not have a unified build system, just a scattered mix of shell scripts and isolated makefiles. If there's no way to build the entire project, it's an immediate pain point. If that's the case, focus on the build system first, before touching the rest of the codebase. Even for a project with excellent processes in place, reviewing the build system in detail is not a bad way to start learning the overall architecture of the system.

More commonly, deployment is a cumbersome process. Sometimes cumbersome deployment may be an organizational issue, and not something that has a technical solution. In that case, make sure you have a painless way to deploy to an isolated development environment of some sort. Make sure you can run things in a sandboxed environment. If there are organizational issues around deploying to a development setup, those are battles you need to fight immediately.

logicallee 23 hours ago 0 replies      
This says, near the end, "Do not ever even attempt a big-bang rewrite", but aren't a LOT of legacy in-house projects completely blown out of the water by well-maintained libraries in popular, modern languages that already exist? (In some cases these might be commercial solutions, but for which a business case could be made.)

I'm loath to give examples so as not to constrain your thinking, but, for example, imagine a bunch of hairy Perl had been built to crawl web sites as part of whatever they're doing, and it just so happens that these days curl or wget do more, better, and with fewer bugs than everything they had built. (Think of your own examples here, anything from machine vision to algebraic computation, whatever you want.)

In fact isn't this the case for lots and lots of domains?

For this reason I'm kind of surprised that the "big bang rewrite" is written off so easily.

Kerbal Space Program Acquired by Take-Two Interactive kerbalspaceprogram.com
461 points by Impossible  11 hours ago   282 comments top 22
TeMPOraL 9 hours ago 14 replies      
KSP was possibly my best entertainment spending ever. It definitely is the best game for me in terms of costs / time played. If you haven't played it yet, do yourself a favour and buy it now. If you have a kid with even tangential interest in space, get a copy for them.

Side effects of playing KSP include:

- getting an intuitive feel for basic orbital mechanics

- finding yourself reading up on actual math to better understand what's happening with your rockets (and how to build more efficient ones)

- no longer being able to watch most space movies due to frustration caused by the filmmakers not grokking basic orbital physics

(RE the last point - after Gravity, Interstellar, The Martian and The Expanse series, getting basic spaceflight wrong should no longer be accepted in popular media. Looking at you, makers of The 100.)

jesseryoung 11 hours ago 2 replies      
Hopefully the game will be better off under Take-Two (I am not familiar with their past treatment of indie-like games like this).

I have read several stories online about how poorly SQUAD treated the core development team of KSP: https://www.develop-online.net/news/squad-devs-blast-kerbal-...

mhh__ 9 hours ago 1 reply      
Kerbal Space Program is a game that we're quite lucky to have. No microtransactions, no DRM, just sciencey goodness.

Except for those making it, who Squad apparently didn't bother paying anywhere near what they were worth.

parisidau 45 minutes ago 0 replies      
Shameless self-promotion, but together with some friends I wrote a book for O'Reilly Media on KSP!

Amazon: https://www.amazon.com/Kerbal-Players-Guide-Easiest-Program/...

O'Reilly: http://shop.oreilly.com/product/0636920035138.do

Safari: https://www.safaribooksonline.com/library/view/the-kerbal-pl...

mediocrejoker 10 hours ago 6 replies      
I hope this goes well. I would love to see a remake that retains the exact same gameplay with more modern graphics. Hopefully all the people who paid for the current in-development version are not left high and dry in terms of updates and bugfixes.

I also have heard the rumors that the team was not treated well, and that the game was never really the focus of the company. I think it may have been a side project of one of the developers on a totally unrelated product (i.e. not even a game).

Orangeair 5 hours ago 2 replies      
I sometimes get the feeling that this is the only game HN plays. When people talk about not being able to switch away from Windows due to games, it seems like someone always responds, "Well Kerbal Space Program runs on Linux and that's all I care about." I don't think I've ever seen articles similar to this one about other games gain as much traction. Can't think of very many articles about games getting to the top of HN at all, actually, unless they're about John Carmack writing one.
nirav72 9 hours ago 1 reply      
I just logged into my account and grabbed the installers and portable zip files for the last 2 versions. Just in case Take-Two's influence somehow breaks the KSP experience I've come to love and enjoy.
xigency 11 hours ago 6 replies      
Video game company acquisitions can be brutal. I hope everything goes well for the team now and several years in the future.
pawadu 11 hours ago 4 replies      
Whats next? Dwarf Fortress acquired by EA?
Graham24 10 hours ago 2 replies      
I await the release of Grand Theft Planet.
tangue 11 hours ago 1 reply      
I hope they won't fuck up the game. As a side note I didn't suspect there were that many people working on this game.
cosinetau 10 hours ago 1 reply      
Congrats KSP team! Hope this next adventure is unlike my adventures with Jeb.
cydonian_monk 8 hours ago 1 reply      
Hopefully this means things will "improve" without turning the community into a sterile, lifeless environment, but we'll see. Maybe not much changes, maybe they take the IP and run with it. Who knows.

Must say it was weird to stumble on this news here on HN first instead of on the KSP forum (where admittedly it was posted first); guess it's been a busy morning and I just missed it.

cr0sh 10 hours ago 2 replies      
I have a good (bad) feeling that this change will likely mean that, sooner or later, Linux support will be dropped.

/bet me it won't...

koiz 4 hours ago 0 replies      
It seemed something was up when a few devs went to valve.
dschuetz 5 hours ago 0 replies      
I'm just glad that Microsoft didn't get this one. I bought it within the early access period, so I actively contributed to the development. I hope/expect to see some franchise spin-offs with the Kermans <3
renega3 8 hours ago 1 reply      
I stopped playing KSP due to the microstutter issue (ostensibly due to the garbage collector). Fixing that would make the game playable again.
codezero 10 hours ago 1 reply      
I've had so much trouble playing the port on console. I know it was made by a contractor but ugh. I want to love this game but it's total masochism to play on console right now. Hopefully this leads to something good.
tdsamardzhiev 6 hours ago 0 replies      
Well, out of the big gaming companies, Take Two is the best one to get acquired by. Let's see where that leads.
erikb 6 hours ago 0 replies      
Why do they always say "nothing will change"? Of course things will change. That's why someone acquired them: to change something.

And why do they always say the acquisition is good for the community? I count myself part of the community, yet I didn't receive any six-to-ten-digit payout from the sale. Why should it be exciting for me?

cdrark 10 hours ago 0 replies      
Come on co-op mode!
wexxx 5 hours ago 0 replies      
this is awesome!
Node v8.0.0 Released nodejs.org
518 points by petercooper  1 day ago   173 comments top 17
nailer 1 day ago 11 replies      
Short ver: async/await is now in an LTS release of node. Anything that returns a promise can now be run inline - i.e., no endless .then() chaining - provided you've started an async context:

  const util = require('util'),
        fs = require('fs'),
        request = require('superagent'),
        sorts = require('sorts'),
        writeFile = util.promisify(fs.writeFile),
        stat = util.promisify(fs.stat),
        log = console.log.bind(console)

  const getJeansAndSaveThem = async function () {
    var jeans = await request
      .get('https://example.com/api/v1/product/trousers')
      .query({ label: 'Levi Strauss', isInStock: true, maxResults: 500 })
    jeans = jeans.body.sort(sorts.alphabetical)
    await writeFile('jeans.json', JSON.stringify(jeans))
    const status = await stat('jeans.json')
    log(`I just got some data from an API and saved the results to a file. The file was born at ${status.birthtime}`)
  }
Note: you should add error handling, I'm new to async/await and this code is to demonstrate a concept on HN, not to run your life support system. ;-)

SparkyMcUnicorn 1 day ago 2 replies      
"Note that, when referring to Node.js release versions, we have dropped the "v" in Node.js 8. Previous versions were commonly referred to as v0.10, v0.12, v4, v6, etc. In order to avoid confusion with V8, the underlying JavaScript engine, we've dropped the "v" and call it Node.js 8."

I was wondering how this would be handled. I guess old habits die hard since this article title includes the "v".

petercooper 1 day ago 0 replies      
For anyone not in the JS/Node worlds, this is a significant release that people are particularly excited about. It was also delayed somewhat due to wanting to align with V8 which should, however, be totally worth it :-)

Other relevant posts digging into new features include http://codingsans.com/blog/node-8 and https://blog.risingstack.com/important-features-fixes-node-j...

flavio81 1 day ago 6 replies      
For me, this one brought happiness:

Node.js 8.0.0 includes a new util.promisify() API that allows standard Node.js callback style APIs to be wrapped in a function that returns a Promise. An example use of util.promisify() is shown below.

This is great stuff. This enables writing code using async and await at all times, which is what any sane developer would do when writing code for Node.js.

elzi 1 day ago 1 reply      
I know there's much bigger things in this release to be excited about, but I'm so happy they're allowing trailing commas in function args/params.
ianstormtaylor 1 day ago 2 replies      
Does anyone have a link to better explanation of the changes to `debugger`?

> The legacy command line debugger is being removed in Node.js 8. As a command line replacement, node-inspect has been integrated directly into the Node.js runtime. Additionally, the V8 Inspector debugger, which arrived previously as an experimental feature in Node.js 6, is being upgraded to a fully supported feature.

It sounds like `node debug` will no longer work? But it is replaced with something that's better? What is `node-inspect` and where can I learn about it?

neovive 1 day ago 6 replies      
This is exciting news! I'm a long-time LAMP developer (now mostly Laravel) and have been experimenting with NodeJS for an upcoming API project. As Javascript becomes a larger part of each new project, using one language throughout the entire stack is becoming much more compelling.

Is Express still considered the de facto web framework for NodeJS? Or are other frameworks better suited for someone used to the "batteries-included" philosophy of Laravel. I'm watching the new "Learning Node" course from WesBos since he covers async/await and Express seems very similar to most MVC frameworks.

STRML 1 day ago 1 reply      
This is a big release. Async/await in stable core is something I've been (literally) waiting 6 years for.

Many people have criticized Node's cooperative multitasking model, for both good and uninformed reasons. Yet, it is without dispute that the model is popular.

Async/await is a giant leap forward toward making Node usable for beginner and expert alike. This release is a celebration.

For those of you with existing applications looking to migrate, try `--turbo --ignition` to emulate (most) of the V8 5.9 pipeline. Anecdotally, microbenchmark-style code regresses slightly, real-world code improves by as much as 2x. Exciting times.

riccardomc 15 hours ago 0 replies      
It really is a significant release: the word 'significant' is used in 7 of the first 13 lines of the release statement.


Matthias247 1 day ago 4 replies      
This just motivated me to play around a little bit with JS async/await implementation. What I found interesting is that async functions will always return promises, even if an immediate value could be returned. Like for example in the following function:

  async function getFromCacheOrRemote() {
    if (random()) {
      return "Got it";
    } else {
      await DoSomethingLongRunning();
      return "Got from network";
    }
  }
The function will return a Promise regardless of which branch is taken, although it could return a string for the first branch. Does anybody know the reason? From a consumer point of view it does not matter if the consumer uses await, since await accepts both immediates and Promises. Is it because always returning promises is more convenient for users who use Promise combinators instead of await, and less bug-prone? Or does it maybe even benefit JS runtime optimizations if the return type is always a Promise - even though the promise in both cases might be of a different subtype?

For most applications it probably doesn't matter anyway. However returning and awaiting immediate values eliminates an allocation and eventloop iteration compared to using a Promise, which is helpful for high-performance code. This is why C# now introduced custom awaitables and things like ValueTask<T>.
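The wrapping is easy to confirm; this tiny sketch shows that even the immediate-value branch comes back as a Promise (as I understand it the spec mandates this uniformity, precisely so callers never have to branch on the return type):

```javascript
// An async function's return value is always wrapped in a Promise,
// even when no await is hit and a plain value is returned.
async function fromCache() {
  return "Got it";
}

const result = fromCache();
console.log(result instanceof Promise); // true
result.then(value => console.log(value)); // Got it
```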

samueloph 1 day ago 2 replies      

It looks like somebody needs to set up the deb repository for 8.x; the installation script[1] is there, but there's no repo[2] for node 8.x itself.

I also think this[3] url needs to get an update to reflect the new release.

edit-> Considering Debian Stretch will be released June 17th, it would be nice to have a repo for this release, i mean ..node_8.x/dists/stretch/Release.. instead of only jessie and sid's.


bricss 1 day ago 0 replies      
Long awaited release full of joy!
curiousgal 1 day ago 5 replies      

I just finished cleaning my home folder out of the ~100,000 files npm created over the past couple of months. I just build interesting Node projects I come across to check them out, and it's gotten that big. I wonder what it's like for regular node devs.

cheapsteak 1 day ago 2 replies      
>node-inspect has been integrated directly into the Node.js runtime

Is node-inspect the same thing as node-inspector or something else?

__s 1 day ago 0 replies      
Looking forward to writing unit tests for luwa. Node v8.0 should include wasm 1.0
k__ 1 day ago 1 reply      
The promisify stuff looks rather clunky. Aren't there better options?
p5k 1 day ago 3 replies      
Node.js should get promise versions of its current callback-based APIs:

  const fs = require("fs");
  fs.writeFile("helloworld.txt", "Hello, World", (error) => {
    if (error) throw error;
    console.log("done!");
  });

Should be:

  const fs = require("fs");
  fs.writeFilePromise("helloworld.txt", "Hello, World")
    .then(() => console.log("done!"), error => console.error(error));

How to Sleep theatlantic.com
625 points by ALee  3 days ago   257 comments top 39
teolandon 2 days ago 17 replies      
My biggest struggle with sleep is that I'm always excited to do stuff, and always feel like I'm not done with my day. Exceptions are when something happens and I end up feeling very depressed during the day, and simply want to shut down and do nothing.

Usually, I get so infatuated with a script I'm writing, a new program I discovered, a bug that I need to resolve, a book that I'm reading, some concept that I'm thinking of, that my mind just keeps on being active, and wants to keep working. It's the worst when I'm working on my computer, due to the blue light (I've started wearing yellow sunglasses to minimize the effect), while it's a bit better when I'm reading or listening to music or thinking.

In any case, this is a great article. I feel like small amounts of sleep has been the greatest inhibitor of my performance in... anything really. Being dumb and young I felt like I could still function correctly, but I really started noticing that I had better tournament results when actually sleeping 8 hours, while my results on all other days were lackluster. I read up on a lot of things and convinced myself that sleeping enough is essential. I still slip up and don't even go on my bed at the right times, my sleep schedule goes all over the place for a lot of different reasons, but I'm really trying. I feel like I might need to seek some professional help on this, but I'll still take it as far as possible before that.

xupybd 2 days ago 1 reply      
>So either that is the amount of sleep that keeps people well, or that's the amount that makes them least likely to lie about being sick when they want to skip work. Or maybe people who were already sick with some chronic condition were sleeping more than that, or less, as a result of their illness. Statistics are tough to interpret.

Love that, no lazy journalism, no ridiculous claims. Just the facts and some possible implications.

wakkaflokka 2 days ago 5 replies      
I could write an essay about my battle with sleep. I'm in my 30's and I finally think it's solved.

Sleeping meds, sleep studies, CBT-I, you name it - I've done it.

My ultimate solution ended up being:

- Earplugs

- Exercise

- Waking up the same time every single day, no matter how late I stay up. CBT-I had me wake up at 6:30 am every morning, and go to bed at 1 am. After a week of exhaustion, I started falling asleep like a rock. Then my therapist gradually had me go to sleep earlier and earlier until my time-to-sleep was still short and I had few awakenings during the night, but felt refreshed the next day. Turned out to be just around 6.5 hours a night

- No coffee after 3 pm

There are still nights where I have an active mind and have trouble sleeping, but I'll just let it happen without constantly worrying "oh no, I'm not gonna get ___ hours of sleep tonight". Because the minute you try to force yourself to sleep, it's over.

ericdykstra 2 days ago 0 replies      
No small art is it to sleep: it is necessary for that purpose to keep awake all day.

Ten times a day must thou overcome thyself: that causeth wholesome weariness, and is poppy to the soul.

Ten times must thou reconcile again with thyself; for overcoming is bitterness, and badly sleep the unreconciled.

Ten truths must thou find during the day; otherwise wilt thou seek truth during the night, and thy soul will have been hungry.

Ten times must thou laugh during the day, and be cheerful; otherwise thy stomach, the father of affliction, will disturb thee in the night.

When night cometh, then take I good care not to summon sleep. It disliketh to be summoned - sleep, the lord of the virtues!

But I think of what I have done and thought during the day. Thus ruminating, patient as a cow, I ask myself: What were thy ten overcomings?

And what were the ten reconciliations, and the ten truths, and the ten laughters with which my heart enjoyed itself? Thus pondering, and cradled by forty thoughts, it overtaketh me all at once - sleep, the unsummoned, the lord of the virtues.


kutkloon7 2 days ago 1 reply      
Great article. I especially like the interpretation of the statistics by the author, which is, well, hardly any interpretation at all:

"One 2014 study of more than 3,000 people in Finland found that the amount of sleep that correlated with the fewest sick days was 7.63 hours a night for women and 7.76 hours for men. So either that is the amount of sleep that keeps people well, or thats the amount that makes them least likely to lie about being sick when they want to skip work. Or maybe people who were already sick with some chronic condition were sleeping more than thator lessas a result of their illness. Statistics are tough to interpret."

Contrasted with articles that take one example (a 94-year old making a breakthrough in some field) and directly generalize it ("to be a genius, think like a 94-year-old"), this is a much healthier and saner approach to interpreting statistics.

(I didn't make this example up; it was on hacker news)

RandomInteger4 3 days ago 9 replies      
I'm not sure what the long term effects of chronic melatonin supplementation are, but I'll find out eventually. I've been taking between 3-6mg of melatonin every night for the past few years (2011?) It's almost required. Without it my sleep cycle seems fine at first, but then gets out of whack as I can't seem to keep a circadian rhythm in line with the rest of society / the earth's rotation.

On the opposite end of the spectrum, I can't function mentally without caffeine. I tried going off caffeine a few times, and while the withdrawal effects were horrible, they eventually passed and everything felt great except my ability to concentrate on anything. Sadly I can't afford to see a doctor for my ADHD meds. While caffeine helps, it still leaves much to be desired.

Exercise helps immensely, both in terms of sleep and ability to concentrate, but at some point I injured my upper back (rhomboids and rotator cuff muscles), so I can't get the same level of exercise I had before.

manibatra 2 days ago 0 replies      
Personally the change that has helped me the most has been mental. I used to feel "guilty" of going to bed early, not working to exhaustion. Now I view sleep as something to enjoy. Just letting go of that guilt has me sleep a lot better. From being a light sleeper I have gone to be able to sleep through my housemates blaring loud music.
0xcde4c3db 3 days ago 2 replies      
I guess it's once again time for my standard PSA response to this genre: various chronic medical conditions can interfere with sleep. If you consistently have trouble sleeping or sleeping well over an extended period of time, it very well could be something more than "poor sleep habits".
lphnull 2 days ago 1 reply      
I'm 30 years old now.

I was able to live on 4-6 hrs of sleep a night all the way up to age 25-28. That's when sleep started becoming a problem.

At age 30, I absolutely need a minimum average of 8 hours of sleep, but that average has to be accumulated over the course of a week! That means that a single night of sleeping less and doing strenuous tasks on a linux terminal now takes a toll on me in ways that I never felt in my youth.

Full disclaimer: I am a blue collar worker at a non-computer job; I physically exert myself and am very fit as a result of my job. This is part of why sleep is mandatory for me.

The older you get, the more sleep you need and the less alcohol your body can handle. This is a universal truth that people <age 25 have a hard time accepting because everybody has to be a superman of course.

caio1982 3 days ago 0 replies      
It actually does not tell how to sleep; it only discusses common sense strategies like taking melatonin and avoiding (or not) caffeine. Kind of a letdown.
KennyCason 2 days ago 0 replies      
> The original studies seemed to say yes. But when the military put soldiers in a lab to make certain they stayed awake, performance suffered.

One minor piece of anecdotal evidence here. I have done a few 5-6 day sleep deprivation experiments in my life. I've stayed up for 3 days more times than I can count. I also used to regularly sleep every other day for long chunks of time. It's something that I could do much better when I was younger, and I try to avoid this now as I regularly get sick when I don't sleep for extended periods of time nowadays.

Firstly, performance (particularly my short term memory) always suffered. Sometimes if not active, or sitting for long periods of time I'd also get pain in my joints. Typically, when I fall asleep or start feeling tired it's because I enter a small boring, quiet homely environment (i.e. go home, or sit in a quiet room, or watch tv). My secret to staying awake was constant activity like walking around, talking to people, hydrating (water), small snacks, and walking some more, etc.

I feel that the effects of sleep deprivation hit the hardest when I'm not being stimulated physically. As such, I think dragging someone into a lab would have a harsh effect on one's performance. While I think no matter what you will suffer from performance degradation, I would love to see some contrast between performance given different environments/habits.

ashark 2 days ago 6 replies      
1) no glowing screens at all after the sun goes down.

2) no glowing screens at all after the sun goes down.

3) no glowing screens at all after the sun goes down.

4) very low candle-temperature lighting only after dark. Especially try to keep it out of your direct line of sight.

It'll work, but 1-3 are hard.

jedisct1 2 days ago 0 replies      
I don't have a computer at home any more.

Granted, the office is at walking distance, and I can go there 24/7, but not having a computer at home recently made a huge difference.

Once I go back home, I don't have the temptation of hacking something really quick, which will eventually last longer than expected, and I'll then keep thinking about it all night long.

Verdict? Better sleep. And I can get up earlier. Overall I feel better and more productive, if only because there is a better separation between work (including on OSS projects) and personal life.

rrggrr 3 days ago 7 replies      
1. Room temperature should be between 60 - 67 degrees F.

2. No electronics, games, and minimal to no blue light 30min to 1hour before sleep.

3. Do not exercise less than 3 hours before sleep. Exception: sex.

4. Coffee and other stimulants before 12pm, not after.

5. Avoid naps longer than 15 minutes day of.

6. Stretch before going to sleep, particularly if you experience minor restless legs or periodic leg movements.

7. Avoid alcohol, will reduce sleep quality.

8. Avoid stimulating TV, conversations or books before sleep.

9. Controversial: Sleep in late if you can. Adequate sleep is more important than consistent sleep rhythm. My opinion only.

aarohmankad 3 days ago 9 replies      
What are your recommendations for dealing with noisy roommates/hallmates?

There have been nights where I had to put on my ANC headphones to get some peace and quiet. (I've heard a good pair of earplugs may work?)

herbcso 2 days ago 0 replies      
Is nobody else concerned with the implications of what losing sleep does to the doctor going through residency? I've always thought that was insane. The author even admits to having observed the detrimental effects first-hand, yet never suggests that this practice should be abandoned - why is that!?

I as a patient have enough of a problem giving myself into the care of a doctor-in-training, why does s/he have to sleep-deprived on top of not being fully trained? Is this some sort of macho thing, or a "well, I went through this hazing, so you gotta do it, too" kind of thing?

Somebody please enlighten me as to what the point of this seemingly counter-productive practice is!

bobjordan 2 days ago 0 replies      
My experience with Melatonin is that about 1.5 mg per night is a game changer. I travel across the Pacific Ocean several times per year and it got to where I was a stick of dynamite temper wise for a week after each trip, just not able to cope with any irritations, due to Jetlag. On top of that, just the general stress of being an entrepreneur resulted in bad sleep. For some reason, I bought the melatonin, and I'm very glad I did it. Now, I sleep like I did when I was in elementary school. Lots of dreams and even wake up with solutions to problems that I went to bed thinking about.
charliemol 1 day ago 0 replies      
> In one study published in the journal Sleep, researchers kept people just slightly sleep deprived, allowing them only six hours to sleep each night, and watched the subjects' performance on cognitive tests plummet. The crucial finding was that throughout their time in the study, the "sixers" thought they were functioning perfectly well.

> Effective sleep habits, like many things, seem to come back to self-awareness.

One of the things I've noticed is that it's really hard to police your own sleep schedule, especially if you aren't aware of the consequences of losing a few hours of sleep. I'm working on a bot that helps you get to bed earlier, and our power users often come to us with a really clear understanding of what happens when they don't get enough sleep (e.g. "I perform way worse on my Army fitness test", "I'm not focused enough to do my side project after work") and still need to set up systems to keep themselves accountable on a daily basis.

That said, I think there's a much larger "zombie population" of the "sixers" described above that isn't getting enough sleep and simply isn't particularly aware of it. From a population health standpoint, the question then becomes: How do we get people to appreciate the effect of getting a full 7-9 hours of sleep when they don't explicitly feel the effects on a daily basis? Not only that, but how do we get them to unwind and prioritize getting a good night's sleep at the time of day when willpower is low and Netflix temptations are high?

The CEO of Netflix somewhat flippantly declared sleep their biggest competition, and I think they're crushing the competition right now.


On the bright side there are people who have used our product and seen it make a pretty big difference. The trick was getting them to start with a very unambitious bedtime goal relative to their average bedtime, and gradually make the bedtime earlier week over week until they've dismantled their bad sleep habits.
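As a rough illustration only (the function names and the 15-minute weekly step are my assumptions, not the actual bot's logic), the week-over-week approach described above could be sketched as:

```python
# Hypothetical sketch of a gradual, week-over-week bedtime adjustment.
# The 15-minute step and all names are assumptions, not the bot's logic.
def bedtime_goals(start, target, step=15):
    """start/target are bedtimes in minutes after 6 pm (which avoids the
    midnight wraparound); yields one goal per week, moving earlier."""
    goal = start
    while goal > target:
        yield goal
        goal -= step
    yield target  # finish exactly on the target, never overshoot

def clock(m):
    """Convert minutes-after-6pm back to a 24h clock string."""
    h, mm = divmod((18 * 60 + m) % (24 * 60), 60)
    return f"{h:02d}:{mm:02d}"

# e.g. moving from a 1:00 am average bedtime toward an 11:30 pm target
print([clock(g) for g in bedtime_goals(7 * 60, 5 * 60 + 30)])
# ['01:00', '00:45', '00:30', '00:15', '00:00', '23:45', '23:30']
```

The point of the small step is exactly what the comment describes: each week's goal is only mildly ambitious relative to the current habit.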

ysavir 2 days ago 0 replies      
The article mentions William Dement, one of the pioneer researchers on sleep. His book The Promise of Sleep is a great and easy read, and I absolutely recommend it for anyone looking to learn more about the subject and the history behind the study of sleep.
chippy 2 days ago 0 replies      
My "One Simple Trick" to help limit active thinking when in bed, and thus make it easier to sleep, is to write the thoughts down, pen on paper.

By thinking I mean things like being excited about an event, going over a conversation, thinking about some code, an idea, things to do tomorrow, errands etc. All things that can be literally dumped onto paper and stored. In my experience I have found that pen and paper work better than typing into a device.

Now, I still seem to wake up multiple times during the night, but it's not because my brain is excited anymore.

branchless 3 days ago 0 replies      
This is an interesting article if only for the nugget that only 1% think they function well on 4-5 hours (though they may be mistaken).

The title isn't great - it cautions against common fallacies about aids to sleeping.

OJFord 2 days ago 1 reply      
> In 2013, a 24-year-old advertising copywriter in Indonesia died after prolonged sleep deprivation, collapsing a few hours after tweeting "30 hours of working and still going strooong". She went into a coma and died the next morning.

Things like this always slightly scare me.

I have been awake consecutively for far longer, and on several occasions. But does that mean I just can - or would I really be risking death each time?

esseti 2 days ago 2 replies      
"Or, sometimes preferable, read something on paper." Now, to read on paper we need light, so the problem is not solved (although the light is not shining directly from the device into the eyes). But the real question is: if I use a Kindle with its built-in light that illuminates the screen, will it be the same as using a phone, or not?
smartbit 2 days ago 0 replies      
William Dement gave a Google Tech Talk on September 23, 2008. Dement recalls that when Randy Gardner, who stayed awake for 11 days in 1964, was asked some 40 years later "would you do this again?", he replied "No way would I do this again" [0].

Very interesting from Dement's talk is that the equilibrium daily average sleep for completely healthy young adults is 8:15 ± 50 min [1]. Most people I meet contest these results and state that they can work optimally with less than 7h25m of daily sleep.

[0] https://www.youtube.com/watch?v=8hAw1z8GdE8&t=1310

[1] https://www.youtube.com/watch?v=8hAw1z8GdE8&t=28m29s
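For what it's worth, the 7h25m threshold is exactly the lower edge of that figure if the garbled "8:15 50min" is read as a mean of 8 h 15 min with a 50-minute spread (my reading of the comment, not something stated in the talk itself):

```python
# Arithmetic check: 8h15m minus 50 minutes equals 7h25m.
mean_minutes = 8 * 60 + 15   # 495 minutes
lower_bound = mean_minutes - 50
hours, minutes = divmod(lower_bound, 60)
print(f"{hours}h{minutes:02d}m")  # 7h25m
```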

ziglef 2 days ago 1 reply      
I consider myself one of those short-sleepers. Ever since I was a kid I averaged 5-6h of sleep a day.

While the perceived differences (which can always be misleading) between sleeping 6 or 8 hours weren't noticeable, if I slept 4-5 for a week my short-term memory would suffer, and reflexes and split-second decision making (think fast-paced multiplayer shooters) would also suffer.

But what I noticed was that although the split-second decision making would come back after a good night's sleep, short-term memory would take a whole 3-4 days to come back to its finest.

Obviously this is all what I observed and not to be taken seriously, because as we know observing and understanding oneself is one of the hardest tasks out there.

Just my 2c

mythrwy 2 days ago 2 replies      
I've had great luck with these videos.



I went through a phase a few years ago where I'd fall asleep only to wake up a short time later with mind racing, then be up half the night and tired the next day. This went on for some months and was very annoying.

These videos cured that phase right away. I don't listen to them much anymore but they really worked. It wasn't just staying asleep that was cured, the quality of the sleep seemed much better. Still listen on occasion if having trouble getting in "sleep mode".

bhavyapruthi 2 days ago 0 replies      
"Dolphins are said to sleep with only half their brain at a time, keeping partially alert for predators. Many of us spend much of our lives in a similar state."

This is definitely deep.
diyseguy 1 day ago 0 replies      
For those struggling with caffeine addiction and poor sleep I strongly recommend rutaecarpine. Take one a few hours before bedtime and it deactivates the caffeine so you can sleep.
drukenemo 2 days ago 0 replies      
A recent TED talk I watched linked sleep deprivation with the speed at which one can develop Alzheimer's.


mansilladev 2 days ago 1 reply      
How not to sleep:


I read this article 8 hours ago. Now I'm in bed, staring at this screen, typing this comment at 5 AM.

chillytoes 1 day ago 0 replies      
This was a pretty weak article. Usually The Atlantic packs a powerful punch. This seemed like clickbait.
m-j-fox 2 days ago 0 replies      
Whatever you do, don't click the video at the bottom of the article unless you have a few hours to kill. Dr. James is an impossibly engaging and fun-to-watch youtuber, and there goes my Memorial Day.
Izmaki 2 days ago 0 replies      
Reading this on Monday morning already late for work and wondering when I can have a nap...
notyourloops 2 days ago 0 replies      
I had trouble with insomnia until I took up the practice of meditation. It was not my intention to solve my insomnia via meditation, but that's what happened incidentally.
TheAdamist 2 days ago 0 replies      
New to sleeping with people, I find the actual sleeping part the tricky bit. Not my expectation at all.
bojanvidanovic 2 days ago 0 replies      
One of my cousins is in that 1% of people. He sleeps 4-5 hours a night and stays hyperactive all day. I'm so jealous of him!
bewe42 2 days ago 0 replies      
I can recommend "The effortless sleep method" by S. Stephens.
GoToRO 2 days ago 0 replies      
If you have problems sleeping, do this: there will be some periods when you sleep better. Go back over the previous 3-7 days, see what you did on those days, and do more of that regularly.
michaele 2 days ago 0 replies      
Try 15-20 minutes of meditation right before you go to bed. I find it slows my mind, decreases stress and prepares my body to sleep deeply and well.
Ask HN: What are some examples of successful single-person businesses?
653 points by 1ba9115454  2 days ago   298 comments top 66
jasonkester 2 days ago 14 replies      
Careful with your terminology. "Successful" has different meanings for different people.

By my definition, for example, I run the most successful single-person business that I'm aware of. But it doesn't make millions, so it might not meet your definition at all.

My goal was to replace my day job with a software business that required as close to zero attention as possible, so that I could have time to spend on the things that actually matter to me.

The business brings in the equivalent of a nice Senior Developer salary, which is not what most people think of when they imagine a successful Startup. But it lets me work with a bunch of cool tech when I want to, and, more importantly, is automated to the point where Customer Service involves a quick 30 second - 10 minute email sweep over morning coffee. For me, that's a lot more valuable than a few more million dollars in the bank.

The cool thing about running your own business is that you get to decide on your own definition of success.

EDIT: I wrote a bit about how I got into this position, in case anybody is interested. It's not actually all that hard to do:


dhruvkar 2 days ago 2 replies      
Builtwith.com (one employee/founder and a part-time blogger) does an estimated $12M a year [1] assuming a 'few thousand' = 2000 paying customers.

"the Basic at $299 per month for customers that want lists of sites mainly for the purpose of lead generation; Pro at $495 per month, suited more for users that work in an industry using a lot of A/B testing and comparison-type data; and Enterprise at $995 per month, which covers all bases and allows sales teams with multiple people to all use the platform at once. Brewer says that in terms of paying users on the platform there is a few thousand and the split is about 40 percent Basic, 40 percent Pro and 20 percent Enterprise."

Similar thread a while ago [2]


2: https://news.ycombinator.com/item?id=12065355

Edit: specificity and formatting
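The ~$12M figure checks out as back-of-the-envelope arithmetic. A quick sketch (the 2000-customer count and the exact 40/40/20 split are the assumptions quoted above, not confirmed numbers from BuiltWith):

```python
# Revenue estimate from the assumed tier split quoted above.
customers = 2000  # "a few thousand", assumed = 2000
tiers = {          # share of customers, monthly price in USD
    "Basic": (0.40, 299),
    "Pro": (0.40, 495),
    "Enterprise": (0.20, 995),
}
monthly = sum(customers * share * price for share, price in tiers.values())
annual = monthly * 12
print(f"${annual / 1e6:.1f}M per year")  # $12.4M per year
```

So the estimate is closer to $12.4M, which rounds down to the "$12M a year" in the comment.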

jimminy 2 days ago 2 replies      
At some point, if you're really successful, scale will require you to hire at least a few people. But two examples that I can think of are Markus Frind (Plenty of Fish) and Markus Persson (Minecraft).


Markus Frind is probably the biggest. He spent 5 years (2003-2008) working on Plenty of Fish, and at that point it was bringing in about $5M/yr and had 3 employees.

When the site sold in 2015 for $575 million it had 70 employees, but he still owned 100% of the company.


Markus Persson would be another possible option, for the first $10-20M that Minecraft brought in he was the only person (aside from a contracted musician). And then for a while after that, it was him and his friend who was hired to manage the business side so he could focus on the programming work.

wriggler 2 days ago 2 replies      
I built and run StoreSlider[1]. It made ~$700,000 in 2016, mainly in affiliate revenue from eBay. Costs are essentially hosting (between two and five $10 Linodes, depending on load).

Took me some effort to build, but it's on autopilot now.

[1] https://www.storeslider.com

russellallen 2 days ago 3 replies      
Your problem will be definitional. The Rock earned ~ $65mm last year. Is he a 'one man company'? I guarantee he's billing through a services entity...

1: https://www.forbes.com/sites/natalierobehmed/2016/08/25/the-...

numbsafari 2 days ago 3 replies      
Isn't Tarsnap[1], by Colin Percival a great example of this? I'm surprised it wasn't the first thing mentioned since he's reasonably active on HN.

1: https://www.tarsnap.com/about.html

xchaotic 2 days ago 2 replies      
How do you define successful single-person? I've been running a one-person consultancy for 12 years now. I had to retrain quite a bit over the years, and sometimes it was so busy that I outsourced pieces of work. It's been good enough that I have a house with no mortgage attached to it, all while spending almost enough time with my family - much more recently. This is what I wanted, and I consider that a success in maintaining a work/life balance, working from home and having a good life in general. It's not quite 'fu' money yet, as I still have to work for a living, but I'm working towards that goal. I know a few good people that agree with this point of view - the Basecamp/37signals folks etc.
chrischen 2 days ago 3 replies      
I built and run Instapainting.com by myself. As of the date of this comment it is still only one employee (me). https://www.indiehackers.com/businesses/instapainting

Things like customer support are outsourced to other startups, and of course the artists on the platform don't work for me, though they could if the company were structured differently (it's structured as a marketplace).

danieltillett 2 days ago 1 reply      
Now that I am no longer a single-employee business (again), I can admit that I ran Mark II of my company on my own, doing everything without outsourcing (sales, customer support, development, sysops, UI/UX, website design, copywriting, manuals, SEO, advertising, accounting, etc.), making much more than seven figures in profit for quite a few years.

It probably wasn't the wisest idea to stay solo for so long, but the freedom of not having employees made me very reluctant to hire anyone again. The only reason I chose to hire is that the business' growth forced me to make the decision to either turn away customers or hire staff. The people I have are great, but I do miss the days of doing everything myself without having to explain why something is important.

joelrunyon 2 days ago 3 replies      

Bootstrapped social networking site doing multiple 5-figures/month.

sudhirj 2 days ago 1 reply      
There's Pinboard; Maciej still runs it solo, I think.
LeonidBugaev 2 days ago 1 reply      
Sidekiq by Mike Perham http://sidekiq.org/

Over $1M in annual revenue: https://www.indiehackers.com/businesses/sidekiq

Changu 2 days ago 4 replies      
The Flappy Bird creator said he made $50k per day from in-app ads. But he pulled the game after a short while, saying he felt guilty for making people play all day. Would love to know the whole story behind this.
hyperpallium 2 days ago 1 reply      
Problem is, "big for one person" is not big enough to be news, relative to all the companies out there. Once they get big enough for many people to hear about them, they have to grow to handle it, e.g. Notch (Minecraft).

Secondly, the best way to make solid, reliable money is to have a niche, without competition. So, you keep your mouth shut.

You'll most likely notice them in small, industry-oriented niches. Or... after they grow larger than one person.

To give an answer: https://balsamiq.com/products/mockups/

mylh 2 days ago 2 replies      
We (two Python developers) started a SaaS SEO checker service [1] in February 2017 (it took 4 months to develop from 0) and already have paying customers on our business plan. I completely agree with the definition of a successful business as the ability to do what you want when you want. I already have a couple of other websites generating revenue from advertising, and all this allowed me to quit my day job 2 years ago. So there are definitely a lot of examples of successful single- (or two-) person businesses out there.

[1] https://seocharger.com

majani 2 days ago 2 replies      
According to porn industry insiders, xvideos is run by a married couple. They are very secretive, but they definitely do millions in revenue annually.
avichalp 2 days ago 0 replies      
We can find a few of them here: https://www.indiehackers.com/businesses
galfarragem 2 days ago 0 replies      
Sublime Text was for a long time a single-person business.
speedyapoc 2 days ago 2 replies      
Not entirely single person, but I run Musi [1] with one partner. We have monthly revenues in the mid six figures with 2-3k a month in expenses.

[1] https://feelthemusi.com

webstartupper 2 days ago 0 replies      
I'm surprised no one has mentioned improvely.com by Dan Grossman.

I think it makes around $40K to $50K per month. Over the last few years, I've seen it grow from around $10K to $50K. That slow steady SaaS growth is pretty inspiring.

flgb 2 days ago 0 replies      
Daring Fireball by John Gruber (https://daringfireball.net).
xiaoma 2 days ago 1 reply      
If Satoshi Nakamoto is still alive and still has access to the coins he mined but never sold, they're already worth billions and the work has changed the world.
puranjay 2 days ago 1 reply      
I know some affiliate marketers who make $2M+ without any employees.

Apparently, ranking well for certain keywords (mostly web hosting and website builders) can be very, very lucrative.

sharkhacks 2 days ago 0 replies      
Here are a couple of awesome examples: affiliate marketer Pat Flynn (https://www.smartpassiveincome.com/) - Pat is awesome; he actually shares his monthly income and expense statements. Started solo and has now hired a bunch of people.

Nathan Barry (http://nathanbarry.com/) the guy who started convertKit https://convertkit.com/

rachekalmir 2 days ago 1 reply      

Guy quit his job a year or two ago to develop this full-time and seems to be doing pretty well for himself. I use the client all the time as a developer.

eps 2 days ago 0 replies      
If I recall correctly, IMDB used to be a one-man show for a long time, up to and even after getting acquired by Amazon.
anovikov 2 days ago 1 reply      
I know a guy who does arbitrage of porn traffic and he makes $2M a month, already saved up $20M.
neals 2 days ago 0 replies      
Google > quora > 10 year old article > https://www.inc.com/magazine/20080901/the-other-number-ones....

But they have staff.

Large single-person startups? https://smallbiztrends.com/2014/07/successful-one-person-sta...

dqdo 2 days ago 1 reply      
The most successful one-man business I know of is not in software. It's a mediator: he charges $18,000 to $20,000 per day and has been booked solid for the last 20 years.


elvirs 2 days ago 1 reply      
My business :) $1.5M annual revenue, $10-15K monthly profit, built from zero. Very proud of it.
planetmaker 2 days ago 0 replies      
Learning by example may work, and analysing many successful examples may also yield some insight. But make sure to get the full picture: look also at those who failed. They might have tried the very same methods to a large degree. Don't fall for survivorship bias :) The factors that are truly important might not be the ones that seem obvious.
pipio21 2 days ago 1 reply      
Please define success first. You should think about your own values in order to know what success means for you.

I personally know people that made millions from creating software products and companies. But I know nobody that did it (or does it) alone.

In fact, I "made millions" myself, whatever that means, starting with software (a million dollars is worth far less than 10 years ago because of inflation, so it is not that much, especially if you live in an expensive place), but it took a hell of a lot of work and I found colleagues along the way.

IMHO you should never focus on money. Money is just a tool for exchanging value. You should focus on creating value, even if at first it gives you little money. Because of the innovator's dilemma, most things that create real value give you very little money at first. (Do you know how much money the Apple Store made in its first year?)

In my opinion your priority should be finding a social circle that will help and understand you. If you have a business, that means entrepreneurs. They will understand and support you like no one else. HN is virtual; you need real people around you.

For me, success is the ability to be free in my life: to make my own decisions in my business, to write on HN or go climb a mountain while other people are working, to travel to a new country, or to invest only in businesses that are ethical for me.

If earning more money means not being free, I will decline the offer; in fact, I decline offers every single day. Why should I do it? To become an 80-year-old billionaire? To have everybody know me, so I have to live isolated from paparazzi or criminals who want to kidnap my children because they know I am rich?

But your values could be different. Your priorities could be to show off, to exert power over other people, to meet interesting people, to have extreme experiences, or to send your children to elite schools - whatever success is for you.

wessorh 2 days ago 0 replies      
Domainers: I've known many one-person companies that made tons off parking domains. It seems like this model has run its course.

Farming has done well for my wife; she runs her business and feeds a bunch of folks. Find her at the Oakland Grand Lake market on Saturday and the Marin Civic Center on Sunday. She sells plants :)

BanzaiTokyo 2 days ago 0 replies      
I suppose there is very little public information about such companies because they have no obligation to share it.
tjpnz 2 days ago 1 reply      
This guy uses an AI to write books for Amazon. Note that the article is from 2012.


Mz 1 day ago 0 replies      
There are additional resources listed here:


coderholic 2 days ago 2 replies      
https://ipinfo.io - single person business that does over 250 million API requests a day, and generates good revenue.
starikovs 2 days ago 0 replies      
As for me, I develop https://thestartupway.website/ by myself, but I really can't tell you if it's a successful business. I have a job as a software engineer, and when my friends ask me to make a landing page for them I just use my tool and charge them a small fee. It's just for fun for me, and it's great that it helps somebody with their needs. So, for me, it's a little success )
DaiPlusPlus 2 days ago 1 reply      
I don't think any ever remain a one-person company in practice - even for my own projects I've always needed to outsource or farm out tasks that aren't a valuable use of my time, e.g. website design or handling customer support. I'm sure there are plenty of de jure sole proprietorships - but I doubt any of them truly work alone.
lgas 2 days ago 1 reply      
Why discount outsourcing? The book "The E-Myth" argues that you absolutely should outsource everything but your core competency. (And "The 4-hour work week" would argue you should outsource that too)

Does outsourcing somehow diminish success?

cyrusmg 2 days ago 0 replies      
Nomadlist.com from levels.io
magsafe 2 days ago 0 replies      
https://www.bottomlinehq.com - Single founder/employee, 6-digit revenue, no outside funding.
haidrali 2 days ago 1 reply      
Salvatore Sanfilippo: Sole creator and maintainer of Redis

Mike Perham: Sole developer of Sidekiq (background task processing with Redis) and Inspector (application infrastructure monitoring, reimagined)

plantain 2 days ago 1 reply      
Plenty of Fish? Exited for billions while still a solo operator
wordpressdev 2 days ago 1 reply      
I made millions from Adsense, not in USD though :)
mingabunga 2 days ago 0 replies      
Top affiliate marketers in the health, wealth, personal development and dating niches make millions per year, some tens of millions.
sleeplesss 2 days ago 1 reply      
I've been selling Twitter and Instagram followers for 5 years. I've made $15,000 on average (before tax).
avemuri 2 days ago 0 replies      
Bitcoin? That is, if Satoshi is a single person
epynonymous 2 days ago 0 replies      
Plenty of Fish comes to mind; not sure if it's around anymore, but this was a free dating platform.
SirLJ 2 days ago 0 replies      
Stock Trading: no customers, no employees and no investors, check my profile for details on how to start. Good luck!
gumby 2 days ago 0 replies      
Craigslist is pretty close to a single person operation and it's been pretty successful.

I know it's an outlier.

cbar_tx 2 days ago 0 replies      
please don't. we have enough people monetizing junk on the internet. you're trying to skip the most important step.
kough 2 days ago 0 replies      
Well, they all have the same number of employees.
wand3r 2 days ago 1 reply      
Tinder is a highly successful single person business vs. the Ashley Madison strategy of focusing on couples.
The US has forgotten how to do infrastructure bloomberg.com
381 points by Typhon  10 hours ago   438 comments top 53
imgabe 9 hours ago 21 replies      
I don't know the answers, but as someone who works in the industry (on the design side), I think a big unmentioned factor is probably liability and the prospect of litigation.

If you look at old blueprints for projects in the past, they are a LOT less detailed. They had to be, because it was physically more difficult to produce them since they had to be drawn by hand. A lot was left to the contractor to figure out in the field.

Now, drawings are more detailed and contractors are incredibly reluctant to make even the smallest decisions on their own. They don't want to assume the liability and risk getting sued if they do something wrong, so they push that off on the engineers and architects.

This means every time there's a question, it has to be submitted through a formal process, tracked, answered, documented. And if the change has any cost impacts, the contractor tacks on a hefty premium because they know they can get away with it (and they probably underbid in the first place to win the job). Delays pile up, every clarification becomes an expensive change order, construction workers twiddle their thumbs while designers get around to addressing questions and this all costs money and time.

michaelt 9 hours ago 8 replies      

> That suggests that U.S. costs are high due to general inefficiency [...] Americans have simply ponied up more and more cash over the years while ignoring the fact that they were getting less and less for their money.
There's a general effect in political systems: if a law takes $20 from 1,000,000 people and gives $100,000 to 200 people, the people who lose money won't have enough incentive to put up a big fight, but the people who receive money will have more than enough motivation.

For example, if a big irrigation project will force taxpayers to subsidise corporate farms, the corporations have a big incentive to spend on ads and campaign contributions. Or if you have to give $60 to a private company for tax filing software, they have a big incentive to lobby and make campaign contributions to keep the tax system complicated.

I'm sure construction projects are subject to the same pro-waste incentives.

I'm not sure what the solution to this is - campaign finance reform, perhaps?
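The $20/$100,000 example above is deliberately symmetric: both sides of the transfer sum to the same $20M, so the asymmetry is purely in the per-person stakes. A trivial check (the numbers are the comment's hypothetical, not real data):

```python
# Concentrated benefits vs. diffuse costs: same total, very different
# per-person stakes (numbers are the hypothetical from the comment above).
losers, loss_each = 1_000_000, 20
winners, gain_each = 200, 100_000
total = losers * loss_each
assert total == winners * gain_each  # both sides of the transfer: $20M
print(f"total ${total:,}; stake per loser ${loss_each}, per winner ${gain_each:,}")
```

With a $20 stake per loser and a $100,000 stake per winner, it's rational for the winners to spend heavily on lobbying and for the losers to do nothing.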

noonespecial 9 hours ago 5 replies      
I thought about this when I visited the Hoover Dam.

The audacity of the thing and sheer impossibility of doing anything remotely like it in today's America makes it seem like a relic from an ancient civilization.

It felt like visiting the pyramids in Egypt.

truxus 8 hours ago 1 reply      
I'm also a design engineer; I specialize in water and wastewater works. My clients are all municipalities, with tight budgets, and politics are a factor. Engineering productivity has climbed thanks to computers, but construction productivity continues to decline. In my experience on small jobs this has a lot to do with safety and regulations. It takes a team of 2-3 to enter confined spaces (manholes) for momentary inspections or maintenance, and it takes extra workers to set up traffic zones so travelers are less of a danger to the workers. Time is taken to ensure archeological, agricultural, and culturally sensitive areas are not disturbed. Minority- and women-owned businesses are given contractual preference, whether they are the most qualified or not. It takes a special (read: expensive) team several weeks to document trivial wetland areas (most people call them roadside ditches), and another person weeks of labor to explain how impacts will be minimized. The government also sets standard rates for construction labor.

But these are things we as a society have deemed important. It's not acceptable for lives to be lost. It's not acceptable for construction workers to accept low wages. It's not acceptable to recklessly degrade our environmental resources, and it's important to have diversity in this industry.

I don't know if it's true in other countries, but the USA seems to vacillate between priorities depending on the administration. I am young, so my experience is short. Bush saw a real estate bubble, Obama saw an insurance bubble, and Trump et al. aim for a construction boom. I would add that in New York, my home state - a Democratic state - there is a large infrastructure program starting, so it's not just Republicans.

erentz 9 hours ago 1 reply      
From my perspective, infrastructure is politically driven in the US. Projects are debated for decades; over this time changes are made to placate some groups, buy the support of other groups, and create jobs for some politician, until the project balloons into something that costs many billions of dollars. Then it is built as a one-off mega project. At this point it should be killed, but this is seen as the only way to get stuff done.

Take CAHSR. Instead of agreeing on a goal and deciding we should have a CA-wide rail system - then establishing a division of Caltrans funded to incrementally build/acquire/run a network through ROW acquisition, running DMUs on the routes in the meantime, and making it compatible with existing systems - it instead becomes one giant $60B acquisition, master-planned over a multi-decade time frame, built almost entirely as a stand-alone system whose benefits can't really be enjoyed until decades in the future.

saosebastiao 8 hours ago 1 reply      
I'm pretty sure we're so far invested in being crappy at government that we can't feasibly turn back without inflicting a lot of pain.

Admittedly anecdotal, but after having lived near two major DOE national laboratories (where >40% of my neighbors worked at the lab), I wouldn't be surprised if 60-80% of the workers at these labs could be eliminated under a combination of audit-based reform, regulatory reform, and management changes, without any change in output or results. I've listened to descriptions of what people do at these labs, and it blows me away how unproductive they are compared to the private sector. I've known people whose entire workweeks could be consolidated into 3-5 hours of work, and they're willing to admit it. In fact, those who would try to change things from within have told me they would feel vulnerable to retaliation if they went through with it.

And that's before we get into the shitshow that is federal contract work. We've turned federal contract work into a goldmine for whoever has enough lawyers to win a bid. And we don't even know how many people are employed doing that contract work [0]!

At this point, meaningful reform means taking tens of millions of people and forcing productivity on them to the point where most of them are unnecessary. We could probably double the unemployment rate with the right reforms. That's why it won't happen.

[0] http://www.govexec.com/contracting/2015/03/even-cbo-stumped-...

tannerc 8 hours ago 2 replies      
Related to what we might label "modern" infrastructure, I just got back from a trip visiting the outskirts of Illinois.

The towns there are small, but full of people hungry for opportunities and good work.

Yet everywhere you go there is no work to be done. Businesses, even big box stores like Walmart or Target, are closed down. Homes lie vacant despite their reasonably cheap (at least for someone living in Silicon Valley) $20,000 price tags. Even churches in the area have to close their doors but leave their steeples standing, unable to draw in an audience or, most importantly, any money to maintain things.

This isn't Detroit I'm talking about, it's a fairly typical suburb of a larger metropolitan area in the midwest.

The answer for many of these people has largely and loudly been: "Bring back jobs from overseas! Stop outsourcing work to China!"

But of course that's not a valid answer, since the problem is that the jobs these towns once knew now belong to machines which can work tens of times harder and longer at a fraction of the cost of their former human counterparts. Yes, some of the work has gone overseas, but much of it has just become "modernized" by technology.

And here's the thing: the infrastructure for things as simple as Internet access in these parts of the US just isn't there.

So nobody goes to school to learn programming or design or how to be a modern entrepreneur because they (the individuals and schools) just don't have any connection to those parts of the landscape. And when they do, their model is wildly out of date.

One of the Universities I visited and, later, a high school had each just opened a computer lab for students in which the goal wasn't to help students learn programming, or design, or anything like that, but merely how to type.

This of course comes on top of the decrepit roads, buildings, etc. The state is wildly out of money because it can't put people to work, and the people can't work because the infrastructure just isn't there. It's depressing to see, really. I want to know how we can improve this, and what someone living on the other side of the country might do to help.

beat 9 hours ago 2 replies      
This reflects in health care as well. American health care costs about twice as much as it does in every other first-world nation. Those systems run the gamut from fully socialist to mostly privatized, but they all share a common feature - they provide universal coverage at half the cost of the American system. That says there's something uniquely broken in our model.

Infrastructure? Same thing. There's something distinctly American in how slow and expensive it is. These are systemic issues, not some single-cause thing that [liberals|conservatives] can finger-point to a partisan villain.

bkjelden 9 hours ago 7 replies      
I have been wondering lately if we have become so dependent on certain sectors of the US economy (housing, infrastructure, healthcare, education) that risk aversion is suffocating innovation.

No one wants to send their kid to an unproven, experimental university. No city wants to beta test a new style of road building. No patient wants to be the first to try a new treatment. We are so dependent on these things that the cost of failure is astronomical. Because no one is willing to try new methods of doing things, costs never go down. There are marginal improvements, for sure, but no disruptive changes.

It feels like a weird manifestation of NIMBYism. We all want innovation, but we want to test it on someone else first.

stupidcar 9 hours ago 2 replies      
Vague talk of inefficiency in construction isn't really very informative. Has anyone ever done a study where they take two very similar projects (building a medium-sized office building, for example) in the US and China, and follow them both from beginning to end, auditing exactly how much money and time is spent on each stage? It seems like this would provide a useful basis for comparison.
bogomipz 9 hours ago 4 replies      
I don't know if "forgotten" as used in the title is the correct word so much as just "out of practice." Look at this list of infrastructure projects in China in the last decade or so:


It's hard not to be impressed by that list. And China is undertaking projects of this scale abroad as well, from Latin America to Africa.

It should be no surprise that you get really good at something the more you do it. What was the last project the US Federal Government undertook on a scale similar to one of these? "The Big Dig"[1]? Notable for being the most expensive highway project in US history, and yet it served only Boston.

The US seems to spend interminable months squabbling over whether or not it's un-American to use imported steel to replace parts of its crumbling infrastructure (see the Bay Bridge, Tappan Zee projects, etc.) while the Chinese seem to spend that time actually executing the project.

[1] https://en.wikipedia.org/wiki/Big_Dig

vmarsy 9 hours ago 3 replies      
> Yet France's trains cost much less.

Decades of expertise might be a reason here: France has the highest ratio of high-speed rail miles to country area, and the non-high-speed network is also huge. It's also possible that the monopoly of having only one state-owned railroad company (SNCF) let that company negotiate prices much more aggressively than if there were many companies. I don't think it was stupid at the time to have such a state-owned monopoly build this key infrastructure.

Also, I'd think that with that big a network, some things start to become less costly because of economies of scale. Last April SNCF bought 30 trains from Alstom for 250m; with its current infrastructure I doubt the US needs that many trains. If one day, like France, it has 450 high-speed trains, maybe it'll cost less to have more built.

Japan and France are dense countries; the area of France is roughly equal to the area of Texas. It's hard to imagine 450 high-speed trains there, where they're struggling to even get one high-speed rail line between Houston and Dallas.

bischofs 9 hours ago 1 reply      
I recently ran into the MDOT project manager for a rebuild/expansion of a section of I-75. With some googling I found that she was working on the project since 2001 - construction started in 2016 and is not scheduled to be completed until 2030.

So she will have worked on the project for 30 years, her entire career... something is pretty broken about the government structures around these projects.

Apreche 9 hours ago 7 replies      
It costs extra because CORRUPTION. They're skimming off the top at every level, and that's why they can't let anyone investigate.
hx87 8 hours ago 0 replies      
I suspect that part of the problem is that expertise doesn't reside in monolithic corporations and governments anymore, but in specialist consultant companies that have relatively high fixed costs and thus must charge more to make a profit. In addition I suspect that there are a lot of lookalike consultant companies that charge a lot of money in exchange for kickbacks but don't provide any value.
tmh79 9 hours ago 2 replies      
It seems to me like the biggest things that have changed are the safety regulations on the construction process itself and the finished product. The golden gate bridge was built in a few years, but with a few worker deaths. Building a new bridge like the golden gate today would likely be illegal, and take ~25 years.
mnm1 5 hours ago 0 replies      
Is inefficiency inefficiency when it's done on purpose? That's what the article misses. The salaries of workers might not be higher than in other countries, but I bet what we pay contractors vs. what they produce is beyond astronomical. It certainly is when government contracts out software. With modern computing, I don't see why we don't keep track of costs down to the penny. A new hammer is bought to continue construction? The receipt is scanned and everything is tracked. You don't scan it, you don't get reimbursed. All actual costs are known and the profit margin can be calculated from there. It might not help that much until more data is collected and future projects can be estimated better using that data, but it sure as hell beats the current model of guessing the lowest estimate to get the contract, getting the contract, and then spending 10-1000x the estimate.
myrandomcomment 5 hours ago 0 replies      
So I worked for the family construction firm for a few years. The amount of ass-covering, paperwork, and complexity around what should be a simple thing was amazing. The amount of litigation involved when anything was not as expected was amazing, versus just trying to sort it out between those involved. My father built a $M business on just doing change orders & claims. There is your problem.
chadgeidel 9 hours ago 2 replies      
My brother and father (concrete construction) would probably lay the blame on onerous regulation. They regularly complain about nit-picky engineers with their insistence on (in their words) unreasonable slavish devotion to engineering specifications. I'm not on the jobsite so I don't actually know.

I wonder if our construction regulations are much more stringent than those of other western nations.

TetOn 7 hours ago 1 reply      
This article from 2011 (http://washingtonmonthly.com/magazine/marchapril-2011/more-b...) makes a compelling case that, after decades of cutting government jobs and creeping privatization, we simply don't have enough bureaucrats to organize and run large, complex projects anymore. These other countries having success with infra projects still do. Thus the issue.
DisposableMike 9 hours ago 0 replies      
We haven't "forgotten" how to do anything. We just let ourselves (collectively) become OK with projects that are extraordinarily long, slow, and expensive, and the contractors aren't going to be the ones to force positive change in the system.
esfandia 4 hours ago 0 replies      
I wouldn't be surprised if infrastructure costs and delays were going up everywhere though, including France. Would be nice to have actual data.

I think it comes down to new non-functional requirements: safety, environmental concerns, accessibility, liability protection, etc. With these requirements, Paris or London or New York would never have had a subway system in the late 1800s or early 1900s. By now, though, all these systems have been maintained and modified to comply with the new non-functional requirements over time.

So if you added up the accumulated cost of running a subway system today, including all the maintenance costs over time, it might be the case that building the same thing from scratch would be cheaper, but unfortunately without the benefit of having had passengers enjoy it (and pay for it) for over 100 years (thus partially subsidizing the maintenance costs).

So you could even consider something as rigid as a subway system to have been built in an "agile" way, with a quick and literally dirty MVP shipped early, and additional non-functional requirements added over the course of decades. It's just that those requirements weren't known or wanted initially.

Now, would it be OK to do the same? Have a subway system in a city that needs it, but without bathrooms in stations, with no pollution requirements initially, not accessible to the disabled (name your other requirements not necessary for an MVP), but with the benefit that at least it exists now; it can be subsidized by passengers willing to use it, so that over time those other requirements are met. Better to have them later than never.

tomohawk 2 hours ago 0 replies      
In my state, a contractor was paid to build a major highway using concrete pavement. If done properly, such a road should last 40 years or more, and without the thumpity, thumpity sound you get when the concrete is not put down properly.

The contractor was not experienced and totally muffed the job.

Rather than penalizing the contractor, they paid the contractor to build the road again, plopping asphalt on top of the concrete after grinding it and redoing the joints that they screwed up. The asphalt hasn't lasted that long, so it will need to be redone again soon.

So, the contractor was inept, but so was the government overseer.

rjohnk 8 hours ago 0 replies      
After the collapse of the 35W bridge, the new bridge that took its place was built ahead of schedule and on budget. As with many things in history, disaster seems to hone our ability to overcome obstacles and get things done.


OliverJones 6 hours ago 0 replies      
I hate to be snarky, but there's another factor: laziness.

When there's a replacement bridge under construction around here, business at the local dunkin donuts goes up ALL DAY, not just at lunch and shift change. Everybody's there: laborers, supervisors, delivery drivers, managers, engineers, owners' reps from the state DOT and the towns involved.

And, in Massachusetts we have a peculiar extra cost: The police force has a monopoly on doing the traffic safety work that flag people do everywhere else. They have the right to arrest people who work in or near roads if they start working without a paid police detail on the site, or keep working after the police detail leaves. Slows work down; gives people a reason to wait around after arriving on the job.

hkarthik 9 hours ago 2 replies      
The fastest that I've seen infrastructure put up in the US is where there are toll roads being built.

I suspect this is because there is a strong incentive to get a toll road up quickly and start generating revenue to pay back the initial investment.

Residents generally hate tolls, but it's a good motivator for infrastructure to go up quickly.

I never understood why similar incentives wouldn't kick in for mass transit projects.

defined 8 hours ago 0 replies      
> There is reason to suspect that high U.S. costs are part of a deeper problem.

Maybe this is mere cynicism, but if the workers and materials don't cost that much more than in France, my immediate reaction is "excessive profiteering", or, to put it another way, "because we can, and nobody is stopping us".

The same applies to our unreasonably high education costs, and to healthcare (although that's a much more complex issue, at its root it is too many fingers in the pie).

edraferi 5 hours ago 0 replies      
> U.S. costs are high due to general inefficiency -- inefficient project management, an inefficient government contracting process, and inefficient regulation. It suggests that construction, like health care or asset management or education, is an area where Americans have simply ponied up more and more cash over the years while ignoring the fact that they were getting less and less for their money. To fix the problems choking U.S. construction, reformers are going to have to go through the system and rip out the inefficiencies root and branch.

That's a tall order. It'd be nice to see the article suggest a way to accomplish that.

hawaiianed 6 hours ago 0 replies      
There is some poor analysis in the article with regard to the Davis-Bacon Act; they approached it and then missed what it was telling them.

Here in Hawaii, the Laborer prevailing wage is $50+ an hour; toss on our mandatory insurances, worker's comp, and taxes, and it costs almost $80 an hour to pay that worker, before profit and other overhead.

Laborer rates vary by locality, but no construction worker who gets more than 3 months of Davis-Bacon pay is averaging $35,000 a year; it's going to be closer to $50,000.

Whenever we do any State, or Federal work there is a huge stack of submissions that have to be made, I kill trees like it was my job, shoveling paperwork out the door for these projects.

barretts 9 hours ago 0 replies      
One important factor unmentioned so far: Federalism. The US has very strong state and municipal governments compared to France and Japan. In many cases, any one of the three can veto a project. A new commuter rail line under the Hudson River connecting NYC and NJ was begun in 2009 and NJ Gov Chris Christie killed it in 2010 (citing cost overruns), after $600m had already been spent.
exmicrosoldier 9 hours ago 0 replies      
Too many value extracting rentier owners and valueless executives skimming off the top of projects done for the common good.
esmi 8 hours ago 0 replies      
By U.S. I think they mean the Federal Government. It's possible to build very large projects, presumably to code and under proper safety standards, in America if one desires. Here are two examples which come immediately to mind. I'm sure there are many more.


I think the Federal Government looks at infrastructure as jobs programs, and this is why they are drawn out. The job is the end, not the thing they're actually building.

kazinator 6 hours ago 1 reply      
Why all the guesswork about why it costs so much?

That sort of thing should be readily explained by accounting: follow the money.

Public infrastructure should be financially transparent.

mrjaeger 5 hours ago 0 replies      
Is there any way to do a bottom-up analysis of infrastructure projects like this and see where the differences come from? I imagine the costs on projects like these are pretty finely broken down, although I'm not sure how one would go about actually getting the numbers to analyze.
supernumerary 6 hours ago 0 replies      
In Detroit, lawmakers recently passed legislation making it hard to sue the city for crumbling sidewalks.


macspoofing 6 hours ago 0 replies      
This article was frustrating to read because it is clear the author did not do one iota of research. He identified a problem, the American construction sector being inefficient and not cost-effective, and did nothing to even attempt to answer why, yet still managed to stretch that one thought into a 1000-word essay.
rrggrr 9 hours ago 1 reply      
Durability. It's one thing to quickly and cheaply add infrastructure, and I've traveled the world to places where this is accomplished at a dizzying pace. It's another thing altogether to engineer and build projects that can endure over time and in the face of natural and unnatural disaster.

Yes, the US is overdue for infrastructure investment. Yes, its costly and unproductive Federal and State entitlement obligations compete for these dollars. But the expense of building infrastructure in the US is justified by the longevity of its engineering and build quality. This is not the case globally.

babesh 7 hours ago 0 replies      
It's a symptom of captured large government in the US.

The smaller, 25k-person town I live in doesn't have nearly the same issues. We've managed to put up new school buildings over the last couple of years with not too much fuss and on budget. On the other hand, the town is relatively well off and there is seemingly good community participation.

anubisresources 9 hours ago 0 replies      
I posted this here the other day, I'm not a civil engineer so I'm sure there are things I'm missing but the idea seems to have some merit: https://www.jefftk.com/p/replace-infrastructure-wholesale

It doesn't address any of the numerous political issues with American infrastructure, but it could be good for repair and maintenance

chromaton 9 hours ago 1 reply      
I-85 in Atlanta was repaired in just over 6 weeks. So things can get done quickly if the incentives are right.
Mz 7 hours ago 0 replies      
They are comparing it to health care, which suffers from a serious excess of bureaucracy. So I will suggest that too much bureaucratic red tape is likely a factor.
mcrad 8 hours ago 0 replies      
As the value of a law degree has run way ahead of more hands-on professions, this is the result. Constitutional amendment: JDs banned from public office.
howard941 4 hours ago 1 reply      
Is excessive profit taking eating up the productivity?
cletus 9 hours ago 1 reply      
The US hasn't forgotten. Nor is this unique to the US. It seems to be a widespread issue in the developed world.

The problem is that labour is simply too expensive now. And, to a lesser extent, so is real estate.

In the 2000s Australia experienced an unprecedented resources boom fuelled by China's growth. This especially impacted Western Australia and Queensland. WA in particular is rich in iron ore, oil and gas and other resources.

In Perth in 2000 you could buy a 70s 3 bedroom house within a few miles of the city center for <A$100k. By 2005 it was $350k. Capital projects for the resources industries were in the works amounting to over A$100B. All of these have a huge construction component that soaked up the construction supply.

In the 90s you could build a house in as little as 3 months. Since the early-to-mid 2000s that time frame is closer to a year, and it costs 5 times as much.

So homes became much more expensive. Of course building commercial and industrial property also got more expensive. Property costs are a significant input cost into any business that operates there. Increased residential costs soak up disposable income and lead to wages growth. Increased wage costs makes things more expensive and the cycle continues.

So Perth transformed from a city that was very affordable with a good standard of living in the 1990s to one of haves and have-nots. The haves are those in the construction and resources industries; they were making crazy money. Everyone else was pretty much a loser.

Arguably this is dutch disease [1].

So now the city wants to do things like build train lines. Well, labour is stupidly expensive because the cost of living is so high, and buying up real estate to put said train lines on is also super expensive.

A lot of people don't seem to feel this because they're incumbents (ie they bought their houses 15+ years ago). Others have immigrated so have foreign money bypassing the local economy.

So look at the Second Avenue Subway as one example. $17B for a few miles of tunnels? Really? Well that's the cost of labour in the US and real estate in Manhattan. Since it's underground I don't imagine a large percentage of the total cost is real estate either.

So it seems like when a lot of this infrastructure was built, the relative cost of labour was much lower. The standard of living was also much lower, apart from the 1950s and 1960s, which can be viewed as a transitional anomaly more than a normal equilibrium.

[1] https://en.wikipedia.org/wiki/Dutch_disease

primeblue 4 hours ago 0 replies      
Too much compliance and suing by lawyers/everyone.
rhino369 9 hours ago 1 reply      
Do these construction cost inefficiencies also appear in projects that are privately funded? Or is it just publicly funded projects?
cmurf 6 hours ago 0 replies      
That's because income taxes on the wealthy are way too low, which incentivizes stuffing income into real estate, stocks, and bonds.

Earn a million dollars of either earned or unearned income, and you pay (simplistically) either 39.6% on earned or 23.6% on unearned income. That's too f'n goddamn cheap. It encourages people to pay that tax and put it into one of the above, rather than take the risk of starting or growing a business and taking a tax deduction for business expenses.

When the silent generation was in charge of things, income tax was never less than 75% for the top tax bracket, for over 40 years. And during that time there was massive private and public infrastructure being built, and we didn't have a $20 trillion national debt.

JustSomeNobody 8 hours ago 0 replies      
Because the developers have learned how to play politics for more money?
jk2323 8 hours ago 2 replies      
An African government official visits Europe. There, a European government official invites him to his luxury home. The African is shocked.

He asks: "How can you afford such a house if you are just a government official?"

The European says: "Look out of the window. Can you see the bridge?"

The African: "Yes"

The European: "See, the government paid for two train tracks but the bridge has only one."

The African says: "I understand".

2 years later the European official visits the African in Africa. He invites him into his house and his house is a luxury palace.

Now the European is shocked. He asks: "How can you afford such a palace? Recently you were so impressed with my house, and now you live in a palace?"

The African says: "Look out of the window. Can you see the bridge our taxpayers paid for?"

The European: "No."

Kenji 9 hours ago 2 replies      
The solution is shrinking the state to the bare minimum and making as many transactions as possible voluntary.
masterleep 9 hours ago 8 replies      
aaronarduino 9 hours ago 1 reply      
It's not that we have forgotten. It's the fact that regulation, time to get approval, and other red tape make the timeline longer. Gone are the days when things just get done.
Wikipedias Switch to HTTPS Has Successfully Fought Government Censorship vice.com
393 points by rbanffy  2 days ago   119 comments top 11
shpx 1 day ago 4 replies      
It won't last, at least for China. Their government is working on a clone of Wikipedia, scheduled for 2018[0]. Once that's done they'll likely ban the original completely.

Wikipedia publishes database dumps every couple of days[1], so it shouldn't be that expensive for smaller governments to create and host their own censored mirror. You'd maintain a list of banned and censored articles, then pull from Wikipedia once a month. You'd have to check new articles by hand (maybe even all edits), but a lot of that should be easily automated, and if you only care about Wikipedia in your native tongue (and it's not English) that's much less work.
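
The filtering step really is mostly mechanical. A minimal sketch, assuming the standard MediaWiki XML export format (the blocklist entry and the 0.10 namespace version are assumptions; real dumps are multi-gigabyte files from dumps.wikimedia.org, which is why this streams with iterparse instead of loading the whole document):

```python
import xml.etree.ElementTree as ET

# Hypothetical blocklist an operator would maintain by hand.
BANNED_TITLES = {"Tiananmen Square protests of 1989"}

# MediaWiki dumps use a versioned XML namespace; 0.10 assumed here.
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

def allowed_pages(xml_stream):
    """Stream a MediaWiki XML export, yielding (title, wikitext)
    for every page not on the blocklist. iterparse keeps memory
    flat, so the full dump never has to fit in RAM."""
    for _, elem in ET.iterparse(xml_stream, events=("end",)):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            if title not in BANNED_TITLES:
                yield title, elem.findtext(f"{NS}revision/{NS}text")
            elem.clear()  # discard processed pages as we go
```

A real mirror would then re-render the surviving pages; as noted above, the expensive part is reviewing new articles and edits, not the filtering itself.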

The academics will bypass the censorship anyway, since it's so easy[2], so an autocrat won't worry about intellectually crippling their country by banning Wikipedia. Maybe they don't do this because the list of banned articles would be trivial to get.

Better machine translation might solve this by helping information flow freely[3]. We have until 2018 I guess.

[0] https://news.vice.com/story/china-is-recruiting-20000-people...

[1] https://dumps.wikimedia.org/backup-index.html

[2] https://www.wired.co.uk/article/china-great-firewall-censors...

[3] https://blogs.wsj.com/chinarealtime/2015/12/17/anti-wikipedi...

awinter-py 2 days ago 3 replies      
Can an expert comment on side-channel attacks on HTTPS and whether they're less viable on HTTP/2?

My assumption is that because Wikipedia has a known plaintext and a known link graph, it's plausible to identify pages with some accuracy and either block them or monitor who's reading what.
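
For intuition, the crudest form of that identification needs only transfer sizes, no decryption. A toy sketch (the fingerprint numbers are invented; a real attacker would crawl the site over HTTPS themselves to record observed ciphertext lengths, and would have to model compression, padding, and multi-resource page loads):

```python
# Hypothetical size fingerprints, built by crawling pages ourselves
# over HTTPS and recording the total ciphertext length observed.
FINGERPRINTS = {
    "Falun_Gong": 187_304,
    "Tiananmen_Square": 201_118,
    "Lua_(programming_language)": 96_512,
}

def guess_page(observed_bytes, tolerance=512):
    """Return candidate pages whose known size is within `tolerance`
    bytes of an observed encrypted transfer -- TLS hides content,
    not length."""
    return sorted(
        title for title, size in FINGERPRINTS.items()
        if abs(size - observed_bytes) <= tolerance
    )
```

HTTP/2's multiplexing blurs the boundaries between individual responses on one connection, which is part of why it's thought to weaken (though not defeat) this kind of side channel.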

I also assume that the traffic profile of editing looks different from viewing.

petre 1 day ago 0 replies      
There was an IPFS clone of Wikipedia after Turkey blocked it.


darkhorn 1 day ago 1 reply      
There were a few censored pages on the Turkish Wikipedia when it was on HTTP: the "vagina" article and an election prediction article. Only those pages were censored.

Last month there were some articles on the English Wikipedia about ISIS-Erdoğan ties (I don't care whether they're true or not). Then they blocked all of Wikipedia (all languages), because they were unable to block those individual pages.

rocky1138 2 days ago 5 replies      
How do governments censor only parts of Wikipedia when the site is encrypted? How do they know which pages you are browsing if they can't see the URL?
gwern 2 days ago 0 replies      
After reading through the whole paper, I would have to say that there is far less censorship of WP, HTTPS or HTTP, than I guessed.
enzolovesbacon 2 days ago 2 replies      

> Critics of this plan argued that this move would just result in more total censorship of Wikipedia and that access to some information was better than no information at all

I'm no critic of this plan but I still don't understand why this wouldn't result in more total censorship. Someone explain please?

shusson 2 days ago 2 replies      
TIL: HTTPS encrypts the URL.
SpacePotatoe 2 days ago 2 replies      
I just wonder what the UK government has against German metal bands
vbezhenar 1 day ago 3 replies      
Currently HTTPS sends the domain in clear text (the SNI extension) before establishing a connection. This allows hosting (and blocking) websites by domain, not by IP. Maybe HTTPS should have an optional extension to send the URI in clear text before establishing a connection. That way, if censors decide to block Wikipedia, users could opt in to this behaviour and have an unblocked Wikipedia except for a few selected articles.
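
Concretely, the clear-text domain lives in the ClientHello's server_name (SNI) extension from RFC 6066. A minimal sketch of that extension's wire format, showing the hostname crosses the network as literal bytes before any encryption is negotiated:

```python
def sni_extension(hostname: str) -> bytes:
    """Encode a TLS server_name extension (RFC 6066, section 3).

    Layout: extension type 0x0000, extension data length, server
    name list length, name type 0x00 (host_name), name length, name.
    """
    name = hostname.encode("ascii")
    entry = b"\x00" + len(name).to_bytes(2, "big") + name
    name_list = len(entry).to_bytes(2, "big") + entry
    return b"\x00\x00" + len(name_list).to_bytes(2, "big") + name_list

# The hostname is right there in the ClientHello, unencrypted:
ext = sni_extension("en.wikipedia.org")
assert b"en.wikipedia.org" in ext
```

This is exactly what lets a censor block en.wikipedia.org by domain; the path of the requested article only travels after the handshake, encrypted.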
libeclipse 2 days ago 1 reply      
> a positive effect

Any numbers/figures?

MoonScript, a programmer friendly language that compiles to Lua moonscript.org
382 points by type0  2 days ago   156 comments top 18
leafo 2 days ago 14 replies      
Hey all, I made MoonScript about 6 years ago.

I used it to build a ton of opensource stuff in addition to the company I founded. I use it every day and I'm very happy with how it's turned out. I regret not updating the language more frequently, but I've been busy building a bunch of stuff in it.

The biggest open source project is a web framework for OpenResty: https://github.com/leafo/lapis

It's used for the following open source websites:

https://github.com/luarocks/luarocks-site

https://github.com/leafo/streak.club

The company I made, https://itch.io, also runs on it. There are a ton of extra Lua modules I've made in MoonScript to facilitate it. Here's a list of my published modules: https://luarocks.org/modules/leafo

I've also completed Ludum Dare 11 times now, making a game each time in MoonScript. I've used https://love2d.org/ as the game engine. You can find the games on my GitHub: https://github.com/leafo?tab=repositories

Feel free to ask any questions

Cyph0n 2 days ago 1 reply      
This is one of @leafo's excellent projects. He has developed a web framework for MoonScript called Lapis. One of the cool things about Lapis is that it runs on top of OpenResty, which is a high performance web server for Lua applications. Lapis is production-ready: it runs his amazing game marketplace, itch.io.
jdonaldson 2 days ago 1 reply      
On a related topic, Haxe now compiles to Lua as well: https://haxe.org/blog/hello-lua/ (disclosure: I'm the author of the Lua target)

Back to MoonScript/Lua, I've been super impressed with the YAGNI principles of the language. I was originally drawn to LuaJIT and its raw speed, but there's a lot of great things to say about the language and its community.

My goal is to write more Lua for smaller scripting purposes, and use Haxe/Lua for more complex projects, taking advantage of LuaJIT speed in both cases.

mtalantikite 2 days ago 7 replies      
Lapis + MoonScript + OpenResty looks like a lot of fun, so I was just starting to get a local environment running for it and immediately ran into the Lua 5.1 and 5.2+ divergence. For someone who has just been a casual observer of the Lua ecosystem over the years, can someone talk about the community's feelings towards that divergence? The OpenResty docs basically state it's not worth the effort to support anything but Lua 5.1, which at this point is 5 years old and no longer updated.

From someone on the outside that makes me hesitant to spend much effort investing in the ecosystem. Are the libraries fragmented across different versions too? Are there really no plans for an upgrade path, or does everyone just expect to use 5.1 forever?

weberc2 2 days ago 4 replies      
I understand that people love programming in classes, but I really want something like a stripped-down ES6. In particular, I want:

1. Lists and objects/dicts only; objects have no prototypes at all; objects have no first-class methods, but they can have functions as member variables, and those functions can close over other data in the object

2. All function and variable declaration is anonymous (as a consequence of 1); if you want to define a schema for an object, build a function that takes certain arguments and returns an object with the corresponding structure.

3. Coroutines/fibers/etc for async things; no promise/callback hell

4. Type annotations a la Python (not sure how this fits with 2 just yet)

EDIT: I did a bad job of emphasizing this, but syntax is important to me--I want something that looks like:

  var foo = {
    x: [1, 2, 3],
    y: (a, b, c) => {
      var sum = a + b + c;
      sum * sum
    },
  }
Whether or not you agree that syntax should be a relevant criteria is a different matter. :)
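
For what it's worth, points 1 and 2 are already expressible in plain ES6 if you simply never touch `class` or `prototype`. A minimal sketch of the factory-function style described in point 2 (all names here are made up for illustration):

```javascript
// A factory function instead of a class: it returns a plain object whose
// function members close over `balance`, so there is no `this`, no
// prototype chain, and no first-class methods -- just closures.
function makeAccount(initial) {
  let balance = initial; // private state, captured by the closures below

  return {
    deposit: (amount) => { balance += amount; },
    withdraw: (amount) => { balance -= amount; },
    getBalance: () => balance, // reads also go through a closure
  };
}

const acct = makeAccount(100);
acct.deposit(50);
acct.withdraw(30);
console.log(acct.getBalance()); // 120
```

Each call to `makeAccount` produces a fresh set of closures, which is exactly the "schema as a function returning an object" shape from point 2.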

fasterthanlime 2 days ago 1 reply      
moonscript powers the itch.io indie marketplace - the original announcement has a quick write-up: http://leafo.net/posts/introducing_itchio.html

After spending years working with Ruby or Java, it was a very nice change of pace. Lapis (leaf's web framework) is crazy good as well: it's easy to write fast code, see this article on coroutines: http://leafo.net/posts/itchio-and-coroutines.html

(disclaimer: I work there with Leaf!)

vortico 2 days ago 2 replies      
I love the syntax that Coffeescript-like languages are converging to, like Moonscript, tj's luna (https://github.com/tj/luna) and Pogoscript (http://pogoscript.org/). They fall way on the right of the "computer code vs. human code" spectrum, which is handy when writing applications as quickly as possible rather than focusing on fine details like implementation and performance. It's a shame I don't see it often in the wild, except for Atom and a few others.
yev 2 days ago 3 replies      
I find Lua itself very programmer friendly.
Pxtl 1 day ago 1 reply      
Some of the stuff in MoonScript feels a little overdone and magical, like parens-less function calls and the syntactically significant whitespace - but other features really feel like they're fixing gaps in Lua: MoonScript's nice lambda syntax, ternary operator, default local, implicit return, etc.

But I don't think I really needed the ! operator, for example.

PatentTroll 1 day ago 2 replies      
I was on a team that used Lua in production on tens of thousands of machines running 24/7. The Lua interpreter was the weak link in the system, requiring at least daily restarts. Squashed so many bugs over the years, and still found more and more all the time. What I'm saying is, the Lua interpreter itself is flawed and I can't imagine using it as a runtime if I had the chance to avoid it. That said, the language was fun and fast and got the job done. The interpreter could use some work.
partycoder 2 days ago 5 replies      
Adding superlatives to your stuff is, to me, just noise, e.g.: "fast", "simple", "friendly", "lightweight".

I would rather see conclusive proof, like a side-by-side comparison of features illustrating how idiomatic MoonScript is friendlier than Lua.

Personally I think the changes are not necessarily in the right direction:

For example, what if you have a typo in a variable identifier when assigning a value to it? Now you have a new variable. Where do you look for the definition of a variable? It depends on what runs first now. That's not friendlier. CoffeeScript made the same mistake in the name of simplicity, and it quite frankly doesn't add any value. The time you saved typing "local" will now be consumed many times over in debugging... and you will have to pay more attention during your code reviews.

Then there's removing braces and other delimiters. That's not necessarily better either. Let's remove lane delimiters from streets, traffic signals, stop and yield signs, and make it all implicit based on rules you can follow in your head. See? Doesn't work.

koolba 2 days ago 2 replies      
"Moonscript is like CoffeeScript for Lua"

That's a very succinct description and from skimming the examples, quite apt.

But what I want is TypeScript for Lua, specifically to use with Redis. That way I can define interfaces for my keys/values and have static checks on their usage.
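
A rough sketch of what that could look like in TypeScript itself: a schema interface drives a generic wrapper, so key/value mismatches fail at compile time. (Hypothetical names throughout; an in-memory Map stands in for a real Redis client.)

```typescript
// The schema maps each key to its value type; the wrapper's generic
// methods only compile when key and value agree, giving the static
// checks on key/value usage described above.
interface Schema {
  "user:name": string;
  "user:age": number;
  "cart:items": string[];
}

class TypedStore {
  // stand-in for a Redis client; swap in real GET/SET calls as needed
  private store = new Map<string, unknown>();

  set<K extends keyof Schema>(key: K, value: Schema[K]): void {
    this.store.set(key, value);
  }

  get<K extends keyof Schema>(key: K): Schema[K] | undefined {
    return this.store.get(key) as Schema[K] | undefined;
  }
}

const db = new TypedStore();
db.set("user:age", 42);              // ok
// db.set("user:age", "forty-two");  // compile error: not a number
console.log(db.get("user:age"));     // 42
```

A "TypeScript for Lua" would presumably need the same two pieces: a place to declare the schema, and a checker that threads it through every access.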

nilved 2 days ago 0 replies      
Lua is a great language and MoonScript improves on it even further. I used MoonScript to write a load balancer with OpenResty recently, and World of Warcraft interface code six years ago. :)
shalabhc 2 days ago 0 replies      
Another Moonscript project is https://howl.io - an editor written almost entirely in Moonscript that runs on LuaJIT.
OriginalPenguin 2 days ago 3 replies      
Can this be used instead of Lua with Redis? Is there an example somewhere of that?
weberc2 1 day ago 1 reply      
How does MoonScript compare to Wren? https://github.com/munificent/wren
nirav72 1 day ago 0 replies      
I wonder if this would work with NodeMCU development for esp8266 boards.
k__ 2 days ago 0 replies      

We need more programming languages with sane syntax.

Why do so few people major in computer science? danwang.co
405 points by dmnd  2 days ago   593 comments top 97
lordnacho 2 days ago 32 replies      
The grief factor of learning to code is on a different scale to every other major. One missing semicolon will take your whole tower down, and you realise this in the first day of practical exercises.

Even if you are of the opinion that CS is math, and coding doesn't come into it, you will hit a coding wall early on.

In fact, every exercise in CS has this problem. You add a new thing (eg inheritance), and it breaks. But not only that, it might be broken because of a banal little syntax problem.

And that's just what you consider code. If you put in the wrong compiler flags, it breaks. If you can't link because something needed rebuilding, it breaks. Want to avoid it? Learn how make works. Huge number of options, which you won't understand, because you're a novice.

Oh and learn git, too. And Linux. Just so you can hand in the homework.

Compare this to the rest of university. I'll use my own experience.

- Engineering subjects tend to revolve around a small number of vignettes. Here's an aircraft engine in thermo. Draw some boundaries, apply some equations. If you get it wrong, your tutor can still see if you were going the right way. Once you've learned the relevant points, it's not hard doing some rearrangements and plugging in some numbers.

- Economics essays are basically bullet points. Miss one out, you still have an essay. Which you can hand in without knowing git.

simonsarris 2 days ago 7 replies      
Slight note about Dan Wang picking 2005: that was the peak of CS degrees awarded because it's 4-5 years after the height of the dot-com bubble. So the upward bump in the mid-2000s is somewhat explainable as an anomaly.

I think his point 1 is underrated. CS degrees are flat because aptitude is flat.

You can compare CS degrees to other degrees over time at nsf.gov:


We have more grads than ever, but they are dumber than ever (we have the data to prove this), getting less difficult degrees.

I have a bad feeling that we are running up against some diminishing returns on education and hiding it with numbers like the total number of grads. The number of grads for difficult degrees and the quality of grads seems to be another story.

> In 1970s 1-in-2 college grads aced Wordsum test. Today 1-in-6 do. Using that as a proxy for IQ of the median college grad, in the 70s it was ~112, now it's ~100.

More stats: https://medium.com/@simon.sarris/why-is-computer-science-enr...

cynusx 2 days ago 3 replies      
The whole idea that people pick their education based on rational assumptions when they are 18, know essentially nothing and are coming straight out of an environment where perceived status is all that matters on the playgrounds is ridiculous.

The choice of what to study is not a rational decision but a social one. People follow their friends, guys study what the hottest girls they know are going to study, parents push kids to study fields that they themselves perceive as high-status, like finance, law or medicine.

The biggest problem with computer science degrees is that it is a relatively new field and it hasn't been embedded in society as high status yet. This will change, but it will take generations for it to take effect.

The field is obviously difficult, but you don't have to be a genius to get a CS degree; it doesn't require much more determination to study the ridiculous amounts of jargon for a law or medicine degree than to understand complex discrete mathematics problems. The social cost of "failing" a law degree is much higher (parents complain that son of X did pass, and he had similar SATs) than failing an engineering degree (parents understand "it"; they don't understand computers either).

dahart 2 days ago 3 replies      
Whoa, that chart seems really misleading. There are 3x more CS majors than math majors and 2x more than physics. The chart is showing derivatives, not absolutes. That basically undermines the title of the article.

Looking at the graph, it's also super important to see the context before 2005 -- that start date adds significantly to the misleading impression this graph is giving.

Math had more majors in 1970 than it does today. Physics has only grown by 50% in the last 40 years, and both have had huge dips just like CS had.

I was coming up with some explanations myself, but now I think I reject the premise, and feel like the right question is: why are so many people majoring in CS and so few in math and physics. More math and physics people can code than ever before, it seems like they'd be able to score coding jobs and be more prepared than a lot of CS grads.

overgard 2 days ago 2 replies      
I think the problem is how the subject is taught. I remember my first CS class, and there was quite a bit of diversity that quickly disappeared after the first test when people started dropping the class. (I'm pretty sure something like half the class disappeared, and I don't think this is at all unusual). So why did all those people initially find interest, and then disappear?

I remember the way they taught it was VERY dogmatic Java/OOP. Putting aside my personal feelings on those subjects, that's like teaching someone to swim by throwing them in the ocean without a life jacket. I tutored some other students, and picking up the language AND the IDE and the debugger and understanding compilers -- it was too much. I remember trying to learn java as a kid and being baffled, and then picking up QBasic and basically getting it immediately. QBasic teaches you some awful habits of course, but for a beginner it's much easier to reason about and it will teach you how to think like a computer. I'm not suggesting we go back to teaching QBasic, but it has to be something other than Java. I think CS departments throw everyone in the deep end with an awful curriculum, and then act surprised that everyone leaves except the hardcore nerds that already knew these subjects before they got to school.

kutkloon7 2 days ago 16 replies      
I majored in computer science, and I don't feel like I've learned anything. Every time there was an opportunity to learn something cool (the mathematical theory behind cryptography, assembly language, the details of a processor, the cache, or a communication protocol) or useful, it was glossed over as 'technical details'. Instead, I was introduced to many, many small topics (programming, graphics, databases, algorithms, user experience, functional programming, logic, web programming), but we didn't go into depth. To be honest, I forgot most of it. It did easily land me a job as a consultant, though.

I honestly wish I'd picked a more interesting major, like electrical engineering or physics. I feel like I could learn the things I've learned in a few months (which may or may not be true).

Computer science is just not very hard, while physics, math, and engineering are. I think the guys from other fields can be more successful programmers, just because they are smarter (more used to solving hard problems).

In computer science, the only course that required a little bit of creativity was algorithms. It was stuff mathematicians are practically trained to do (albeit not in exactly the same setting).

xxSparkleSxx 2 days ago 8 replies      
I think the market is just way over saturated. Hiring practices for developers point to just that.

I mentioned in a different thread how simple it is for my travel-nurse of a sister to get a new job (her stints around the Bay Area paid ~100k and she only has 2 years of experience).

Developers jump through hoop after hoop for employment; this wouldn't happen if they were in demand the way nurses are. The market is just responding appropriately, though maybe not how the masters would prefer it.

CM30 2 days ago 1 reply      
I still suspect it's this reason from the post:

> You don't need a CS degree to be a developer

With another catch. Basically, a lot of people don't intend to go into the tech industry right away. No, they end up in it because it's one of the faster growing industries with decent financial prospects.

So they learn something else, work in a different field for a bit (or a low paid retail job) then end up going into tech where the jobs and money are.

Not everyone is 'passionate' about the subject.

theprop 1 day ago 2 replies      
I'm not sure about that data. Definitely lots of engineers and physicists, as has been noted, have programming experience. Beyond that, tons more people have been majoring in computer science over the last 10 years.

It's become the most popular major at Stanford.

At Princeton in just 5 years from 2011 to 2016 it grew 3x to become the most popular major from 36 to 130 majors. At Yale in those 5 years, the number of CS majors doubled (though it's not the most popular major there).

In at least 3 states now the single most popular job is software engineers (30 years ago in just about every single state it used to be driver), and I imagine that trend is only going to continue so you will see more and more computer science majors.



cs702 2 days ago 3 replies      
One possible answer no one has mentioned so far is that there are many smart, capable people who do NOT want to spend hours every day sitting at a desk, in front of a computer, focused on code, with limited human interaction... so they pursue majors in other fields.
johan_larson 2 days ago 2 replies      
I would guess the issue is prestige. Software development, like engineering in general, is not a top-tier profession in the US. The actual top-tier professions are doctor, lawyer, and banker/financier. Software developers, like accountants, are well paid but second-tier in status, geeks who worry about the details rather than distinguished professionals who call the shots.

Interestingly, the three professions I mentioned above all have graduate degrees, whereas software developers have B.Sc. credentials, if that.

brod 2 days ago 0 replies      
I am a self-taught web developer. I started a bachelor in IT with a CS major in mid-2014, assuming the title would increase job opportunities.

Late 2014 I landed a part-time job in web development; that role then moved to full-time, and I transitioned out of the Bachelor program as I was learning more valuable work-related skills at my job or in my spare time.

Since then I've advertised to employers that I'm part way through a bachelor, willing to complete if they believe it's necessary but otherwise not interested. I'm now earning just above the average cited in the article and have no intentions of returning to school.

I know of a few classmates that are in the same boat, they got a part-time job, transitioned to full-time and quit schooling.

In my opinion, the fact I was studying was critical in landing the first job but useless afterwards once I could prove my ability and worth. I doubt people who only completed a degree could compete in technical interviews against people with a self-taught background.

GnarfGnarf 2 days ago 0 replies      
Engineering, physics, math etc. differ from Comp. Sci. in one major respect (so I hear from colleagues).

If you invest a finite amount of hours in the first category, you are pretty much guaranteed you will have something to show for it. Not easy, not simple, but you will get results.

Comp. Sci. is a black hole. You can blow ten hours on an obscure logic error. Students know from experience that their tightly budgeted schedule can be wrecked, and they can miss deadlines for reasons that seem out of proportion with the payback. This impacts their other subjects as well.

BTW I've a 50-year career in IT. The sum total of my academic qualifications was 1 week of FORTRAN. The rest I learned on the job.

elihu 2 days ago 1 reply      
My theory: computer programming is awesome, but getting a good job doing interesting work is really hard. A bachelor's degree in computer science won't even get your foot in the door in a lot of places. However, there is lots of work doing IT for banks or writing Java for insurance companies or creating web pages for doctor's offices. That sort of thing might pay pretty well, but it's not the sort of thing that you would have said when someone asked you what you wanted to be when you grow up.

Maybe a lot of prospective students perceive (correctly or not) that all the best jobs are already filled by talented people and the competition for those is intense. If you didn't get in at the right time when the industry was in a massive growth phase, you're more likely to get stuck in a dead-end job.

ryanmarsh 2 days ago 1 reply      
> I think that people who go to college decide on what to major in significantly based on two factors: earning potential and whether a field is seen as high-status.

Also laziness, virtue signaling, dilettantism, and genuine interest.

runeks 1 day ago 1 reply      
One possible answer: you can learn computer science without needing a degree at all, and subsequently prove that you've learned it by just writing a program that works.

That makes a CS degree inherently less valuable than almost all other degrees out there. Why would employers request a degree saying you know your stuff, when they can just ask you to prove it directly?

I was recently at a meetup and met a guy I went to school with. While he was in the process of acquiring his master's degree in physics, he was contacted by Google, who wanted to employ him. He went to an interview (which, apparently, was several interviews by different people, all working for Google), got the job, and moved to Ireland to work for them. Moral of the story: get a degree which offers the greatest value for money, and learn CS in your spare time, for free.

metaphor 2 days ago 0 replies      
No distinction between BS and BA flavors of CS. No numbers to capture those who minored in CS. No breakdown of engineering by discipline (in particular, CpE and EE). Interesting data, but it leaves much to be desired.

Surprising that there's no discussion of CS as a "tool" discipline in the same sense as math and stats are, especially at the bachelor level.

When I consider that "Engineering" comprises far more distinct disciplines than "Computer and information sciences", stats on the former are quite dismal. This becomes even more evident at the master's level[1]: for 2014-15, the number of master's degrees conferred in all engineering disciplines is ~25% less than the number of CS bachelor's degrees in the same FY.

[1] https://nces.ed.gov/programs/digest/d16/tables/dt16_323.10.a...

GCA10 2 days ago 1 reply      
Dan nearly solves his own riddle with possible explanation No. 2: "You don't need a CS degree to be a developer." He just doesn't spend long enough probing this issue.

Majoring in computer science is like majoring in English with hopes of becoming a writer. Or majoring in economics with hopes of starting a business. You'll get all the theory. You'll mingle with all the lifers. But because you try to come into the guild at age 18, there's a risk of narrowness/tunnel vision.

The people majoring in stats, math, physics, etc. may work on more interesting problems during their college years, or develop a more holistic sense of how to come at big new areas of learning. Meanwhile, the opportunities for non-CS majors to pick up programming skills via electives or non-classroom projects are huge.

Where Dan sees a problem, I'm seeing a healthy diversity. The U.S. is able to come up with enough software talent as is, drawing on many different pathways. Why insist that everyone be trained the same way?

david-cako 2 days ago 2 replies      
A better question is "why do we care".

Software is about the only career I can think of where there are movements created to inject social status into it so that people get into it who are only interested so long as it comes with social status/trendiness.

The major also doesn't fix the thought process. You either have it or you don't.

Futurebot 2 days ago 0 replies      
Aside from the many valid theories already listed, I'd add:

perception of dullness

Many people find the idea of staring at code all day, regardless of potential for remuneration, boring. Worrying about every little character, futzing around with compilers and debuggers, and reading manuals isn't many people's idea of fun.

Over the years, several non-developers have commented on this to me; "boring," "dry," and "dull" were generally the adjectives used. It's also perceived by many (rightfully) to be especially prone to the "retrain on your own dime" issue (which has become more common across industries and jobs, but in software dev is particularly acute.) The idea of spending your weekends having to learn a new library or brush up on your fundamentals to interview yet again isn't appealing, and it wouldn't surprise me if younger people were already very clued into this.

"Most desired career among young people: 'YouTuber'":


platz 2 days ago 2 replies      
Primarily, because computer programming is low-status.

(also, this is one of the main reasons why females are deterred from joining)

issa 2 days ago 0 replies      
I've always had a personal explanation that I have nothing but anecdotal evidence to back up. The most important skills that make someone a good coder are that they enjoy solving problems, and are willing to work single-mindedly on something until it is done. If you have those skills, there's really no reason to go to college to learn anything.

In fact, I would bet that the graph in the article corresponds inversely to how easy it has become to learn programming on your own. From manually copying code out of a magazine when I was a kid, to stackoverflow today.

I submit that the coders who DO get degrees are people who really enjoyed school (for reasons unrelated to learning), people who didn't really know what they wanted to do in life and school was expected of them and/or the path of least resistance, and people who are much more into research than the average developer.

Houshalter 1 day ago 0 replies      
I have a theory that it's because computers are pushing coding stuff away. If you bought a personal computer in the 80s, you basically needed to learn how to program. The computer would come with a basic interpreter built in and easy to find. It would come with a thick dead tree manual on how to program it.

Windows, as far as I know, doesn't come with any programming language built in. You can do some shell stuff or js in the browser, but you can't make an application with that (easily.) And that stuff is hidden away and not encouraged, you have to do research to find out it's even there.

And mobile OSes are even more locked down. As mobile devices replace desktop computers for the vast majority of people, how are they ever supposed to get into programming?

In some sense it is easier than ever to get into programming. Programming languages are better, the internet makes learning resources much more available, there's libraries that can do whatever obscure thing you want to do. But all this is hidden away in a secret world that most users will never venture into and don't know exists.

I know this sounds like it shouldn't be a big deal, but I really believe it is. I was so intimidated by learning programming that I put it off for a long time. It seemed like it would be very complicated and difficult. When I did try to learn, I tried with C++. I also early on tried to program stuff with batch scripts and was put off by how limited it was. Eventually I tried another obscure proprietary language that I found through clicking on an ad. All of these were terrible choices for a beginner who wants to learn programming. But I didn't know enough to know they were terrible choices.

If someone had installed python on my computer and showed me some simple examples I could play with, I would have been so much better off. Eventually I stumbled across a blog post showing how to open up the developer console on a browser just by pressing F12. And some simple example stuff in js. It's like an new world just opened up to me. I know some people that had a similar experience with the computercraft mod for minecraft, of all things.

mdc2161 2 days ago 1 reply      
Personal anecdote: I didn't major in it because I had no idea I would enjoy it.

I was fortunate that my engineering program had two semesters of Java. We spent more time hand drawing logic gates than coding in the intro course and so it wasn't until the second (data structures) that I realized it was something I wanted to pursue. It was too late for me to change majors at that point, but not too late to take internships and then a job as a programmer.

mathattack 1 day ago 0 replies      
My 2 cents:

1) It's hard. Very hard. Unlike most other subjects you can't fluff through it. It works or it doesn't.

2) Every programming class is a ton of work. Even if you're great at the subject, it's generally your most time consuming course.

3) Because of #2, if you don't know from Day 1 that this is your major, it takes forever to get through the coursework.

4) More than most majors it's very hard to take even the intro classes if you haven't done it before.

Perhaps because of all this, most CS majors I know are people who just couldn't imagine majoring in anything else.

SubiculumCode 2 days ago 0 replies      
Because college is funner when you don't have to take the CS weeder course.
matheweis 1 day ago 0 replies      
One thing that seems missing from the post is the proliferation of alternative degrees; Computer Information Systems, Information Systems, Software Engineering, Information Technology, etc etc.

I would be surprised if the aggregate of all of those degrees didn't meet or exceed the trend of the others.

Osiris 2 days ago 2 replies      
I did a year of computer science before switching majors. Computer science doesn't teach students how to be software developers. It's an academic study of computing, which is important, but not for the majority of development jobs.
shmolyneaux 2 days ago 0 replies      
At the University of Waterloo in Canada, the ratio of CS applications to available spaces is over 15:1 [0] according to a Computer Science professor. It could simply be a supply issue.

[0]: https://twitter.com/plragde/status/834474871010648064

jacquesm 2 days ago 0 replies      
A very large factor is that having a CS degree is not going to make up for the years lost as a developer if you're any good. Some companies are pretty heavy on the degree requirement but even the larger ones like Google have been slowly backing away from this.

Lots of developers come into computer science through physics, maths and other peripherally related fields and discover they're good at computers.

Finally, it's hard to continue to work on a degree for a pittance while your less capable buddies are raking in 6 figure salaries. At some point the words 'opportunity cost' will start to appear in your nightmares.

azakai 2 days ago 0 replies      
One major factor not mentioned here is that the number of college graduates in a field is not just determined by how many apply to it, nor the author's #6 (reactionary faculty that fail large amounts of people). There is also how many people are accepted in the first place.

If a university has 100 spots for CS, then even if twice as many people apply for CS in one particular year, there will still be 100 people (but with higher SATs, presumably). There is some flexibility here, but it is limited - those 100 people require a certain number of faculty and grad students to teach them. They need a large-enough building with the proper facilities (you can send students to classrooms in another building sometimes, but it's not optimal). The campus can't just accept more than the students they planned for without preparation, and those plans are made long in advance.

If a university sees a field is popular, it may work to eventually be able to accept more applicants. But it might not decide to do so - popularity among students isn't the only factor considered, there are many others, like ease of acquiring funding and grants, likelihood of undergrads becoming graduate students (and whether the university wants more or less of those), etc., all of which require multiyear planning and also have various political factors.

tl;dr It's worth seeing if we can find data on the number of applicants, and not the number of graduates. It's possible the number of applicants has been increasing.

djsumdog 2 days ago 1 reply      
I'm glad I got a full CS degree, but I knew several people who dropped to the business versions (often called MIS or CIS depending on your school) and learned a lot of the basics of programming and web front ends without more the hard core algorithms and foundation work.

As I read the intro, I think the author touched on a lot of the reasons I was starting to think of. A lot of people do boot camps (which are overpriced, for-profit garbage, btw), community college programming classes, etc. I know people out of these programs who understand big-O notation and do all kinds of fun scaling work, and I know CS majors who can only program Java/C# and don't know what a SATA connector is. You get out of your field what you put into it.

As far as women in our field, I hesitate here. I don't really think it's the hostile landscape. I've worked with several female engineers. Some are amazing and good designers. Some are terrible. The ratio of good to bad, in my limited, non-scientific, empirical view, seems about the same as for males. I also haven't really witnessed women being treated badly, and I've worked in five cities and several jobs over the past two decades. What I have seen are entire groups of people being treated like crap in hostile work environments, not limited or segregated by race or gender.

I feel there are also not that many people in our field (both men and women) because it's... pretty horrible. Seriously, we sit in front of a screen for 8 hours a day watching the world tick by, often doing our best design work only to bolt it onto old decaying crap that should have been retired a decade ago. Or we build shiny new products that benefit the few and have tons of crazy requirements that come out of nowhere that nobody wants. There aren't as many women in engineering because, in general, women choose jobs that are more rewarding even if they're lower paying. I think we could all take a page from that philosophy, if we didn't live in a world where we were afraid of ending up on the bottom or without enough for essentials.

I can honestly only do about two years at a time in IT these days. I've embraced the sabbatical (http://penguindreams.org/videos/taking-a-sabbatical/), even though I realize it's probably not sustainable long term, and also realize my earnings in software give me this unique advantage that most people simply don't have.

slackingoff2017 1 day ago 0 replies      
CS requires a decent amount of smarts, requires constant learning, and is boring to most people. This is enough that it will never be an attractive job to most of the population.

Why isn't it drawing more engineers though? I think CS is seen as the risky choice for an engineering job. There's been multiple tech job boom and bust cycles over the years. Why pick CS when most branches of engineering pay almost as much and don't have nearly the risk?

Another thing I've seen happening firsthand is other professions getting dragged into the CS sphere. I know multiple electrical engineers that spend their days writing code now. Circuit design is becoming largely automated, they just need coders that understand the circuits. Same with marketing, I know a couple guys that majored in marketing who spend their days tinkering with WordPress. Finance too, basically all trading has some level of automation. Probably half the people writing code now never intended to. I like to think this, at least in part, is why so much code appears to have been written by satan.

vandyswa 2 days ago 0 replies      
Agree that all signs are there's a glut. Note that we're down to somebody at age 30 starting to notice age discrimination. With a career longevity approaching that of an NFL player or MMA fighter, is it really a good choice any more?
dep_b 2 days ago 3 replies      
Because database connections are hard.
fitchjo 1 day ago 0 replies      
I don't know exactly what the author is discussing in his post, but as someone who did two years in CS before switching (to accountancy), one of the main reasons I switched was the stark contrast in interest in the field between myself and (seemingly) everyone else. In hindsight, I think I would have made a good project manager (instead of a developer), but that path was not really communicated to me in a way that resonated with me. So I just saw a bunch of people with a much greater zeal for coding than I had and decided I should try something different. Maybe in another life...
tsumnia 2 days ago 0 replies      
It's a few problems, but I disagree with some of the author's points. One issue is posing the dot-com crash as a peak similar to the one we are currently in. Eric Roberts of Stanford wrote an opinion article on what he saw as the ebb and flow of CS [1]. We are in another peak, undoubtedly, but I'd argue this peak mirrors 1984's popularity.

Roberts suggests the issue with the 80s "crash" was an inability to meet demand. As such, universities began placing restrictions on incoming students. If it's damn near impossible to enroll in THIS major, I'll just go elsewhere. While this next link is primarily about women, you can see every other STEM/Law/Med domain grew while CS did not [2]. Likewise, university "retraining" was not standardized, so you may not have gotten the training you needed. Fast forward to today: we say the university system is broken, but the only competitors right now are recruitment boot camps or the "learn it yourself" model. Regardless of your opinion of any of the three, it is clear they are attempting to be products for "handling the demand".

To counter "anti-nerd culture" and "immigrants" as bullet points - seriously? That's stuff we complained about 20 years ago (in the 2000's). Nerd culture is mainstream now that we've got billionaires everywhere, and outsourcing didn't take "all the jerbs". These points sound more like parroting the concerns of the past.

[1] https://cs.stanford.edu/people/eroberts/CSCapacity/
[2] http://www.npr.org/sections/money/2014/10/21/357629765/when-...

killjoywashere 1 day ago 0 replies      
Did he actually mean to imply more people major in physics? The most recent numbers show physics at an all time high of less than 8,000 bachelor's degrees awarded (1) whereas the computer science bachelor's degrees, restricted to CS departments in engineering schools (no "information" degrees, no CS departments in math or science colleges), were over 10,000 (2).

There was a drop in relative growth because several years before that, the dot-com bubble burst and women fled the field. He says he didn't see that in the NCES tables he looked at, but for pete's sake, that's the first link on Google! (3)

(1) https://www.aps.org/programs/education/statistics/bachelors....

(2) https://www.asee.org/papers-and-publications/publications/co...

(3) https://nces.ed.gov/programs/digest/d12/tables/dt12_349.asp

taylodl 2 days ago 0 replies      
"Cultural centrality of Silicon Valley"?! Excuse me, whose culture? Most of us in the field don't care about Silicon Valley and nearly no one outside the field cares. Sounds like someone needs to get out of their bubble.
zacsme 1 day ago 0 replies      
I am a little surprised by so many comments talking about the problems with the CS curriculum in college. If people aren't even majoring in computer science then I would first think that the issue starts before students come to college.

Younger students, K-12, have little exposure to computer science concepts or even programming in general. Sure some schools are great but many public schools in the US are average at best.

siliconc0w 2 days ago 0 replies      
Schools do a terrible job getting people excited about computer science, and an even worse one for software engineering. You start with C and Java and a bunch of arcane syntax and commands to print 'Hello World' when you could start by making simple games or apps and working backwards to introduce CS concepts, Mr. Miyagi style.

I started with Basic in elementary and PHP in middle/high school and by the time I got to college - young arrogant me was like, "what is this C/java noise and why do I need it when I can already do all this cool stuff with php!". I didn't really start appreciating CS and how it applies to software engineering until much later.

BadassFractal 2 days ago 0 replies      
Can we already kill the meme that CS or STEM are hostile to women?

Correlation does not imply causation. Disparate outcomes do not imply disparate treatment. Nobody in the right mind looks at the 94% of child care services jobs being filled by women and exclaims "Aha! Systemic sexism against men, matriarchal oppression afoot, we must address this social injustice!", yet all common sense falls apart when it comes to STEM.

Good talk on the subject here: https://www.youtube.com/watch?v=Gatn5ameRr8

sidlls 2 days ago 0 replies      
For the same reason so few people major in any subject. CS isn't special. People major in subjects because they're interested in them, they think the career path might be good, there is social prestige, and many other reasons. It's entirely unsurprising that CS has few people selecting it as a major.

The decline or slower growth relative to other fields requiring similar kinds of intelligence may be an interesting question--or it may not be, but the posted article doesn't, in my view, present any compelling case for either answer.

wanderr 1 day ago 0 replies      
> Have people been deeply scarred by the big tech bubble? It bursted in 2001; if CS majors who went through it experienced a long period of difficulty, then it could be the case that they successfully warned off younger people from majoring in it. To prove this, wed have to see if people who graduated after the bubble did have a hard time

As someone who graduated shortly after the bubble burst, I can attest that yes indeed we did have a hard time. I had a year of professional programming experience under my belt (took a year off) and still couldn't find anything for a long time. Eventually took a job making 24k at a failing company that was a nightmare to work at, quit that and did tech support for a county library district (they needed someone who could program but didn't have the budget to hire a developer) making 35k for a few years. I kept an eye on the broader market during that time but it seemed like everything required 10 years of experience.

Overtonwindow 2 days ago 2 replies      
Because universities continue to place a heavy emphasis on math, specifically calculus, as a gatekeeper. Calculus is useful, but I don't believe it's necessary for a CS degree.
paulmooreparks 1 day ago 0 replies      
I don't understand the assumption that a software developer should study computer science to prepare for a career in designing, writing, and delivering computer software.

Consider the building trades. Employees in that industry don't study "Building Science". Architects study architecture. Engineers study engineering. Craftsmen in the various building trades study in apprenticeships.

Why not admit that computer scientists should study computer science (a valid and useful area of study in its own right) and instead develop a full-fledged degree program for the various skills involved in the software-development industry?

michaelbrave 2 days ago 1 reply      
I can share my personal story.

I've always been computer savvy and would have loved to have gone into programming, now I'm trying to prep to go back to school for computer science, so I think it may be relevant.

But even being good with computers I was never really a good student in high school, and due to moving around, parents divorcing etc, I had huge gaps in my math education(I still don't know my multiplication tables). To the point that I never really thought I was good at math until I got to college.

By the time I got to college though I had already put years into becoming a graphic designer, it was my career path and I could graduate faster if I stayed on it. So I did, because I was so close to finishing. I've regretted it ever since.

Now I'm older, wiser, full of regret and better at math, so now I'd love to go back to school or attend a boot camp, but I'm legitimately broke, and I have no idea how to pay for it. So I keep trying to learn on my own, from the occasional book or youtube video.

TLDR: Math education was lacking and required, I was already on a career path, have regretted it ever since.

mncolinlee 1 day ago 0 replies      
I'll go with a different angle on this problem. Computers have become indistinguishable from magic for most of the modern population. Even with IDEs like IntelliJ and Visual Studio, the interface of programming has not kept pace with the sexiness of GUIs. As a result, lots of young people take computers for granted and give up quickly when presented with UNIX, git, and the rest of the command line tech stack used by programmers.

I'd say that the decreasing percentage of women in computing has also hurt. When I started working at Cray almost a couple of decades ago, they had significantly more women in programming. Today, most hard sciences graduates are women, but only about 25-30% of CS graduates are women. I don't have a great answer why this fall-off is happening, but it seems to be a symptom of cultural issues. Maybe it's the influence of VCs and the bro culture bias of finance? I honestly don't know.

jokoon 1 day ago 0 replies      
Because there are so many different things to do in CS. Knowing how to code will already get you a job, but mastering CS at a certain level is another realm of work, and CS keeps changing and evolving.

That's like working with cars, you need fewer engineers, and many mechanics and technicians.

Coding is like a spoken language. It's not so hard to write and fix code and there is already a lot of business involving just that, so my guess is that many students just learn to code and don't really do real CS.

The computing industry keeps growing and growing, so it means you need more technicians to keep up with growth, not nice degrees. Of course it's nice to have PhDs, but good luck training them. Education relies on constrained resources.

Raphmedia 2 days ago 0 replies      
Two paths were in front of me:

A) Enter the market without a major. Work for a low but decent pay for 4 years (with yearly pay raises) and then use the experience to move elsewhere and jump up in salary.

B) Spend 4 years without any pay, get out of school and end up with an entry position, work 4 more years and then move to a position that offers a good salary.

Needless to say that I went with A) and am not regretting it at all.

eximius 1 day ago 0 replies      
Personally, I just didn't want the workload. My favorite semester was when I was taking 4 senior level math courses. Each week, I'd hunker down and knock out the homework over 2-4 hours for each class. For CompSci, it's more like 10-20 hours for each class. The workload in CS is just stupid.
sovande 2 days ago 0 replies      
CS jobs in the West are victims of globalisation. We simply cannot compete with Asia and Eastern Europe on wages. Hence outsourcing, which has been ongoing for decades. We _can_ compete on quality and solutions, but these are hidden properties which might or might not be a problem in the finished product. Business people consider scope and cost first and foremost. In addition, many of the best and highest paid programmers are self-taught. This works, because 99.9% of your career will be bread-and-butter CRUD apps anyone can do. At university we studied algorithms and data structures which we will never use or implement. When was the last time you had to do a breadth-first search on a graph? I really cannot recommend anyone in the West study CS today.
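[Editor's aside: for anyone who hasn't touched it since their algorithms course, the breadth-first search the parent invokes fits in a dozen lines. A minimal sketch, with an illustrative adjacency-dict graph; names and graph shape are our own, not anything from the thread.]

```python
from collections import deque

def bfs(graph, start):
    """Visit nodes level by level from `start`.
    `graph` is an adjacency dict, e.g. {'a': ['b'], 'b': []}."""
    seen = {start}          # nodes already discovered
    order = []              # visit order, for inspection
    queue = deque([start])  # FIFO frontier
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return order

g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(bfs(g, 'a'))  # ['a', 'b', 'c', 'd']
```

Which rather proves the parent's point: the whole thing is `deque`, a set, and a loop, and most CRUD work never needs even that.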
matthewbauer 2 days ago 0 replies      
I haven't seen a mention of this, but what about the rise of CS-like degrees people have been getting? Many colleges now offer "software engineering" and related majors. In addition, some colleges have recently moved CS into the Engineering department. Could that be messing up the data?
01572 2 days ago 4 replies      
To an audience of CS majors this will probably sound like trolling but honestly I have never used any software written by a "computer scientist" that I came to value and rely on -- software that I consciously chose to use.

Whereas I have found such software on multiple occasions written by mathematicians or persons in some other field, e.g., physics.

As a user of software, I do not believe that a computer science degree is of any significance in terms of the ability to write good software.

The blog post makes a comparison to Liar's Poker. Perhaps it should be noted that the author of Liar's Poker majored in art history. It was not necessary for him to have a particular degree in finance to do his "job". That was the point of the book.

The question to ask today is whether one needs a degree in CS to write good software.

armchair_hunter 2 days ago 0 replies      
The chart shows degrees conferred, not how many people are enrolled in a degree. I've been teaching CS at universities since 2010 and I've seen a tremendous growth in CS enrollment. On the flip side, a large number of students fail.

This spring, I failed about a third of my pupils in the intro class and in the following data structures course. If this holds true for future semesters - as it has for the past few - only around half actually make it through data structures.

CS is hard, and not just because of how exacting the syntax is. It is completely new for many students. Engineering is hard too, but a student has expectations they can draw on from math and physics; the same goes for biology- and chemistry-related fields. There are expectations from high school a student can draw upon.

jondubois 1 day ago 0 replies      
Just like with Maths, I think that to effectively learn programming, you have to believe that you're good at it... Unfortunately, it has become harder for people to convince themselves of this because the tooling required to build simple software is much more complex than it used to be.

I think that new developers are exposed to more complexity earlier on and so they are more likely to get overwhelmed. It's not quite the slow-paced discovery process that it used to be. New developers have more visibility of the road ahead... And it's a damn long road.

euske 2 days ago 0 replies      
Here's my pet theory: when I try to put myself into an average high school student's shoes, they're already surrounded by all the CS achievements today; namely, video games and smartphones and Facebooks. They're exposed to them a bit too much. On the other hand, when I was a high school kid I didn't realize how CS is affecting our infrastructure and how many things are still unsolved. I guess this still applies to the current generation too. Combining these two, the field might look rather "finished" or "too competitive" to an average person, which could deter them from applying.
akhilcacharya 2 days ago 0 replies      
>Why is the marginal student not drawn to study CS at a top school, and why would a top student not want to study CS at a non-top school, especially if he or she can find boot camps and MOOCs to bolster learning?

Because I'm not smart enough to get into MIT/Stanford/UCB/CMU?

hn094062 2 days ago 0 replies      
I tried pulling a CS-English double major in the 1980s. I loved programming, especially the then-new network programming. But the school I attended used the ACM CS Curriculum which pretty much required a math minor in addition, and the logistics didn't work for me (I ran out of money). I would have needed another three semesters just to meet the math requirements.

IIRC, I didn't particularly enjoy the actual CS classes; instead I'd spend hours playing with the Sun workstations and tinkering with how commands and code interacted. I could care less about Universal Turing Machines but became the de facto sysadmin for our tiny cluster. None of that counted as course credit, of course.

gozur88 1 day ago 0 replies      
I think it's a combination of a CS degree being slanted more toward research than preparation for a career as a software developer, and not everybody being cut out for it, temperamentally.

Particularly the latter. A lot of people break out in hives at the thought of spending the next forty-five years glued to a computer monitor.

SeanDav 1 day ago 0 replies      
Slightly OT:

Anyone notice the irony of this thread being right next to another HN thread titled: "As Computer Coding Classes Swell, So Does Cheating"?

lelandbatey 2 days ago 0 replies      
In case anyone would like a mirror, I've tried to save the page as best I can here: http://mirror.xwl.me/why-so-few-computer-science-majors/peop...

I seem to have the only working mirror which includes his graph, though the link to the original on his site is: http://i2.wp.com/danwang.co/wp-content/uploads/2017/05/bache...

sonabinu 1 day ago 0 replies      
One of the biggest factors that made me withdraw from CS was learning programming in high school. I had a teacher who wasn't able to communicate effectively and my own ego being bruised badly when I struggled and struggled to make code work. My second act more than a decade later gave me confidence with better teachers and internships where I saw even seasoned programmers make mistakes and develop multiple iterations before being done. This was an eye opener!
k__ 2 days ago 0 replies      
Computer science has 2 parts many people consider hard.

Math and programming.

I was very bad at math, but I already learned programming in high school. This enabled me to do the programming classes without learning too much and put the saved time into math classes.

Also universities value students who are good at programming, because they are cheap labour for their projects. Seemed to me that only <50% of the students even wanted to do programming, so they had to think about other things to make the profs happy.

"Oh you will work for 3-6 months for me and all I have to do is let you graduate? I'm sold!"


khyryk 2 days ago 0 replies      
This is just my experience, but I know that many people weren't enthused by the fact that computer science classes were absolutely packed with people. Obviously this depends on the school, but if class sizes of 200+ persist even after the first few intro courses, it'll whittle down the number of people pursuing the major for a variety of reasons, such as poorer quality instruction and the inability of those who need a bit of help to get it when the line outside the TA's office is in the dozens.
jvanderbot 2 days ago 0 replies      
I work with electrical engineers, mechanical engineers, and mathematicians, and computer science students, all who program pretty darn well.

The programming is incidental to solving real world problems. If your job is to crank out code which envelops someone else's design solution, it doesn't really matter what courses you took in undergrad, as an undergrad education of any kind is just a very broad introduction to many things, in the hope that one will "stick" for employment.

walshemj 2 days ago 0 replies      
I'd disagree that CS or IT in general is "high" status, even in the USA.

Just because a few 17/18 year olds think iPhones are "kewel" does not mean that CS/IT/STEM jobs are high status.

Take the UK: no techie/STEM leader gets the really high honours (CMG, KCMG, GCMG), or as Yes Minister put it:

Bernard: Of course, in the service, CMG stands for Call Me God. And KCMG for Kindly Call Me God.
Hacker: What about GCMG?
Bernard: God Calls Me God.

gallerdude 2 days ago 1 reply      
Proud to be a freshman majoring in CS and also proud to be a nerd.
ensiferum 2 days ago 0 replies      
Maybe it's because compared to the other scientific engineering jobs and curriculums software engineering as a job is like finger painting with feces.;-)
pbui 2 days ago 1 reply      
From my experience, the main reason for the low number of CS majors is simple: most students don't know what Computer Science is. At the university where I teach, half of the CS majors arrived on campus not knowing they would major in Computer Science simply because they didn't know what CS was. Only after taking a first year engineering sequence where they sample different aspects of multiple engineering disciplines do many of these students realize CS is an attractive and interesting field to study.

Moreover, I have taught a variety of introductory to computing courses to non-CS majors (ie. humanities and business) and what I've found is that a number of students (particularly women) really enjoy the computing classes and say they wish they had majored or minored in CS, but they didn't know what it was until they took the class. A few actually do switch into a computing related major afterwards, though not necessarily CS.

This may seem counter-intuitive, but while many people know how to use computers and technology, many people don't actually understand how it works. Because of this, Computer Science is a mystery to most people and so they don't consider it. This is in part why I am excited about the CS4All movement at the K-12 level... simply exposing Computer Science or computational thinking will go a long way in attracting more people to the major.

Alternatively, another reason why you don't necessarily see a growth in CS majors is because programming is not restricted to Computer Science. Most science and engineering disciplines involve programming now and many curriculums will have programming courses. This is even true in humanities (ie. digital humanities) and business (ie. data analytics) where coding is becoming a desirable skill. If you had a deep interest in say economics and needed to develop some programming skills to simulate models or evaluate data, you can gain these skills and knowledge outside of the CS major and I think that is a good thing.

With this in mind, I think a lot of CS departments will need to consider the shift from being a "destination" major to a "service" major where a significant portion of the teaching load is to non-CS majors who want a minimal core, but not all of CS. A flat growth in CS majors does not necessarily mean a lack of computing or programming education in general.

Finally, I would say that in my department, we have seen record growth in the past few years (from 50 a few years ago to 150), and that has caused a number of problems. This is not restricted to our university, as noted in "Generation CS" from CRA:


So for us, the challenge is not growing the number of majors but how to manage the surge in a sustainable manner.

noobermin 2 days ago 0 replies      
A random meta-comment: this was actually a very well-reasoned piece. I have tired of blog posts that wax poetic about issues while relying on anecdata and intuition, whereas this piece actually looks at data and statistics. I've begun to avoid blog posts that discuss controversial topics because they often lack those things, but this one didn't disappoint. Great job.
agjacobson 1 day ago 0 replies      
No attempt was made to measure the number of productive dropouts, i.e. the author's bias was "count people with a degree."

Hypothesis, gedanken experiment: award a CS degree to all hackers who are able to support themselves doing something to do with computers, not necessarily as developers. The curve fills right in.

scandox 1 day ago 0 replies      
Because most intelligent people don't want to fight with configs, syntax and technical arcana. I think we should accept that there is a strange mix of intelligence and obtuseness common to the people that stay in this profession.
tobyhinloopen 1 day ago 0 replies      
I was rejected from college because I was too young. They told me to come back a year later.

I decided to apply for a programming job to earn some money and get back to college the year later. 8 years later... still didn't go back and no intention to.

mk89 1 day ago 0 replies      
One more reason is that... compared to many other disciplines (law, medicine, etc) you can find a job "just" by doing exercises and maybe with some open source contributions (or as a freelancer).
smcg 1 day ago 0 replies      
Lack of good coding/CS classes in high school. We're still not good at it.

Introduction to CS class in high school was what got me hooked in the first place.

dboreham 2 days ago 1 reply      
Hmm. Perhaps because it targets a profession that just isn't that big compared to dentists and doctors and accountants and lawyers. Almost every human needs one (each) of those folk. They don't generally need an algorithm expert.
foobar1962 1 day ago 0 replies      
Looking at the graph of graduates and the dip in CS from 2005 for a few years, I'd suggest the cause is that the potential CS students skipped doing their degree and went straight into a startup.
avenoir 2 days ago 0 replies      
Burnout, stress and the health issues resulting from a stationary way of life would have driven me away from CS 10 years ago had I known what I was getting into. It's a love-hate relationship for sure.
fulafel 2 days ago 2 replies      
What interesting things are going on in the field of CS from a scientific POV?
keithnz 2 days ago 0 replies      
I know a number of universities in NZ have non comp sci paths to programming careers through their engineering department. Maybe there's some course diversification going on?
atomical 2 days ago 0 replies      
I didn't want to take four years of calc and physics.
devwastaken 1 day ago 0 replies      
One point I see missing: Colleges are inefficient, have poor quality of life, cost too much, and will not teach you the skills you need.

Financially, there is no way I could afford a CS degree today. People like to make the argument that it's 'not much' because you'll get paid your entire tuition's worth in one year of work! But that's not true for everyone. In fact it's not true for many. Perhaps if you already live in Silicon-Valley-esque areas, maybe. But if you don't, Microsoft, Google and Amazon aren't waiting at the door for you. So what happens when you get a degree and you don't get a 'good' job out of it right away? You probably end up in retail, putting your entire paycheck into your tuition when you can't defer it anymore. Or you get a low-paying 'tech' job that burns you out of the field.

But even if you can afford it, can students go through with it? If any self-respecting developer went back to college now, after owning a house, having a family, y'know, a life, I think they'd drop out in the first few months, for what we would then count as perfectly understandable reasons. But both colleges and society treat students like vessels without need for things like privacy and ownership.

Colleges play the game of forcing students into classes that have nothing to do with their majors. For example, speech classes. Yes, these are nice to have, but I am an adult, and I should be able to choose how I spend my money. In the system today, you are at the complete mercy of what the college tells you to do. Don't like it? Too bad. No warranty, no returns, it's gone.

College tuition and overall living amenities are quite terrible in most locations. The state (public) university here charges the same amount as commercial apartments across the street for a dorm room you share with another student that is smaller than your kitchen. In fact, only one building even has a kitchen, so you're stuck with your meal plans, which fall during times when you have classes. Oh, and if you miss a meal, you don't get that money back.

If you're a male, and want to live near the college, you are at a disadvantage for rent. Girls are more preferred for renting, to the point where these places are girls-only, are cheaper, and are the closest to the campus. Cheaper as in, a few hundred less than a dorm room, and you actually get your own room.

On top of that, this college purposely built in fast-food restaurants, over-spends on decoration and marble counters for its cafeteria (and other places), and has teachers with superiority complexes who are generally incompetent. I don't think it's a bad choice to avoid that altogether. Even if you're working at Walmart for years in the cheapest apartments, it's still most likely better living conditions.

If colleges actually wanted to invest in education, there are a million ways they could be doing it. That's not to say all colleges are like this; community colleges can be better on cost and on teaching what you need, but students are never told about any of this. They are given a list of options: "Pick one".

mbell 2 days ago 0 replies      
I still can't figure out why prospective developers go to school for computer science, other than a lack of better options. It's the equivalent of someone who wants to go into civil engineering getting a degree in physics. It can certainly work, but it doesn't really line up with the end game. I guess it's mostly just the lack of computer engineering programs. But most of those seem to be rather off base in terms of preparing developers as well.
matchagaucho 2 days ago 0 replies      
Learning the science behind how my guitar was made didn't make me a better player. But it was useful knowledge.
jorblumesea 2 days ago 0 replies      
You don't need a CS degree to do most run of the mill engineering work. It helps, but definitely not required.
djsumdog 2 days ago 1 reply      
Anyone have an archived version? Looks like it's hosted on wordpress and the database got hammered.
bane 2 days ago 0 replies      
That's a really interesting chart. I'm "lucky" enough to have gone to school around the time of the first dot-com crash and remember the surge of people into CS around that time -- most looking for the kinds of huge paychecks for little work that were becoming legendary during that time period. It was surprisingly hard to find and connect with peers that were authentically interested in technology, computing and similar subjects.

After the dot-com crashes, and 9-11, and lots of the ridiculous paychecks dried up, people left the major in droves. I remember my university in particular went from having to turn away students from the CS major to having major recruiting events for CS in the span of just a couple years, with huge swings in faculty count and facilities.

One thing that really came out of all this I think, was a better understanding by the public that CS != programming major, and companies were looking for programmers. It was then perfectly acceptable to take an easier major that focused on programming and get the same job as the CS student who had to endure a much more difficult course load. There was also an effect in industry as people who endured even harder majors found they could simply make more money as programmers and had the mental tools to get up to speed rather quickly.

I remember distinctly at my school at least, that students self-sorted majors by perceived difficulty in a way not too dissimilar and not too much out of agreement with the famous xkcd "Fields Arranged By Purity" https://imgs.xkcd.com/comics/purity.png

IIRC the sorting went something like: any Liberal Art < any Soft Science < Information Technology < Information Systems < Biology < Software Engineering < Chemistry < Computer Science < Computer Engineering < Electrical Engineering < Physics < Math

My school peers all sort of used major as a badge of rank in social functions even though it was kind of useless and stupid. But I think it also connects to this chart, by all accounts I've heard, there's a vast oversubscription of Biology majors and the way the market handles this is to introduce more hoops or very very low pay. In other words, it's virtually impossible to get a great job as a biologist without getting a PhD in the field. Chemistry is similar. But the same isn't true in CS on up.

ericcumbee 2 days ago 0 replies      
My reason for going IT instead of CS was the amount of math.
apexkid 1 day ago 0 replies      
Haven't you been to India?
JDiculous 2 days ago 0 replies      
Here's why I switched out of a CS major (to math) in 2009 as a sophomore, after having been convinced since high school that I'd major in CS.

* Feeling that I can learn programming on my own, and wanting to experiment with something I wouldn't otherwise teach myself in college

Of course CS != programming, but in my head at the time I saw them as the same. I'd been teaching myself programming since I was a kid, and I knew I'd be able to teach myself whatever I needed to know if needed. Thus I felt that it made more sense for me to study something totally foreign to me that I wouldn't otherwise learn on my own.

* Fear of living out the rest of my life like the movie Office Space, everything being so damn predictable

This was before software engineering was considered "cool" or had any prestige. Being a software engineer and sitting at a desk all day in a gray cubicle writing enterprise software or whatever sounded boring as hell. As a socially awkward introvert with no other skills, I felt that majoring in CS would inevitably lead me down that comfortable but unfulfilling route, which frightened me. It wasn't just the fear of living a boring life, I just hated the predictability, knowing that I'd never be more than some boring code monkey with a decent salary (though not finance/doctor/lawyer money) and boring job (at the time I clearly knew absolutely nothing about entrepreneurship).

* Not feeling passionate about programming anymore, and feeling like I'd never be able to compete with all my classmates who are so damn passionate about it (and not caring anymore)

A lot of people in the field seemed to be super passionate about programming, coding all day and all night. I had gotten into it at 12 years old because I wanted to make video games, but as my interest in video games was receding, I realized I wasn't really as into it as I thought I was. I felt like there was no way I'd ever be able to compete with my competition who lived and breathed programming.

* CS is boring

This was a huge revelation for me. On one hand I loved programming and thought it was awesome that I could do what I considered fun and get school credit for it. But at some point I realized that although I love the programming part, I found the CS I was being taught mind-numbingly boring. I couldn't care less about sorting algorithms, binary trees, graph traversal algorithms, and most of the other abstract crap I was supposed to learn. I just didn't see why I had to know that stuff.

I've realized that I get super interested in this same material when the knowledge is directly necessary for something I'm trying to build, but otherwise I couldn't care less.

* CS is hard

I thought math was easier, which was honestly part of the reason why I switched to math. Given the obsession companies have with GPA, it was a logical decision.

* Fear of becoming like my classmates

I was a socially awkward introvert, and I wanted to be social and extroverted. I don't know how it is now, but at the time the CS department had the highest concentration of socially awkward introverted weirdos, not to mention the complete lack of women. I remember working in the CS lounge once and facepalming at cringey jokes. I didn't want to be around these losers lest I become one of them.

* Wanting to work on more important problems

I think the industry has a tendency of thinking that software engineering problems are the most important problems facing humanity right now.

For some reason I thought majoring in math would give me the toolkit to solve the most important problems in the world. Maybe I was too brainwashed by those movies where some genius in a flash of revelation scribbles some equation on a whiteboard.

* Wanting to make a ton of money

Software engineering money was good, but I didn't like how quickly and steeply the money topped out. I didn't want to enter an industry knowing that my compensation would cap out at $200k/yr (I don't think the tech giants were dishing out $300k/yr all-in comp packages to new grads back then, or if they were I wasn't aware). I wanted the sky to be the limit, which is why I became interested in finance (again, I wasn't aware of entrepreneurship at the time).


Of course going back I probably would've majored in CS because the interview process in the industry skews towards CS knowledge, and math eventually became boring and too abstract and isn't as relevant.

geebee 1 day ago 0 replies      
Great article. Thanks to Dan Wang for writing it.

My main difference in perspective with this article (I'm hesitant to call it a disagreement, because it's more a matter of perspective than any specific conclusion) is that I don't think people need to be consciously aware of market or societal forces and pressures to be powerfully influenced by them.

I think anyone who wonders why more people don't major in CS (as well as other fields claiming a "shortage") should read the chapter on pay and professions from Adam Smith's "Wealth of Nations". I don't think they need to read it and accept it without critical thought, just be aware of the perspective - that there are a huge number of inter-dependent factors, other than pay, that powerfully influence the desirability of a profession.

Here's a link:


This is all pretty intuitive - if you want people to take on tedious, odious, or dishonorable work, you may have to pay them well.

I actually think CS, and programming, may be a more unpleasant profession than people recognize. Huge open offices, back visibility, SCRUM meetings that deny long term thinking and autonomy, constant technology churn, age-related employment issues, and, yes, specialized visas that mean employers can rely on captive employees who can't leave the field and have limited rights to leave their employer, all these things do mean that CS may be a much less desirable field for people with academic talent. Also - while wages are high, this may be the Silicon Valley effect. A job that pays an average of 120k, but pays this consistently in smaller, less expensive cities, may be more desirable than a job that pays 150k on average, but where 95% of the employment is concentrated in a place where the median price of a house is $1.2+ million.

Just for a dose of reality, a registered nurse in San Francisco earns more, at the median, than an application developer. That's a-ok by me! Nursing is a tough job. But if someone prefers to do good as a nurse and make more money than sitting around fixing bugs in the latest javascript framework, come on, that's perfectly rational!

I really don't think young people need to have analyzed this to be influenced by it. There's a reason we call it the "invisible hand".

In short, if it is rational to avoid this field, that's probably enough to conclude that these are factors in deterring workers from it. I don't think you need to prove hyper-awareness specifically of these issues.

Keep in mind, people who are capable of learning to code and work in software development teams do have a high level of capacity for work and study. They have a lot of options. I'm not sure that software development, as a field, is all that competitive with the other things they can do.

In short, people may be behaving very rationally by avoiding this field.

skybrian 2 days ago 0 replies      
Because: Error establishing a database connection.

(Seems to be back now.)

erikbye 1 day ago 3 replies      
Perhaps "very smart" was an incorrect assessment on your part.
On Conference Speaking hynek.me
374 points by danielh  18 hours ago   113 comments top 21
Touche 11 hours ago 3 replies      
I've not done nearly as many conference talks as many people here (I do about one a year) but just for entertainment here is how it usually goes for me:

1. Come up with a proposal, send it out to as many conferences as I can find.

2. Wait.

3. Most reject it. Sometimes (often, actually) all of them reject it. Go back to step 1 (you lose 3 or 4 months while waiting, not knowing if any will accept your proposal).

4. If one of them accepted, be overjoyed!

5. Tell myself I'll start working on the talk super early so I'm extra prepared.

6. Actually not start until 1 to 1 and a half months before the conference.

7. Be super stressed. Not get anything else meaningful done.

8. Day of the talk I am angry at myself for agreeing to do it when I get little out of it.

9. Do the talk, it goes way better than I expected! I didn't totally embarrass myself and people seemed engaged.

10. It's over! Oh my god, it's over! Thinking of all of the things I can get done now, I'm never giving another talk and putting myself through that again.

11. 3 or 4 months pass and I see people I know are giving talks and I get the itch to do it myself again... back to step 1.

ethomson 17 hours ago 2 replies      
On the whole, this is excellent advice. The introduction is completely true for me: I give many talks every year and it is, without question, a _lot_ of work. I suspect that everybody is a bit different as to how they prepare, but like the author, I do the cold rehearsal in my hotel room a half-dozen times (at least) before I actually go to give the talk.

I also break my talk into logical chunks - say five or six sections. I practice each of those individually, timing them. This gives me an average for how long each section takes, so I have a schedule written down. This lets me know how far over or under my time allotment I am so that I can adjust on the fly, either adding some additional explanation to some areas or subtly truncating something.

I always know my "bail out" slide - if I end up running out of time, what's the "thank you!" slide number? If you simply type in that slide number in PowerPoint or Keynote, it will jump to that slide without fanfare. Don't ever tell your audience that you ran out of time to get to all your material, or flip through the slides to the end that they won't get to see. They'll feel like they were ripped off. (Also, make sure to structure your talk so that the special bonus material is at the end, so they're _not_ actually ripped off.)

mxstbr 17 hours ago 6 replies      
Having spoken at ~20 international conferences I'm pretty certain people underestimate the work that goes into giving a great talk you'll remember.

This also bugs me when people say "Oh, that person's given this talk at that conference before". Preparing a good talk is a lot of work, and after that's put in, why shouldn't you be allowed to give that talk more than once?

Also, only very few people watch conference videos. If you give the same talk a dozen times, by the 10th time maybe a handful of people in an audience of hundreds will have seen it before. I'm honestly surprised conferences still record the talks, because I'm fairly certain it's not worth the money for them. (There are outliers, when somebody gives the most amazing talk ever and it gets watched millions of times, but how often does that happen?)

I'd much rather conferences invest money into a better experience for the attendees and speakers.

yomrholmes 5 hours ago 0 replies      
I spoke at my first conference about two years ago, and it was a huge learning experience. Here's how I'd do it again, if I did it again:

1. Expect this gig to take a huge amount of time. As such, make sure that you allocate 1-2 weeks of full time work to prepare. Will it take this much time? Probably not, but it's good to prepare anyway and know what you're diving into.

2. Speak at a conference as part of a much larger communications strategy. What does that mean? It's waaay easier to speak about something that you're already talking about on your blog, with customers or with your colleagues. Then, you can just develop that existing conversation into something that works well in front of a live audience. Developing an idea is a lot easier than creating an idea from scratch.

3. Test ideas first on your blog, HN or Twitter. Generally, what people want to hear and engage with at a conference is similar to what people want to read and engage with online. So, write a bunch of articles and share a bunch of articles, and see what people like from that.

4. Practice, practice, practice. Talking at a conference is like giving a performance. Would some violin player just wing it on stage? Definitely not, unless they have 10,000 hours of experience. So, practice giving your talk at home in front of the mirror. Hire someone to watch you while you practice. Per point one, this stuff takes time, and like any piece of work, you need to develop your skills.

munns 33 minutes ago 0 replies      
I speak as part of my job and have spoken at probably 20+ events that are 3rd party to my employer in the past 2 years. Currently I am averaging about a conference a month in 2017.

I thought this was a really great list. Some big ones I like to call out:

#9: Travel - This gets me more than most things. I have on occasion bumped into other speakers completely unprepared for their travel, or for things that might go "boom", such as laptop failure, presentation corruption, or display adapters not existing (or breaking, which is harder to prepare for). And my personal favorite: Immunity Boosters. Hell yes. A coworker turned me on to these two years ago after coming down with the plague after speaking at a few too many events in a short period. Now it's a must for me, and whether it's a placebo effect or not, I haven't gotten sick while traveling/speaking since.

#10: Showtime - No one is born a great speaker. Flat out no one. I know people who speak weekly at public events and they used to suck at it too. Don't be afraid/stress too much before a talk. That said, I have seen people bite off more than they can chew and give a first talk at a major tech event such as AWS's Re:Invent where rooms average 1k people. If you're going to choke at your first event, don't have it be that big/visible of a one. Start with local meetups!

#5/6: A big one that I always recommend is peer review your content before you even start dry runs. Presentations often live longer on sites such as Slideshare than they do in the minds of those who have seen them live. It is in sites like Slideshare that your spelling, grammar, and even design issues will stand out the most. Get someone who is detached from your presentation to read through it, maybe even two people, take that feedback and then move forward. For me, my wife who was a journalism major reviews almost all of my content despite not knowing much about the technical nature.

jwildeboer 16 hours ago 1 reply      
I admit I am one of those conference speakers that doesn't prepare a lot. I tend to discuss the topic beforehand with the organisers, go inside the venue to get a feel for the audience, go on stage and just deliver. IMHO it is all about creating a bond with your audience and interacting with them as spontaneously as possible. Works for me, but I know it's not for everyone.

I also rarely use slides nowadays. That helps a lot. Sometimes I use a whiteboard. The way I deliver keynotes and presentations is maybe best summed up by (and was definitely inspired by) this article from the late Pieter Hintjens:

http://hintjens.com/blog:107 Ten Steps to Better Public Speaking

nickjj 15 hours ago 1 reply      
Glad to see I'm not the only one who relies on scripts.

After having recorded 36 hours worth of video training courses, I've written over 150,000 words of scripts because explaining technical information in a concise way usually depends on thinking about how to word your sentences beforehand.

I'm really envious of people who can wing in-depth tech talks amazingly well, but at the same time I'd also be surprised if those people even exist. Winging it "decently" and winging it "amazingly well" are very different things.

shidoshi 11 hours ago 0 replies      
Lots of "I give a lot of talk" folks on here. I'm a listener, and I just want to thank all of you. Being brave and sharing your knowledge to help empower others is no small thing. So, again, thank you.
porterde 7 hours ago 0 replies      
Great article. Reminds me of Damian Conway's great conference talk on giving tech presentations - that one changed my approach on preparing for talks forever. https://youtu.be/W_i_DrWic88 and http://damian.conway.org/IBP.pdf are the notes. Well worth watching.
brightball 7 hours ago 0 replies      
I speak at a lot of local meetups and from one of those got invited to speak at a pretty big conference (M3AAWG). It was really intimidating since the speakers consisted almost entirely of Facebook, Google, Comcast, Microsoft, Rackspace...and somehow me.

I enjoyed it but was really nervous and had some serious imposter syndrome going on. I generally like giving talks but for me, it was a very different experience knowing that you were speaking for people who were paying to be there. The speaking invite allowed me to attend the conference for free though and I learned a lot.

My talk was basically a practitioner's experience of using/implementing a lot of different anti-phishing/anti-fraud techniques that people were deeply specialized in throughout other parts of the conference. I had what I hope, for others' sake, was a very unique experience of combating a lot of fraud and seeing things come from all angles, where a lot of larger targets will tend to deal with different parts of attacks in entirely different departments. I couldn't go deep on anything, but mainly got to share my experience.

AndrewKemendo 10 hours ago 0 replies      
This guy basically prepares his "Hour" the same way a stand-up does, though without the all-crucial audience feedback you need for comedy.

That's a great way to do it if you are focusing on one specific thing at a year turnaround rate.

If however you are asked to present a wide range of topics then it doesn't work quite the same and you need to be better at improvising and speaking off the cuff.

I probably speak 15 times a year on 3 different topics:

Augmented Reality

Applied Machine Learning


Each time I am asked to speak, I pull slides or structure from previous talks, and then update them with the latest from the field or my own constant research/learning.

Generally speaking though I don't start prep more than a week in advance - which is different than most people I think because I have so much experience here.

The day before, I will spend a few hours going through a routine where I just present several times to my hotel room. If it's an hour long presentation I won't typically walk through the whole thing each time, just the transitions usually. Once done I'll distill the points I'm making into bullets and write them onto a notecard. If there is a podium I'll use the notecard, if not then I just gotta memorize the bullets and go from there.

The reality here is also that a lot of conference speaking is about building momentum from previous talks and building relationships with the conference organizers. You need to have a great relationship with the organizer because things will go wrong and being able to show you can go with the flow is important.

Almost as important as what you present is being able to present it. Being prepared for contingencies (slide backups on dropbox, thumb drive, laptop with HDMI and VGA), knowing how to wear a pin mic, talk into a handheld mic, knowing how to use a clicker, doing pre-show prep for wonky videos or sound issues where necessary, know how to answer questions, give space for panel members to talk etc... are all parts of the equation that make you a good speaker or not and thus get invited to speak or not.

Most people miss all of these things or ignore them assuming that the staff has everything covered. Generally speaking conference staff are run ragged so anything you can do to help make their lives easier is appreciated and will be remembered.

Samathy 16 hours ago 0 replies      
Great blog post and certainly a lot to take away.

I've spoken quite a few times at several different conferences/events and love it. However, the thing I struggle with most is coming up with a topic. I find it incredibly hard to think of something I believe people will find interesting. I expect this is simply down to lacking industry experience and not having spent extensive time working with any particular language/tool.

simonswords82 9 hours ago 0 replies      
I've been running a software company, working on and managing various software projects, and launching/running software products for 10+ years.

The timing of this article is excellent as I was just about to start the search for conferences I could share some of my knowledge with. I've spoken at universities, colleges, and small business conferences a bunch of times and my talks are usually well received.

However, what I'm still not sure about is where to find conferences with audiences who might be interested in what I have to say.

itaysk 6 hours ago 1 reply      
I'm curious about travel arrangements: I have spoken at many, many events locally where I live, but never abroad. Thinking of proposing a talk for a conference abroad, is it acceptable to expect them to cover travel expenses? (Not talking about pay for the talk itself, just flights and hotel)
tezza 17 hours ago 12 replies      
Are conference speakers paid ?

How do you keep earning money when giving talks all the time ?

Do they pay the airfare and accommodation ?

Does their work sponsor them or do they take personal holiday ?

baby 16 hours ago 0 replies      
I haven't given as many talks so I can't really contribute, but I see a very different pattern already: I tend to apply when I already have researched a topic and have some slides. Maybe I should re-think my approach :)

Also I would never drink coffee (or any caffeinated drink) before a talk, and rather wake up late to get a goooood night of sleep. Also eat really light.

> If you watch the talk, you may notice that I don't do Q&As. That has two reasons

Never really understood Q&As after the talk. We can always have a private discussion or use different ways to ask questions.

zaiste 14 hours ago 2 replies      
Fantastic article and wonderful tips. You could package it as an e-book, and maybe even sell it ;)

A shameless plug: I'm working on a side project which aims to help tech speakers get the most out of speaking engagements: https://eventil.com/for/speakers

htormey 9 hours ago 1 reply      
Does anyone maintain a directory of conferences that accept proposals grouped by technology?
sanswork 17 hours ago 3 replies      
I'm quite envious of conference speakers. I would love the experience but I never have any solid ideas that I think I could turn into a good talk.
jasonlotito 11 hours ago 1 reply      
A lot of good advice, but I personally disagree with the slides not standing on their own. For me, slides + speaker notes should be able to stand on their own. It requires extra work and effort, but I believe the results are generally better because people can then consume the material the way they want.

However, like your guidelines, this is my personal one.

juskrey 16 hours ago 5 replies      
If a speaker should specifically train him/herself for a talk, the talk is not worth listening to.
Unicode is hard shkspr.mobi
333 points by edent  2 days ago   193 comments top 31
masklinn 2 days ago 4 replies      
> The £ is printed just fine on some parts of the receipt!


Assuming the printer uses ESC/POS[0] (which is likely), the codepage is part of the printer's state. To change the code page, the driver sends a specific ESC command (<ESC t x> aka <1B 74 XX> where x/XX is the desired codepage byte) (none of which is "UTF8" incidentally) and you can change the codepage before each actually displayed character.

So it's the driver software fucking up and either misencoding its content (most likely) or selecting the wrong codepage. The £ might be displayed correctly on the right side because it's e.g. hard-coded (properly encoded) while the product label is dynamic and when that was added/changed no care was taken with respect to properly transcoding. The printer absolutely doesn't care, it just maps a byte to a glyph according to the currently selected codepage.

[0] ESC because the protocol is based on proprietary ESCape codes[1], POS because the entire thing's a giant piece of shit

[1] https://en.wikipedia.org/wiki/Escape_character#ASCII_escape_...
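To make the printer-state idea concrete, here is a minimal Python sketch of the bytes a driver would send. The page numbers (0 = PC437, 16 = WPC1252) follow Epson's common assignments but other vendors differ, and the receipt text is made up:

```python
# Sketch of ESC/POS code page selection (Epson-style assignments
# assumed: page 0 = PC437, page 16 = WPC1252; other vendors differ).
ESC = b"\x1b"

def select_codepage(n: int) -> bytes:
    # ESC t n -> select character code table n
    return ESC + b"t" + bytes([n])

def pound(page: int) -> bytes:
    # The pound sign is byte 0x9C in PC437 but 0xA3 in WPC1252,
    # so the right byte depends on the currently selected page.
    return select_codepage(page) + {0: b"\x9c", 16: b"\xa3"}[page]

receipt = pound(0) + b"9.00  COCKTAIL\n"
```

Send a byte that doesn't match the selected page (or forget to select one) and you get exactly the kind of substitution on the receipt.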

LeonM 2 days ago 3 replies      
My name is Léon, with the acute accent on the e. I usually leave the accent in when I need to enter my name somewhere digitally, even though in about 50% of the cases it's not handled correctly. It usually ends up as L'on.

Even in the travel world it goes wrong all the time. You'd expect large international travel organisations (yes, talking to you Tui!) to be able to handle UTF8 names since many of their customers and locations will have special characters, but no. I was once nearly refused boarding on an airplane because the name on my ticket did not match the one on my passport...

d2p 2 days ago 3 replies      
I've seen this a lot too; but it's not the weirdest thing I've seen on a receipt... We once ate at The Boot Room at Cheshire Oaks and when we got the bill, the numbers didn't add up! (I don't add these things up but since the things we ordered were fairly round numbers and should've been just below £20 and the bill was just over, it was obvious something was fishy).

I totalled the numbers up again and the total was exactly £1.50 less than the total shown on the bill! My wife (having no faith in my basic adding skills) pulled out her phone telling me "don't be silly" and added them up to get the same result as I had.

We asked the waiter about it, who disappeared off to get his own calculator.. He added things up, looked confused and then took it off to the manager. She then repeated the process on the calculator and also looked confused, unable to explain what had happened. They gave us £1.50 in cash, apologised and then kept the receipt (I guess they didn't want us posting that on twitter!).

To this day I've no idea what happened. You could suggest that some programmer somewhere is getting rich off this, but it seems rather unlikely to me. I'd really love to know what the cause was (and whether the manager ever reported it further up the chain; because this seems like a rather serious error to me.. how often does it happen? is it always £1.50? did the issue get found/fixed?).

arielm 2 days ago 5 replies      
I'm pretty sure the reason only some of the currency symbols aren't correct has to do with the database.

If you think about it, the item names are most likely coming from a database that just might not be in the right encoding (latin1 is still the default in MySQL I think). The symbols that do work are probably hard coded into the receipt's template, and hence don't have this problem.

Why a shop owner would store the price and currency symbol in an item's description is beyond me, but having worked in the POS world and seeing what shop owners do with their items I'd definitely believe it.
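The latin1-default theory is easy to reproduce; a sketch of the classic stored-as-UTF-8, read-back-as-latin1 path (the item name is invented):

```python
# Mojibake sketch: text written as UTF-8 but read back over a
# latin1-configured database connection.
item = "£9 Cocktail"
stored = item.encode("utf-8")        # b'\xc2\xa39 Cocktail'
misread = stored.decode("latin-1")   # what a latin1 connection hands back
print(misread)                       # Â£9 Cocktail
```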

bhaak 2 days ago 2 replies      
"The is printed just fine on some parts of the receipt!"

That's probably a hint that it isn't the printers fault.

I would guess that some other system that is used to enter what's available on the menu is using CP 437 and somewhere an encoding step (CP 437 to Unicode) is missing so we get the ú character.

I wonder what character we would get if it was a "€5 cocktail" instead.
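These single-byte mixups are easy to poke at in Python; which direction the mismatch actually ran in the receipt's pipeline is only a guess:

```python
# A Latin-1 pound byte rendered under CP437, plus a CP1252 euro
# byte for comparison, assuming the same kind of mismatch.
print("£".encode("latin-1").decode("cp437"))   # ú
print("€".encode("cp1252").decode("cp437"))    # Ç
```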

cbr 2 days ago 2 replies      

 In order to maintain backwards compatibility with existing documents, the first 256 characters of Unicode are identical to ISO 8859-1 (Latin 1).
This isn't true in a useful sense. It does look like it's true in Unicode codepoint space [1] but in any specific encoding of Unicode it can't be the case because latin1 uses all 0-255 byte values. For example, in utf8 it's only an exact overlap for bytes 0-127 (7 bit ascii).

(Though maybe this means you could convert latin1 to utf-16 by interleaving null bytes with the latin1 bytes?)

[1] https://en.m.wikipedia.org/wiki/Latin-1_Supplement_(Unicode_...
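The parenthetical checks out; a sketch, relying only on the codepoint overlap:

```python
# Latin-1 -> UTF-16LE really is just interleaving NUL bytes, because
# Unicode's first 256 code points coincide with Latin-1.
def latin1_to_utf16le(data: bytes) -> bytes:
    out = bytearray()
    for byte in data:
        out += bytes([byte, 0x00])  # little-endian: low byte first
    return bytes(out)

raw = "café £1.50".encode("latin-1")
assert latin1_to_utf16le(raw) == "café £1.50".encode("utf-16-le")
```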

TazeTSchnitzel 2 days ago 1 reply      
> So ASCII gradually morphed into an 8 bit language - and that's where the problems began.

Oh sweet summer child. No, ASCII itself was a problem. Before we had 8-bit character sets, we had 7-bit character sets:


This is why IRC considers [\] and {|} to be lowercase and corresponding uppercase letters respectively: it was made by a Scandinavian, and in their character sets, some accented characters occupy the same positions as ASCII [\]{|} would.

The story of character sets is the story of evolving common subsets: ISO 646 within ASCII, ASCII within extended ASCII (or at least, some variants thereof), Latin-1 within the Unicode BMP, the Unicode BMP within Unicode.

Oh and by the way, before we had 7-bit character sets, we had 6-bit (e.g. IBM BCD). And before those, we had 5-bit (e.g. Baudot code). And before that, we had different telegraph codes (variations of Morse code)
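The IRC behaviour is codified in RFC 1459's casemapping; a sketch:

```python
import string

# RFC 1459 casemapping: {}| are the lowercase forms of []\ -- a
# leftover of the Scandinavian 7-bit character sets.
RFC1459 = str.maketrans(
    string.ascii_uppercase + "[]\\",
    string.ascii_lowercase + "{}|",
)

def irc_lower(nick: str) -> str:
    return nick.translate(RFC1459)

print(irc_lower("Olsen[AFK]"))  # olsen{afk}
```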

jbg_ 2 days ago 0 replies      
The code that sends the price to the printer was written with currency symbols in mind, and selects the correct code page before sending the code for the symbol.

The code that sends the "product name" was not, and doesn't correctly translate its input to the code page that the printer is using.

When I made a homemade POS system for a bar, years ago, I ran all the printers in bitmap mode and rendered the receipts in software, to sidestep this and other problems. The performance was still acceptable, but I think the reason many POS systems don't go this route is compatibility; they have to work with many models of printer and bitmap support is not universal, and even among those printers that support it I am not sure if it is standardised.

anonymfus 2 days ago 1 reply      
>Each language needed its own code page. For example Greek uses 737 and Cyrillic uses 855.

Cyrillic is not a language, it's an alphabet/script. Codepage 855 was used for Cyrillic mostly in IBM documentation. In Russia codepage 866 was adopted on DOS machines, because in codepage 855 characters were not ordered alphabetically.

>Even today, on modern windows machines, typing alt+163 will default to 437 and print ú.

It's only true for machines where the so-called "OEM codepage" is configured as codepage 437. But in Russia it's codepage 866 by default, so typing alt+163 prints г.
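The same keystroke landing on different glyphs is just the same byte under different OEM code pages:

```python
# Byte 163 (Alt+163) under a few legacy code pages.
b = bytes([163])
print(b.decode("cp437"))    # ú  (US OEM default)
print(b.decode("cp866"))    # г  (Russian OEM default)
print(b.decode("latin-1"))  # £
```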

kakwa_ 2 days ago 0 replies      
>8859-1 defines the first 256 symbols and declares that there shall be no deviation from that. Microsoft then immediately deviates with their Windows 1252 encoding.

>Everyone hates Microsoft.

If only it were just that... Microsoft has even worse encoding schemes. The ugliest I encountered was an "encoding" based on glyph indexes in TTF files.

Conversion is a pain in that case, and is uncertain... it also leads me to not so beautiful code...


Even between Microsoft products (namely Office on Mac and Office on Windows), this scheme is not handled properly (the string is incorrectly handled as an UTF-16LE string on Office on Mac).
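The Windows-1252 deviation quoted at the top of this comment is concentrated in the 0x80–0x9F range, which ISO 8859-1 reserves for C1 control codes but Windows-1252 reassigns to printable characters. A small sketch:

```python
# Windows-1252 puts curly quotes (and the euro sign, dashes, etc.)
# in 0x80-0x9F; ISO 8859-1 treats those bytes as control characters.
smart = b"\x93quoted\x94"
print(smart.decode("cp1252"))         # “quoted”
print(repr(smart.decode("latin-1")))  # '\x93quoted\x94' -- C1 controls
```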

kps 2 days ago 1 reply      
Some of what's written here is not quite right. ASCII was developed in cooperation with ISO, ECMA (European Computer Manufacturers Association), BSI (British Standards Institution), and CCITT (International Telegraph and Telephone Consultative Committee), and it was clear from the start that there would be national/linguistic versions; this was the origin of code pages, to use the IBMism. ISO 2022 / ECMA-35 had defined the means of designating character sets (both 7-bit and 8-bit) by 1971, a decade before the IBM PC chose to ignore the standard.
Symbiote 2 days ago 2 replies      
The receipt also has the time in 24-hour format, then a zero-padded AM/PM time a couple of lines below. Shoddy software, with no attention to detail.

In Britain, it would be easy not to notice the incorrect symbol when setting up the machine. Elsewhere in Europe it ought to get noticed quickly, but I occasionally get receipts in Denmark where the shop's address (or even name!) is corrupted, like "Skrdderi, Lvstrde" instead of "Skrædderi, Løvstræde".

chmaynard 2 days ago 0 replies      
The salient property of all flavors of ASCII is that each character fits nicely in an 8-bit word. This word size was commonly used in computer memory at the time, and memory was very expensive.

My first programming job was writing software for the MUMPS operating system on a DEC PDP-15, which had an 18-bit word size. PDP-15 MUMPS used 6-bit ASCII (which was uppercase only) because three characters fit nicely in an 18-bit word.
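Packing three 6-bit codes into an 18-bit word is simple bit arithmetic. An illustrative sketch (the actual 6-bit character code assignments varied by system, so the values here are arbitrary):

```python
# Pack three 6-bit character codes into one 18-bit word, and back.
def pack18(a: int, b: int, c: int) -> int:
    assert all(0 <= x < 64 for x in (a, b, c))
    return (a << 12) | (b << 6) | c

def unpack18(word: int) -> tuple:
    return ((word >> 12) & 0o77, (word >> 6) & 0o77, word & 0o77)

assert unpack18(pack18(1, 2, 3)) == (1, 2, 3)
assert pack18(63, 63, 63) == 0o777777  # all 18 bits set
```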

sixothree 2 days ago 0 replies      
The problem here is not the printer.

I'm willing to bet the problem here is that the descriptions of the items are stored in the database as ascii and not unicode.

garyclarke27 1 day ago 0 replies      
Interesting article. Reminds me of a recent experience when I registered a few companies, one of which included "R&D" in the name. No problem for UK Companies House: online registration within minutes. But it has been surprising how much grief the & character causes with other systems. Banking systems refuse to accept it; they only accept a very limited set of characters for names. Should have used "RnD", like AirBnB. It's ridiculous, though, that gymnastics like this are still required in 2017! In the EU most banks are relaxed about account names, they just rely on IBANs, but in places like Serbia they are annoyingly anal and reject payments if the name does not match exactly.
jorangreef 1 day ago 0 replies      
We don't always take the time to understand Unicode.

I wrote the following article for Node.js to try and clarify the intersection of Unicode and filesystems, especially with regard to different normalization forms, and using normalization only for purposes of comparison:
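The normalize-only-for-comparison idea can be sketched with Python's `unicodedata` module:

```python
import unicodedata

# The same rendered 'é' can be one code point (NFC) or two (NFD).
composed = "caf\u00e9"      # precomposed é
decomposed = "cafe\u0301"   # 'e' + combining acute accent

assert composed != decomposed  # raw code-point comparison fails
# Normalize both sides only at comparison time; store what you got.
assert unicodedata.normalize("NFC", decomposed) == composed
```

This matters for filesystems in particular because, for example, HFS+ stores names in a decomposed form while most Linux filesystems store whatever bytes they were given.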


jrochkind1 2 days ago 0 replies      
Character encoding is hard. Unicode is not hard, at least not that hard, certainly compared to character encoding before Unicode. Unicode is the solution, not the problem. The problem here is that something got confused about which character encoding was in use somewhere -- debugging this is hard, but the best solution is almost always "just make it a Unicode encoding, ideally UTF-8, at every stage of the pipeline you can".
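The kind of confusion described here is easy to reproduce: decode UTF-8 bytes with the wrong code page at one stage of a pipeline and you get classic mojibake (a minimal sketch):

```python
# Classic mojibake: UTF-8 bytes decoded with the wrong code page.
# The two-byte UTF-8 sequence for é (0xC3 0xA9) becomes two characters.
data = "café".encode("utf-8")
print(data.decode("cp1252"))  # cafÃ©
print(data.decode("utf-8"))   # café  -- the right decoder recovers it
```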
bencollier49 2 days ago 1 reply      
Not enough love for code page 437! If we had proper support for it I wouldn't have so much trouble displaying proper smiley faces in the console. Linux, I'm looking at you.
git-pull 2 days ago 0 replies      
I am author of a CJK language library for python called cihai (https://cihai.git-pull.com).

So as part of this, and after years, I eventually realized the only way to make a scalable tool to look up Han glyphs is to build upon UNIHAN: the Unicode Consortium's Han unification effort.

I write about Unicode and UNIHAN in my own words here: http://unihan-etl.git-pull.com/en/latest/unihan.html

The challenge with Unicode and hanzi is there are many historical and regional variants to a single source Han grapheme of the same meaning.

So, each glyph or variant gets its own codepoint, or number, reserved. In fact, this year, when Unicode 10.0 is cut, the new CJK Extension F will introduce 7,473 characters (http://unicode.org/versions/Unicode10.0.0/).

Thankfully, my only task is to make the database accessible in as friendly a way as possible. Which is actually a mammoth task: there are over 90 fields used to denote dictionary indices, regional IRG [1] indices (the IRGs are national-level workgroups that convene to add new characters), phonetics (Mandarin, Cantonese jyutping, and more).

The fields are dense. They pack in objects that are most easily split up by regular expressions. https://github.com/cihai/unihan-etl/blob/master/unihan_etl/e...

So a UNIHAN field for kHanyuPinyin (http://www.unicode.org/reports/tr38/#kHanyuPinyin):

U+5364 kHanyuPinyin 10093.130:x,l 74609.020:l,x

U+5EFE kHanyuPinyin 10513.110,10514.010,10514.020:gng

U+5364 has two values (separated by the space), then within each value a list of items on either side of the colon (:), separated by commas.
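That structure splits cleanly with ordinary string operations. A sketch (the pinyin readings in the examples above lost their diacritics in transit, so the sample reading here is a hypothetical stand-in):

```python
def parse_khanyupinyin(field: str) -> list:
    """Split a kHanyuPinyin value into (locations, readings) pairs.

    Entries are space-separated; within each entry, comma-separated
    dictionary locations sit left of the colon and comma-separated
    pinyin readings sit right of it.
    """
    pairs = []
    for entry in field.split():
        locations, _, readings = entry.partition(":")
        pairs.append((locations.split(","), readings.split(",")))
    return pairs

print(parse_khanyupinyin("10513.110,10514.010,10514.020:gong"))
# [(['10513.110', '10514.010', '10514.020'], ['gong'])]
```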

You may wonder where this all comes from. The effort is global, but a good deal of it is thanks to people who took their time to contribute it, organizationally or personally. Take a look in the descriptions of the fields at http://www.unicode.org/reports/tr38/ for bibliographic info.

In any event, the hope is to create a successor to cjklib (https://pypi.python.org/pypi/cjklib) and have datasets for CJK available in datapackages (http://frictionlessdata.io/data-packages/). That way, sources of data are sustainable and not tied down to any one library.

[1] https://en.wikipedia.org/wiki/Ideographic_Rapporteur_Group

faragon 2 days ago 0 replies      
"The printer doesn't know which code page to use, so makes a best guess."

The printer probably uses a default code page, and that's all. BTW, Unicode is not hard. The "hard" part is reading the device manual and implementing the encoding conversion properly. Also, where no character-set selection is possible, in most cases you can use the printer in graphics mode.

asimpletune 2 days ago 0 replies      
What if, in the distant future, the actual spelling of people's surnames drifts due to normalization like this? I'd liken it to immigrants having their names transliterated to the Latin alphabet at Ellis Island, or something like that.
RedCrowbar 2 days ago 1 reply      
Łukasz Langa recently gave a PyCon talk [1] on the subject.

[1] https://www.youtube.com/watch?v=7m5JA3XaZ4k

kris-s 2 days ago 0 replies      
Related PyCon talk about this: https://youtu.be/bx3NOoroV-M
kevin_thibedeau 2 days ago 0 replies      
> Unicode was born out of the earlier Universal Coded Character Set

Unicode was started independently and later harmonized with UCS.

callesgg 2 days ago 0 replies      
Some parts of Unicode are hard, like many characters looking almost exactly alike.
kmicklas 2 days ago 1 reply      
Unicode isn't hard, dealing with software that doesn't use it is.
k_sze 1 day ago 0 replies      
teddyh 2 days ago 1 reply      
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) from 2003:


devoply 2 days ago 5 replies      
Unicode is not hard. What's hard is the conversions between all these different systems. That's the hard part. Unicode is simple enough to be done flawlessly as long as you stick to unicode for everything.
a3n 2 days ago 0 replies      
In ancient times we tried to build the Tower of Babel, that would reach to God and Heaven. God said "Nah," made us all speak different languages and scattered us around.

Now it looks like we're up to our old tower building ways again, except this time with computers and data. So God smirked and gave us Unicode.

They Basically Reset My Brain theplayerstribune.com
418 points by aburan28  2 days ago   138 comments top 24
bsenftner 2 days ago 3 replies      
I am a geek, always have been. But my dad was an all-state football player and a Golden Gloves boxer in the Army. Despite my "best efforts" I was muscular, and was (for real) forced to play football against my wishes. I played for 6 years, and finally stopped after an injury that crushed a vertebra in my lower spine. Healing from that injury, I went from a weight of 210 to 135, and required therapy to learn to walk again. It took me 10 years after that before I was physically active again. Reading this article is reading symptoms I have. I think I need to go to this clinic...
soneca 2 days ago 5 replies      
I find it interesting that at the end he doesn't blame American football itself for the negative long-term effects on his health, as I thought he was heading toward. He ends up (spoiler alert) coaching football.

I myself do not have a strong opinion on the ethics of building a billion-dollar business around a game that is so dangerous to everyone who plays it seriously, from high school to the pros. I am instinctively against it, but as long as the issues are clear and transparent and everyone involved has full access to the information needed to make their own informed choices, it seems acceptable.

That said, it is a very brave and insightful tale of what these personal struggles are like. Very well written, too. And glad to know that there are effective treatments out there that can help with this kind of health problem.

ErikAugust 2 days ago 2 replies      
I'm 33 and played sports my whole life.

This includes:

- Hundreds of competitive basketball games - youth, AAU, high school, etc.

- Thousands of running miles - including completing (and winning) trail/mountain ultra marathons

- Dozens of soccer games at mixed levels

- Training and sparring at a boxing gym for 4-5 days a week for a year

And none of this has caused any major injury.

The two years I played high school football? A separated shoulder and a torn MCL. Not to mention having my bell rung many times. And the time I actually spent playing in football games would add up to under an hour.

falcolas 2 days ago 2 replies      
As a person who works with their mind all day long, concussions scare me. They scare me more than injuries to my hands do. I took a good hit to my head in a really slow lowside on my motorcycle (I, to this day, don't remember the incident itself, but I'm pretty sure it was a combination of loose gravel and rolling onto the throttle early). For months after that hit, I had more trouble than usual understanding complex concepts; had trouble building that mental model which lets us work.

Concussions result from your brain rattling around inside your skull like Jello. It's hard to write them off as minor inconveniences when you look at it like that. I recommend watching the "beer bottle to the head" episode of Mythbusters; those slow motion shots are scary (though exaggerated).

nashashmi 2 days ago 2 replies      
Key takeaway:

> They had all kinds of neuro training exercises and routines they put me through, but a lot of it was centered around meditation and intense emotional therapy sessions. The exercises and therapy were to stimulate the parts of my brain that were running slow, and the meditation was to slow down the parts of my brain that were going a mile a minute.

I guess that is how they reset his brain. I need the same thing. Three years and still recovering, I have counted five bottlenecks in getting there.

1. Fear of memories, 2. Ego/arrogance, 3. Imbalanced thought pattern, 4. Mental unrest, 5. Lack of stimulating activities.

Now that I think of it, the author has described the same thing. Meditation has worked wonders. I have combined it with philosophy and reasoning. I am also trying to identify and reduce areas of ego. Stimulating activities is a recent discovery but I am still working at finding stuff for it.

And the results are interesting: before, my mind was on infinite replay and I had a hard time remembering things; now memories from when I was one year old are coming back like they happened yesterday. That never happened before!

jcstk 2 days ago 1 reply      
Glad to see The Players Tribune on here. If you haven't read it before, it has some amazing content - not just from pro athletes, but college athletes you've never heard of.

The perspectives are often fascinating. Here's another great one from Bronson Koenig, a Native American and one hell of a basketball player, on his experience at Standing Rock: https://www.theplayerstribune.com/bronson-koenig-wisconsin-b....

phonon 2 days ago 1 reply      
A friend of mine has been working on an app (based on research from Dr. David Eagleman) to help you track any cognitive declines from, among other things, participating in contact heavy sports.

One use is for coaches to mandate players retake it periodically, so players can stop playing before the point of no return.




mcgrath_sh 2 days ago 0 replies      
The entire history of the NFL and concussions is incredibly damning, including the league denying the damage concussions cause and buying its way into medical journals to push "studies" that supported that position. If you want to know more, "League of Denial" is a phenomenal book. PBS also created a two-hour documentary with the authors that has the same name.


bluejekyll 2 days ago 3 replies      
> So today, I put on football camps and work with kids in the small town of Aledo, Texas, where I live, and I work with my own boys, coaching them up, too.

This is a fantastic story, and a wonderful recovery, but then getting to the end of it and coming across this line... it's great that he's helping coach these kids, but it would be even better to steer them in a direction where they won't end up with the same story, or worse: never making it, but still having the same symptoms. There are other sports with far less impact.

I grew up playing soccer, and even there, at least in the US, as I understand it from my nephews, heading the ball is now illegal until high school.

Concussions are problems in all sports, but American football is just about the worst; concussions are its most prevalent problem, and on top of that the heavy hits do so much other damage.

whistlerbrk 2 days ago 1 reply      
That agent is a hero for insisting on that insurance policy.
srdeveng 2 days ago 1 reply      
Quite a few negative reactions regarding the decision to go back to coaching in retirement. I think change has to start at the coaching level, as boycotting the sport will do little to change the status quo, especially in the immediate.

A few thoughts -

In my own experience in playing contact sports (lacrosse, not football), it's a trained behavior to shake off injuries, avoid trainers, and otherwise ignore your body's warnings of potential harm. This is taught by the coach [or worse, parents]. The encouragement to push yourself beyond natural limits only increases as you progress to the collegiate and professional levels.

The unfortunate effects of competition are that coaches skirt a dangerous line of balancing the star player[s] safety and winning the game, and this behavior is clear to the players lower in the depth chart who wish to become the next star.

Some of the more disturbing things openly shared were how to pass the concussion protocol; that the coach will let you take a week off from practice after a hard head hit, so don't go to the trainers; and to shake off any and all injuries, as you will be rewarded for being tough. I, and any number of my ex-teammates, agree we experienced what are now known as "minor concussions" constantly throughout our seasons. Only major concussions would be reported. Being able to walk off the field typically meant you had only a minor injury and could go back in once you caught your breath.

The fact that so many are injured during practice goes to show, it's coming from the coach's inaction and not just during the heat of the game.

Under this light, I think Finley is taking a proactive approach to change by inserting himself on the front lines.

beautifulfreak 2 days ago 0 replies      
No one has mentioned the movie Concussion, with Will Smith, who portrays Dr. Bennet Omalu. According to Wikipedia, he "was the first to discover and publish findings of chronic traumatic encephalopathy (CTE) in American football players." Seeing the injuries dramatized, and the effects those injuries have on football players, really drives home how serious concussions are.
WoodenChair 2 days ago 1 reply      
Football ruins this guy's life and when he finally gets better he coaches kids' football. It sounds like he has an abusive relationship with football.
brightball 2 days ago 4 replies      
Coaching and proper equipment are important. The biggest thing is that the better the equipment gets the more comfortable players are trying to hit harder and faster.

Rugby matches are very rough, with near-constant collisions but no pads. I'd be really interested to see a comparison study between the two.

interfixus 2 days ago 3 replies      
A good friend of mine died a few months ago from ALS, clearly - also by his own reckoning - a delayed consequence from a severe car crash in his youth. "I was given an extra 29 years, I can't complain" he tapped out to me the last time I saw him, speechless and immobilised in a hospital bed.

There's a fairly well established correlation between head trauma and this abominable affliction. Why anyone voluntarily would throw themselves into that kind of risk is utterly beyond me.

But then, so is any kind of football, be it the US or the European kind.

rrggrr 2 days ago 0 replies      
This appears to be similar to "Brainspotting", a technique that emerged from Dr David Grand's work with EMDR therapy. They're trying now to get funding for fMRI studies during treatments to better understand and possibly validate the treatment. The mode of operation appears to be increasing metabolic activity in certain areas of the brain for the purpose of enhancing processing/garbage collection.
everyone 2 days ago 0 replies      

From the Mayo clinics page on CTE

"CTE is a progressive, degenerative brain disease for which there is no treatment."

egypturnash 2 days ago 0 replies      
tl;dr: "Football gave me multiple concussions and severely broke my brain. Now I am seducing kids into the same passion for football that led me to that point."
SkyMarshal 2 days ago 0 replies      
Would be nice to have a submission title that actually says what the article is about.
EGreg 2 days ago 0 replies      
I wonder if similar things (stimulating some parts of the brain and not others) can be done for other professions, such as coding where you are addicted to it, or maybe even the autism spectrum.
jaequery 2 days ago 1 reply      
I really enjoy stories like this. Is there anywhere with more like it?
ensiferum 2 days ago 0 replies      
What he has sounds like it is most likely https://en.m.wikipedia.org/wiki/Chronic_traumatic_encephalop...
concussions 2 days ago 0 replies      
I suffered 3 concussions in a 3 year period -- all from stupid accidents, not sports -- and the effects have persisted for years, even with various types of rehabilitation. It's very difficult to describe how a concussion changes a person. Of course, problems with balance and speech are obvious. But for me, the concussions also impacted my mind, my ability to think. I used to be able to read for an entire day, soaking up information. But now, it's like I hit a brick wall after a certain point, where it becomes impossible to proceed. I basically have a set amount that I can learn in one day -- whether it's the API for a library, the architecture of legacy code, etc.

Multitasking has become extremely difficult, even though it was never a problem for me before. It's a complete killer for my mind and it will exhaust it at the expense of the previously mentioned information-acquiring capacity very quickly. When I worked as a developer, trying to switch between multiple tickets between code reviews, conducting chats in multiple channels, and jumping back and forth between various programming languages was a huge sap upon my limited mental energy.

My first concussion wasn't too severe, but my second one was more so, as I probably was still healing from the first one. I had just come back from a break from work and didn't feel right in taking sick time. Compounding this, I was in the process of switching careers to become a developer and was studying very hard every night. I remember one night, about a week after my concussion, when I was writing some code. The pain in my head increased, until it became a pain of an intensity that I had never experienced before. This was probably my first experience of what would become many years of migraines... and this next part is probably unscientific, but I really felt that something "broke" at that point, as it signaled the start of many months of cognitive decline and emotional instability.

I sustained my third concussion when I felt pretty well healed from the second one. Not having learned my lesson, I didn't take much time off of work. While I didn't feel something "break" like the second time, I was working on a difficult project under a short time schedule, and I was worried about losing my first programmer job and the damage that could occur to my career if that were to happen. I made it through that project, but then new problems began to develop... and to persist. Two years later I still have many of the same problems. I wonder if my dedication to that job and love for programming have resulted in irreversible damage. It wasn't worth it.

Friends, we are all on this site because we are people who greatly use our minds. I want you all to remember that we each only get one brain, which is not only essential to our profession but which is the core of our personality, of who we are. The severity of impact is not associated with the severity of damage from a concussion, and experiencing one concussion makes you more prone to further concussions. My 3 dumb accidents have made the last 5 years of my life difficult in many ways, and have probably changed me for the rest of my life.

As such, I cannot condone willingly embarking upon an activity such as football which so clearly places one's mind at risk. To the author of this piece and to some who read it, football may be a game, but I think that our lives are more valuable than games to be played, than entertainment to be had. For every high profile recovery like this, there are countless children who are severely and permanently damaged for the sake of sport. Treasure your mind and the minds of your loved ones. And if you ever do suffer from a concussion, take complete and absolute rest, lest you jeopardize the healing process and find yourself with lifelong injury.

martamoreno 2 days ago 5 replies      
Hmm, too bad that there are literally at least a billion people on earth with far sadder stories who didn't have a 10 million policy covering "when they can't do what they love to do" anymore, and also most people can't do what they love to do in the first place.

There is a name for all that, it's called "luxury problems". Like Paris Hilton telling us that her boyfriend threw her diamond thong out of the window and she can't find it anymore. Terrible.

Ohio Sues 5 Major Drug Companies For 'Fueling Opioid Epidemic' npr.org
249 points by CrocodileStreet  3 hours ago   158 comments top 20
AceJohnny2 6 minutes ago 0 replies      
I have a friend who lives in Columbus, going to Ohio State University. He regularly "regales" us with stories of the social and economic blight there. It's hard to imagine from the west coast.

Last week, he posted a picture of the Columbus Dispatch (the local newspaper), which featured a full-front-page ad for painkillers.


Of course, the newspaper isn't what it used to be, and has been recently sold due to lagging sales by the private family that owned it.

And when I shared this NPR article with this, he added "well maybe things got better, there are no longer overflow trailers in front of the City Morgue. But maybe they just moved them out back, I didn't search for them". And: https://coroner.franklincountyohio.gov/opiate-crisis-summit/...

sanguy 2 hours ago 7 replies      
After watching a 30-year-old woman overdose in a parking lot and not be saved by first responders or emergency responders, I've realized this problem is far more prevalent than most of us realize. This is an epidemic that is playing out under our very noses all across America.

This is just as dangerous as terrorists or anything else that goes bump in the night; it will destroy this country if not stopped.

Not sure what the solution is but this looks to be a good start to push for controls needed.

kartan 2 hours ago 2 replies      
"accuses the companies of engaging in a sustained marketing campaign to downplay the addiction risks of the prescription opioid drugs they sell and to exaggerate the benefits of their use for health problems such as chronic pain."

This is very relevant, from the article. This is the accusation. I have seen very generic comments in this thread that don't take this information into account.

pacaro 2 hours ago 5 replies      
The pharma companies have some liability, but there was a perfect storm

The "war on drugs", and the racially motivated moral panic about crack cocaine and addiction created a social attitude that casts addiction as a moral failing

Inadequate safety net healthcare provisions lead people to the cheapest treatment option, generic opiates fit this bill

bactrian 1 hour ago 2 replies      
The guy who did Silk Road was involved in the deaths of 6 users. He got life without parole.

These drug company execs belong in prison. They've directly and knowingly destroyed millions of lives.

jaggi1 1 hour ago 2 replies      
My wife was facing burnout at work. She was needlessly scared and suffered panic attacks. It is something I have gone through too and coped with. But in her case, because I love her more than I love myself, I decided to take her to a doctor, who then referred her to a psychologist, who then prescribed her drugs which, on quick research, appeared to be his plan to keep her on them forever. My expectation was that the doctor would suggest some mild drugs, ask her to take up yoga or some other hobby, and assure her that everything was in fact alright with her.

He, on the other hand, made a big deal out of the whole thing.

We decided to trash all the medicines and lived happily without any issues.

My doctors have given me opioids many times, and I typically throw them out. Why take a substance like that if the pain is bearable?

lr4444lr 2 hours ago 3 replies      
I have a hard time with the notion that doctors themselves were completely innocent bystanders merely "duped" by disinformation on these drugs.
drugpusher 2 hours ago 0 replies      
I would think a key piece of evidence would be the IMS reports which show doctor level prescription data. The companies had to know that there were a lot of outlier doctors prescribing huge amounts of opioids way above the norm. (For those that don't know, you can buy data which shows what individual doctors prescribe. All drug companies buy this to know their market share and plan salesforce activities. You better believe they knew when things were going off the rails.)
kristofferR 1 hour ago 0 replies      
Here's a fantastic article which makes it clear why the drug makers deserves getting sued:


skookumchuck 2 hours ago 2 replies      
Attacking the supply doesn't work and never has worked. Even worse, it sentences a large number of people to die in agony because they cannot get pain relief.
gehwartzen 45 minutes ago 0 replies      
Seeing a lot of comments suggesting that strictly controlling the supply side is bad because lots of people legitimately in pain would suffer. So how do other countries deal with this? The US consumes something like 80% of the world's opioid supply.
KC8ZKF 1 hour ago 0 replies      
The EconTalk podcast recently had an episode about the economics of opiate addiction: how people become addicted, how the drugs are distributed, etc.

[1] http://www.econtalk.org/archives/2017/01/sam_quinones_on.htm...

menacingly 32 minutes ago 0 replies      
There is a dangerous knee-jerk "but my grandma is in pain" reaction when this topic is brought up. I'm confident that we can settle somewhere between people dying in agony and _deliberately_ engineering super addictive dope then lying about it.
randyrand 19 minutes ago 0 replies      
Shouldn't the drug companies sue back for the state not having proper regulations?
pthreads 1 hour ago 1 reply      
I have very little confidence this will go anywhere. If anything the pharmaceutical companies will pay a small fine and agree to better inform patients and doctors, control supply chain better etc.. Nothing else is going to come out of this.

I get the feeling that this is just political showmanship. I wouldn't be surprised if the governor of Ohio is running for office in 2020. He has been making the rounds of talk shows trying to sound very concerned about people's health.

wnevets 2 hours ago 1 reply      
Stop worrying, the free market will fix it any day now.
11thEarlOfMar 2 hours ago 2 replies      
This is a recurring phenomenon in the US:

Opium/Morphine in the late 1800s:

"Throughout the late 1800s, the opiates (morphine and opium) continued to be distributed widely in patent medicines. There was also a widespread physicians' practice of prescribing opiates for menstrual and menopausal disorders. Too, there was extravagant advertising of the opiate patent medicines as able to relieve "female troubles."

Women, it seemed, had become the prevalent class of opiate users. Prescription and patent medicines containing the substances were advertised and accepted without question. Also, this was a convenient, genteel drug for a dependent lady who would never be seen drinking in public. "The extent to which alcohol-drinking by women was frowned upon may also [in addition to opiate medicines] have contributed to the excess of women among opiate users. Husbands drank alcohol in the saloon; wives took opium at home" (Brecher, 1972)."[0]

Amphetamines in the 1930s:

"Abuse of the drug began during the 1930s, when it was marketed under the name Benzedrine and sold in an over-the-counter inhaler. During World War II, amphetamines were widely distributed to soldiers to combat fatigue and improve both mood and endurance, and after the war physicians began to prescribe amphetamines to fight depression. As legal usage of amphetamines increased, a black market emerged. Common users of illicit amphetamines included truck drivers on long commutes and athletes looking for better performance. Students referred to the drug as "pep pills" and used them to aid in studying."[1]

LSD in the 1950s:

"Non-therapeutic use of LSD increased throughout the late 1950s and 1960s. Among the first groups to use LSD recreationally were research study participants, physicians, psychiatrists, and other mental health professionals who later distributed the drug among their friends. Prior to 1962, LSD was available only on a small scale to those who had connections in the medical field, as all the LSD was produced by Sandoz Laboratories, in Basel Switzerland, and then distributed to health professionals. However, the drug was not difficult to produce in a chemical laboratory. The formula could be purchased for 50 cents from the US patent office, and the LSD itself could be stored inside blotting paper. Soon a black market for LSD in the US emerged."[2]

On the one hand, it's tragic. On the other, these events seem to have a similar arc, and we should not be surprised to see opioids taken off the market and criminalized, like opium and amphetamines.

[0] http://www.druglibrary.org/schaffer/history/casey1.htm

[1], [2] http://www.pbs.org/wgbh/pages/frontline/shows/drugs/buyers/s...

revelation 2 hours ago 2 replies      
You would think persuading doctors to prescribe unnecessary opioids is the kind of crime that has the DEA kicking down doors and arresting managers.
hoodoof 2 hours ago 1 reply      
c3534l 2 hours ago 0 replies      
Come on, really? It's a goddamned opioid. If a doctor isn't aware that an opioid is potentially addictive, then that is a really bad doctor. OxyContin is better than morphine, so any drug epidemic (which doesn't really exist, people have been doing opioids/opiates for a very long time) is lessened, not exacerbated, by substituting older opiates with more modern opioids.
Intel Announces Skylake-X: Bringing 18-Core HCC Silicon to Consumers anandtech.com
258 points by satai  1 day ago   221 comments top 29
myrandomcomment 1 day ago 1 reply      
Intel getting kicked by AMD ever few years is good for the market and the consumer. I am still planning on getting an AMD system to show my support for their efforts. I have been holding off for one with a gasp integrated GPU as I will be using the system as a media center. Right now I have the high end Intel compute stick. The limited RAM is a huge draw back. Oh, if it plays Civ6 well, that's a huge bonus.
gbrown_ 1 day ago 5 replies      
> Intel hasn't given many details on AVX-512 yet, regarding whether there is one or two units per CPU, or if it is more granular and is per core.

I can't imagine it being more than one per core. For context, Knights Landing has two per core, but that's an HPC-focused product.

> We expect it to be enabled on day one, although I have a suspicion there may be a BIOS flag that needs enabling in order to use it.

This seems odd.

> With the support of AVX-512, Intel is calling the Core i9-7980XE the first TeraFLOP CPU. I've asked for details as to how this figure is calculated (software, or theoretical)

So let's work backwards: the Core i9-7980XE has 18 cores, but as of yet the clock speed is not specified.

A couple of assumptions:

- We're talking double precision FLOPs

- We can theoretically do 16 double precision FLOPs per cycle

FLOPs per cycle * Cycles per second (frequency) * number of cores =~ 1TF

So we can guesstimate the clock frequency as ~3.47 GHz.

Edit: In review, such a clock speed seems rather high for an 18-core part. I'm not sure if consumer parts will do 32 DP FLOPs per cycle?
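That back-of-envelope estimate can be checked in a few lines (a sketch assuming double precision and 16 DP FLOPs per core per cycle, i.e. one AVX-512 FMA unit; neither assumption is confirmed by Intel):

```python
# Back-of-envelope check of the clock implied by Intel's "1 TeraFLOP" claim.
# Assumptions (guesses from the comment above, not confirmed by Intel):
#   - double-precision FLOPs
#   - 16 DP FLOPs per core per cycle (one AVX-512 FMA: 8 doubles x 2 ops)
target_flops = 1e12   # 1 TFLOP
cores = 18            # Core i9-7980XE
flops_per_cycle = 16

freq_hz = target_flops / (flops_per_cycle * cores)
print(f"Implied clock: {freq_hz / 1e9:.2f} GHz")  # ~3.47 GHz

# If consumer parts instead do 32 DP FLOPs/cycle (two FMA units),
# the required clock halves:
print(f"With 32 FLOPs/cycle: {target_flops / (32 * cores) / 1e9:.2f} GHz")  # ~1.74 GHz
```

The ~1.74 GHz figure for the two-FMA case is why the ~3.47 GHz guess only makes sense with a single FMA unit per core.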

slizard 1 day ago 1 reply      
Looks like they think they're still winning regardless of price, and that simply bumping the core count to stay king and bringing prices back to Haswell-EP-level high (rather than Broadwell-EP crazy) will be enough.

Another sign of their confidence: they're further segmenting the market by PCIe lane count, pushing everyone who wants >32 lanes into the >$1k regime.

All in all, the cool thing is not the i9s and high core counts, which you could get even before by plugging a Xeon chip into a consumer X99 mobo (though you'd have to pay some $$$), but the new cache hierarchy, which will give serious improvements in well-implemented, cache-friendly code!

deafcalculus 1 day ago 5 replies      
It's high time Intel started adding more cores to consumer CPUs rather than spending half the silicon area on a crappy integrated GPU. It's only thanks to Ryzen that this is happening.
redtuesday 1 day ago 0 replies      
It seems Skylake-X will not be soldered [0], unlike previous HEDT CPUs from Intel. AMD even solders Ryzen, its normal consumer CPU. How much will Intel save with this? 2 to 4 dollars per CPU?

I'm also curious what that means for the thermals. Intel's 4-core parts have much better thermals when delidded to replace the bad TIM.

[0] https://www.overclock3d.net/news/cpu_mainboard/intel_s_skyla...

jacquesm 1 day ago 3 replies      
This really makes me wonder how many more unreleased products Intel has waiting in some drawer somewhere for that case where they have some serious competition.

It is also strong proof that without competition Intel is not going to release anything to move the market forward.

fauigerzigerk 1 day ago 4 replies      
I can't even read this article properly. The site uses 130% CPU, scrolling hardly works at all, it keeps making network requests like crazy and it even crashed my Chrome tab.

And for what reason? I do understand the dilemma that ad funded sites are in. I'm not using an ad blocker. But I simply don't get what purpose this sort of abusive website design is supposed to have.

I will never visit Anandtech again. I've seen it many times. It's never long after advertising gets irrational that content quality suffers as well and the entire site goes down the drain.

josteink 1 day ago 4 replies      
Oh. So now they're making the i9!

So it did take AMD and Ryzen to make Intel push its game after its 5-6 year long hiatus with the i7, eh?

Competition is clearly good :)

eecc 1 day ago 0 replies      
So, let's give credit when credit is due and call this the Intel Ryzen CPU :D
fcanesin 1 day ago 1 reply      
Meh, I bought a Ryzen 5 1600 for $199 and an ASUS B350M for $29 at Micro Center, paired that with 16 GB of Crucial ECC DDR4-2400 for $149 (working on Ubuntu 16.04, confirmed and stress tested)... so for $377 I have 12 threads @3.9GHz with ECC, in a system that can go up to 64GB. Thanks Intel, but no.
Keyframe 1 day ago 2 replies      
That's good. Finally, we're moving with processors forward - probably thanks to AMD, again. My only hope is for them (both, either) to make thunderbolt standard feature on motherboards or ditch it completely.
vardump 1 day ago 3 replies      
So does it support ECC like AMD? Otherwise not interested.
Noctix 1 day ago 4 replies      
Can this be stated as an effect of Ryzen launch?
Sephr 1 day ago 2 replies      
Intel has been selling hexa-channel DDR4 Xeons since 2015 to select customers.

For users like myself constrained by memory bandwidth I would prefer that they publicly started selling their Skylake-SP Purley platform. In some configurations they even include a 100Gbit/s photonic interconnect and an FPGA for Deep Learning acceleration.

I would gladly pay $2500-3500 for an 18-24 core Intel CPU with hexa-channel DDR4 and PCIe 4.0 (or simply more than 44 lanes of 3.0).

abalashov 1 day ago 1 reply      
Perfect for running modern JavaScript frameworks! /s
mrmondo 1 day ago 0 replies      
Very glad to see the clock speed didn't take a drop for the extra cores; however, still no ECC is disappointing to say the least.
pulse7 1 day ago 0 replies      
So the ultimate question is now, how much the ThreadRipper will cost...
faragon 1 day ago 1 reply      
My next home CPU will be an AMD Ryzen.
StillBored 1 day ago 2 replies      
Really intel? I don't want 10+ cores just to get reasonable PCIe connectivity. This is just another strike against these parts (after the lack of ECC). I guess intel is trying really hard to protect their server parts, but they continue to gimp the high end desktop parts (as if the removal of multisocket isn't enough).

I would really like to understand why intel tries so hard to not make a desktop part for people willing to spend a little more to get something that isn't basically an i5 (limited memory channels, limited PCIe, smaller caches, etc).

peter303 1 day ago 0 replies      
Please put this in the next-gen MacBook to be announced in June. Jump to the head of the line, Apple. Remember your roots.
drudru11 1 day ago 0 replies      
I am still getting a Ryzen build
vbezhenar 1 day ago 1 reply      
Well, Intel still didn't show anything better than the 8-core Ryzen. Their processors cost more and require fancy motherboards which I'm not even sure I can buy in my city.
nazri1 1 day ago 1 reply      
90s: CPU Hertz

2000s: RAM Sizes

201xs: CPU Cores?
m-j-fox 1 day ago 0 replies      
High-Cost Computing?
kruhft 1 day ago 0 replies      
Good. Bring on more cores. I could use them.
dboreham 1 day ago 0 replies      
But this one goes to....9..
RichardHeart 1 day ago 2 replies      
I'm sick of having 0 to 1 choices in so many things. If a monopoly is bad, then what's the next worst number of companies? Two. Isn't it the government's job to enhance the "free" market by forcing competition through forced open on-boarding, or IP sharing, or breaking up, or really anything effective to lubricate the wheels of capitalism?
known 1 day ago 0 replies      
Why not name it as i18
pulse7 1 day ago 4 replies      
18-core Skylake-X is a luxury good: people will buy it just because it has 2 cores more than the ThreadRipper...
Show HN: Early-stage Yahoo Pipes spiritual successor pipes.digital
274 points by onli  2 days ago   113 comments top 30
zeptomu 2 days ago 11 replies      
I've some experience in visual programming languages (both professional and in my studies) and I advise against the graph-form where you connect nodes and edges.

This does not mean that text is the only way to implement programs (and even if you connect nodes, you're still programming), but maybe a good local optimum is in-between, e.g. interactive shells like Jupyter's Notebook and the Mathematica interface. I know there is LabView, the Blender-Editor and AFAIK some Unreal-Engine tool that uses this model, but bigger programs seem really incomprehensible to me.

It's always easy to show-case 5 programs in these node-&-edge editors, but I do not think it is the best approach for visual languages, as one should not under-estimate the layout problems and how to represent information in boxes.

So, I am all for new ideas in visual programming, but I am not sure if the "free canvas" approach works.

beardicus 2 days ago 1 reply      
I'm surprised nobody has mentioned Node-RED yet: http://nodered.org/

Node.js-based with a Pipes-like visual wiring interface. It's quite popular with the Raspberry Pi and Arduino crowd. Lots of input and output plugins, and you can drop into Node scripting when necessary. I quite like it.

michaelbuckbee 2 days ago 5 replies      
This is cool, but I feel like Zapier, IFTTT and their open source equivalents have moved far past Yahoo Pipes. How do you see this stacking up?
onli 2 days ago 0 replies      
I'd be interested to know which blocks and functionality is missing to support the use cases you used Yahoo Pipes for! And of course general feedback is always welcome.
insomniacity 2 days ago 1 reply      
Nice. It's not very clear that you need to link the pipeline to the right-hand side. (I can't remember if you had to do that on Yahoo, it's been so long!). It's particularly a problem on a wide screen because the eye starts on the left, and I built my pipeline on the left and it took me a while to figure that out.

But well done on scratching an itch that lots of people have had!

unityByFreedom 2 days ago 0 replies      
Nice. Anyone else here ever use Ab Initio? It's an ETL tool for transferring data between databases and manipulating in the process. Seems very similar.

I've always wanted to recreate something with those ideas because it was a very efficient cross between visualizing code and writing it. This looks pretty similar although for different data sources.

houshuang 1 day ago 0 replies      
This discussion is very interesting to us, as we are building a workflow system for collaborative learning (students might do an activity individually, the output of that activity is aggregated and transformed, students are grouped based on an algorithm that processes data from the previous activity, output from one activity is redistributed to another activity, etc). (quick demo: https://www.youtube.com/watch?v=HQ9AyzLOn3Q)

So this discussion about visual languages, data transformation etc, is very relevant to us. One thing we're working on is how to make data transformation more intuitive... Right now we are using JSONPath to enable selection of one field to aggregate on (ie. you have a form where people input ideas, and another activity that takes a list of ideas, so you can input a JSONPath for the field to get aggregated). However, looking at JMESPath (http://jmespath.org/examples.html), it looks much more powerful. Has anyone seen any examples of graphical interfaces for going from one data representation to another, with preview, selecting fields, aggregation etc?
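FWIW, the aggregation step described above (pull one field out of every student response via a path expression) is small enough to sketch in plain Python. All names below are made up for illustration, and a real JSONPath or JMESPath library of course handles far more (filters, wildcards, slices):

```python
# Minimal sketch of path-based field aggregation, as described above.
# Only dot-separated keys are supported here; a real JSONPath/JMESPath
# implementation is much richer. All names are illustrative.
def extract(obj, path):
    """Follow a dot-separated path like 'payload.idea' into nested dicts."""
    for key in path.split("."):
        obj = obj[key]
    return obj

def aggregate(responses, path):
    """Collect one field from every student response."""
    return [extract(r, path) for r in responses]

responses = [
    {"student": "s1", "payload": {"idea": "peer review"}},
    {"student": "s2", "payload": {"idea": "group quiz"}},
]
print(aggregate(responses, "payload.idea"))  # ['peer review', 'group quiz']
```

The hard part isn't the extraction itself but the UI around it: previewing what a path matches against live data before committing to it.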

fareesh 1 day ago 3 replies      
Unrelated to pipes specifically but along the same lines, I feel like app developers ought to conform to a pre-defined standard like schema.org for their respective content so that everything can be inter-operable in theory. That way if I'm using Microsoft Todo or Google Keep or whatever, the potential for Google Assistant or Siri or Cortana to add to whichever Todo App I'm using is already there.

What are the drawbacks behind something like this?

mgkimsal 2 days ago 0 replies      
would like to suggest a couple of "starter pipes" - samples to more easily illustrate some possibilities.
bergie 2 days ago 0 replies      
Interesting, have to take a proper look tomorrow. We've done similar things with NoFlo (https://noflojs.org)
anonfunction 2 days ago 0 replies      
I used to use yahoo pipes for a few twitter bots I had. There was an rss to twitter webapp that I hooked up and it was fully automated for years before it stopped working.

I didn't see the thin red circle on the right, which I now understand to be the output. I almost gave up before realizing my mistake.

Xeoncross 1 day ago 0 replies      
Using yahoo! pipes I created a lifestream application[0] some years ago that won an honorable mention for the 10k javascript challenge. It was nice to have a dynamic site that required no server work on my part.

Updating it for this would probably be a neat way to test out the RSS feed usage.


jonincanada 1 day ago 2 replies      
I archived all the public yahoo pipes as Python modules -- before it was shut down #oldhardrives
Mister_Snuggles 2 days ago 0 replies      
I'd love something like this that I can run locally. From looking at the comments, it looks like Node-RED and NoFlo are two possibilities.

I was intrigued by Yahoo Pipes a while back, but didn't want to invest much in it in case it was shut down. Sadly, that worry was well founded.

mortadelegle 2 days ago 1 reply      
Very interesting, I do think that visual programming tends to work better when constrained to a small niche of computing, that's the hypothesis behind https://github.com/AlvarBer/persimmon
infectoid 2 days ago 0 replies      
Where I work we have a custom made system that was inspired by Yahoo Pipes. We are in the slow process of rewriting it.

Also, this reminds me of an IoT solution I was shown recently.


hehheh 2 days ago 3 replies      
Neat. Any chance of adding JSON processing where you can pick a key (ala `jq`[0]) and then combine it with RSS or other JSON or what have you?

[0] https://stedolan.github.io/jq/
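For illustration, here's roughly what picking a key "ala jq" means, with the jq program `.items[].title` redone in stdlib Python (a hypothetical pipes block would presumably expose jq or similar syntax directly rather than code):

```python
import json

# What a hypothetical "pick a key" block might do with feed data that has
# already been converted to JSON. The equivalent jq program is:
#   .items[].title
feed_json = json.dumps({
    "title": "Example feed",
    "items": [
        {"title": "First post", "link": "https://example.com/1"},
        {"title": "Second post", "link": "https://example.com/2"},
    ],
})

feed = json.loads(feed_json)
titles = [item["title"] for item in feed["items"]]
print(titles)  # ['First post', 'Second post']
```

Combining that with an RSS source would just mean feeding the picked values into another block's input.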

timvdalen 1 day ago 1 reply      
Looks interesting but the editor didn't give me any feedback/output and the login email never arrived.
Toast_ 2 days ago 3 replies      
Do you plan on supporting scraping content via css selectors/xpath/regex?
technologia 1 day ago 1 reply      
Its interesting and I do applaud your work, but I'll probably just stick to Apache Nifi with its flexibility.
arjie 2 days ago 1 reply      
Nice. Used to love Yahoo! Pipes. What does your data model look like?
scottmcleod 2 days ago 0 replies      
I really loved Yahoo Pipes - it was ahead of its time if you ask me. Built some very cool automations on top of it. It basically was IFTTT and Zapier well before them.
pbiggar 2 days ago 1 reply      
Cool product, and great to see this get traction! My new company is very much inspired by Yahoo pipes and making something similar. Would love to chat about your approach!
j45 2 days ago 0 replies      
Pipes was a handy tool, hope this project keeps up!
dundercoder 2 days ago 1 reply      
I thought RSS had died a rather unfortunate and inconvenient death... am I wrong and it's still been kicking all this time?
nthcolumn 2 days ago 0 replies      
Pipes was awesome. Would it have killed them to just let it go free instead? Yahoo engineering did a lot cool of stuff.
laretluval 1 day ago 0 replies      
Doesn't seem to work on mobile Safari.
yownie 2 days ago 0 replies      
couple of code examples would be nice
rjeli 2 days ago 1 reply      
Dead (could not connect to server)
RRRA 2 days ago 0 replies      
Librarians are gonna go wild over this...
Essential Products Andy Rubins new hardware company essential.com
280 points by henrikgs  1 day ago   235 comments top 58
hbbio 1 day ago 3 replies      
Ok, a new phone made by the creator of Android which claims to be extremely well built.

However, since most phones now tend to reach the "good enough" level, my main question is about software, and it's left unanswered. What version/flavour of Android does it run? How will updates be planned? For how many years will updates be provided? What's the size of the security team at Essential?

Providing an up-to-date Android with updates for at least 4 years like Apple does is key to me, as vulnerabilities come and go and the only reasonable way to be secure is to get security patches asap.

raesene6 1 day ago 8 replies      
I like the idea of tougher phones, but to me it misses the mark to talk about the titanium phone case surviving corner drop tests, it's the glass that's the problem.

The number of people I've seen wandering around with cracked phone screens from drops is quite high, and is the reason I put a case which covers the front on every phone I buy.

So having no phone case here just means you get the usual after market screen protectors and risk of cracked glass that most other phones suffer from.

al2o3cr 1 day ago 0 replies      
"We want to make a device that plays well with others, so here's our new proprietary expansion port!"

Even better, it uses 60GHz wireless to get data across the fraction-of-an-inch gap between the phone and the accessory. That should be a fun one for battery life.

Jdam 1 day ago 4 replies      
> Your phone is your personal property. It's a public expression of who you are and what you stand for.

Just no. It's just a tool that I use to communicate.

settsu 1 day ago 1 reply      
> - Devices are your personal property. We won't force you to have anything on them you don't want to have.

Ok, so at first glance this is just a diplomatic, manifesto-ese way of saying "no bloatware". However, there's probably a very pragmatic discussion about what this really means, and that just leads us back around to where we are now with who defines "anything" (i.e., the phone app is on the table for that discussion...)

> - We will always play well with others. Closed ecosystems are divisive and outdated.

Closed ecosystems are also knowable, stable, and can produce very happy customers.

> - Premium materials and true craftsmanship shouldn't be just for the few.

So for a few more? There's a reason mass-production is an economic success.

> - Devices shouldn't become outdated every year. They should evolve with you.

"Outdated" is an extremely subjective concept. Hardware that evolves? Do tell.

> - Technology should assist you so that you can get on with enjoying your life.

Should it?

> - Simple is always better.

Now you're just being lazy.

I'm a huge fan of big picture, think-outside-the-box vision-casting.

But this just comes across as so tone deaf from the very start and ultimately so vapid that it's easy to see how these SV figureheads have earned such a reputation for utter lack of self-awareness.

Please, if you have become this level of successful, you need someone in your inner circle who specifically is tasked with keeping you grounded.

shubhamjain 1 day ago 3 replies      
It doesn't seem premium Android phones have anything spectacular to differentiate them. Sure, you can spice up the camera, make the body more glossy, and add a beautiful screen. But the software is just another commodity that is available for 1/3rd the price. That's why the Google Pixel will always feel exorbitant even when its price is close to the Apple iPhone's. Seeing Essential's price tag, I have the same visceral feeling: "$750 for an Android phone...? What?"

Here's where Apple eats the larger pie: the exclusivity of its experience, which can only come at a price. In the past, the naive me used to wonder why Apple didn't try to dent Microsoft's 95% desktop market with its excellent OS. Now I understand why that'd never happen: you can't be premium in people's eyes unless you create a brand of exclusiveness.

niho 1 day ago 2 replies      
There are very good reasons that aluminum is the best option for a mobile phone, rather than titanium. The most important is environmental. Aluminum is more abundant than titanium, it is easier and friendlier to extract/process. It can be recycled (very important!) and it is cheaper. Aluminum also has much better technical qualities. It is much lighter weight and easier to machine. It is softer, which means the casing will absorb most of the force from an impact when you drop your phone. And as others have pointed out; your screen or battery will break long before the structural casing. I have personally never been bothered with scratches on the casing of my iPhone. I'm much more worried about the overall environmental impact of the device.
samfisher83 1 day ago 0 replies      
They are selling the phone for 700 dollars.

700 dollars for a phone? You can get an S8, HTC, or LG for cheaper with the promotions they are running, and those companies have a track record of making phones. They could have been like OnePlus and produced a high-end phone for ~$400. At 700 dollars this will have a hard time getting traction.

sparkling 1 day ago 1 reply      
Who cares if the casing is scratch-resistant titanium? The case is not the limiting factor for hardware longevity; the non-removable, non-user-serviceable battery is.
JepZ 1 day ago 0 replies      
Well, I like the 'no logo' and the open software features. The other stuff looks more or less like a normal flagship smartphone nowadays (yes I like the 360 camera too, but it is not essential). The things that I am missing:

- replaceable battery

- SD card slot

- wireless charging

Those three are all essential to the lifetime of the phone. Storage requirements may change, batteries and power connectors may wear off.

I still use my 5-year-old Samsung S3, which has all those features (with updated software). While I am willing to pay for a newer model (better camera, faster processor, etc.), I can't find a phone that promises equal longevity.

Animats 1 day ago 1 reply      
First-world problem: what business to start when you have too much money and no really good ideas.

- Private space program? Everybody's done that.

- Sports team? Not into sports enough.

- Museum? Boring.

- Supercar company? IC engines are so last-cen and electrics mean competing with Elon.

- Super high end phone? Yeah!

MrBuddyCasino 1 day ago 3 replies      
This is the first phone since the iPhone that triggers an "I want that" feeling. Why the negativity?
je_bailey 1 day ago 1 reply      
Interesting. So he's started a new company to focus on products that have "play well with others" as a design concept.

I like the idea he's promoting with the phone where all the accessories either magnetically connect or a wireless connection. I hate having to purchase the same things over and over again.

philfrasty 1 day ago 1 reply      
> Why 360 changes everything

Where does all the excitement for 360 videos come from? In its current implementation it adds absolutely nothing for the viewer and strips away the possibility for the creator to tell a story by choosing what the viewer sees.

Useful for VR yes, on a flat screen just no.

ghthor 1 day ago 1 reply      
Seems like a great idea in a space that needs more competition. Apple has a monopoly on designing complete user experiences using technology and I'm tired of it. Can't wait to see where this takes us.
fabrice_d 1 day ago 1 reply      
I find the Home product more interesting than the phone: https://www.essential.com/home
whalesalad 1 day ago 0 replies      
The 360 camera feels like a gimmick. I dig the idea of having it, but it gives me a gopro accessory vibe: pain in the ass to use and store (where do you keep it when you're not using it??), it'll get lost, etc...

I like the idea of a titanium enclosure that is resistant to damage during falls -- but that force needs to be absorbed somewhere. It's nice to know that the outer enclosure of my phone is absorbing some of the impact of a fall. If the Essential phone's titanium is not doing this are the internal components going to suffer more?

I'm interested in giving Android another shot but without the ability to go into a store and play around with one, it's hard to throw $700+ on blind faith. For example, the Pixel looks incredible in photos. It resembles the iPhone and offers an appearance of quality. Holding in your hand, however, it feels like a plastic piece of crap. If I'd have gone on photos/videos alone, I'd have been very disappointed.

Apple, for me, has been great due to the progressive enhancement and the ability to go into a store and play around. Each phone release is familiar, yet new and refreshing.

Every time the latest 'killer' Android device comes out, it will inevitably introduce a handful of paradigm shifts in both the hardware and software. I feel like you either need to be an early adopter willing to throw hundreds of dollars at devices more frequently, or settle for Samsung bloatware.

apexalpha 1 day ago 5 replies      
What is it with these companies putting the latest SnapDragon CPUs in their mobile devices?

No one cares about CPU performance. I've got a SD820 now in my Axon 7, and I can tell you there is 0 difference with a SD625 in daily use.

Except that the SD625 is cheaper and has an incredible battery life.

The only company realising that people care about UX instead of specs seems to be Xiaomi, which consistently chooses the SD625 and SD660 for its phones, because it is clear that any CPU can pull a phone.

And let's be honest. No one cares about mobile VR.

I'll take SD625 and 5000mAh battery over SD835 and 6GB RAM any day.

gallerdude 1 day ago 1 reply      
I do think pricing is the biggest problem here. Android phones have been flourishing recently because of cut prices (see: Moto G, OnePlus).

There's a LOT of good options for high end Android phones, and even if you manage to take 2nd or 3rd place, you won't remotely get half or a third of the profits.

They're getting too greedy too early. You have to earn the public's trust before you jump in with a $700 device.

dharma1 1 day ago 1 reply      
I think it looks really good. Not a huge fan of a proprietary expansion port, but I guess there is no other way of future-proofing for certain accessories, like sensors for inside-out VR/AR tracking.

I hope they get enough traction so that it'll be a viable business and these won't be paperweights in 2-3 years time

throwaway47861 1 day ago 0 replies      
Here are some angrily and hastily written observations:

- No microSD card slot. Yes, 128GB internal storage -- and it being an UFS, which is fast -- is a lot, but there are people who carry data on their phones and require portability and speed. There's honestly no excuse not to have a microSD these days.

- Small battery; 3040 mAh, seriously, shouldn't the OEMs have learned by now? Android is a battery eater, Google doesn't seem interested in making the OS more efficient and keeps thinking of half-assed "solutions" like the Doze mode which is basically "if it's the night and the phone hasn't moved in an hour, please cripple its functions until the owner picks it up", heh. For Android you'd best go for the absolute minimum of 3500 mAh or just admit you're after a quick buck. If you're serious about an Android phone, better just put 4500 mAh or more in your device and then I'll take you seriously.

- No 3.5mm audio jack. Yeah, keep dreaming, Andy Rubin. Parties with rich friends who tell you "things they hear" are not a good indicator about market needs. And you dare call your hardware "essential", lol.

- Display is not AMOLED. Heard about actually having a black color on your display? Guess not. Heard about dynamically turning off parts of the screen to save power while not losing any part of the image (because the turned off part is black)? Guess you haven't heard of that either, nor energy efficiency for that matter.

- No word on planned maintenance period -- 1 year, 2, 4, how much? It's a crucially important element nowadays, how can Android's creator be unaware of that?

- Cameras look good on paper but we all know it's the camera app which makes the real difference. I bet it'll be some default vanilla app which won't make a good use of at least 50% of the device's camera functions.

Overall -- overpriced pretty device. What else is new? The guy is pulling a popularity card to get away with yet another mediocre device and entice naive people to buy it because of his supposed prestige as Android's creator.

madmax108 1 day ago 0 replies      
>>> play well with others

...And right off the block, no headphone jack.

Why do companies insist on just blindly following Apple? Baah.

adim86 1 day ago 1 reply      
I am not impressed by the website at all. It is very well designed and in some ways pretends to give you a lot of information, but I find myself asking the most basic of questions. What is Essential? Does it do hardware, software, or both? Is it running a special Android or vanilla? What is the screen made of? There is all this hype about how strong the phone is, but I have never heard anyone complain that aluminum is not strong enough for their purposes, etc.

For a marketing site, I am just not impressed with the amount of important information. Maybe the answers to all of this are in there, but it is so poorly arranged that after checking in a bunch of places where I expected them to be, I have given up.

zmix 1 day ago 1 reply      
I want:

- 4 buttons on the finger's side, one jogwheel/microswitch at the thumb. Buttons are configurable/contextual. Of course, also touch-screen

- expandable flash

- best mobile camera to date

- Android (no bloat, unlocked, easily rootable)

- no bezel

- great battery

- size of Xperia Compact Z3 but thinner

- withstands rain and beach

- upperclass CPU/GFX/RAM/Flash

intrasight 1 day ago 3 replies      
I get a blank screen except for the menu. What plugin is needed to view that site?
Sephr 1 day ago 0 replies      
Looks great. I just wish they went with AMOLED. It'd be worth the increased price.
mstade 1 day ago 4 replies      
This is the first paragraph that shows up when seeing the site in mobile safari:

> I know people are going to ask me a lot of questions about why I started this company. Why didn't I just travel the world, ride my motorcycle, tinker with my robots, hang out at my bakery with friends and family. And to be honest I still do ask myself that sometimes... but not too often.

1. Maybe I'm not geeky enough, but I don't know who you are

2. I don't care who you are

3. What are you selling? A phone?

4. Oh screw this, I don't care enough to read past that pompous nonsense...

Oh well.

varelse 1 day ago 0 replies      
I really like the idea of a mobile phone that just works with a suite of consistent apps for photos, SMS, email, navigation, and whatever.

I really like maximizing local computation over cloud services.

And at first I thought this might be it. But alas, it appears to be just another pretty and overpriced Android phone. I guess I will continue buying last year's latest and greatest at a 50% discount or more once brand new shiny disrupts it.

p2t2p 1 day ago 0 replies      
This website doesn't work in Safari. Google Chrome is required. Back to the IE6 age...

UPDATE: Nevermind, it started to work after a couple of refreshes.

UPDATE: Actually, 'home' section started to work, 'phone' section still doesn't work in Safari.

Markoff 1 day ago 0 replies      
no jack, no waterproofing, 3000mAh battery for 5.7", display disrupted by camera, no brand, and they ask $700 for this?
theprop 1 day ago 0 replies      
I thought Rubin's other new product, Lighthouse, a security camera which uses an AI backend to analyze video for anything "suspicious" and notify you on your phone was much more interesting & promising.
maufl 1 day ago 1 reply      
"My software engineers wanted me to talk about our vision for making all devices, even those we don't make ourselves, play well together." This sounds really interesting to me, but I can't find any more information about what that means. Does anybody know?
izacus 1 day ago 1 reply      
Ahh, so a new Android phone shipping only to the US, where Apple dominates high-end sales?

I see that going well.

Shelnutt2 1 day ago 1 reply      
Interested to see if Sprint and Verizon will support this on their networks. The phone supports all the needed bands; it's more a business decision on the CDMA carrier side to certify it.

The rumors from a few months ago said Sprint was onboard; we'll see if that pans out.

linuxkerneldev 1 day ago 0 replies      
No mention of waterproofing, water resistance. That's the main thing I look for.
shanwang 1 day ago 0 replies      
Dare I say the phone looks underwhelming?

The home hub looks interesting, but it seems the main selling point is that it can work with other devices? So does it mean I can do things like asking Alexa to stream my iTunes library on Chromecast?

paule89 1 day ago 1 reply      
Shipping to the US only...
bananicorn 1 day ago 0 replies      
There also seems to be a new smart home device in the making, just click on the right icon in the header. Not sure if that's actually new or if I just misclicked...
souldoubt 1 day ago 0 replies      
Lovely site, but scroll lag janks so hard it's almost unusable!

Also hardware looks lovely. Any chance it runs something other than Android?

philip1209 1 day ago 0 replies      
Between generic brands and Apple, I don't think there is room for a third competitor. Fitbit learned this the hard way trying to sit in-between.

Trying to make a brand that is more expensive than Apple will likely fail. They have made gold devices before. Plus, most of luxury is perception - and they stand no chance of having better brand marketing and recognition.

I don't think that the operating system is enough of a differentiator, particularly when Google controls the software while promoting their own high-end hardware.

ed_blackburn 1 day ago 2 replies      
I'm drowning in a pretty website with no summary of what I am looking at.
kelvin0 1 day ago 0 replies      
We are the misfits, the craftsmen... and uhh... Oh, that's already been made. Well dammit, it worked once!
jlebrech 1 day ago 0 replies      
I like the 360 camera idea, but how could that not snap to a case for any other phone?
thesanerguy 1 day ago 0 replies      
Andy has been able to get together a stellar team in such a short time.
auggierose 1 day ago 0 replies      
That site makes me sea sick.
therealmarv 1 day ago 0 replies      
I bet this phone will not get fast Android updates ;)
philippnagel 1 day ago 0 replies      
Will the battery be easily replaceable?
aerique 1 day ago 0 replies      
But does it run SailfishOS?
thinkindie 1 day ago 1 reply      
is a 360 degrees camera really essential?
homero 1 day ago 1 reply      
All I want is the best processor, most ram, micro sd, replaceable battery and root. Why can't anyone deliver this? I'm here cracking a v20 when I could be happier.
jaboutboul 1 day ago 0 replies      
Ha! Yet another android phone.
whisdol 1 day ago 8 replies      
They seem to avoid mentioning the version of Android they are running - the specs only say "Android".

I'd like to be excited about this, but this uncertainty, combined with the fact that their security personnel is a team of dogs[1], makes it quite hard for me.


groundCode 1 day ago 2 replies      
Hope the product is better than the website....
Numberwang 1 day ago 0 replies      
Boy do I miss the websites of the late 90s. How about you spend another second or two thinking about how your content is structured.
m-j-fox 1 day ago 1 reply      
> Why didn't I just travel the world, ride my motorcycle, tinker with my robots, hang out at my bakery with friends and family.

Was Andy a douche before he got rich or is that the price of success?

gadders 1 day ago 5 replies      
Chrome 40, so all I see is menu headings. Anyone got a summary?
deprave 1 day ago 0 replies      
He dumped the pile of insecure garbage called Android on us and now he's moving on to reinvent iOS. Got it.
ensiferum 1 day ago 0 replies      
Oh..just another craproid device
Tectonic: a modernized, self-contained TeX/LaTeX engine rust-lang.org
303 points by JoshTriplett  21 hours ago   90 comments top 17
santaclaus 20 hours ago 3 replies      
> Has a command-line program that is quiet

But what about my overfull hboxes???

mhd 19 hours ago 3 replies      
If this wraps 120k lines of actual TeX backbones, why write the API layer in Rust at all? Yet another dependency, just to be buzzword compliant?

We've come a long way since the original literate TeX program.

cossatot 20 hours ago 1 reply      
'Official' website here [0].

This looks great, although I do wonder to what degree the automated downloading of dependencies limits off-line work. I'm not sure how widespread this is, but one of my biggest productivity hacks is writing with the wifi turned off (I go for pencil and paper for first drafts if possible, but editing, figures and references all require the computer).

[0]: https://tectonic-typesetting.github.io/en-US/

jahewson 19 hours ago 2 replies      
It's hard to talk about a "TeX" engine when there is eTeX, XeTeX, and LuaTeX each with their own strengths and weaknesses heading in different directions. FWIW it seems to me that LuaTeX is actually the most promising of the three, not because of Lua but because of its extensibility.
IIAOPSW 20 hours ago 5 replies      
I just use Overleaf. Gets work done, easy to work with collaborators, does the package install thing automatically, real-time preview of what you write, has a dumb-person "rich text" mode, etc.

Only downside is it's cloud-based, so I can't be productive on an airplane.


tuananh 19 hours ago 1 reply      
Maybe use another name: https://coreos.com/tectonic/
petters 20 hours ago 3 replies      
> powered by XeTeX

Does XeTeX support microtype these days? It is important if you want "best-in-the-world output" to PDF.

chj 17 hours ago 1 reply      
Sounds like a rewrite, but it's just a wrapper.
SEJeff 12 hours ago 3 replies      
This is a terrible idea as Tectonic as a software product already exists: https://tectonic.com

It is a registered "computer software" trademark as well:


Norfair 17 hours ago 1 reply      
Come on, you could have called it TeXtonic, what a missed opportunity for a pun.
rmbeard 9 hours ago 0 replies      
It failed to process the first two files I tested it on. I really like the idea, but that is not a great start. Being able to switch between TeX engines would be a good option; it would be great if you could do that with a CLI switch.
RolandBuendia 8 hours ago 0 replies      
I fail to see what these three bullet items really bring to the table.

- Downloads resource files from the internet on-the-fly, preventing the need for a massive install tree of TeX files

- Automatically and intelligently loops TeX and BibTeX to produce finished documents repeatably

- Has a command-line program that is quiet and never stops to ask for input

My understanding is that both MiKTeX for Windows and TeX Live for Linux allow users to install packages on the fly. To compile a file with BibTeX, a user just needs to run TeX twice. And finally, TeX will only ask for user input if there is an error.

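The "run TeX twice" advice is really a fixed-point iteration: rerun the engine until the auxiliary files stop changing, which is presumably what Tectonic's automatic looping amounts to. A minimal sketch of that idea (the `run_pass` callback and the convergence check are my assumptions for illustration, not Tectonic's actual API):

```python
import hashlib

def compile_until_stable(run_pass, max_passes=5):
    """Rerun a compile pass until its auxiliary output stops changing.

    run_pass() returns the bytes of the .aux-like state after one pass.
    Returns the number of passes taken to reach a fixed point.
    """
    last_digest = None
    for n in range(1, max_passes + 1):
        digest = hashlib.sha256(run_pass()).hexdigest()
        if digest == last_digest:
            return n  # aux bytes unchanged: cross-references are stable
        last_digest = digest
    raise RuntimeError("did not converge; possibly an unstable \\label loop")

# Toy "document": cross-references settle after the second real pass.
state = {"aux": b""}
def fake_pass():
    # First pass writes the labels, second pass resolves them; after
    # that, the aux bytes no longer change between passes.
    state["aux"] = b"labels-resolved" if state["aux"] else b"labels-written"
    return state["aux"]
```

Tools like latexmk use essentially the same trick, comparing checksums of the .aux/.bbl files between passes to decide whether another run is needed.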
domoritz 18 hours ago 1 reply      
So it's like latexmk + lazy downloading? How does it compare against latexmk?
deepnotderp 20 hours ago 1 reply      
I'd love to see something akin to "Weebly for LaTeX". Maybe I'm just too much of a theoretical noob, but I don't quite enjoy feeling like I'm writing code when trying to write English.
mrkgnao 19 hours ago 2 replies      
This might end up being the Rust "killer app"! (Servo aside, of course.)
rmbeard 8 hours ago 0 replies      
Given the name issues, why not change the name to rustex?
wolfi1 18 hours ago 1 reply      
Why does it use xdvipdfmx? XeTeX can output PDF as well, can't it?
Going for the 5 hour workday krister.ee
305 points by kristerv  18 hours ago   172 comments top 39
jblow 11 hours ago 6 replies      
I had this same kind of personality / mental condition, and I am going to say, if he is really of the same personality type, 5-hour days are not going to help this author in the long term. What is helping his mood is not really the shorter day, but the hope of having made a short-term structural change that might fix things. The thing is, it won't. He already mentions at the end that burnouts are back. Well, pretty soon the 5-hour days will be feeling too long and he will be 'unable' to do them. Then what? 3-hour days?

The fundamental problem is that he doesn't actually want to be doing what he is doing, despite the rhetoric of "great team and awesome project". Come on, is that really how you feel about it deep in your heart, or is it empty SV rhetoric?

Two things will help this author:

(1) Strike out on your own, following your own motivation only. Yes you have to figure out how to make ends meet financially, but that is your lot in life. Fortunately it is easier to do this with computers than in most other fields.

(2) Meditate, learn to observe your mind and why it does what it does, so that you don't feel powerless or subservient to things like burnout. It's hard to explain the transformation that takes place, but being able to stand next to or outside these mental processes is very powerful.

Azeralthefallen 12 hours ago 8 replies      
I actually worked at a company who tried something similar to this (6 hours/5 days) for about ~2 months, and it worked for a little while. Unfortunately we found a lot of problems with this:

- People who started early (~7am) left really early, and people who liked to start much later (~10am) stayed late. This left only a ~3-hour overlap when the majority of the dev team was in. There was a small effort to normalize the start times, but we could never find a common time that worked for everyone.

- I felt this put an obscene amount of pressure on some people (including myself) more so than others.

- People hate meetings, people beg and cry that meetings are a waste, but the bitter reality is either you get everyone on the same page at once, or you need to do it separately, which becomes even more pressure for certain people. Combine this with the limited window when you can guarantee the majority of dev will be in, and it makes things difficult.

- The coop/intern students, I felt, got shafted hard by this; since time is so much more valuable during those two months, they often got very little direction.

- I also personally found that about 6 of us (team leads/seniors) ended up with much more pressure, and stress due to trying to force 8 hours of work into 6 hours of a day.

In the end we ended up giving up on the idea, due in combination to other departments complaining, some HR payroll issues, and problems with coverage.

jlebrech 18 hours ago 5 replies      
It's already a 5 hour work day, it's just inside of an 8 hour attendance day.
wastedhours 14 hours ago 4 replies      
Also worth realising that in a lot of companies you're not being paid for your output, you're being paid for a certain amount of access to your brain.

The value in your job isn't your output, it's the organisational outcomes that occur as a result of you doing what ya do.

Some companies are fine and built around the outcomes of 5 hours a day of your code. Some organisations though really want 8 hours a day of access to those sweet, sweet neurons. Even if the constant interruptions, meetings and feelings of unproductivity are side effects, perhaps they value the outcomes of 2 hours a day of your code and the value sharing that comes from an inane question at 17:59, more than absolute output.

Not saying the latter is more efficient or should be right, but just showing there's different value companies derive from their employees over and above project deliverables.

ryandrake 11 hours ago 4 replies      
Lot of people in this thread saying they'd be willing to work N% hours less for N% less pay. Am I the only one living in a high cost of living area, for which the opposite is true? I'd be totally for working more but getting paid more. At this point in my life, mid-life crisis age, I am starting to notice time stalking me, realizing my inadequate retirement savings, and wondering how many more at-bats I'm going to have before it's time to walk away from the baseball game. Am I going to have to eat dog food when I retire? What's my kid's college going to cost? Will I ever be able to afford a vacation? Who really has comfortable answers to these questions?

Why on earth would you choose to work and get paid less than you can, while you are young and capable?? I look back and wish I had worked multiple jobs when I was younger, not that I had fewer hours.

shubhamjain 18 hours ago 4 replies      
Given a choice, I think 8x4 would be more sensible in terms of benefits. The best use of free time is enjoyable experiences outside work, which is more doable when you have it without interruptions of a work schedule. Five hours a day is awesome if you have a hobby or a side project; not so much if you want to travel. The second reason is the time it takes to level up and actually start working. Honestly, this could be lessened if I didn't check reddit & HN first thing after I start working but I have grown a bit habitual to it.

On any given day, five hours of focused work is much better than eight hours filled with distractions, but I haven't found the magical solution to make that (super-focused work) happen.

Personally, I don't have much of an issue with working hours. The main issue is I can't enjoy long stretches of vacation. Yes, it's possible to sacrifice some of your salary to go wild, but you can't do it without getting a frown from your superiors.

n1vz3r 11 hours ago 3 replies      
I have tracked all my billable time for the last 7 years. My average is 4:30 billable hours a day, and I can confirm that every period of over-working ends with an equal or longer period of under-working. So 4:30 is like a golden number. I stopped fighting this, and now after 4:30 hours I happily clock out and go home. (Yes, I'm self-employed.) This way the only reasonable strategy to earn more without having health issues is to bill more per hour. The exception to this 4:30 rule is non-programming work that doesn't require high concentration: visual design, reports, configuration, CSS tweaks, etc. I can do that pretty much non-stop for a whole day.
puranjay 17 hours ago 3 replies      
I've always worked from home (freelanced, then started a remote company). Never actually been to a regular office.

Recently, I joined a coworking space.

I used to think that I'm "unproductive", but after seeing how others actually work, I'm surprised that businesses get any work done at all.

I'll be amazed if most employees work at more than 60% productivity.

lbill 17 hours ago 1 reply      
My contract specifies a 7.5h workday... But I do only 6h/day, and so do most of my colleagues. I am very lucky: I live in France, this country has a strong culture of "stay late at work and the boss will like you", yet my firm does not care about that. It cares about getting sh*t done.

This is a broader subject than "work hours": my firm thinks that staying extra hours to procrastinate is not useful, and it believes that employees are more efficient when they are happy! In order to have an efficient workforce and less turnover, I think any business should try to answer, for each employee: what conditions does he/she need to be happy at work?

_yosefk 15 hours ago 1 reply      
I've tried both 5 short workdays and 3 longer workdays. I like 3 longer workdays better. A full day for work and a full day for something else mean, to me, that I can focus more fully on work and then focus more fully on something else. A "half work, half not work" day means, to me, that I can't quite focus on either.

TFA presents the shorter week [the alternative to a shorter day] as 3 consecutive days; I prefer interleaving work and non-work days, so as to neither be absent from work for 2 days in a row, nor work for 2 days in a row.

readittwice 13 hours ago 0 replies      
There is another point to consider: getting 20% less money doesn't necessarily mean 20% less money after taxes. When I was working part-time (20h per week) while studying, working full-time during summer meant ~90% more work (from 20h to 38.5h) but only ~60% higher salary (after taxes). So I just continued to work part-time in holidays and enjoyed summer. In my country a higher salary means higher taxes, so working fewer hours means paying less tax because of a lower salary. Taxes don't consider the number of hours you work. Although you need to consider that you also pay less into, e.g., your pension fund, some of your additional salary is "just" taxed away. When working 20h the difference for me was substantial; right now, not so much. But that may be different for you.
jrumbut 17 hours ago 2 replies      
I really wish more companies would be willing to do this, as a childless adult I have no need for more than 20 hours worth of pay, and my health definitely benefits on a lighter schedule.

Currently I'm achieving this through contracting but I would much rather have a more typical employment situation to reduce the administrative burden and the need for sales.

theparanoid 18 hours ago 1 reply      
I wish more companies offered less than 40hrs/wk. But, hey, most places don't even like giving vacation. I'd rather commit seppuku than go back to a 40hr office.
kareemm 13 hours ago 0 replies      
Reminds me of a buddy's 5 year plan. He owned 50% of a company that he bootstrapped to 8 figures in revenue; his co-founder owned the other 50%.

Here's his 5 year plan:

Year 1: work 4 days a week

Year 2: work 3 days a week

Year 3: work 2 days a week

Year 4: work 1 day a week

Year 5: cash cheques, spend time with his family, and do adventure sports full time

dvcrn 16 hours ago 0 replies      
I am very very interested in this myself. I proposed something similar to my current company but got refused.

I experimented with different hours off on certain days and found that the thing that would turn me into a productivity monster would be a 4-5h work day with remote option. But now try to find something like this (especially in Asia). Despite loving my current job, I think if I would get a counter offer from a company with these benefits, I would probably quit right away.

I am personally not a office bee and dislike leaving when it's dark. I'm drained of all my motivation and the darkness makes me just want to go home, watch a YouTube video and sleep, just to repeat the same cycle again. I managed to counter this fairly successfully by working outside of cafes that have terraces and picking a new location every day. My motivation and productivity level stays up longer and the drain is reduced, but now if I just had more time...

waivej 10 hours ago 1 reply      
Back when I worked for someone else, I switched to part time after I had been there several years. I would work Mon, Tues, Thurs and got paid 60% of my previous salary. I was more productive than ever and came to the office having thought through things and ready to type it out.

I also didn't notice the drop in salary because I had time to repair things and didn't "buy" progress in hobbies. I also had time to design things for work while sitting in a convertible by a lake rather than a drab office. It was one of the best experiences of my life. Sometimes I wish I could do the same thing working for myself but the situation is different.

The only tough part was not having time to socialize with coworkers. They would take breaks and talk about things but I felt my time was so precious that I wanted to get my work done.

dasmoth 18 hours ago 3 replies      
Great to see people trying things like this!

But... Ctrl-F "commute"? Nope, don't see anything. Are you either remote or living very close to the office? Having more than a few minutes of commute does potentially change the trade-offs.

onion2k 18 hours ago 1 reply      
But since I'm giving the company the best hours of the day I would ask 80% of pay for the 60% of workload.

The author states that he's able to work with fewer distractions and better concentration when it's a 5 hour day, so everyone's a winner - he gets more time, he gets the same amount of work done, and the company saves money. I can imagine it works very well in the author's circumstances. But how many workers are in the same situation?

For a start, very few people can afford a 20% cut to their pay. Anyone in that situation is out.

For businesses that pay people enough that they could afford a 20% pay cut, such as the big high profile IT companies, the business is usually awash with cash; they need people to do more work and are willing to pay overtime for them to do it. Saving 20% of the wage bill is no incentive.

Lastly, as dasmoth points out, if you have a commute then you'd just be increasing the relative amount of time you're travelling compared to working, which I imagine would make it feel much worse. I have a 30 minute commute each way and when I do a half day it feels like a huge waste of time.

As a solution when it's appropriate I think it's great, but I doubt it's applicable to many people.

EZ-E 18 hours ago 1 reply      
One issue is that most companies would take a person asking for lighter work hours as lazy, or not motivated. No matter the company, there will always be "that" vibe when wanting fewer hours, unless they are specifically seeking a part-timer.

"What's wrong, don't you want to work 50 hours a week like the rest of us ? What's the matter ? Not motivated ?"

galfarragem 2 hours ago 0 replies      
My dream workday:

From 6:15am to 9:00am, work alone at home.

Get ready and 15 minutes walk/cycle to an office.

From 9:30am to 12:45pm, team work.

6h/day (3h on Friday), 27h/week. Wednesday and Friday work from home only.

arien 15 hours ago 0 replies      
I went through some kind of burnout as well last year (more related to personal stuff than work itself) and decided to take a long vacation. I didn't want to go back to the usual schedule and risk having the same issue again in a year or two, so I decided to work part time when I came back.

So, now I work Monday to Wednesday and have Thursday to Sunday free. This means I only have three days per week to make an impact on the company (as opposed to the ones who work full time), so I find myself really focused. My productivity vs before has skyrocketed.

What do I do on my extra days? Sometimes I go out, sometimes I just do nothing and relax, sometimes I do small side contracts or personal projects. I hope one of these takes off one day and allows me to recover the money I gave up when switching to part time. At least now I've got the energy to work on them.

whatnotests 18 hours ago 1 reply      
/me looks at clock, laughs, gets back to work.
heiti 18 hours ago 1 reply      
Have you also done anything about trying to find out why you have trouble focusing? I also read some older articles from your blog and got the feeling that you have been treating yourself as a static thing, and that the things around you need to change for you to feel better. Or did I get that wrong?
mto 15 hours ago 0 replies      
I'm a remote freelancer and currently also doing 25h/week, usually distributed over 7 days. I originally started because of the birth of my daughter, but I quickly noticed how my productivity increased. I think I get more done than I did in 40h in the office (also because I could never really relax there). I used to have the "I hate everything and being locked up in that office all day long is awful" moments every 3 months. Now my motivation stays roughly the same over the year.
Zelmor 17 hours ago 1 reply      
2 weeks is not enough of a dataset. Habits have to be unlearned and off-work time reorganized.

Come back in a year.

Also, it would be better to ask the manager's observations regarding productivity. Biases and all, you see.

krambs 11 hours ago 0 replies      
I remember the Heroku founders espoused 6 hour work days for developers. They felt that this was the daily limit for creative workers - and that there was not only diminishing returns after 6 hours, but potentially negative ones (due to bad decisions made when overworked).
bufordsharkley 8 hours ago 0 replies      
> Free time has filled up. I still get more stuff done, but personal development still requires proper scheduling and planning. The allure of "I'll have time for everything" has gone.

Yes: it's not the working for 8 hours, but rather the limited time left after the 8 hours is over. There is so much more in my life than what others will pay me to do, and I'd have so much more energy for work if I was given more of my time back to pursue it...

korzun 10 hours ago 0 replies      
You can't measure engineering productivity over a span of a couple of weeks. You can't compare the output from week X to week Y unless the test conditions and subjects are completely in-sync. We all know that would be impossible.

Here, the author claims there is a change in productivity, all the way down to a %.

There is nothing scientific behind this; you might as well let your cat pick the numbers. They will be just as valid.

ianai 17 hours ago 1 reply      
The reason hourly employees are to be paid time and a half in the US is to incentivize companies to hire more people. Decreasing the full time/OT line increases the demand for labor. I just wish I knew how to make that thought agreeable in the US.
erikb 12 hours ago 0 replies      
I find that 20 and 25 hours is too little. You need to spend a considerable time in office just to be part of what's going on.

However, 30-35h was perfect. You get stuff done, you are not too far out of the loop (especially if others do the same), and you are still able to rest enough.

sshb 15 hours ago 0 replies      
irrational 9 hours ago 0 replies      
How would I have time to both do my work and read Hacker News with only 5 hours in my workday?
gkya 13 hours ago 0 replies      
I know this is off-topic, but I hate the st ligature; it's so out of place... I mean, it's a screen I'm looking at, not press-printed paper; there's no need to save on fonts.
ensiferum 15 hours ago 0 replies      
As so often is the case the post highlights how work gets in the way of life. Should have been born rich ;-)
davidgerard 13 hours ago 1 reply      
The word "commute" is missing from this essay.
wcummings 14 hours ago 0 replies      
If a company started doing this, it would be a real boon for recruiting. It's pretty hard to compete with.
bbcbasic 15 hours ago 0 replies      
Just had my weekly Wednesday off (oceanic the zone) and spent 3 hours working on my side hustle, a web store. Tweaked the theme, analysed ad data and set up some new ad tests. I felt super productive, getting as much done as might take 8 hours if the work was "jobified".

Spent rest of day trying out mattresses, getting a massage, cooking fish and watching star trek. Hoping I'm successful so that every day can be like this!

Asooka 13 hours ago 0 replies      
Well, programming isn't really traditional work. You know, W = F × s: work is force times distance.

We don't apply a lot of F and definitely don't move a long s, so at the end of the day there isn't a lot of W we've done. I think it's useful to think of the Puritan work ethic briefly mentioned in this article, together with the recent article about running modern society on human power alone. Back in the day, we really did run a lot of things on human power. Yes, for the really long s, we used large animals capable of sustaining a great deal of F throughout the day, but other than that, a lot of stuff was done by hand. If you didn't apply your share of the F, you literally didn't have food to eat. The goods you had were a direct product of hand labour; every bread was made with a nontrivial amount of calories expended by human muscle.

As we see, the more you work your muscles, the better they start working; that's the basic premise of exercise and getting in shape. However, your brain doesn't quite work like that. You can over-exert your mind much more easily than you can your muscles, and it just doesn't recover as well as they do. For your muscle, the cycle is: work -> muscle hurts -> muscle recovers -> work hurts you less now. For your mind, the cycle is the other way around: work -> you get burned out -> you recover -> you now burn out more easily. This fundamental difference in the nature of work in the knowledge economy is what we should focus on, to overcome the work ethic, traditions and employer-employee regulations that were formed in the time when 1 bread = lots and lots of calories expended by muscle.

kenbolton 10 hours ago 0 replies      
I started doing four hour workdays about eight years ago. Every day is a potential workday, and some rare workdays are two! I don't find it burdensome to have a month-long streak of working every day at this pace. And if I "miss" a day (or week), it isn't the end of the world.

I have a seasonal side-hustle that includes the occasional 12-hour workday and even multi-day 24 hour stretches. Funny side-effect is an increase in code quality and productivity during the season.

Damn Cool Algorithms: Log structured storage (2009) notdot.net
287 points by Tomte  1 day ago   33 comments top 5
nostrademons 1 day ago 3 replies      
In the 8 years since this was written, Log-Structured Merge Trees (the concrete realization of this idea) have basically "won". BigTable, AppEngine, LevelDB, Cassandra, HBase, MongoDB, and several others are all built around them.

There's a powerful hardware trend driving this, namely that disk capacities and write bandwidth are still increasing rapidly, but seek times have basically plateaued. That means that data structures that rely on append-only operations can continue to scale to take advantage of bigger disks, but data structures that rely on disk seeks (eg. B-trees) have hit a bottleneck. Also, as number of cores continues to increase, playback & processing from a sequential log can often be parallelized, but updating your on-disk indexes blocks on I/O.

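The core mechanics can be shown in a toy Bitcask-style sketch: every write is appended to a log file, and an in-memory index maps each key to the offset of its latest record. This is illustrative only, not how any of the systems above actually lay out records:

```python
import os
import tempfile

class LogStore:
    """Toy log-structured key-value store.

    Writes are purely sequential appends; reads are one seek via the
    in-memory index. Overwritten keys leave stale records behind, which
    a real system would reclaim with compaction/garbage collection.
    """

    def __init__(self, path):
        self.f = open(path, "a+b")
        self.index = {}  # key -> offset of the latest record for that key

    def put(self, key, value):
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()
        k, v = key.encode(), value.encode()
        # Record format: 4-byte key length, 4-byte value length, key, value.
        self.f.write(len(k).to_bytes(4, "big") + len(v).to_bytes(4, "big") + k + v)
        self.f.flush()
        self.index[key] = offset  # the old record, if any, is now garbage

    def get(self, key):
        self.f.seek(self.index[key])
        klen = int.from_bytes(self.f.read(4), "big")
        vlen = int.from_bytes(self.f.read(4), "big")
        self.f.seek(klen, os.SEEK_CUR)  # skip the key bytes
        return self.f.read(vlen).decode()
```

Compaction in this model is just rewriting the records currently pointed at by the index into a fresh log and dropping the old file.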
no_protocol 1 day ago 3 replies      
I am impressed by the writing style. Very clear and delivers a solid explanation.

What relation, if any, would this type of system have with "persistent data structures", a term I have seen used in some browsing of functional programming topics. Is this somewhat like a persistent data structure until old parts are overwritten ("garbage collected"?)? Is there a flavor of persistent data structure similar to this?

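They are closely related: persistent data structures get the same effect in memory via path copying, where an update copies only the nodes along the path to the change and shares everything else, so old versions stay readable, much like superseded records remaining in the log until garbage collection. A small illustrative sketch (a persistent binary search tree; the function names are mine):

```python
def insert(node, key, value):
    """Persistent BST insert: returns a NEW root, sharing every
    untouched subtree with the old tree.
    A node is None or a tuple (key, value, left, right)."""
    if node is None:
        return (key, value, None, None)
    k, v, left, right = node
    if key < k:
        return (k, v, insert(left, key, value), right)  # copy path only
    if key > k:
        return (k, v, left, insert(right, key, value))
    return (key, value, left, right)  # same key: replace the value

def lookup(node, key):
    while node is not None:
        k, v, left, right = node
        if key == k:
            return v
        node = left if key < k else right
    return None
```

After an insert, both the old and new roots are fully usable, and unchanged subtrees are physically shared between them.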
mwcampbell 8 hours ago 0 replies      
Another notable product based on log-structured storage is ObjectiveFS (https://objectivefs.com/), which implements a POSIX filesystem on top of Amazon S3 and other object stores. It's proprietary, so I don't know much about how it works. But it claims to be a log-structured filesystem.
timClicks 1 day ago 5 replies      
Slightly related perhaps, and something I have been curious about for a while... event sourcing seems like a very powerful pattern that hasn't seen wide adoption. The best documentation seems to be some MS dev library notes and a discussion from M. Fowler.

Are there any open source implementations of a database that uses event sourcing?
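
The pattern itself fits in a few lines, and it is essentially the log-structured idea applied at the application level. A minimal sketch (the names and bank-account domain are made up for illustration): state is never stored directly; it is derived by replaying an append-only list of events.

```python
# Toy event sourcing: current state is a fold over an immutable event log.
events = []                                   # the append-only source of truth

def record(event_type, amount):
    events.append({"type": event_type, "amount": amount})

def balance(log):
    """Rebuild state by replaying every event from the beginning."""
    total = 0
    for e in log:
        if e["type"] == "deposit":
            total += e["amount"]
        elif e["type"] == "withdraw":
            total -= e["amount"]
    return total

record("deposit", 100)
record("withdraw", 30)
record("deposit", 5)
print(balance(events))   # -> 75; replaying a prefix gives the state as of that point
```

As for open-source implementations, Greg Young's Event Store and Kafka-as-a-log are the answers most often cited in these discussions, though neither is a drop-in "event-sourced database".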

d_t_w 23 hours ago 0 replies      
For further/similar content by Ben Stopford that I personally found to be very high quality:


Electric Cars Soon Will Cost Less Than Gas Cars, Research Suggests industryweek.com
215 points by jseliger  1 day ago   296 comments top 14
11thEarlOfMar 1 day ago 6 replies      
The premise turns on the batteries: They are still expensive and need replacing every 7-10 years. If you're modeling the cost of ownership, you should be factoring in a new battery pack after 10 years. (Message to ye who covet 1,000,000 miles) Therefore, the rate of adoption will be somewhat modulated by the ecosystem of lithium ion materials and production. It's obvious Tesla sees this as a crux since they are investing billions in battery production.

Li-ion battery cost per kWh has already been falling at about 14% annualized over the last 10 years [0]. However, mass adoption of EVs changes the demand side considerably, and we therefore cannot assume the same rate of decline as we have seen recently. Perhaps we will, given the ramping production, but if supply and demand don't move more or less in step, prices could be somewhat stubborn.

[0] https://cleantechnica.com/2015/03/26/ev-battery-costs-alread...
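
A 14% annualized decline compounds quickly; a quick check (illustrative arithmetic only, using the rate quoted above) shows why forecasts of a roughly 77% drop by 2025 are little more than an extrapolation of this trend:

```python
# Compound annual decline: price after n years = start * (1 - rate) ** n
rate = 0.14
for years in (5, 8, 10):
    remaining = (1 - rate) ** years
    print(f"after {years} years: {remaining:.0%} of today's price "
          f"({1 - remaining:.0%} decline)")
# 0.86 ** 10 is about 0.22, i.e. roughly a 78% drop if the historical rate held
```

Whether the rate does hold is exactly the demand-side question the comment raises.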

eltoozero 1 day ago 8 replies      
I couldn't buy a Tesla for the total I've spent on my '03 Camry, currently pushing 380K miles, including fuel costs.

In the 12 years I've had it I've put on 350,000 miles. Even at a conservative 25 mpg and a generous average fuel cost of $3 that's:

$15,000 in car. $42,000 in gas.

This excludes maintenance etc, but I fail to see how I'm saving money buying an electric car unless I'm already rocking solar.

Maybe it's cheaper by the mile, but my assumption is any fuel savings will be quickly erased by additional insurance costs.
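
The back-of-the-envelope figures in this comment check out, using its own assumptions (350k miles, 25 mpg, $3/gallon):

```python
# Lifetime fuel cost under the comment's stated assumptions
miles, mpg, price_per_gal = 350_000, 25, 3.00

gallons = miles / mpg
fuel_cost = gallons * price_per_gal
total = 15_000 + fuel_cost           # purchase price plus fuel, as in the comment

print(f"{gallons:.0f} gallons, ${fuel_cost:,.0f} in fuel")   # 14000 gallons, $42,000
print(f"${total:,.0f} total for car + fuel")                 # $57,000
```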

r00fus 1 day ago 6 replies      
While this seems drastically unrealistic from the viewpoint of someone in the US, from a European point of view I can totally see electrics becoming very viable. Take a look at the new cars from Citroen, Renault and Hyundai with decent to serious range (200-300 km+), and Europe seems more likely to push regulations than a Trump-administration US.
clock_tower 1 day ago 2 replies      
This wouldn't surprise me, although manufacturing lithium batteries is still a bit complex. The railroads already use all-electric drivetrains; short-range locomotives carry batteries only, while long-haul ones use diesel to generate electricity and power the drivetrain. The big advantage of an electric drivetrain is regenerative braking, which is just as useful in cars (think of the Prius and its remarkable mileage) as in trains.

Over the medium term, I'd expect to see a car market that looks a bit like the train market: all-electric cars for city driving, plus hybrids for traveling in the country or in underdeveloped areas. Fossil fuels will probably never go away entirely (certainly the military will always need them), but oil is good for things other than fuel; 100 years from now, I expect electric cars will be the usual option while gasoline-burning hybrids are an expensive curiosity.

(Or we might move to a mix of electricity and hydrogen fuel cells; Toyota's betting on this, since natural-gas refining releases waste hydrogen. Safe containment of hydrogen is harder than safe containment of gasoline, though...)

jepler 1 day ago 5 replies      
.. in part due to assuming that the price of the internal combustion engine is "going to go up as a result of more stringent regulations especially regarding to particulate regulations", but also assuming that the battery (A) accounts for 50% of the sticker price today and (B) will drop by 77% by 2025. I wouldn't bet on either being the case (particularly on there being effective regulation in the US).
theprop 1 day ago 4 replies      
1. Gigantic longevity. Electric vehicles will likely run 1 million miles or more without requiring any significant repairs except a new battery pack. Internal combustion engines develop all kinds of problems related to burning petroleum for energy.

2. No gas charges. For the $35k Tesla, if you get an invite from a current Tesla owner, you can get $1k off and free lifetime Supercharger usage. Supposing you always use a Supercharger and have no gas charges, over 200k miles you just saved $25k in gas...so your Tesla's net cost came in under $10k!! And it probably still has a long life ahead of it, so it would have far more resale value than an internal combustion vehicle with similar mileage.

So it makes a lot of sense for everyone really to buy a Tesla :-D!

That said if electric vehicles do last far longer than traditional vehicles you could reduce car sales by as much as 75%+...if you add in another trend of transportation-as-a-service via on-demand vehicles (e.g. Uber), that could reduce car sales by a similar amount.

Both combined could decimate the current car industry as we know it which has a lot of impacts on inputs e.g. mining for metals, etc. which go into car manufacturing. It does make the world a lot more efficient. If you look around the US, all you see are parking lots and cars everywhere sitting around (for like 97% of their life).
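
The "net cost under $10k" claim above treats avoided fuel purchases as a rebate on the sticker price. Spelled out with the comment's numbers, plus an assumed 25 mpg / $3-per-gallon comparison car (the comment doesn't state these, so they're illustrative):

```python
sticker = 35_000
referral_discount = 1_000
miles = 200_000
mpg, gas_price = 25, 3.00        # assumed comparison car; not stated in the comment

gas_saved = miles / mpg * gas_price              # $24,000 under these assumptions
net = sticker - referral_discount - gas_saved

print(f"gas avoided: ${gas_saved:,.0f}; notional net cost: ${net:,.0f}")
# Close to the comment's figure; its "$25k saved" implies a slightly
# thirstier comparison car or pricier gas.
```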

pfooti 21 hours ago 0 replies      
One thing I'm surprised about is the way Gogoro scooters are being rolled out and sold. The model 2 has a retail price of below $2,000 and is pretty much ideal as a city commute scooter. But they're not selling them for individual use, only in cities where they have their battery exchange charger stations set up.

So I have to wonder: what kind of loss-leader is the cheap gogoro 2 scooter? Because I guarantee I would buy one (and a charging station that let me trickle-charge one set of batteries while I ride around on the other) if they were for sale. I have a 10 year old vespa that I love, but is starting to get long enough in the tooth that I'm considering a replacement.

But they're not going to be sold in San Francisco (or pretty much anywhere else in the US, unless you're really optimistic), because they want to have a citywide network of single-use battery exchange stations.

By single-use, I mean: only good for this one product. With gasoline, you can fuel anything, but they are basically angling for a whole-city distribution of a single thing: battery packs just for a scooter. This strikes me as a pretty bad model, which circles me back to the start. I wonder how much the scooter actually costs, if they're willing to leave this much market on the table for individual bike / charger sales.

I suppose it could also be NTSB certification holding them up too.


olivermarks 1 day ago 1 reply      
'Electric cars aren't going to take over.' There seem to be endless articles online, pro and con, about electric vehicles: http://www.businessinsider.com/why-morgan-stanley-wrong-abou...
Angostura 1 day ago 1 reply      
I'd love one. Unfortunately, I - and quite a few like me in large cities in the UK - only have on-street parking, so no way to charge at home.

Once some of the large supermarkets start offering charging in a large proportion of their parking spaces, it'll get more interesting.

grandalf 1 day ago 2 replies      
I've made this point before and people are always skeptical. Consider how many fewer moving parts an electric car has. It's like the difference between solid state electronics and vacuum tube electronics.
rjdagost 1 day ago 1 reply      
This projection that electric cars will soon become cheaper than gas cars assumes an exponential decay in cost. All exponential trends ultimately cease due to some physical limiting factor. We're seeing Moore's law sputter to a halt because it is getting more expensive (and not less) to make smaller features on microprocessors - semiconductor manufacturing processes are just not scaling the way they have for decades. Nevertheless, Moore's law was a truly exceptional exponential ramp, unrivaled in the history of technology.

Does battery manufacturing have the same advantages that will allow for a tremendous long-term exponential ramp analogous to Moore's law? I am wondering how long it will take for battery technology to "hit the wall". Hopefully batteries will become significantly cheaper, safer, and faster to charge, but I am skeptical that we will be able to ride this exponential curve for anywhere near as long as we've benefited from Moore's law. I hope I am wrong.
guscost 1 day ago 1 reply      
Unfortunately, still only after factoring in regulations.
appleiigs 1 day ago 0 replies      
> On an upfront basis, these things will start to get cheaper and people will start to adopt them more as price parity gets closer, said Colin McKerracher, analyst at the London-based researcher. After that it gets even more compelling.

"Research" needs to include repairs for accidents (even though I've never been in an accident in my own cars). Because I'm OK with the upfront basis, but definitely not OK with the after. I've seen many articles regarding rare parts and long repair times with Tesla. I've put in $5K of repairs in past 2 years for my aging gas car... about 10% of what I plan to spend on my next car, so repairs are a big deal and even more so when (not if) I get into an accident.

olivermarks 1 day ago 1 reply      
...'will be cheaper to buy in the U.S. and Europe as soon as 2025'... is the subhead. The article makes a number of assumptions to reach this conclusion.
Growing a Compiler (2009) dartmouth.edu
234 points by ingve  3 days ago   16 comments top 3
big_spammer 3 days ago 5 replies      
This made me find and read "Growing a Language".


"In a race, a small language with warts will beat a well designed language because users will not wait for the right thing; they will use the language that is quick and cheap, and put up with the warts. Once a small language fills a niche, it is hard to take its place."

richard_shelton 3 days ago 0 replies      
An interesting approach! But the old META II system (mentioned in the original article) and its derivatives look more practical to me. I think it's more effective to have two simple DSLs, one for parsing and another for tree transformation, as was done in Cwic and TREE-, which grew from the original META II (self-described in a few lines of code).
jamescostian 3 days ago 2 replies      
The amount of JS bashing here is so high. Pretty much any popular language fits the given description. Humble yourself and read about why your language of choice sucks: https://wiki.theory.org/index.php/YourLanguageSucks
Tallest Lego building with 4 pieces? medium.com
401 points by lorenzosnap  2 days ago   97 comments top 22
gerdesj 2 days ago 2 replies      
One commentator here questioned why this is on HN (and was downvoted somewhat). There is the fact that a dad teaches stuff to his daughter. The lesson is fun and interactive. The concepts dealt with are pretty profound and can be quite deep: constraints, maxima and minima.

Well done dad - you've covered some complex stuff in a fun and accessible way. Good skills.

If anyone else doubts why this is valid Hacker News, they may want to simply hand in their nerd card and do something else.

jacquesm 2 days ago 3 replies      
Ah, jacquesm bait, ok here is my solution (left), right one is an alternative to yours.


just the tops:


gene-h 2 days ago 0 replies      
Reminds me of a genetic algorithm that was made to optimize Lego structures [0]. One of the most notable results was optimizing for a structure as long as possible with a single support. What they got was a 2-meter-long, organic-looking cantilever that experienced significant brick deformation [1].


542458 2 days ago 2 replies      
I think you can do ever so slightly better by moving the pink dot to the highest pip on the 2x2 green cylinder - it extends a little beyond the pip, so you'd gain about a millimeter that way.

If it's not cheating to have parts not completely attached, you could maybe even balance the pink dot on top of the green cylinder for an extra 3-4mm.

jacquesm 2 days ago 2 replies      
Ok, an even better one:


te_platt 2 days ago 1 reply      
Very fun! It reminds me of teaching my son to play tic-tac-toe. He started in the top-left, I went center, he went top-center, I blocked top-right, he went in between top-left and top-center and quickly claimed victory. I gave him that one and said he had to go in an unused square. Next game he started top-left, I went center, he went top-center and then before I could go he hurried and went top-right and again claimed victory. I don't remember what we did after that but I do remember thinking it was more fun pushing the rules than the actual game.
trevyn 2 days ago 4 replies      
>The only rule is The structure needs to stand on its own.

Melt it down and recast it, duh. :)

tdy721 2 days ago 2 replies      
To entertain myself, I gave this some thought before I clicked. I assumed exactly what the title says: which 4 Lego pieces create the tallest structure that stands on its own?

Well, we need the biggest bricks we can find; my mind went to the ship hulls: https://www.ebay.com/p/?iid=282223262072&&&dispItem=1&chn=ps

So how can we connect 2 of these together, and make it stand vertical? Or can we?

It might be that we need the 3 pieces to make the brick stand... I'm not sure.

And then I clicked on the article...

(I'm not sure if I've even found the largest brick, or if I've found a 3 brick component; how do we define "brick"? Why do I care? Please comment and subscribe, it really helps ;)

natch 2 days ago 1 reply      
10.9cm free standing, no glue or tricks but top two bricks are just resting, not attached.


mturmon 2 days ago 0 replies      
It might be interesting to compare the height achieved by a given configuration, to the upper bound on heights (sum over largest dimension of all 4 pieces).
btbuildem 2 days ago 1 reply      
You can do a bit better if you put the round piece at the base, and make the blue piece diagonal just like the big flat piece.

You'd need to balance everything on the round piece, but the little pink dot should assist in that.

Kind of like those balanced rocks thing :)

zachrose 2 days ago 1 reply      
Gain an extra 2mm or so by just resting the pink piece on top?
5ilv3r 2 days ago 0 replies      
Asking questions, challenging assumptions, bending the rules, and just trying things are all core to the hacker mentality. Well done sir!
crazygringo 2 days ago 1 reply      
It's funny, but I feel like this is exactly the kind of skill required in engineering or business generally -- "How can we creatively maximize 'x' given what we've got?"

In fact, it wouldn't strike me as an unreasonable interview question, using the physical pieces. Of course some people are better with spatial reasoning than others (and experience with Legos is another leg up) -- but using several simple, general-purpose questions along these lines almost feels like a FizzBuzz for any job where problem-solving is an important part.

gazarullz 2 days ago 0 replies      
Very cool post. As for the people clicking on the link and then bashing the author: it would be nice if you could bring some arguments to support your dislike.
the_unknown 2 days ago 0 replies      
Thank you. I've now had a similar discussion with my two girls (Aged 11 and 4). Was a far better way to spend the evening than a typical night of Netflix.
rena-anju 2 days ago 0 replies      
"My daughter asked me"
Markoff 1 day ago 1 reply      
kudos to Lego marketing department for great viral
aaron695 1 day ago 1 reply      
Sorry but isn't two or three cheating? These are not valid ways to connect Lego are they?
xupybd 2 days ago 0 replies      
I have no idea why this post is so popular. I'm equally confused at why I find it so interesting.
paulcole 2 days ago 1 reply      
asketak 2 days ago 4 replies      
Does this content really belong to hackernews?
Mossberg Final Column: The Disappearing Computer recode.net
264 points by andrewl  2 days ago   61 comments top 12
jonstokes 1 day ago 3 replies      
Mossberg did a lot of good work, but so did a lot of other columnists who didn't attain nearly the status he did.

Mossberg mostly has one guy to thank for the unique place he occupied in the tech media scene: Steve Jobs. For whatever reason, Jobs decided that Mossberg's take on Apple products really mattered, a lot. Mossberg was Jobs's stand-in for Mr. Everyman, and Jobs seemed to believe that if Mossberg couldn't connect with a product, then it needed to be re-thought.

His status as anointed deliverer of the final verdict on Apple's product line gave Mossberg a massive amount of influence in the tech world all by itself, but added to this was the fact that, as with so many other things, the rest of the tech industry slavishly followed Apple's lead -- at least, the rest of the tech industry's PR departments followed it. For a corporate PR flack, having Jobs's own personal oracle say nice things about your product was the ultimate win. It was the fat Harvard admissions envelope, by which I mean something like, "this achievement, while important, has a way outsized significance to a certain segment of the population who compete for it because they've all decided that getting this particular thing means You Won and have a higher status than all of your peers who haven't gotten it."

Interestingly enough, the passing of Jobs was followed by the passing of the positive Mossberg review as the ultimate prize in the PR world, and Mossberg's departure from the WSJ didn't help.

Am I ripping on Mossberg? Not really, more like I'm ripping corporate PR, but Mossberg certainly cultivated this situation (who wouldn't, though). I will say that it was a source of eternal frustration (and envy) among the rest of the tech punditry that Mossberg's reviews had this bizarre status with the PR departments of the companies we covered, sort of like "Harvard as the agreed upon brass ring that all yuppie parents have decided to compete for" no doubt occasions much eye-rolling at Stanford, Brown, Yale, and everywhere else. Nobody is sad to see that era pass.

As for Mossberg himself, Godspeed, dude. May your amulet never tickle.

ComputerGuru 1 day ago 2 replies      
This made me inexplicably sad. I wasn't a huge Mossberg fan or anything and haven't read more than a handful of his articles (nothing against the man, of course). But I think it's hard to hear about this and not be affected, even if only a tiny bit. We're some 35 or so years into the era of the personal computer and it's basically at that point where an entire generation of great names in the industry have spent a full and productive career in the service of technology and are now stepping down... and in some ways, the industry itself is retiring (as he mentions). Times are a-changing and we must change with them, adapt or die.

So long, Walt.

liotier 1 day ago 2 replies      
"Ubiquitous computing (or 'ubicomp') is a concept in software engineering and computer science where computing is made to appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. [..] This paradigm is also described as pervasive computing. [..] Mark Weiser coined the phrase "ubiquitous computing" around 1988, during his tenure as Chief Technologist of the Xerox Palo Alto Research Center (PARC)"https://en.wikipedia.org/wiki/Ubiquitous_computing

I find interesting that the concepts dreamt in the 80's are becoming common, now that technology has caught up.

ianai 1 day ago 1 reply      
I think privacy concerns could seriously diminish or alter these AI developments. I sometimes wonder if the internet doesn't need a new abstraction; i.e., the academic underpinnings of network protocols are allowing governments and corporations to overstep. Maybe in the future, having an ISP without a 'privacy provider' will be like having a car without insurance?

Technically, I'm thinking of something like thousands of VPN connections distributing packets across randomly chosen data paths. Some trade off of bandwidth for abstracting away "which IP is doing what". (bleary-eyed thought: some sort of probability field applied to TCP/IP)

hugs 1 day ago 5 replies      
From the article: "and robotics are in their infancy, a niche, with too few practical uses as yet."

Why does it seem like robotics is always niche and not practical yet? It seems like robotics is perennially on the cusp of being the next big thing, but never really is.

(I'd qualify that robotics are the old big thing in industrial manufacturing, though.)

chmaynard 1 day ago 1 reply      
Mossberg was a professional technology critic and pundit, one of the first of his kind. He paved the way for many others who followed his lead, and he certainly deserves credit for his pioneering efforts. I enjoy reading critical reviews of new products and software, but I'm generally not a fan of tech punditry. I believe that we need fewer pundits and prognosticators, and more hard-hitting investigative reporters along the lines of John Carreyrou.
phil9987 1 day ago 1 reply      
Carefully chosen words and full of future visions. Thank you. We will see if this will indeed go in the direction he suggests - as usually one cannot foresee what will disrupt the world next. It can be something completely new which has nothing to do with computers.
ThomPete 1 day ago 0 replies      
There are two ways to look at this.

1) It's really sad that we aren't able to get better progress than we have

2) It's great that we are still far from robots taking over as that leaves a huge option space for startups.

Personally I think many things literally are around the corner and that the corner is getting closer, faster and faster.

Animats 1 day ago 1 reply      
Another one gone. John Markoff retired this year, too.
vowelless 1 day ago 0 replies      
Great column. And end of an era. Good luck to him.
shriphani 1 day ago 0 replies      
Great column. What a body of work! I hope to have touched so many lives by the time I finally hang up my gloves.
ebbv 1 day ago 3 replies      
Good column but this ongoing idea that the smartphone is the new personal computer is wrong. It's a different type of device. Smartphones are not replacing computers in terms of a device to get work done, other than email and phone calls. Doing real work for just about any job on a smartphone, even a big one like my iPhone 7 Plus, is not really viable. You still need a laptop or desktop to work in a spreadsheet, design a full page ad layout or do serious programming.

We need to let go of this idea that new technology is always a version of some other one, or replacing some other one. That's actually rare. Most of the time new inventions are just that; new.

Server room with seismic isolation floor in Japan earthquake disaster [video] youtube.com
231 points by DamnInteresting  1 day ago   59 comments top 14
sjbase 1 day ago 3 replies      
This is a great example of one of those investments that might feel like a waste, until it suddenly REALLY isn't.

I wish I had this video during my infosec & HA consulting days.

laurentl 1 day ago 5 replies      
I visited a couple of DCs in Tokyo, and the tours systematically feature a look at the anti-seismic system. It's actually pretty unimpressive: basically the seismic-protected part of the building is mounted on big rubber dampers, with some huge pistons thrown in for active attenuation. In taller buildings (I visited a DC with around 20 stories IIRC) the stories are not rigidly connected together, so that instead of swaying (and possibly toppling) during an earthquake, the building just kind of wobbles.

Very effective though, as the video shows. My visits were post-2011, and each DC had a record of the building's movement during the big earthquake; max amplitude on the seismic-protected part was a couple of centimeters, vs 50 cm or more for the rest of the building.
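
To first order, the rubber-damper setup described above is a soft mass-spring-damper between the isolated floor and the ground. A toy simulation (made-up parameters, nothing like a real isolation design) shows why the floor barely moves while the building around it sways: the isolation frequency is far below the shaking frequency, so little motion is transmitted.

```python
import math

# Toy base-isolation model: isolated floor = mass on soft spring + damper,
# driven by sinusoidal ground motion. All parameters are illustrative only.
f0, zeta = 0.3, 0.15            # isolation natural frequency (Hz), damping ratio
f_quake, A = 2.0, 0.50          # ground shaking: 2 Hz, 0.5 m amplitude
w0, w = 2 * math.pi * f0, 2 * math.pi * f_quake

x, v, dt = 0.0, 0.0, 0.001      # floor displacement/velocity; semi-implicit Euler
peak, t = 0.0, 0.0
while t < 30.0:
    xb = A * math.sin(w * t)            # ground position
    vb = A * w * math.cos(w * t)        # ground velocity
    # spring + damper act on the floor's motion relative to the ground
    a = -w0**2 * (x - xb) - 2 * zeta * w0 * (v - vb)
    v += a * dt
    x += v * dt
    if t > 20.0:                        # measure after transients die out
        peak = max(peak, abs(x))
    t += dt

print(f"ground swings +/-{A} m; isolated floor only +/-{peak:.3f} m")
```

With these numbers the transmitted amplitude is a few percent of the ground motion, which is the same order as the "couple of centimeters vs 50 cm" record mentioned above.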

rjbwork 1 day ago 2 replies      
Poor sysadmins! They have to wear full three-piece suits on the DC floor!

Besides that, the video/tech is really cool.

cryptonector 1 day ago 1 reply      
I've been in a datacenter in Tokyo where the floors were not seismically isolated, but the rack cabinets were made to sway. It's hard to tell many floors up that an earthquake is in progress, but you can see it when you see the racks swaying!
saalweachter 1 day ago 3 replies      
I wonder if it is disorienting standing or walking on the movement isolating floor.

There's a (thought?) experiment you can do where you put someone inside a fake room built like an overturned box, and then jerk the walls in a random direction while the floor stays still. You get disoriented and fall over. Or the more common experience where you're on a stopped train and the train next to you starts to move, and it takes you a minute to figure out which of you is actually moving.

I'd have to imagine standing on the isolated floor while the rest of the building moves would be similarly confusing.

ChuckMcM 1 day ago 1 reply      
Category "Auto and Vehicles"? It is interesting to note that the people on the seismically isolated floor are not swaying in the video (it is explained that the camera is mounted to the non-isolated part of the building, so it appears the server room floor is moving when it is actually the building that is moving).

Most of the data centers I've looked at in the Bay Area just bolt the racks to the floor and are done with it, assuming, I presume, that it is the shifting of the floor that damages the hardware.

sengork 20 hours ago 0 replies      
Commercial racks and datacenter products have optional features that can be utilised in earthquake prone environments. What you see in the video is not only those rack features but also structure of the building itself (both are important and neither one is sufficient for certain areas).

For example see: https://www.ibm.com/support/knowledgecenter/STXN8P/com.ibm.s...

Without employing these techniques, you'd get something similar to this: https://i.imgur.com/Sb2M5Qo.jpg

johnflan 1 day ago 1 reply      
That earthquake goes on for way longer than I would have expected.
vzaliva 1 day ago 2 replies      
What is amazing is how cool the people in the video are. Instead of running out screaming, they calmly do their job.
guscost 1 day ago 0 replies      
Can a multimedia person make the video fixed on the servers instead?
i336_ 1 day ago 0 replies      
Any video editors got a few minutes?

There's an open video stabilization request here: https://www.reddit.com/r/ImageStabilization/comments/6e1mgj/...

So far there's https://streamable.com/lrkhg which sorta gets partway there.

pavement 1 day ago 0 replies      
I guess a similar level of movement isolation would probably be required for all those repurposed oil rigs and maritime-oriented machine rooms that anticipate taking advantage of sea water as server coolant.
factsaresacred 1 day ago 0 replies      
"Techno Mind Corporation".

Think I saw them play in Berlin once.

Introduction to ARM Assembly Basics azeria-labs.com
245 points by ingve  3 days ago   44 comments top 8
wyc 3 days ago 0 replies      
In case you're interested in seeing a project that does something, I wrote an IRC bot in ARM: https://github.com/wyc/armbot
partycoder 3 days ago 0 replies      
You can also compile assembly with gcc rather than as + ld.

And you can output assembly from C programs

 gcc -S <source file>
You can disassemble a binary to see how it actually looks. The resulting binary is much larger than your assembly code.

 objdump -d <binary file>
Many disassemblers will show you friendlier output than objdump. I use ht editor (packaged in Debian based distros as ht), an open source clone of Hiew. In ht, press F6 -> select image, and you will have an easy to follow disassembled version of a binary that you can edit, if you happen to know opcodes.

CraigJPerry 3 days ago 3 replies      
Some of the ARM instruction set manuals seem harder to come by than others. Recently I reverse engineered one of the firmwares of the ImmersionRC Vortex 150 racing quadcopter (it uses an STM32F3 chip, ARM Cortex instruction set). It was pretty hard to come by a copy of the full instruction set manual - I just assumed that kind of thing would be a quick Google away. Eventually I got there and made my changes, but it was harder to get going than I expected, for different reasons than I was anticipating.
big_spammer 3 days ago 2 replies      
What's with the branch instructions "Branch with Link" and "Branch with Exchange"?

What do they do? I haven't seen anything like this before.

I see some explanation in https://azeria-labs.com/arm-conditional-execution-and-branch... but the reason why branching must work this way isn't explained.

edit: I see "Branch with Exchange" switches the processor from ARM to Thumb in http://www.embedded.com/electronics-blogs/beginner-s-corner/.... But I don't see why switching processor modes must happen on branch instructions.

mishurov 3 days ago 1 reply      
Why don't people use AT&T syntax for ARM?
lacampbell 3 days ago 7 replies      
Does anyone here use a lot of assembly in their job? If so, what do you do?

I started getting fascinated by computer architecture a while back, but then I saw how dead embedded programming was in my area.

sjtgraham 3 days ago 4 replies      
I'm only a couple of pages in but already a lot of this guide is incorrect for ARMV8-A (A64). Much is different, e.g. no thumb mode, no directly accessible program counter, no load multiple, no PUSH/POP, different stack pointer, etc. Looks good if you're interested in older ARM ISAs, which is probably more applicable for IoT etc.
unixhero 3 days ago 0 replies      

I enjoyed the read!

Someone forged my resignation letter stackexchange.com
268 points by techolic  1 day ago   153 comments top 22
MrQuincle 1 day ago 20 replies      
I wouldn't be surprised if this actually didn't happen. Let's see if there are elements in the story that stand out if we take this perspective. (It is much harder than detecting photoshop effects).

+ He could have been on holidays, or getting a child, but it is the death of his mother. Of course, we are gonna feel really bad for him. That's really good for the story.

+ The security of the system is such that on one hand a dongle is needed, but on the other hand someone can fake sending email through your account. This is apparently widely known by non-expert colleagues to just joke around, but not known by the security staff.

+ Technological "details" that do not seem to make sense, such as SQL, included to make it sound like a true story to the uneducated.

+ The writer already assumes that we think he is lying. Hence, he comes up with the dongle story. We would already have been fine with his word that he just didn't send the resignation letter.

+ He physically shows up 6 weeks later without checking his email once (reply email). Of course the effect of the story is much stronger in that case. However, is it very likely not to check your email at least just before you show up at work 6 weeks later? [Edit: incorrect assumption, see @c8g.]

Then what would be motive of the person asking the question. It is a throwaway account.

+ A researcher who wants to see what the difference is between a fake story on Facebook versus one asked on StackOverflow? How could such an effect be properly compared?

johngalt 1 day ago 7 replies      
I wouldn't be surprised if that "someone" was the CEO or someone working on the CEO's behalf. What else could even make sense?

Even if some coworker had a grudge or a bone to pick with OP, they had to know that OP would return eventually and that any CEO worth his salt would demand answers immediately. Especially considering the legal risk exposed here in firing someone on FMLA-qualifying leave (assuming US).

Weigh that against a CEO who accepted a resignation and re-hired for the position based on a single email alone, without so much as a follow-up call, and who, when OP returns, seems lackadaisical about investigating. Seems fishy to me.

skrebbel 1 day ago 11 replies      
Hardly related, but does anyone know why resignation letters are so popular in the first place? When I quit the only job I ever had, I went to my boss and told him I wanted to quit. We had a constructive conversation.

Now, this company has a very strong "talk about it" culture, with super supportive management, etc. No bureaucracy or paperwork anywhere. Nevertheless, my boss was totally surprised that I wanted to talk about resigning. Pleasantly surprised I might add, but still: In his entire career, every employee who had left had written a letter and left it at that.

Why do it that way? Of course I understand that if there are fundamental disagreements or deep unhappiness, a letter is a good way to keep emotions out of the way. But that wasn't the case here, and everybody I know who left that company left on good terms.

I ask because I'm an employer now. I try to be a good and open-minded boss, and I'd much rather have someone tell me what's going on than receive a letter out of the blue. Is this wishful thinking?

ars 1 day ago 1 reply      
I would tell him go to the CEO and say:

"The letter was a forgery, someone in your company is unethical, and you should find out who.

As far as I am concerned I am thinking of suing, but if you simply paid out my leave I would be satisfied, and finding the unethical employee will be on you."

And that's it. If the CEO is as honest as the question makes him out to be, this should be enough, and it's much simpler. If not, then you lost nothing, and can sue, get a lawyer, etc, as all the answers suggest.

bouncycastle 1 day ago 5 replies      
Using a throwaway...

When I was working at a previous company, my manager jumped on my PC while I was out on my lunch break and used it to send an email from my account to a colleague. It was a joke, which I thought was highly unprofessional. I asked my colleague, and he said that he had seen the manager use my computer. I confronted the manager and she owned up to it. I asked her to explain, and her excuse was that I should have locked my PC; I didn't take it any further. However, I could see how a more serious incident could happen, so I wouldn't be surprised if it was the CEO; hubris can run rampant at those levels. Companies usually have audit logs of who accessed an account and when; I would start looking there.

_ph_ 1 day ago 1 reply      
As said on Stack Exchange, you should get a lawyer and contact the authorities, as a felony has been committed. Even if the company isn't involved in the events, they should not treat you as they did, and they owe you proper severance pay.
pkaye 1 day ago 4 replies      
How do people afford a knowledgeable lawyer and forensics expert as mentioned in some of the replies? Don't they get expensive fast? What if you lose the case... how much money will you be out?
midnitewarrior 1 day ago 1 reply      
If you can still see the email, look at the email headers. They should indicate what IP address and email program sent the email; for example, they may say SQLMail and the IP of the machine.
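To make the suggestion above concrete, here is a minimal sketch of pulling the sending program and originating IP out of raw headers with Python's stdlib `email` parser. The raw message, hostnames, and `X-Mailer` value below are all hypothetical; a real message would come from "View source" in the mail client. Note that `Received` headers are prepended by each hop, so the last one is closest to the true origin.

```python
import re
from email import message_from_string

# Hypothetical raw message for illustration only.
raw = """\
Received: from mail.example.com (mail.example.com [203.0.113.7])
    by mx.company.example with ESMTP; Thu, 01 Jun 2017 10:00:00 +0000
Received: from DBSERVER01 (dbserver01.internal [10.0.0.42])
    by mail.example.com with SMTP; Thu, 01 Jun 2017 09:59:58 +0000
X-Mailer: SQLMail
From: employee@company.example
To: ceo@company.example
Subject: Resignation

I quit.
"""

msg = message_from_string(raw)

# Each relay prepends its own Received header, so the LAST one in the
# list is the host that originally handed the message to the mail system.
hops = msg.get_all("Received")
origin = hops[-1]
ip = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", origin).group(1)

print("sending program:", msg.get("X-Mailer"))  # -> SQLMail
print("originating IP:", ip)                    # -> 10.0.0.42
```

Caveat: headers below the topmost trusted `Received` line can themselves be forged by the sender, so they are a lead, not proof.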
lancewiggs 1 day ago 0 replies      
I would write a formal letter to the board of directors (and perhaps key external shareholders) as well. It doesn't have to be much - even a forward of the post, letting them know that it refers to you and their company.

If they are complicit - it makes a paper trail for others to discover. If they are not complicit then the CEO will get instructed to do the right thing.

strictfp 1 day ago 1 reply      
Re-use the exploit to resign the new guy.
Mao_Zedang 1 day ago 0 replies      
How do you accept your exit package without signing anything in person? I have never had a job where a single resignation email was enough for HR to close you out as an employee. This smells super made up.
mbubb 1 day ago 1 reply      
"ain't passed the bar/ but I know a little bit"

He probably does not work in an at-will employment state (often confused with "right to work"), as the employer would not otherwise need to justify their action with a resignation letter.


The scenario is odd. I understand dropping things to take care of a relative, but a 6-week interval without contact is too great in this day and age. There should have been some further communication of intent.

b3lvedere 1 day ago 1 reply      
"We used to do it as jokes but it was never used for something like this and I can't imagine anyone that would hate me enough to go this far."

It's all fun and games, until ...

jangid 1 day ago 1 reply      
It is difficult to believe that the guy sent a resignation email and the CEO didn't even talk to him over the phone.
bitmapbrother 1 day ago 1 reply      
>It could've been a colleague because I know there is a backdoor way to send emails using someone else's account via some sort of a SQL database thing. We used to do it as jokes but it was never used for something like this and I can't imagine anyone that would hate me enough to go this far.

Most places would escort you out of the building for doing this.
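The post never names the product, but "send emails using someone else's account via a SQL database thing" is a plausible match for SQL Server's Database Mail feature: its `msdb.dbo.sp_send_dbmail` stored procedure sends mail under a shared mail profile, so the message does not identify the person who actually ran the query. A hedged sketch, with all profile names, addresses, and connection details hypothetical:

```python
# Assumption: the "backdoor" is SQL Server Database Mail. Anyone granted
# the DatabaseMailUserRole in msdb can run sp_send_dbmail, and the mail
# goes out under the configured profile, not the caller's own address.
tsql = """
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'CompanyMailProfile',  -- shared profile, hypothetical
    @recipients   = 'ceo@company.example',
    @subject      = 'Resignation',
    @body         = 'I quit.';
"""

# Execution sketch (commented out; requires a real SQL Server and driver):
# import pyodbc
# conn = pyodbc.connect("DSN=msdb")  # hypothetical DSN
# conn.cursor().execute(tsql)

# The flip side for the investigation: Database Mail logs every send in
# msdb.dbo.sysmail_allitems, including who queued it and when, so if this
# is what happened there is likely an audit trail.
print("sp_send_dbmail" in tsql)  # -> True
```

If that is the mechanism, the `sysmail_allitems` log would be one of the first things a forensic look at the mail server should pull.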

forgotpwtomain 1 day ago 0 replies      
Why is this on HN? The top-voted comment here calls it fake, not a single comment adds anything substantial that wasn't written on SE. As far as I can tell it's just tabloidish voyeurism...
forvelin 1 day ago 1 reply      
The local IT guy at the company is most likely the only person who can help; he would also be the first person one would go to for such a sneaky task. Either he would know, or he would have logs.

Though it seems hard to solve, especially if he is outside the US/Europe.

camus2 1 day ago 1 reply      
> You need to talk to a lawyer, ASAP.

The only acceptable answer.

If someone forged your email, it may constitute wire fraud, and thus a criminal act under federal law. You should consult a lawyer now.

ForFreedom 1 day ago 0 replies      
You don't need a dongle, a backend SQL DB, or in fact anything special to send a forged email; a terminal on Linux would do.

The company should have investigated whether he actually sent the email.
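The point above is worth making concrete: SMTP itself never verifies the From header, which the sender fills in freely. A minimal sketch with the stdlib, using entirely hypothetical addresses and hostnames; SPF, DKIM, and DMARC exist precisely to counter this, but they only help if the receiving server actually enforces them, which many internal mail setups do not.

```python
from email.message import EmailMessage

# Nothing in the message format or in SMTP checks that the From header
# matches the account actually submitting the mail.
msg = EmailMessage()
msg["From"] = "employee@company.example"   # arbitrary; unverified by SMTP
msg["To"] = "ceo@company.example"
msg["Subject"] = "Resignation"
msg.set_content("I quit.")

# Handing this to any server that relays without authentication would
# deliver it as if it came from the victim (sketch, not executed):
# import smtplib
# with smtplib.SMTP("mail.company.example") as s:  # hypothetical host
#     s.send_message(msg)

print(msg["From"])  # -> employee@company.example
```

Which is exactly why an investigation should rely on server-side logs and Received headers rather than the From line.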

adxlarbi 1 day ago 0 replies      
My only question here: why hadn't he reported the email backdoor to IT?
partycoder 1 day ago 0 replies      
Someone was probably crudely testing the possibility of doing it.
Kenji 1 day ago 1 reply      
Well, if the employer's response is just "Sorry, too late now, we don't care that it's a forgery", then maybe it's a good thing; you can move on.
       cached 1 June 2017 02:11:01 GMT