hacker news with inline top comments - 17 May 2014 - Best
The purpose of DRM is not to prevent copyright violations (2013) plus.google.com
714 points by adrianmsmith  1 day ago   219 comments top 36
jbk 1 day ago 6 replies      
This is one of the most important points about DRM.

DRM is marketed to users (and to society, including politicians) and to artists as a way to prevent copying. Most engineers implementing DRM think so too. All the discussions we've seen on HTML5 revolve around this. People have few arguments against it because it "sounds morally good" to help artists "live off their creations".

I am the de facto maintainer of libdvdcss, and have been involved in libbluray (and related projects) and a few other libraries of the same style; I've given several conference talks on this precise subject and I've fought the French High Authority on DRM in legal proceedings over an unclear piece of law... Therefore, I've studied DRM quite closely...

The truth is that if you consider the main goal of DRM to be preventing copies, no DRM actually works. ALL of them have been defeated one way or another. Indeed, the GoT-broadcast-to-top-of-TPB time is measured in a couple of hours; so why do they still push these technologies?

The answer is probably that the main goal of DRM is to control distribution channels, not to prevent copying. Copy prevention is a side goal.

Ian's post explains this point excellently.

PS: You can see me speaking of the same point, in French, in June 2013 here: http://www.acuraz.net/team-videolan-lors-de-pas-sage-en-sein...

NB: I'm not discussing here whether DRM is good or bad.

programminggeek 1 day ago 3 replies      
I was going to say: the purpose of DRM is to get you to pay for multiple licenses. It's the same reason a lot of paid download software is now on a SaaS model. If you can buy 1 copy of something for $20 and use it on whatever devices you want, then the company has made $20. If you use DRM to lock it to just one device, and you have 5 devices, they make $100. If you are a SaaS operator, you are effectively doing the same thing.

Somehow people are more okay with paying an ongoing fee for software or some perceived notion of service, but the same doesn't yet apply to content in a larger way. The closest equivalent is probably the cable companies, and they are taking their huge sums of money and buying the media companies, so maybe eventually there will be just a flat $100/month fee for experiencing a company's content on whatever device/experience it's available on. Maybe even movie theaters.

couchand 1 day ago 6 replies      
Had CDs been encrypted, iPods would not have been able to read their content, because the content providers would have been able to use their DRM contracts as leverage to prevent it.

Moreover, the iPod most likely would have never been invented. How about that for killing innovation?

jamesbrownuhh 1 day ago 5 replies      
What DRM does is make the 'pirate' goods, the 'hacked' players, the illegitimate rips better, more usable, more flexible, and generally superior in every way to the officially released product.

Which I'm sure is not the intention.

Say I can't copy and paste a section from an eBook or run it through a speech reader? Tell me I can't skip the trailers before watching the DVD I have paid for? No. Fuck you. Bullshit like that is a red rag to a bull: you just created an army of people who'll bust off your "rights management" just to show you how wrong you are, and that YOU DO NOT GET TO DECIDE how people consume the things they own.

Sorry and all. But that's how it is.

beloch 1 day ago 2 replies      
Nothing makes me want to turn pirate quite like being forced to sit through unskippable anti-piracy ads preceding a movie I've paid for.
noonespecial 1 day ago 2 replies      
Drm is primarily used in practice to do market segmentation. The rest of this comment is not available in your region.
azakai 1 day ago 1 reply      
This is very true, but also preaching to the choir. Probably most of an audience like HN already knows this.

The real question is what we can do to fight DRM. The only real option is to push back against the companies that promote it. For EME, the current DRM in the news, the relevant companies are Google, Microsoft and Netflix.

It's all well and good to talk about how DRM is pointless. Of course it is pointless. But unless we actually push back against those companies, DRM will continue to win.

Karellen 1 day ago 0 replies      
Previous discussion (421 days ago, 22 comments):


josephlord 1 day ago 0 replies      
This rings quite true to me. I had protracted arguments about the limitations the BBC wanted to impose on TVs supporting Freeview HD in the UK (copy-protection flags and encrypted-only local streaming), despite the fact that the content itself was being broadcast at high power across the country completely unencrypted. What is it the CE companies need to license? The Huffman compression tables for the guide data, which the license agreement makes you warrant are trade secrets that you won't reveal. I did send the BBC a link to the MythTV source code, which contains this trade secret. If you work out who I was working for during this discussion, don't worry: the content arm of the company was (at least according to the BBC) pressuring them the other way as a supplier.

And the end result? We caved for the shiny Freeview HD sticker.

tn13 1 day ago 0 replies      
I do not think there is any problem with DRM. It is pretty much the right of content providers to choose how they will distribute their content.

What really grinds my gears is seeing an open source browser like Firefox forced against its wishes to implement it because DRM has somehow made it into a standard.

The job of W3C standards is to protect the interests of ordinary web users, not content providers.

jljljl 1 day ago 2 replies      
Speaking of controlling distribution channels, does anyone know how I can share this post outside of Google+, or add it to Pocket so that I can reread in more detail later?
crystaln 1 day ago 3 replies      
There is zero evidence of this claim in this article.

DRM is, in fact, meant to prevent unauthorized usage and copies. Indeed, even some of the examples in this article are exactly that.

What is more important is that DRM doesn't have to be perfect; it just has to make unauthorized usage inconvenient enough that a few dollars is worth the cost for most people.

shmerl 1 day ago 0 replies      
Of course not. Reasons for demanding DRM vary, but none of them are valid or good. As discussed here: https://news.ycombinator.com/item?id=7745009 the common reasons are:

1. Monopolistic lock-in. DRM is more often than not used to control the market. It happened with Apple in the past, and was one of the key reasons music publishers realized that being DRM-free is actually better for them.

This reason also includes DRM derivatives like DMCA-1201 and the like. It's all about control (over the markets, over users, etc.).

2. Covering one's incompetence. DRM is used to justify failing sales (i.e. when execs are questioned about why the product performs poorly, they say "Pirates! But worry not - we put more DRM in place").

3. Ignorance and / or stupidity (many execs have no clue and might believe that DRM actually provides some benefit). This type can be called DRM Lysenkoism.

jiggy2011 1 day ago 2 replies      
Pretty much this. The people who will pirate are going to pirate regardless; you could offer all your movies DRM-free for $1 each and some people would still pirate them.

So the purpose of DRM is to make maximum revenue from those who won't pirate, for example by charging more for group viewings of the movie or viewing on multiple devices.

HackinOut 1 day ago 0 replies      
"Sure, the DRM systems have all been broken [...]"

I have worked with MS PlayReady DRM (the "latest" one from Microsoft, the one used by Netflix) for some time and never stumbled upon any cracks. Not because it's impossible or even difficult, but probably just because nobody cares about cracking Netflix (which brings PlayReady its main source of "users")... Once you pay, you can watch as much as you like, so why bother? Netflix made it extremely simple and accessible. (Yes, some features like multicasting might be missing, but it's still way better than Plex or Popcorn Time. For now at least... The main problem is clearly that the film industry makes it too difficult to have all content in one place.) There are plenty of other "easier" sources (alternative VOD offerings with already-cracked/worse protections, Blu-rays) for underground channels to get the copyrighted material from.

I am sure other DRM systems have a clean record for the same reason: no major incentive to crack them.

userbinator 1 day ago 0 replies      
The purpose of DRM is to give content providers leverage against creators of playback devices.

One thing that's always seemed odd to me is that the DRM use case is presented as a battle with "content providers" on one side and everyone else on the other, but aren't these content providers also users? Do they also consume DRM'd content, and if so, are they perfectly fine with the restrictions? Do those who devise DRM schemes not realise that they may also be the ones who will have these schemes imposed on them?

Kudos 1 day ago 0 replies      
Can someone explain to me how businesses can provide a subscription model without DRM?

I refuse to purchase anything with DRM, but I don't give a shit if it's a rental or subscription service.

mkempe 1 day ago 0 replies      
If one wants a parallel to the socio-political battles around the means of production in recent centuries, it's an attempt by licensing companies to abolish ownership of reproductions of works of art -- and to establish a monopoly on the means of distribution.

My perception is that few people understand or care -- and the US political elite mostly acquiesces because it has been (or wants to be) bought by those aspiring monopolists.

RegW 15 hours ago 0 replies      
I have come to find the whole circular debate about DRM particularly boring. So much so, that I won't bother to read the whole article or comments here.

Yes, DRM is always broken - eventually, but yes it does work - sort of. It is a technology and legal arms race in a constantly changing landscape.

> DRM's purpose is to give content providers control over software and hardware providers, and it is satisfying that purpose well.

No, not really. DRM's purpose is to give content providers a return on their investment; everything else is just a consequence of trying to achieve this.

DRM isn't going to go away as long as people want to be paid for creating content, and other people want to get that content without paying for it.

Sadly it is probably true that the biggest players always get the biggest slice of the pie. Irritatingly, the open source community refuses to engage in this battle and support the small players. As a consequence, the smaller content providers have no choice but to hook up with the big commercial channels, who decide how big a cut they want.

gagege 1 day ago 4 replies      
Why isn't screen capture software more widely used? It seems like a dead simple screen capture suite could make all these DRM worries go away.
chacham15 1 day ago 0 replies      
While everything said in the article is true, the end result is that the control the distributors want is circumvented by pirating. Therefore, by continuing to try to control the content more, what is actually being done is increasing the demand for pirated content. I know many people who buy content legally and then also acquire the pirated version to use as they please. As that process becomes easier (look up Popcorn Time to see how easy it can be), the purpose of the control becomes more meaningless.
ingenter 1 day ago 1 reply      
>Without DRM, you take the DVD and stick it into a DVD player that ignores "unskippable" labels, and jump straight to the movie.

>With DRM, there is no licensed player that can do this

So, by enforcing rules (via DRM) on player manufacturers, the content provider makes my experience worse as a consumer.

Again, I am a consumer; what are the advantages of DRM for me? That the manufacturer must force me to watch ads?

nijiko 1 day ago 0 replies      
Eh, at the end of the day, there are thousands of ways to go around it, so why implement it in the first place?

People pay for things that are good, easy to pay for, appropriately priced, and not a burden or expense beyond what they see as worth it (it comes down to pricing and roadblocks). DRM and poor delivery services are usually those roadblocks.

wyager 1 day ago 4 replies      
Interesting. I was unaware of this.

But if this is the case, why is there such a push to put DRM in HTML? Browsers aren't DVD players. Users are free to use software like ABP to circumvent any features like "unskippable ads" mentioned in the post. Pressure on browser makers seems much less valuable than pressure on device makers.

pje 1 day ago 4 replies      
> Had CDs been encrypted, iPods would not have been able to read their content, because the content providers would have been able to use their DRM contracts as leverage to prevent it.

What? Why? Nothing would have prevented people from recording the playback of an encrypted CD and putting that on their iPod.

mfisher87 1 day ago 0 replies      
Steam would have been a great example for this article. Steam does nothing to prevent you from copying games. In fact, some games on Steam can be bought without DRM from other sources. Steam just forces you to use Steam or buy your games again.
torgoguys 12 hours ago 0 replies      
So I read the page, but find the argument VERY unconvincing. If that really were the goal of DRM, you wouldn't need the really complicated schemes used. You'd just come up with a simple scheme that legally requires licensing and always use that. No need to keep switching schemes, adding more safeguards, and so forth.

The content creators would still get the same leverage over the legal distribution channels, because those channels could still be forced to follow the rules outlined in the examples. That, and it lowers your R&D costs on making complicated DRM. If the article is true, what have I missed?

gcb0 1 day ago 0 replies      
dammit, reading G+ on a small 720p laptop screen is absolute hell.
spacefight 1 day ago 0 replies      

"DRM's purpose is to give content providers control over software and hardware providers, and it is satisfying that purpose well."

chrisjlee84 1 day ago 0 replies      
Yes, and Donald Sterling is clearly not a racist.
briantakita 1 day ago 0 replies      
Anyone who engages in doublespeak is not worthy of trust.
briantakita 1 day ago 0 replies      
Good thing it's easier than ever to DIY.
tbronchain 23 hours ago 0 replies      
and that makes perfect sense!
QuantumChaos 1 day ago 1 reply      
This argument is completely ridiculous.

Control of how a person consumes content they legally own is incidental. If a company can force you to buy content rather than pirating it, they will make a lot more money. Controlling the exact manner in which you consume that content is the icing on the cake.

webmaven 1 day ago 1 reply      
Needs a [2013] label in the title.
10098 1 day ago 1 reply      
I can still make "unauthorized" copies of DRM'ed media and play them back on non-DRM devices, e.g. record sound from a locked-down music player using a microphone, convert it to MP3, and listen to it on a normal MP3 player. So it's not 100% bulletproof.
FCC approves plan to consider paid priority on Internet washingtonpost.com
634 points by jkupferman  1 day ago   327 comments top 54
sinak 1 day ago 10 replies      
The title and post are both quite misleading. The commissioners didn't approve Tom Wheeler's plan (to regulate the Internet under Section 706), but voted to go ahead with the Notice of Proposed Rulemaking and commenting period. Tom Wheeler stated multiple times that Title II classification is still on the table.

There'll now be a 120-day commenting period: 60 days of comments from companies and the public, and then 60 days of replies to those comments from the same. After that, the final rulemaking will happen.

It's likely that the docket number for comments will continue to be 14-28, so if you want to ask the FCC to apply common carrier rules to the Internet under Title II, you can do so here: http://apps.fcc.gov/ecfs/upload/display?z=r8e2h and you can view previous comments here: http://apps.fcc.gov/ecfs/comment_search/execute?proceeding=1...

It's probably best to wait until the actual text of the NPRM is made public though, which'll likely happen very soon.

Edit: WaPo have now updated the title of the article to make it more accurate: "FCC approves plan to consider paid priority on Internet." Old title was "FCC approves plan to allow for paid priority on Internet."

ColinDabritz 1 day ago 8 replies      
"And he promised a series of measures to ensure the new paid prioritization practices are done fairly and don't harm consumers."

I have a measure in mind that won't harm consumers. Don't allow ISPs to discriminate against users regarding their already paid for internet traffic based on what they request. (Gee that sounds a lot like net neutrality.)

Anything less is open for abuse.

Perhaps "Discrimination" is a good word to tar this with, because it is. It's discrimination against companies, but it's also discrimination against users based on their tastes, preferences, and possibly socioeconomic status.

To say nothing of de-facto censorship issues.

todayiamme 1 day ago 1 reply      
In my mind one of the key questions to ask in this debate is, if the eventual rise of a more closely controlled internet destroys this frontier, what's next?

Right now, thanks to a close confluence of remarkable factors, the barriers associated with starting something are almost negligible. The steady march of Moore's law combined with virtualisation has given us servers that cost fractions of a penny per hour to lease. No one has had to beg or pay middlemen to use such a server and reach customers around the world. At the other end, customers can finally view these bits, often streamed wirelessly, on magical slabs of glass and metal in their hands, or on what would have passed for a supercomputer in a bygone age... All of this, combined with a myriad of other factors, has allowed anyone to start a billion dollar company. If this very fragile ecosystem is damaged and it dies out, where should someone ambitious go next to strike out on their own?

hpaavola 1 day ago 11 replies      
I don't get this whole net neutrality discussion going on in the US (and maybe elsewhere; I just haven't paid attention).

Consumers pay based on speed of their connection. If ISP feels like the consumers are not paying enough, raise the prices.

Service providers (not ISPs, but the ones who run servers that consumers connect to) pay based on speed of their connection. If the ISP feels like service providers are not paying enough, raise the prices.

Why in the earth there is a need for slow/fast lanes and data caps?

I'm four years old. So please keep that in mind when explaining this to me. :)

altcognito 1 day ago 5 replies      
I'm confused by this headline (and a bit by the proceeding).

After watching the FCC hearing, it seemed like all of the people who were "for" open internet, and spoke of it from the consumer level (including Wheeler) voted for the proposal. The commissioners that said the FCC didn't have jurisdiction to regulate and to leave the market alone, voted against the proposal.

Isn't it the case that if they had voted against this, that we would have been in the exact same boat we are in now and therefore the agreement that Netflix signed would continue unabated?

In that case, it really didn't matter what they voted.

corford 1 day ago 0 replies      
If Comcast gets their way, the FCC will have effectively sanctioned the balkanisation of the US's internet users into cable-company-controlled fiefdoms.

Each cable company will then assume the role of warlord over its userbase and proceed to dictate the terms and agreements under which its users experience the internet, all of it guided solely by the desire to maximise profits.

If people aren't worried yet, they should be. Serfs didn't enjoy medieval Europe for a reason.

The only two viable routes out of this nightmare are:

1. Enshrine net-neutrality / common carrier status in law


2. Radically break up the US ISP/cable market so that real competition exists. This way Comcast is free to try and milk every teat they can find. If users or content providers don't like the result, Comcast can wither on the vine and die while competitors pick up their fleeing users.

DevX101 1 day ago 4 replies      
> approved in a three-to-two vote along party lines,

Why the fuck are there party lines in the FCC? Or any other regulatory body for that matter?

mgkimsal 1 day ago 2 replies      
"Even one of the Democratic commissioners who voted yes on Thursday expressed some misgivings about how the proposal had been handled.

"I believe the process that got us to rulemaking today was flawed," she said. "I would have preferred a delay.""


But... she voted yes anyway. WTF?

adamio 1 day ago 3 replies      
The internet is slowly being transformed into cable television
Alupis 1 day ago 0 replies      
Wait a minute! You mean my ever-increasing ISP fees at my home are not for the ISP to build a better network? You mean to tell me the ISP is now going to charge content providers for the ability to provide me with content that I'm already paying my ISP to deliver? You mean to tell me my content providers are now going to likely increase their fees to cope with this "fast lane"?

This sounds an awful lot like extortion, and double billing.

ISP's... you have one (1) job. Deliver packets.

dragonwriter 1 day ago 0 replies      
It's not a plan to allow paid priority on the Internet -- that's already been allowed without any restriction since the old Open Internet order was struck down by the D.C. Circuit. It's a plan to, within the limits placed by the court order striking down the old rules, limit practices that violate the neutrality principles the FCC has articulated as part of its Open Internet efforts, including paid prioritization.
coreymgilmore 1 day ago 0 replies      
Simply put, this is absolutely terrible. How are startups and small web companies supposed to compete when their reach to consumers will automatically be slower than that of larger competitors who pay for faster pipes?

And who is to govern the rates (and tiers) of faster speeds? I can only assume ISPs will determine a cost based on aggregate bandwidth. But who is to say there can't be a fast lane, a faster lane, and a fastest lane? Sounds anti competitive to me (even the big name companies are against this!).

Last: "The telecom companies argue that without being able to charge tech firms for higher-speed connections, they will be unable to invest in faster connections for consumers" > Google Fiber is cheaper, for one. Second, the telecom giants have all increased subscriptions, so there is more money there. And, as time goes along, shouldn't these providers become more efficient so that costs decrease anyway? Must be nice to have a pseudo-monopoly in some markets.

dethstar 1 day ago 0 replies      
Most important quote since the title is misleading:

"The proposal is not a final rule, but the three-to-two vote on Thursday is a significant step forward on a controversial idea that has invited fierce opposition from consumer advocates, Silicon Valley heavyweights, and Democratic lawmakers."

DigitalSea 1 day ago 1 reply      
There is no way in hell this can go ahead. Also, minor nitpick, but this is a rather misleading post. Nobody approved anything, the vote was merely a green light to go ahead with the proposal, nothing has been approved just yet, it's not that easy.

Some of my "favourite" takeaways:

"He stressed consumers would be guaranteed a baseline of service." Just like your internet provider says they don't throttle torrent traffic, yet a few major ISPs have been caught doing just that. The same is going to happen if this proposal goes ahead. Unless people breaking the rules are reported, they won't be caught, and where will the resources for reporting infringers come from?

"Wheeler's proposal is part of a larger 'net neutrality' plan that forbids Internet service providers from outright blocking Web sites." I have no doubt in my mind that the reform Wheeler is pushing for is merely a door, and there are definitely bigger things in store once the flood gates have been opened. The pressure will be too great to close them again.

"The agency said it had developed a 'multifaceted dispute resolution process' on enforcement and would consider appointing an 'ombudsman' to oversee the process." The FCC has a shady history of resolving disputes; this is merely hot air to make the reforms sound less bad. What happens when the resolution process breaks or is overwhelmed and can't cope with the number of infringements taking place?

As for a handful of key entities controlling what happens with the pipeline: China is a classic example of what happens when you let a sole entity dictate something like the Internet, and even then, the Great Firewall doesn't stop everything.

Then there are questions about conflicts of interest. What happens when, say, a company like Comcast owns a stake in a company like Netflix and they conspire to extort a competitor like Hulu (asking exorbitant amounts of cash for speed)? Who sets the price of these fast lanes, and will prices be capped to prevent extortion? Too flawed to work.

Lewisham 1 day ago 0 replies      
"After weeks of public outcry over the proposal, FCC Chairman Tom Wheeler said the agency would not allow for unfair, or 'commercially unreasonable,' business practices. He wouldn't accept, for instance, practices that leave a consumer with slower downloads of some Web sites than what the consumer paid for from their Internet service provider."

Because they've done such a bang-up job of that thus far..? It's no secret that at comparable advertised speed, Netflix on Comcast was far worse than Netflix on other ISPs.

I'm not sure if they're really so deluded to think their enforcement is super great, or if they're just delivering placating sound bites.

joelhaus 1 day ago 1 reply      
Can anyone make a serious argument on behalf of the carriers? Given the court decisions, the only way to protect the American people and the economy is to reclassify ISP's under Title II.

For the skeptics, it appears to come down to the question: which route offers better prospects for upgrading our internet infrastructure? Choice one is relying on a for-profit corporation with an effective monopoly that is beholden to shareholders; Choice two is relying on elected politicians beholden to the voters.

If you think there is a different argument that can be made on behalf of the carriers or if you can make the above one better, I would be very interested in hearing it.

jqm 1 day ago 0 replies      
People having the freedom to look at whatever they choose on a level playing field may not be in the interests of all concerned.

The consolidation of media companies possibly served interests other than profits. Look at what Putin is allegedly doing with the internet. Maybe in a way the eventual intent of this is the same. And for the same purposes. I don't think we should let it get started just in case.

trurl 1 day ago 0 replies      
We truly have the best government money can buy.
kenrikm 1 day ago 1 reply      
Great! The FCC has officially sanctioned ISPs to be trolls, demanding gold to cross their bridge. This guarantees there will always be multiple levels of peering speed, even if the connections are upgraded and able to easily handle the load. They won't want to give up their troll gold. That's just peachy; thanks for letting us get screwed over even more. Go USA! </Sarcasm>
isamuel 1 day ago 0 replies      
The actual notice of proposed rulemaking (or "NPRM," as ad-law nerds call it): http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-14-61A...

I haven't read it in full yet, but I've read the introduction, and the press coverage (surprise!) does not seem quite right to me.

ryanhuff 1 day ago 1 reply      
The investment in Obama by tech luminaries must be a huge disappointment.
lazyloop 1 day ago 1 reply      
And now Comcast is planning data limits for all customers; what a coincidence. http://money.cnn.com/2014/05/15/technology/comcast-data-limi...
mkempe 1 day ago 0 replies      
Who amongst the political rulers of the country, apart from Jared Polis and Ted Cruz, is fighting against this?

I don't mean populists who make vague promises about net neutrality in order to be elected, then put people in place to undermine their promises -- I mean people who are in a position to fight the FCC, and who are actively doing it.

zacinbusiness 1 day ago 0 replies      
I don't understand the ISP's point of view on this issue. Please correct me if I'm wrong. But it seems that ISPs are saying "Hey, we offer this great service. But bandwidth hungry applications like Netflix are just using too much data. And we need to throttle their data usage, or they need to pay us more money."

The ISPs, then, are claiming to be victims, when in reality they simply promise services that they can't cost-effectively deliver.

If I contract to give all of you a new pair of shoes every month, and you pay in advance, and then I run out of shoes before I can deliver on my promise... doesn't that mean I don't know how to effectively run my business? Isn't it my fault for promising a service that I can't provide? Why would anyone feel sorry for me?

mariusz79 1 day ago 2 replies      
It really is time to decentralize and move forward with mesh networking.
D9u 17 hours ago 0 replies      
The FCC doesn't even agree with ISPs on the definition of what, exactly, constitutes "Broadband" connectivity.

Meanwhile the monopoly in my area continues to receive my payments, no matter what they do.

xhrpost 1 day ago 2 replies      
So what happened? It seems like just yesterday that the FCC was the one creating the rules around net neutrality. A federal court overturns this and all of a sudden the FCC decides to go in the complete opposite direction?
spacefight 1 day ago 0 replies      
"What a nice internet connection you have there. It would be a real shame if something happened to it...".

So we had a good time, didn't we...

hgsigala 1 day ago 0 replies      
At this point everyone is officially invited to comment on the proposal. In around 60 days, the FCC will respond to your comments and redraft a proposal. Please comment! http://www.fcc.gov/comments
couchand 1 day ago 0 replies      
"If a network operator slowed the speed of service below that which the consumer bought, it would be commercially unreasonable and therefore prohibited," Wheeler said.

I find this quote very interesting. Currently the trend seems to be that the sticker speed on a connection bears little resemblance to the actual speed. I wonder if he has a plan to change that or if this was just an offhand remark.

rjohnk 1 day ago 0 replies      
I know all the basic ins and outs of bandwidth. But why is this so complicated? I pay x amount for access to the Internet at x speed. I use the internet. I pay the access fee.
Orthanc 1 day ago 1 reply      
This doesn't sound good:

"6. Enhance competition. The Commission will look for opportunities to enhance Internet access competition. One obvious candidate for close examination was raised in Judge Silberman's separate opinion, namely legal restrictions on the ability of cities and towns to offer broadband services to consumers in their communities."


markbnj 1 day ago 0 replies      
This portion of the piece is interesting to me: "He wouldn't accept, for instance, practices that leave a consumer with slower downloads of some Web sites than what the consumer paid for from their Internet service provider." Definitions are tricky, but since we all pay for more bandwidth from our ISPs than we utilize from any one site (or almost all of us, I think), sticking to this rule would mean ISPs would not have the power to throttle individual data sources. Is that not a correct interpretation?
forgotAgain 1 day ago 0 replies      
The fix is in. Now what are you going to do about it?
ozh 1 day ago 3 replies      
I hope there will be companies who, upon being asked by an ISP to pay more for higher priority in their network, will tell them to get the f*k off and advocate usage of VPNs and anonymisers for their users so they're not identified as US residents.
jon_black 1 day ago 1 reply      
Assuming the plan were to be approved, and given that the FCC is an American government organisation, are there any implications for those in other countries?

Also, how can an American government organisation consider paid priority on The (global) Internet? Isn't it better to say that "FCC approves plan to consider paid priority on Internet for those who connect to it via a US telecoms provider"?

markcampbell 1 day ago 0 replies      
Just making it easier for other countries. Shoot yourself in the foot, USA!
knodi 1 day ago 0 replies      
No one I know in the public wants this, only ISPs. Why the fuck are we even having a commenting period on this? Fucking knock it down.
pushedx 1 day ago 0 replies      
You can't offer bandwidth at a premium, without reducing the bandwidth available to others. That's (physically) how the Internet works. No matter what Wheeler says, there's no way that paid prioritization of traffic can be done fairly.
shna 1 day ago 0 replies      
The mistake will be to allow even a tiny hole in net neutrality. Once they get hold of something it will be only a matter of time before they make it larger. However harmless it sounds, any dent to net neutrality should be fought against fiercely.
devx 1 day ago 2 replies      
As a European I probably should be glad about this, since this, combined with all the NSA spying issues and backdoors implemented into US products [1], should increasingly force innovation out of the US and bring it to Europe, but somehow I'm not.

All the ISPs will slow down all the major companies services, unless they pay up. There is no "faster" Internet. It's just "paying to get normal Internet back", like they've already done with Netflix:


[1] - http://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa-...

carsonreinke 1 day ago 2 replies      
Maybe I am missing something, but what is the argument ISPs have for this?
shmerl 1 day ago 0 replies      
I don't really understand why it's divided by partisan membership.
xtx23 15 hours ago 0 replies      
So why isn't the internet a utility?
phkahler 1 day ago 0 replies      
Who nominated this former lobbyist for the FCC spot? And who voiced/voted their approval? Voters should know.
mc_hammer 1 day ago 0 replies      
anywhere that the internet can be routed via paid priority is the spot where the snooping can be installed.
rgumus 1 day ago 2 replies      
Well, this is no coincidence. ISPs have been working on this for years.
JimmaDaRustla 1 day ago 0 replies      
There should be a fast lane, it should also be the only lane.
wielebny 1 day ago 1 reply      
If that were to pass, wouldn't this be a great opportunity for European hosting companies to seize the hosting market?
QuantumChaos 1 day ago 0 replies      
If this were a matter of prioritizing traffic on the internet backbone, then I would be in favor. There is nothing wrong with charging congestion fees.

However, in this case, we are talking about cable companies, and the bottleneck is presumably the last mile. So what these laws are really doing is enabling cable companies to extract even more monopoly rents, in the form of discriminatory pricing (even though it is the content providers that pay, the pipeline in question is closer to the end user than the content provider, and so if the issue were congestion pricing, and not discriminatory pricing, the charge would be on the end user, who is already paying).

thekylemontag 1 day ago 0 replies      
G_G america.
kirualex 1 day ago 1 reply      
Yet another blow to Net-Neutrality...
graycat 1 day ago 0 replies      
Okay, from all the public discussion so far, NYT, WaPo, various fora, etc., I totally fail to 'get it'. Maybe I know too much or too little; likely a mixture of both.

Help! More details anyone?

To be more clear, let's consider: I pay my ISP, a cable TV company, so much a month for Internet service with speeds -- Mbps, million bits per second -- as stated in the service, maybe 25 Mbps upload (from me to the Internet) speed and 101 Mbps download speed.

Now those speeds are just between my computer and my ISP. So, if I watch a video clip from some server in Romania, maybe I only get 2 Mbps for that video clip because that is all my ISP is getting from the server in Romania.

And I am paying nothing per bit moved. So, if I watch 10 movies a day at 4 billion bytes per movie, even then I don't pay more.

Now, to get the bits they send me, my ISP gets those from some connection(s) to the 'Internet backbone' or some 'points of presence' (PoP) or some such at various backbone 'tiers', 'peering centers', etc.

Now, long common in such digital communications have been 'quality of service' (QoS) and 'class of service' (CoS). QoS can have to do with latency (how long do we have to wait until the first packet arrives?), 'jitter' (does the time between packets vary significantly?), dropped packets (TCP notices and requests retransmission), out of order packets (to be straightened out by the TCP logic or just handled by TCP requesting retransmission), etc. Heck, maybe with low QoS some packets come with coffee stains from a pass by the NSA or some such! And CoS might mean, if a router gets too busy (the way the Internet is designed, that can happen), then some packets from a lower 'class' of service can be dropped.

But my not very good understanding is that QoS and CoS, etc., don't much apply between my computer and my ISP and, really, apply mostly just to various parts of the 'Internet backbone' where the really big data rates are. And there my understanding is that QoS and CoS are essentially fixed and not adjusted just for me or Netflix, etc. E.g., once one of the packets headed for me gets on a wavelength on a long haul optical fiber, that packet will move just like many millions of others, that is, with full 'network neutrality'.

So, I ask for some packets from a server at Netflix, Google, Facebook, Yahoo, Vimeo, WaPo, NYT, HN, Microsoft's MSDN, etc. Then that server connects to essentially an ISP but with likely a connection to the Internet at 1, 10, 40, 100 Gbps (billion bits per second). And, really, my packets may come from Amazon Web Services (AWS), CloudFlare, Akamai, some colocation facility by Level3 or some such; e.g., the ads may come from some ad server quite far from where the data I personally was interested in came from.

Note: I'm building a Web site, and my local colocation facility says that they can provide me with dual Ethernet connections to the Internet at 10 Gbps per connection.

Note: Apparently at present it is common commercial practice to have one cable with maybe 144 optical fibers, each with a few dozen wavelengths of laser light (dense wavelength division multiplexing -- DWDM) with a data rate of 40 or 100 Gbps per wavelength.

So, there is me, a little guy, getting the packets for, say, a Web page. Various servers send the packets, they rattle around in various tiers of the Internet backbone, treated in the backbone like any other packets, arrive at my ISP, and are sent to me over coax to my neighborhood and to me.

So, with this setup, just where could, say, Netflix be asked to pay more and for what? That is, Netflix is already paying their ISP. That ISP dumps the Netflix packets on the Internet backbone, and millions of consumer ISPs get the packets. My ISP is just a local guy; tough to believe that Netflix will pay them. Besides, there is no need for Netflix to pay my ISP since my ISP is already doing what they say, that is, as I can confirm with a Web site test:

I'm getting the speeds I paid my ISP for.

Netflix is going to pay more to whom for what?

Now, maybe the issue is: If the Netflix ISP and my ISP are the same huge company, UGE, that, maybe, also provides on-line movies, then UGE can ask Netflix to pay more or one or the other of the UGE ISPs will throttle the Netflix data. Dirty business.

But Netflix is a big boy and could get a different ISP at their end. Then the UGE ISP who serves a consumer could find that the UGE ISP still throttles data from Netflix but not from the UGE movie service? Then the consumer's ISP would be failing to provide the data rate the consumer paid for.

Or, maybe, the UGE ISP that serves me might send the movies from the UGE movie service not as part of the, say, 101 Mbps download speed from my ISP to me and, instead, provide me with, say, 141 Mbps while the UGE movie is playing. This situation would be 'tying', right? Then if Netflix wants to be part of this 141 Mbps to a user who paid for only 101 Mbps, then Netflix has to pay their UGE ISP more; this can work for UGE because they have two ISPs and 'own both ends of the wire'.

I can easily accept that a big company with interests at several parts of the Internet and of media more generally may use parts of their business to hurt competition. Such should be stopped.

But so far the public discussions seem to describe non-problems.
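The class-of-service dropping graycat describes — a busy router shedding lower-class packets first — can be sketched as a toy queue. This is purely illustrative (the names and the drop policy are made up for the example, not any real router's behavior):

```javascript
// Toy model of class-of-service (CoS) dropping: a router queue with a
// fixed capacity that, when over capacity, drops the lowest-class packet.
function makeRouterQueue(capacity) {
  const queue = [];
  const dropped = [];
  return {
    enqueue(packet) {            // packet: { id, cos } — higher cos = higher class
      queue.push(packet);
      if (queue.length > capacity) {
        // Find and drop the lowest-class packet currently queued.
        let victim = 0;
        for (let i = 1; i < queue.length; i++) {
          if (queue[i].cos < queue[victim].cos) victim = i;
        }
        dropped.push(queue.splice(victim, 1)[0]);
      }
    },
    queued: queue,
    dropped: dropped,
  };
}

// Under congestion, the low-class packet is shed; high-class traffic survives.
const router = makeRouterQueue(2);
router.enqueue({ id: 'a', cos: 1 });
router.enqueue({ id: 'b', cos: 3 });
router.enqueue({ id: 'c', cos: 3 });
console.log(router.dropped.map(p => p.id)); // → [ 'a' ]
```

The net neutrality question is precisely whether the `cos` field may be set based on who paid, rather than on the kind of traffic.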

Syncthing: Open Source Dropbox and BitTorrent Sync Replacement syncthing.net
581 points by ushi  4 days ago   181 comments top 50
abalone 4 days ago 2 replies      
"Replacement" is too strong a word here. P2P sync requires at least 2 peers to be online at once. For the simple case of syncing your work and home computers or sharing with coworkers that is not always a reliable assumption.

It's only a replacement for a centralized service like Dropbox if you have an always-connected peer (a de facto central server).

stinos 4 days ago 7 replies      
Since we're listing alternatives here: I setup SeaFile (http://seafile.com/) a couple of months ago and I'm loving it so far. Mainly because it has client-side encryption and allows a single client to sync with different servers and selectively choose which 'libraries' (basically directories that are under sync) to use. Typical usecase is having a personal server for personal files and another one at the office for work-related stuff.
XorNot 4 days ago 6 replies      
Can we please please please make it a standard that synchronization tools spell out on the front page how they handle conflicts?

Bittorrent Sync just overwrites files based on last mod time (terrible option). What does this do? Does it support backups? Versioning?

pjkundert 4 days ago 1 reply      
Has anyone else taken a look at Ori (http://ori.scs.stanford.edu):

> Ori is a distributed file system built for offline operation and empowers the user with control over synchronization operations and conflict resolution. We provide history through light weight snapshots and allow users to verify the history has not been tampered with. Through the use of replication instances can be resilient and recover damaged data from other nodes.

It seems well thought out, and competitive with many of the other approaches mentioned here. It uses Merkle trees (as does Git) that encompasses the file system structure and full history.

frabcus 4 days ago 1 reply      
Fancy being interviewed for http://redecentralize.org/ ?

If so, email me francis@redecentralize.org! (I couldn't see an email or contact form for you on the syncthing site)

sinkasapa 4 days ago 1 reply      
One of my favorite open source tools of this kind is unison. It works great. I set it up to go and I don't even notice it is there. It is quick, seems to have been around for a while and is packaged for most Linux distros. It has a GUI but you don't need it.


alyandon 4 days ago 3 replies      
I can't really seem to find this information in the documentation.

Does this support delta/block-level sync for large files (e.g.: does mounting a 100 GB truecrypt container, modifying a file inside the container and unmounting it cause the entire 100 GB container to be uploaded)?

Does it utilize the native OS platform APIs for detecting file modification (e.g. inotify on linux) as opposed to scanning/polling large directories looking for modified date changes?

freework 4 days ago 6 replies      
One thing I've never gotten about these "syncing" apps...

Let's say I install this software on my phone, my desktop, and my work computer. I have 100+ GB free on my work computer and my home desktop, but I only have 16GB on my phone. If I add 20GB worth of movies to my sync folder, it's going to fill up my phone.

simbolit 4 days ago 1 reply      
I use http://owncloud.org and am quite happy. But also happy for more competition :-)
taterbase 4 days ago 3 replies      
Git-Annex is a great existing option (made by joeyhess). https://git-annex.branchable.com/
JonAtkinson 4 days ago 2 replies      
I just set up Sparkleshare (http://sparkleshare.org/) this weekend. I wanted something which wasn't Dropbox, and preferably open-source, and while Sparkleshare has a slightly clunky pairing mechanism, it works beautifully.

Syncthing looks similar, and LAN sync'ing is a killer feature for those of us in offices with poor bandwidth.

Wilya 4 days ago 0 replies      
That looks like a promising project in a space which definitely needs improvement. Owncloud and Sparkleshare are okay, but they are far from perfect, and there is large room for improvement.
r0muald 4 days ago 0 replies      
A better title would be "Syncthing, an open source Dropbox replacement written in Go".

But seriously, it seems promising.

doctoboggan 4 days ago 1 reply      
A few months ago I looked into using Syncthing for my decentralized browser, Syncnet[0]. At that time it did not seem ready for primetime. Does anyone have a good feel for its maturity as of late? For example, is there an API? Syncthing looks very promising and I would love to integrate Syncnet with it.

[0]: http://jack.minardi.org/software/syncnet-a-decentralized-web...

rsync 4 days ago 1 reply      
Here is the original from a year or so ago:


"Then all current commercial services drop off, including SpiderOak, Bittorrent Sync and git-annex. This resulted in a clever combination of EncFS and dvcs-autosync. Because, in this day and age, you cannot trust any "cloud" provider with your unencrypted data."

aw3c2 4 days ago 4 replies      
This looks very promising. But the documentation is not good. I have not managed to find a "1 minute" friendly overview of how it works. I mean, what data gets sent how, where, and why.
Karunamon 4 days ago 0 replies      
How good is this at traversing firewalls? AFAIK, Dropbox will do some manner of HTTP trickery to allow syncing when behind overly-restrictive firewalls (so it just goes out the usually-provided web proxy), but the documentation here references forwarding ports + UPNP, so I'm guessing that doesn't apply here?
marcamillion 3 days ago 2 replies      
I have large media files, multiple TBs.

I deal with a constant stream of these and want to have a distributed network - connected via the inet - that allows me to sync the drives in all locations.

i.e. I would like to setup a server in my home office, one in my co-founder's home office and another in my editor's home office.

Whenever my editor runs off a few hundred GB of data to a specific folder or to their drive, I would love for that to be auto-synced to both my server and that of my co-founder.

Will Syncthing allow me to do this easily and will it be appropriate for an application like that?

popey 3 days ago 0 replies      
I've been using Syncthing for some months now and it's working well for my use case of keeping laptop/desktop and home server files in sync. I had one occasion when I lost everything as I'd brought up syncthing on my server without the "sync" directory mounted. It happily deleted all files from my synced laptop as a result. That's now fixed, but it was a buttock clenching moment. Yay backups, and a third machine (desktop) which was suspended and thus out of sync, so still had my data.

Upstream developer is very friendly and attentive & seems happy to discuss new features and use cases.

grey-area 3 days ago 0 replies      
This project's aims seem very similar to the earlier Camlistore project, also written in Go:


Anyone know how it compares?

davidjhall 4 days ago 3 replies      
Does this need to use the web GUI? I tried setting this up on a DigitalOcean server and it spawns off a webserver on 8080 that I can't reach from my machine. Is there a "headless" mode for client-less servers? Thanks
chrisBob 4 days ago 1 reply      
I am very happy with the Time Machine backup on my Mac, but I have been looking for a good offsite backup solution so that I can trade storage with my family in case something happens to my house. This might finally be the right option. BT Sync seemed ok, but was more than I wanted my parents to try and set up.
interg12 4 days ago 2 replies      
What's wrong with BitTorrent Sync? The fact that it's a company?
akumen 3 days ago 0 replies      
We love to throw around the words "alternative" and "replacement". It is neither until it is as easy to use/deploy as X for the average Joe. You know, the 90% of people out there who wouldn't be able to put the words 'git', 'deploy' and 'heroku' in the right order as their eyes glaze over in confusion.
bankim 4 days ago 1 reply      
Alternative would be AeroFS (https://aerofs.com/) which also does P2P file sync.
zyngaro 4 days ago 2 replies      
"Each node scans for changes every sixty seconds". Isn't there any portable way to get notifications about file changes instead of polling? I know about jnotify in Java but, well, it's in Java.
orblivion 4 days ago 1 reply      
Does somebody fund projects like this? Or is it just that the people in charge of them understand something about UI and marketing? Seems like a nice trend, if so.
nl 4 days ago 1 reply      
Is there any mobile support?

I use Dropbox pretty frequently to share stuff between mobile devices and desktops. If Syncthing can't do that it isn't as useful.

ertdfgcb 4 days ago 0 replies      
Unrelated, but this is one of the best open source project landing pages I've ever seen.
desireco42 4 days ago 0 replies      
Since I installed Bittorrent Sync, my need for such software stopped as it works really well and provides all I need from it.

I couldn't quite understand the advantages and why I would replace BTSync, which, BTW, works really well already and does all these nice things. Plus it works on my phone and iPad and Nexus.

To clarify one thing, I have home server which obviously hosts BTSync repos with ample space. Ability to fine-grained share parts of it is invaluable.

Fede_V 4 days ago 0 replies      
This looks incredibly interesting, and I would very much like to move from Dropbox to something open source. Thanks, will definitely play with it.
nvk 4 days ago 0 replies      
That's great news; I have been looking for an OSS sync app for quite some time.
mark_l_watson 4 days ago 0 replies      
I really like the idea but one thing is stopping me: portability to iOS and Android devices, and mobile apps that work with Dropbox. Dropbox has a first-mover advantage.

This is mostly a problem for people like me who use both Android and iOS devices so alternatives need to support both platforms.

ReAzem 4 days ago 0 replies      
I would also like to point out https://www.syncany.org/

Syncany can work with any backend (like AWS S3) and is encrypted.

It is more of a Dropbox replacement while Syncthing is a BTSync replacement.

binaryanomaly 4 days ago 1 reply      
Let's hope this becomes what it is promising and relieves me of Dropbox and the likes... ;)
Sir_Cmpwn 4 days ago 1 reply      
I would like to see something like this that does not place trust on the server hosting the files.
Lucadg 3 days ago 0 replies      
another alternative: http://www.filement.com/
I don't use it but friends do and are pretty happy with it. From their home page:

- Combine devices and cloud services into a single interface.
- Transfer data between computers, smartphones, tablets and clouds.
- Manage and use data directly on the device or cloud it is stored.

scrrr 3 days ago 0 replies      
Is there a paper / doc explaining, how the synchronisation works in detail?
twosheep 4 days ago 2 replies      
So this may be as good a thread as any to ask for assistance:

My small business is looking for a combined file collaboration / file backup service that doesn't cost an excessive amount of money (we're a non-profit on a budget). Is there a good service for this? For example, Dropbox is mainly for sharing files, whereas Carbonite is mostly for backing up your computer. Is there a solution for both?

emsy 3 days ago 0 replies      
Yet another sync app is Pyd.io. The Web UI is super neat, and you can choose between various backends for storage. Pyd.io offers its own sync app which I found to be horribly slow. I'd suggest using Pyd.io as a frontend and BTSync/Seafile/Syncthing as a backend.
Joona 4 days ago 2 replies      
I'm looking for a replacement for Dropbox, but it seems that none support direct links, like in Dropbox's public folder (example: https://dl.dropboxusercontent.com/u/38901452/fox2.jpg ) Is there one?
dead10ck 4 days ago 1 reply      
This looks very promising. And it's written in Go! The only major feature I think it's missing is file versioning.

I am curious, though: what do people use to get their files remotely? And what's the cheapest solution for hosting your own central server? Would a simple AWS instance work fine?

haxxorfreak 4 days ago 0 replies      
I don't see a Solaris build on the download page but it's listed next to the download button on the home page, am I just missing something?
jms703 3 days ago 0 replies      
++ this effort. I'm looking forward to replacing my current BitTorrent Sync (btsync) setup with Syncthing.
biocoder 3 days ago 2 replies      
Have you checked Hive2Hive? Something similar but not yet there. https://github.com/Hive2Hive/Hive2Hive
scragg 4 days ago 0 replies      
I would have liked the name "synctank" better. :)
chris123 4 days ago 0 replies      
Can we get a "Bitcoin meets Dropbox meets Airbnb" already? Thks :)
hellbreakslose 3 days ago 0 replies      
Cool, I always like it when things are open source!
sixothree 4 days ago 2 replies      
It appears HN readers are terrible at self-organizing. Threads for articles like this should include by default a top level node for:

  "Here's the alternative I use"
  "Important question about the technology"
  "Pertinent question about the article"

downstream1960 4 days ago 1 reply      
So it's basically pirating, but it saves across all platforms?
Removing User Interface Complexity, or Why React is Awesome jlongster.com
565 points by jlongster  3 days ago   219 comments top 34
tomdale 3 days ago 5 replies      
This is a really thoroughly researched post and jlongster has my gratitude for writing it up.

I have two concerns with this approach. Take everything I say with a grain of salt as one of the authors of Ember.js.

First, as described here and as actually implemented by Om, this eliminates complexity by spamming the component with state change notifications via requestAnimationFrame (rAF). That may be a fair tradeoff in the end, but I would be nervous about building a large-scale app that relied on diffing performance for every data-bound element fitting in rAF's 16ms window.
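The rAF-driven loop being discussed is roughly the following pattern (a toy sketch, not Om's actual code; the `raf` parameter is injected here so the loop can be exercised outside a browser, where you would just pass `requestAnimationFrame`):

```javascript
// Om-style render loop: state changes only mark the app dirty; the actual
// re-render (diff + patch) happens at most once per animation frame.
function makeRenderLoop(render, raf) {
  let state = null;
  let dirty = false;
  let renders = 0;
  function frame() {
    if (dirty) {
      dirty = false;
      render(state);   // must finish well inside the ~16ms frame budget
      renders++;
    }
    raf(frame);
  }
  raf(frame);
  return {
    setState(next) { state = next; dirty = true; },
    get renders() { return renders; },
  };
}

// With a fake raf we can step frames by hand: three setState calls inside
// one frame collapse into a single render — the batching described above.
const pending = [];
const fakeRaf = cb => pending.push(cb);
const rendered = [];
const loop = makeRenderLoop(s => rendered.push(s), fakeRaf);
loop.setState(1); loop.setState(2); loop.setState(3);
pending.shift()(); // run one "frame"
console.log(rendered); // → [ 3 ]
```

The concern in the comment is the line marked with the frame budget: if diffing every bound element can't finish in ~16ms, frames get dropped.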

(I'll also mention that this puts a pretty firm cap on how you can use data binding in your app, and it tends to mean that people just use binding from the JavaScript -> DOM layer. One of the nicest things about Ember, IMO, is that you can model your entire application, from the model layer all the way up to the templates, with an FRP-like data flow.)

My second concern is that component libraries really don't do anything to help you manage which components are on screen, in a way that doesn't break the URL. So many JavaScript apps feel broken because you can't share them, you can't hit the back button, you can't hit refresh and not lose state, etc. People think MVC is an application architecture, but in fact MVC is a component architecture: your app is composed of many MVCs, all interacting with each other. Without an abstraction to help you manage that (whether it's something like Ember or something you've rolled yourself), it's easy for the complexity of managing which components are on screen and what models they're plugged into to spin quickly out of control. I have yet to see the source code for any app that scales this approach out beyond simple demos, which I hope changes because I would love to see how the rubber hits the pavement.

It's always interesting to see different approaches to this problem. I don't think it's as revolutionary as many people want to make it out to be, but I've never been opposed to borrowing good ideas liberally, either. Thanks again, James!

nostrademons 3 days ago 6 replies      
I think this post is missing something in its description of Web Components: the fundamental difference between a JS-based framework like React and a Web Components-based framework like Polymer is that the former takes JS objects as primitives and the DOM as an implementation artifact, while the latter takes the DOM as a primitive and JS as an implementation artifact. You cannot wrap your head around Web Components and give both it and JS frameworks a fair shake until you can make this mental shift in perspective fluently.

The line in the post that says "You can't even do something as basic as that with Web Components":

  var MyToolbar = require('shared-components/toolbar');
In fact has a direct analogue with HTML imports:

  <link rel="import" href="shared-components/toolbar.html">
And that's key to understanding Web Components. The idea of the standard is that you can now define your own custom HTML elements, and those elements function exactly like the DOM elements that are built into the browser. This is a key strategic point: they function exactly like the DOM elements that are built into the browser because Google/Mozilla/Opera/et al hope to build the popular ones into the browser eventually, just like we've gotten <input type=date> and <details>/<summary> based on common web usage patterns.

A number of the other code samples in the article also have direct analogues in Polymer as well. For example, the App/Toolbar example halfway down the page would be this:

  <polymer-element name="Toolbar" attributes="number">
    <template>
      <div>
        <button value="increment" on-click="{{increment}}">increment</button>
        <button value="decrement" on-click="{{decrement}}">decrement</button>
      </div>
    </template>
    <script>
      Polymer('Toolbar', {
        number: 0,
        increment: function() { this.number++; },
        decrement: function() { this.number--; }
      });
    </script>
  </polymer-element>

  <polymer-element name="App">
    <template>
      <div>
        <span>{{toolbar.number}}</span>
        <Toolbar number="0" id="toolbar"></Toolbar>
      </div>
    </template>
    <script>
      Polymer('App', {
        created: function() {
          this.toolbar = this.$.toolbar;
        }
      });
    </script>
  </polymer-element>
You can decide for yourself whether you like that or you like the Bloop example more - my point with this post is to educate, not evangelize - but the key point is that you can define your own tags and elements just like regular DOM elements, give them behavior with Javascript, make them "smart" through data-binding so you don't have to manually wire up handlers, and then compose them like you would compose a manual HTML fragment.

rdtsc 3 days ago 5 replies      
As mostly an outsider to the web front end development, React.js is probably the easiest one for me to understand among the typical "frameworks", especially Angular and Ember.

After all the excitement about Angular for example, I went to learn about it and just got lost with new concepts: DOM transclusion, scopes, services, directives, ng-apps, controllers, dependency inversion and so on. I can use it but need someone to hold my hand. It reminded me of Enterprise Java Beans.

But so far just learning how React is put together and looking at the tutorials it seems like less of a framework and easier to understand altogether. I suspect this might become the new way to build web applications.

Well anyway, don't take this too seriously, I as said, I am an outsider to this.

NathanKP 3 days ago 5 replies      
I really like the core concepts of React, especially the way it is designed to help you organize your code into reusable components.

I think the key to making React take off is building a centralized repository for components that are open source. Then building your webapp would be as easy as importing the components you need:

     bower install react-navbar
     bower install react-signup-form
     bower install react-sso-signin-form
I think this is definitely the future of how front end web development will be one day.

derwildemomo 3 days ago 2 replies      
As a recommendation to the author, it would make sense to show the example/demo area the whole time, not only once I scroll down. It confused me. A lot.
malvosenior 3 days ago 1 reply      
For those that haven't tried it, David Nolen's Om for ClojureScript is an excellent React framework.


I've not used vanilla React, but Om is certainly fantastic and apparently adds a bunch of stuff that's not in the JS version.

Also, a web framework written by the guy that wrote most of the language you're using? Win!

ufo 3 days ago 3 replies      
I experimented with React a bit but I was a bit bugged by how large it was. The basic idea of rendering to a virtual DOM and having unidirectional data flow is really simple but I had trouble actually diving in to React's source code and seeing things under the hood (for example, I had to find a blog to see how the diffing algorithm worked).

What are the other libraries out there that we can use for this virtual DOM pattern right now? I only found mithril[1] that similarly does the template rendering with Javascript but I still don't know how different to React it is in the end? Is the diffing algorithm similar? Do they handle corner cases the same (many attributes need to be treated specially when writing them to DOM)?

Simplifying it a bit: other than the virtual DOM, is the rest of React also the best way to structure apps? What would the ideal "barebones" virtual DOM library look like?

[1] http://lhorie.github.io/mithril/
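A barebones virtual-DOM core really is small: a tree of plain objects plus a recursive diff that emits patches. Here is a toy sketch of that shape (illustrative only — real libraries like React and Mithril add keyed reconciliation, the attribute special-casing mentioned above, and an actual patch-applier against the DOM):

```javascript
// A virtual node is just data: tag, props, children (strings are text nodes).
function h(tag, props, ...children) {
  return { tag, props: props || {}, children };
}

// Recursively diff two trees into a flat list of patch descriptions.
function diff(oldNode, newNode, path = []) {
  if (oldNode === undefined) return [{ type: 'CREATE', path, node: newNode }];
  if (newNode === undefined) return [{ type: 'REMOVE', path }];
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ type: 'REPLACE', path, node: newNode }];
  }
  if (oldNode.tag !== newNode.tag) return [{ type: 'REPLACE', path, node: newNode }];
  const patches = [];
  const keys = new Set([...Object.keys(oldNode.props), ...Object.keys(newNode.props)]);
  for (const key of keys) {
    if (oldNode.props[key] !== newNode.props[key]) {
      patches.push({ type: 'PROPS', path, key, value: newNode.props[key] });
    }
  }
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], path.concat(i)));
  }
  return patches;
}

// Changing one counter's text yields exactly one patch, not a full re-render.
const a = h('div', null, h('span', null, '0'), h('button', { id: 'inc' }, '+'));
const b = h('div', null, h('span', null, '1'), h('button', { id: 'inc' }, '+'));
console.log(diff(a, b)); // → [ { type: 'REPLACE', path: [ 0, 0 ], node: '1' } ]
```

Everything else in a framework — components, lifecycle, event delegation — sits on top of a core like this.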

etrinh 3 days ago 0 replies      
Not to distract from the topic, but jlongster's posts should be a case study into how to make an effective demo/tutorial on the web. The side by side code/demo format is very well done and should be the de facto way to do code + demo. There have been so many times when I've been reading a tutorial and click on a demo link that opens a new tab. This makes me completely lose context as I switch back and forth between the tutorial and various demo links.

For another example of a post that takes advantage of the dynamic nature of the web page, check out jlongster's sweet.js tutorial[1]. It's a tutorial on writing macros with JS that you can actually interact with (you can make modifications to the example code snippets and see your macros expand on the fly). Very cool.

[1]: http://jlongster.com/Writing-Your-First-Sweet.js-Macro

Flimm 3 days ago 3 replies      
Please don't break the back button (Firefox and Chrome).

In Firefox 29.0 on Ubuntu 14.04, the left sidebar with the text of the blog disappears and is replaced with a white space. I do not experience this on Chrome.

adamors 3 days ago 3 replies      
Would you recommend using React instead of Angular for JS heavy areas of a website that is built with a server side framework (like Rails, Django etc.)?

I developed a rather complex SPA with Angular recently and I cannot go back to the ghetto that is jQuery when using server side rendering.

jdnier 2 days ago 1 reply      
Leo Horie, author of Mithril, has written a blog post where he explains how to re-implement some of the article's examples using Mithril (React-like client-side Javascript MVC framework): http://lhorie.github.io/mithril-blog/an-exercise-in-awesomen...
roycehaynes 3 days ago 1 reply      
Great post. I recently started a project using reactjs, and I have nothing but good things to say. The unidirectional data flow, the declarative nature, and the virtual DOM makes it powerful and very easy to like.

The best resource is to follow the tutorial (link below). The tutorial explains everything you may have a question about when comparing it to Backbone, Angular, or Ember.


I also found the IRC channel to be very, very helpful.

The only downside is that you still have to rely on other tools for things like routing to make a true SPA.

redOctober13 9 hours ago 0 replies      
Just want to say thanks to jlongster and everyone who's commented on here; I created an account just to say thanks. I'm an amateur web dev/designer trying to figure out how to move beyond static web pages, and have read what would probably amount to a literal ton of material were it printed out, on frameworks, libraries, and more acronyms than I could imagine. Outside of HTML, JS, and CSS, even "basic" things like Sass and CoffeeScript I hadn't heard of just a few months ago, and I've since been all over the world (wide web) looking for info on ASP.NET (which my group at work decided last year was what we should be doing "web-kinda stuff" in), as well as Angular, Ember, Backbone, Knockout, Node, etc etc, and everything new (to me) that research like that comes with.

The discussion here has led me to a few more things to research, but I feel it's been very helpful in helping me think critically about the vast array of possibilities a budding web designer has to deal with. I just wanted somebody to provide an objective view of "If you're going to be doing medium-complexity web apps end-to-end, then learn ______" and I still would love that, but don't think it's possible to get. The alternative, as I've been doing, is just to learn a little about everything, try to figure out the kind of things I plan to do, and then find the paradigm that works, be it vanilla technologies, something like React, Web Components, or a framework (and I've been trying to learn Angular and like it, but it's tough to grasp). It just seems like as soon as I've decided on what I want to learn, I read a new post with a title like "Why You Shouldn't Use <whatever I just decided to learn> and Why <something new I've never heard of> Is Really the Way to Go."

So anyways, a long-winded thanks, but a thank you nevertheless for the open discussion here; I feel better now that I'm not trying to find the one-and-done "best" thing for making web apps in general.

mrcwinn 3 days ago 0 replies      
Curious if anyone has experimented with Go+React - specifically rendering on the server side as well. Similar to Rails / react_ujs (react-rails gem), seems like you would need to provide Go with access to a v8 runtime and a FuncMap helper in the template to call for the necessary component (JSX -> markup). I've really enjoyed React and I've enjoyed Go in my spare time, but I still find myself using npm for a lot of the, um, grunt work.
valarauca1 3 days ago 0 replies      
It was really cool, until I realized that scrolling broke the back button.

I thought one of the cardinal sins of web design was don't break the back button.

IanDrake 3 days ago 5 replies      
Tester: The UI is wrong right here...

Developer: Hmm...I wonder how long it's going to take me to figure out where that HTML was generated in my javascript.

iamwil 3 days ago 2 replies      
Has anyone tried to use a different template engine with React? I was just wondering, since I didn't want to use JSX inline, and writing out html with React.DOM isn't appealing either.

I just wanted a way to put templates in <script> tags that get loaded by React Components. That way, I won't be mixing templates and the behavior of the components. Has anyone done this before?

__david__ 3 days ago 2 replies      
So I'm curious how one would implement something like drag and drop in a React app?

Would you model the drag in the main data model somehow? Or would you do that all externally (with traditional DOM manipulation) and then update the model when the drag is complete?

platz 3 days ago 2 replies      
I was at a meetup where the speaker suggested react is great for business-like apps, but for things with an insane amount of dom objects like html games, it tends to get bogged down.

Since React claims to be super fast, has anyone done a performance comparison to see in what situations and by how much React performs better, compared to, say, Angular.js or more vanilla frameworks?

(Also I hear that there is a really great speedup that using OM gives you, but I haven't seen any comparisons with om either)

zenojevski 3 days ago 0 replies      
For those who are only interested in the render loop, I made a library[1] around this abstraction.

I plan to expand it with a toolkit to allow thinking in terms of batches, queues and consumers, à la bacon.js[2].

[1]: https://github.com/zenoamaro/honeyloops
[2]: https://github.com/baconjs/bacon.js

gooserock 3 days ago 2 replies      
I like the state and property features of React, but I still don't understand why more people aren't using Chaplin instead. Because quite honestly, the syntax of every other framework - React, Ember, and especially Angular - is complete gobbledygook by comparison.

Example: in Chaplin, components are views or subviews (because it's still an MVC framework, which is another discussion for another time). The views by default render automatically without you having to call anything. But if you did, you'd write @render() (because hey, Coffeescript saves keystrokes and sanity). That automatically renders the component in the place you've already specified as its container attribute, or if you haven't it renders to the body by default.

Whereas in React, you have to write this garbage: Bloop.renderComponent(Box(), document.body);

WHY. Can't we write a framework that intuits some of this crap? Shouldn't we use a framework that reduces the time we spend writing code?

cellis 2 days ago 0 replies      
Needs a catchy buzzword. So, can we all agree to name this ... NoHTML?
mushishi 3 days ago 1 reply      
The awkward right panel that changed abruptly from time to time, especially the first transition, made me skip the content.

Using an uncommon UI idiom is risky. The presentation and topic don't match, so the reader (i.e. me) didn't have enough expectations left for what you might actually have to say.

1986v 3 days ago 0 replies      
If school books had online versions like this page, it would make reading so much more fun.

Great read, by the way.

cnp 3 days ago 0 replies      
This is definitely a "Sell this to your boss" type of post :) Great work.
zawaideh 3 days ago 2 replies      
Instead of having React altogether, wouldn't it make sense to have the browser keep track of a virtual DOM, and only repaint whatever changes on its own, while managing its own requestAnimationFrame and paint cycles?

P.S. I haven't looked into this in depth, just throwing an idea out there.

mattdeboard 3 days ago 0 replies      
This is great except the button is broken.
outworlder 3 days ago 0 replies      
I was half-expecting a gambit Scheme post :)
ulisesrmzroche 3 days ago 1 reply      
I'm honestly really, really wary of minimalistic front-end frameworks after two years of working on single page web apps.

All that ends up happening in practice is untested, messy code with deadlines up your ass and 0 developer ergonomics. Zero.

__abc 3 days ago 1 reply      
back button behavior is atrocious ....
gcb0 3 days ago 0 replies      
my gripe with those is that you remove UI complexity but shove usability and accessibility over some dark orifice.

All those things are cool for social apps (a.k.a. fart apps) but for business ready platforms this is just silly.

for example, a link that i can middle click or bookmark or send to someone, etc would be much more useful even if not as spiffy as those scrolls

badman_ting 3 days ago 1 reply      
I recently watched a presentation about React's approach (I think from a recent JSConf) and it sold me, at least enough to try. The approach makes total sense to me, and I agree with many of its criticisms of Angular in particular. I really loved the reconsidering of our idea of "separation of concerns", that if we reconsider the scope of the concern, we can devise an approach where templating and logic go together. I'm excited by these ideas.
jbeja 3 days ago 0 replies      
I hate OP website, why i got to click the back button 10 times to comeback here?
jafaku 3 days ago 1 reply      
Damn, Javascript is only becoming messier. I think I'll just watch it from the distance and wait until someone figures out the best way to deal with it.
OS X Command Line Utilities mitchchn.me
522 points by brianwillis  1 day ago   231 comments top 41
Monkeyget 1 day ago 18 replies      
This was supposed to be a few lines of remarks. It expanded quickly in proportion to my enthusiasm for this topic.

I've been investing some time in the command line on my Mac. I am moving from a dilettante going to the shell on a per-need basis to a more seasoned terminal native. It pays off handsomely! It's hard to convey how nice it is to have a keyboard-based unified environment instead of a series of disjointed mouse-based GUI experiences.

Here are some recommendations pertaining to mastering the command line on a Mac specifically:

-You can make the terminal start instantaneously instead of it taking several seconds. Remove the .asl files in /private/var/log/asl/. Also remove the file /users/<username>/Library/Preferences/com.apple.terminal.plist

- Install iterm2. It possesses many fancy features but honestly I hardly ever use them. The main reason to use it instead of the default Terminal application is that it just works.

-Make your terminal look gorgeous. It may sound superficial but it actually is important when you spend extended periods of time in the terminal. You go from this http://i.imgur.com/cx3zZL8.png to this http://i.imgur.com/MQbx8yK.png . You become eager to go to your terminal instead of reluctant. Pick a nice color scheme https://code.google.com/p/iterm2/wiki/ColorGallery . Use a nice font (Monaco, Source Code Pro, Inconsolata are popular). Make it anti-aliased.

-Go fullscreen. Not so much for the real estate but for the mental switch. Fullscreen mode is a way to immerse yourself into your productive development world. No browser, no mail, no application notification. Only code.

-Install Alfred. It's the command line for the GUI/Apple part of your system. Since I installed it I stopped using the dock and Spotlight. Press ⌘+space, then type what you want and it comes up. In just a few keystrokes you can open an application, open gmail/twitter/imdb/..., make a web search, find a file (by name, by text content), open a directory,... It's difficult to describe how empowering it is to be able to go from 'I want to check something out in the directory x which is somewhere deep deep in my dev folders' to having it displayed in 2 seconds flat.

-Make a few symlinks from your home directory to the directories you use frequently. Instead of doing cd this/that/code/python/project/ you just do cd ~/project.

-Learn the shell. I recommend the (free) book The Linux Command Line: http://linuxcommand.org/tlcl.php . It guides you gently from simple directory navigation all the way up to shell scripting.

-Use tmux. Essential if you want to spend some time in the terminal. You can split the window into multiple independent panes. Your screen will have multiple terminals displayed simultaneously that you can edit independently. For example I'll have the code on one side and on the other side a REPL or a browser. You can also have multiple windows each with its own set of panes and switch from one to the other. With the multiple windows I can switch from one aspect of a project to another instantly. E.g: one window for the front-end dev, a second one for the backend and another for misc file management/git/whatever.

-Pick an editor and work towards mastery. I don't care if you choose vi or emacs. You'll be surprised how simple features can make a big change in how you type. You'll be even more surprised at how good it feels.

The terminal is here to stay. It's a skill that bears a lot of fruit and depreciates slowly. The more you sow the more you reap.
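
The symlink tip above can be sketched like this (the nested path is made up for illustration):

```shell
# Make one deep project directory reachable as ~/project
# (the nested path here is a made-up example)
mkdir -p "$HOME/this/that/code/python/project"
ln -sfn "$HOME/this/that/code/python/project" "$HOME/project"
cd "$HOME/project" && pwd    # now one short cd away
```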

cstross 1 day ago 1 reply      
One command I can't live without: textutil.

Basically it's a command-line front end to Apple's TextKit file import/export library. Works with a bunch of rich text/word processor formats, including OpenDoc, RTF, HTML 5, and MS Word. Critically, the HTML it emits is vastly better than the bloated crap that comes out of Microsoft Word or LibreOffice when you save as HTML ...

Install pandoc and multimarkdown as well and you've got the three pillars of a powerful, easy-to-use multiformat text processing system.

ggreer 1 day ago 2 replies      
I didn't know about `screencapture`. That's a fun one.

The Linux equivalent of `open` is `xdg-open`. I usually alias it to `op`, since `/bin/open` exists.

Another bit of terminal-sugar for OS X users:

    alias lock='/System/Library/CoreServices/"Menu Extras"/User.menu/Contents/Resources/CGSession -suspend'
And most Linux users:

    alias lock='gnome-screensaver-command -l'
If you find yourself accidentally triggering hot corners, the lock command is your savior.

I've sorta-documented this stuff over the years, but only for my own memory. https://gist.github.com/ggreer/3251885 contains some of my notes for what I do with a clean install of OS X. Some of the utility links are dated, but fixing the animation times really improves my quality of life.

eschaton 1 day ago 9 replies      
What always surprises me is that so many don't know or use the directory stack commands, pushd and popd. I'll admit I was also ignorant of them until something like 2005, but once I learned of them I switched and never looked back. Now I can't see someone write or type "cd" without a little bit of a cringe.
greggman 1 day ago 1 reply      
These are awesome. I didn't know about many of them.

One tiny thing though, at the bottom it says

> Recall that OS X apps are not true executables, but actually special directories (bundles) with the extension .app. open is the only way to launch these programs from the command line.

Actually, you can launch them in other ways. Example

    /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome  --user-data-dir=/Users/<username>/temp/delmechrome --no-first-run
Will start a new instance of Chrome with its own datastore in ~/temp/delmechrome. Add some URL to the end to have it auto-launch some webpage. Delete ~/temp/delmechrome to start over.
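
More generally, the launchable binary conventionally sits at Contents/MacOS/<name> inside the bundle. The authoritative name is whatever the bundle's Info.plist lists as CFBundleExecutable, but it usually matches the bundle name, so a helper for that common case might look like this (a sketch, not guaranteed for every app):

```shell
# Guess the executable path inside a .app bundle, assuming the common
# case where the binary is named after the bundle itself
bundle_bin() {
  printf '%s/Contents/MacOS/%s\n' "$1" "$(basename "$1" .app)"
}

bundle_bin "/Applications/Google Chrome.app"
# -> /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
```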

wink 1 day ago 2 replies      
'open' is one of the things I long for most as a Linux user. There are several ways to achieve something like it, but they are all inferior or downright broken. Usually you don't have a huge problem, until you do. xdg-open, for example, could've solved this, if it worked universally.

I wrote a related rant once[0] when I tried to debug an issue with a misconfigured default browser.

[0]: http://f5n.org/blog/2013/default-browser-linux/

runjake 1 day ago 0 replies      
9. /usr/sbin/system_profiler

10. /System/Library/CoreServices/Applications/Wireless Diagnostics (with built-in wifi stumbler)

11. /System/Library/CoreServices/Screen Sharing.app (Built-in VNC client with hardware acceleration)

12. /System/Library/PrivateFrameworks/Apple80211.framework/Versions/A/Resources (Command-line wifi configuration and monitoring tool)

Combine with sed, awk, and cut, and these tools can provide useful monitoring.
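
As a toy illustration of that kind of extraction (fake "Key: Value" text stands in for system_profiler output, whose real field names vary by OS X version):

```shell
# Pull one field out of system_profiler-style "Key: Value" text with awk
printf 'Model Name: MacBook Pro\nMemory: 8 GB\n' \
  | awk -F': ' '/Memory/ {print $2}'
# -> 8 GB
```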

nmc 1 day ago 5 replies      
/usr/local is the default location for user-installed stuff, but I personally like to have my package manager do its stuff in a separate directory.

I like the way Fink [1] uses the /sw (software) directory.

Does anyone have a valuable opinion on the comparison between Fink and Homebrew or maybe MacPorts?

[1] http://www.finkproject.org

barbs 1 day ago 0 replies      
I use multiple POSIX environments (OS X at work, Linux Mint and Xubuntu at home), and I find it handy to create common aliases for differently implemented commands to keep the environments consistent.

For example, I set the letter 'o' as an alias for 'open' on OS X, and to "thunar" on Xubuntu and "nemo" on Linux Mint.
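
A sketch of what that looks like in an rc file (simplified to open vs xdg-open; picking thunar or nemo would need a distro check on top):

```shell
# ~/.bashrc fragment: same short alias, different backend per platform
case "$(uname -s)" in
  Darwin) alias o='open' ;;
  *)      alias o='xdg-open' ;;
esac
```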

pirateking 1 day ago 1 reply      
After years of living on the command line, OS X specifically, and learning its quirks and tricks, I am actually ready to move on.

Right now I am more interested in creating simple visual interfaces on top of UNIX-y tools, for my own personal use cases. The main benefit of this is the ability to better experiment with and optimize my workflows for different properties as needed through different combinations of single responsibility interfaces and single responsibility programs.

I am sensing that there is great promise in seeing much higher APMs (actions per minute) for many tasks, even compared to the all-powerful command line. Also, there are lots of interesting possibilities for better visual representations of data to improve comprehension and usability.

pling 1 day ago 0 replies      
Another one that I can't live without:

   ssh-add -k keyfile
Integrates with keychain meaning you can have a passworded private key without having to play around with ssh-agent and shells and profiles. Put keychain access in the menu bar and you can lock the keychain on demand as well. Integration of ssh into the OSX workflow is absolutely awesome.

That and some of the examples in that article really make it a killer platform for Unix bits.

ansimionescu 1 day ago 1 reply      

* lunchy: wrapper over launchctl, written in Ruby https://github.com/mperham/lunchy

* brew cask: "To install, drag this icon... no more", as they say https://github.com/caskroom/homebrew-cask

* have fun with "say" https://github.com/andreis/different

DCKing 1 day ago 3 replies      
Could you imagine if Apple had gone for BeOS or a custom-developed kernel with no significant terminal-based userland when making OS X? It would probably still be used by many casual users or those doing graphical work, but I doubt it would be used by hackers at all.
salgernon 1 day ago 1 reply      
pbpaste and pbcopy can specify multiple clipboards; one very handy thing I do is

"cmd-a" "cmd-c" (copy all)

double click on a word I'm looking for, "cmd-e" to enter it into the find clipboard

'pbpaste | fgrep --color `pbpaste -pboard find`'

I have that aliased as 'pbg'.

shurcooL 1 day ago 0 replies      
If you do Go development, you can do this to quickly get to the root folder of any Go package:

  function gocd {
      cd `go list -f '{{.Dir}}' $1`
  }
It uses the same syntax as go list to specify packages, so you can do, e.g.:

  ~ $ gocd .../markdownfmt
  markdownfmt $ pwd
  /Users/Dmitri/Dropbox/Work/2013/GoLand/src/github.com/shurcooL/markdownfmt
  markdownfmt $ _
So nice.

torrent-of-ions 1 day ago 1 reply      
Another thing you can do to improve speed is learn the keybindings for readline. They are the same keybindings as emacs, and lots of other things use readline too like python shell, sqlite, etc. A very useful set of keys to have in your muscle memory. See the readline manual: http://tiswww.case.edu/php/chet/readline/rluserman.html#SEC3
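
For reference, a couple of those behaviors can also be configured in readline's own config file (a small ~/.inputrc sketch; these are standard readline directives):

```
# ~/.inputrc: applies to bash, the python shell, sqlite3,
# and anything else built on readline
set completion-ignore-case on
# Up/Down arrows search history by the prefix already typed
"\e[A": history-search-backward
"\e[B": history-search-forward
```
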
smw 1 day ago 0 replies      
Put this in your path somewhere, find files, links, directories instantly, with globbing. Makes mdfind actually useful.

  $ mdf "*invoice*.pdf"
  /Users/smw/Downloads/Invoice-0000006.pdf
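
One way such an mdf wrapper might be built (a sketch: kMDItemFSName is Spotlight's file-name attribute, and its query strings accept * wildcards; verify against `man mdfind` before relying on it):

```shell
# Hypothetical mdf: turn a glob into a Spotlight file-name query
mdf_query() {
  printf "kMDItemFSName == '%s'" "$1"
}

mdf_query '*invoice*.pdf'
# -> kMDItemFSName == '*invoice*.pdf'
# on OS X: mdfind "$(mdf_query '*invoice*.pdf')"
```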

_jsn 1 day ago 0 replies      
mdfind / Spotlight can be a fairly powerful tool. Consider this query, which finds all Xcode projects I've tagged as "Active":

  ~$ mdfind tag:Active kind:xcode
  /Users/jn/Code/xyz/xyz.xcodeproj
  ...
Queries like this also work in the Cmd-Space UI, or as a Saved Search. By default each term is joined with AND, but you can specify OR too.

milla88 1 day ago 4 replies      
My favorite command is 'say'. You can do all kinds of silly voices.

Try this out: say hello -v Good

archagon 1 day ago 1 reply      
Great list! Includes all the old favorites with clear explanations.

This is only tangentially related, but I recently wrote a little Automator Service to gather any selected file and folder paths from Finder. I very often need to grab the path of something for programming-related stuff, and doing it from the command line or with the mini-icon drag-and-drop takes way too long. Maybe somebody here will find it useful! http://cl.ly/1a3s3g1u2Q2w

hibbelig 1 day ago 3 replies      
I want "remote pbcopy"! I'd like to be able to log in to any remote host (usually Linux in my case), then tack something onto the command line I'm typing to copy it into the pastebuffer.

    ssh somehost
    cd /some/dir
    grep -lr foo . | remote_pbcopy
I guess something like this is possible with GNU Screen or with Tmux, and perhaps the Tmux/iTerm interaction helps, but I've never figured it out.

jpb0104 1 day ago 0 replies      
Here is a very handy script that takes a screenshot, places it in Dropbox's public directory, shortens the public URL, then puts the short URL in your clipboard. Making for very quick screenshot sharing. It combines a few of these hints. https://gist.github.com/jpb0104/1051544
RexRollman 1 day ago 1 reply      
Personally, I was surprised that there is not a command line interface to OS X's Notification system. Seems like it would be handy for long running batch jobs.
allavia88 1 day ago 1 reply      
There's been a few of these lists over the past few years, most recent one is https://news.ycombinator.com/item?id=7494100

It seems like a large portion of HN is less experienced re sysadmin, but is interested in it nonetheless. Perhaps there's room to make a 'codecademy for unix' type course? Curious to see what folks think.

gotofritz 1 day ago 0 replies      
Also worth mentioning is dotfiles (not specific to OS X). Basically, various well known "power users" share their bash, homebrew, etc. settings on github so that they can easily set up a new machine with a minimum of fuss. There are a lot of neat tricks in those boilerplate files.


huskyr 1 day ago 1 reply      
Awesome. I knew about `pbcopy`, but i never knew you could also pipe stuff into it. That saves a lot of time saving script outputs to temporary text files and copying!
stretchwithme 1 day ago 0 replies      
The hot keys for screen capture are more useful for daily use. You can paste what you've captured directly into most email clients. Or go to Preview, where creating a new file uses what's on your clipboard if it's an image.
conradev 1 day ago 1 reply      
The one utility I can't live without is caffeinate, which prevents a Mac from sleeping.

It's super useful for keeping long running tasks running.

chrisBob 1 day ago 1 reply      
The biggest change I have found for my terminal was adding this to my .bash_profile:

export CLICOLOR=1

export LSCOLORS=GxFxCxDxBxegedabagaced

I thought that was one of the most amazing things when I used a linux system, but OS X is black and white by default.

fuzzywalrus 1 day ago 1 reply      
Notably the screen capture terminal command, while neat, is sold as "more flexible". I think the author is unaware of Command+shift+4 followed by tapping the spacebar. It'll give you the window capture.

Otherwise good article.

chrismorgan 1 day ago 0 replies      
I don't use a Mac, but have used espeak-via-ssh to deliver a message to my sister who was near my laptop, from the comfort of my bed. I could have (a) called out, or (b) gotten up, but where would the fun have been in that?
guard-of-terra 1 day ago 1 reply      
I wonder if it's possible to make OS X to say "As you request, Stan" in Lexx's voice.

That alone might be sufficient reason to migrate from ubuntu.

RazerM 1 day ago 3 replies      
It seems odd to have

  open /Applications/Safari.app/
as an example, when

  open -a safari
does the same thing.

cormullion 1 day ago 0 replies      
If you work with images a lot, look up sips. I use it a lot, for converting images, rescaling and resizing, etc.
vladharbuz 1 day ago 1 reply      
The screencapture options are wrong.

    "Select a window using your mouse, then capture its contents without the window's drop shadow and copy the image to the clipboard:

    $ screencapture -c W"
-c captures the cursor, W is not an option. The real command for this is:

    $ screencapture -C -o

mmaldacker 1 day ago 5 replies      
no love for macport?
nemasu 1 day ago 0 replies      
This is neat. I'll be getting a mac soon, and this is right up my alley.
fmela 1 day ago 0 replies      
The '-name' argument of mdfind makes it useful to find files with the query string in the name. E.g.: "$ mdfind -name resume".
lastofus 1 day ago 1 reply      
The article doesn't mention the fun you can have with ssh + say.

My co-workers and I used to ssh into the iMacs of non-technical users in our office and have a good laugh from a nearby room.

nicksergeant 1 day ago 2 replies      
Why the hell would someone change the title from "Eight Terminal Utilities Every OS X Command Line User Should Know" to "OS X Command Line Utilities".

The original title is clearly more accurate / useful / canonical. The overwritten title is ambiguous. This is indeed not a list of every OS X command line utility.

AdBlock Pluss effect on Firefoxs memory usage mozilla.org
454 points by harshal  2 days ago   257 comments top 34
gorhill 2 days ago 12 replies      
It's not just memory overhead, it is also CPU overhead.

One approach is to write the filtering engine from scratch. It is what I did, without looking at ABP's code beforehand in order to ensure a clean slate mind.

I didn't get it right the first time; I spent quite a large amount of time benchmarking, measuring, prototyping, etc.

Once I was satisfied I finally had a solid code base, I went and benchmarked it against ABP to find out how it compared:


And for HTTPSB's numbers, keep in mind there were over 50,000 extra rules in the matrix filtering engine (something not found in ABP).

So I think this shows clearly ABP's code can be improved.

neals 2 days ago 6 replies      
The first thing people complain about with browsers is probably memory usage. I doubt that many people understand what a browser actually does with that memory. I sure don't.

100mb sounds like a lot of memory for a webpage. Where does all this memory go to?

maaaats 2 days ago 5 replies      
> Many people (including me!) will be happy with this trade-off they will gladly use extra memory in order to block ads.

Well, for me the whole point of blocking ads is because they are often big flash things that hog cpu and memory. If ABP is no better, then most of the reason is gone. I'd actually like to view ads to support more sites.

mullingitover 2 days ago 4 replies      
This is like a commercial for forking Firefox to build highly efficient ad blocking into the browser.

It's sad that the most-demanded feature on every browser, as evidenced by plugin downloads, is ad blocking. However, all the major browsers are produced by companies with their hands in advertising, and this conflict of interest has resulted in this feature request going unfulfilled for over a decade.

Fork 'em.

graylights 2 days ago 2 replies      
Some sites have tried gimmicks to block adblock users. I wonder now if they'll make thousands of empty iframes.
chrismorgan 2 days ago 1 reply      
I would like to see a lighter blocking list: one that doesn't try to be absolutely comprehensive, but just focuses on the 95% of ads that can be fixed at 10% of the cost (actually I think it'd be more like 99% at 0.1% of the cost).
bambax 2 days ago 3 replies      
Wouldn't the solution involve allowing plugins to manipulate the content of a page before it is parsed by the browser?

There used to be a proxy adblocker that did that, but I don't think it works anymore.

The Kindle browser uses a proxy to pre-render pages on the server in order to lighten the load of the device.

Could AaaS (Adblock as a service) be a viable business? I think I'd pay for it.

axx 2 days ago 5 replies      
It's 2014.

My computer has 12 Gigabytes of RAM.

I don't give a shit, as long as i don't have to view those terrible stupid Ads.

Advertisers, fix your practises and i will view your Ads again.

BorisMelnik 2 days ago 0 replies      
I'm just gonna say this as a person who uses ABP and has not done any low level memory analysis: browsing using ABP saves me much more time and makes my browsing much quicker:

-ads take so long to load and when ABP is not enabled bogs down my browser hard core

-when watching videos I have to wait 3-20 seconds for them to load, with ABP enabled I do not have to wait at all

It consistently saves me several minutes every day. If it adds a few extra milliseconds to load some style sheets I have never, ever noticed.

bithush 2 days ago 1 reply      
My primary computer is a laptop from 2008 with a 2.5Ghz Core 2 Duo and other than a 1 second delay in startup I don't notice any performance degradation in general use compared to running without ABP.

Currently this machine has been up for 3 days and 17 hours, Firefox has been running the whole time, and it is using 303MB with 4 tabs open, one of which is the Daily Mail (don't judge!), an extremely busy page. This is perfectly acceptable in my opinion. I only have 4GB RAM, which by today's standards is not much either. Obviously reducing the memory footprint is great, but I can't say it has ever been a problem I have noticed.

On a side note the past few updates to Firefox have improved performance a lot. I am really impressed by how much quicker the browser is, especially with Chrome seeming to just get slower and slower with each update.

ladzoppelin 2 days ago 1 reply      
Use Bluhell Firewall and NoScript for Firefox instead of ABP. https://addons.mozilla.org/en-US/firefox/addon/bluhell-firew...
SixSigma 2 days ago 2 replies      
I would recommend Privoxy rather than adblock. I chose it because it can be used with any browser rather than needing to be a plugin, and I have had great results with it. As a bonus, I have it running on a virtual server and use an ssh tunnel from any system I end up using. This gets me around filters without installing anything other than ssh. So if I am using my Android phone on free wifi, I know I'm not being DNS-hijacked or under an HTTP MITM attack.


alipang 2 days ago 2 replies      
Anyone knows if this also affects Chrome? I have Chrome saying Adblock is using 126MB of memory, but if there are giant stylesheets injected elsewhere that might no be reported fully in the Chrome task manager.
nhebb 2 days ago 4 replies      
I'm surprised AdBlock is so popular. I tried it a few years ago and noticed the sluggishness immediately. I know it's not a feasible alternative in everyone's opinion, but if you're running a low end machine and want to block the most pernicious ads (flash, multiple external javascript, etc.) then Firefox + NoScript is the way to go.
dbbolton 2 days ago 0 replies      
If you are on a low-end machine and can't afford a larger memory footprint, you should probably be using a lighter browser in the first place. But if you really want to use Firefox and block ads, one option is to just block them through your hosts file; then the addon becomes largely unnecessary.
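The hosts-file approach can be sketched with a minimal fragment; the domains below are placeholders, not a vetted blocklist (real community-maintained blocklists distributed in hosts format run to thousands of entries):

```
# /etc/hosts additions (placeholder domains, not a vetted blocklist)
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net
0.0.0.0 banners.example.org
```

Pointing the hostnames at the unroutable 0.0.0.0 address makes requests to them fail immediately, for every browser and application on the machine, with no extension involved.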



nclark 2 days ago 0 replies      
I would rather have my browser crash and never open again for the rest of my life than not use Adblock. Take all the memory, CPU, fuck it, GPU that you want, you fantastic addon.
demi_alucard 2 days ago 0 replies      
For those who do not know, there is an EasyList without element hiding rules.


With these rules, Firefox's memory usage only goes up from 290MB to 412MB for me on the website mentioned in the article, instead of to 1.5GB.

The downside is that this list has a more limited coverage than the full version of the list.

SudoNick 2 days ago 1 reply      
I think this is the fourth or fifth time, in recent memory, that I've seen someone from Mozilla criticize Adblock Plus and call on its developers to make changes. ABP startup time and memory consumption were subjects I recall, and its general impact on page load times may have been as well.

I can understand Mozilla taking some interest in how addons behave, and constructive feedback on extensions is a good thing. However, ABP is the type of extension that is likely to have issues in those areas because of what it does. Which is very important to users, especially those who rely upon it for its privacy and security enhancing capabilities. It is those users who should decide whether the performance and resource usage trade-offs are acceptable. Mozilla shouldn't make, or try to make, such decisions.

The situation with ABP 2.6 (https://adblockplus.org/development-builds/faster-firefox-st..., https://adblockplus.org/forum/viewtopic.php?t=22906) might not be a case of this, but that along with the wider pattern of platform developers being more controlling, does make me somewhat concerned about Mozilla taking too much interest in extensions. I hope my worries are for naught.

hbbio 2 days ago 1 reply      
Very interesting.

There might be an architecture problem here. Another solution is to use a proxy to block ads, like GlimmerBlocker for OS X. I didn't investigate its memory usage, though I tend to think it will be lower (plus it has the added benefit of working simultaneously for all browsers).

caiob 2 days ago 0 replies      
I'd love to hear the side-effects of using Disconnect.
kmfrk 2 days ago 1 reply      
I use the program Ad Muncher for Windows, and I've found that it sometimes becomes a humongous bandwidth hog, when I'm watching video.

So it's not just memory that can be at risk of hogging.

sdfjkl 2 days ago 0 replies      
On the other hand AdBlock (edge of course, not the co-opted plus), saves quite a bit of network traffic. Nice if you're on metered (or slow) internet.
mwexler 2 days ago 1 reply      
Ah, 3 words that always seem to show up together: firefox and memory and usage. I look forward to when I see that 4th word, "minimal", in there as well. Yes, I know the original post is about an addon increasing memory usage, but I guess that doesn't surprise me anymore either, sadly, when thinking of Firefox.

Even with the memory pain, I still (mostly) love FF.

Shorel 2 days ago 0 replies      
My browser is fast again!

Thank you, really thank you.

chrismcb 2 days ago 0 replies      
Did I read that correctly? One website had 400 iframes? FOUR HUNDRED? Not 4, but 4 HUNDRED? Surely there is a better way.
linux_devil 2 days ago 1 reply      
I switched from Firefox to Safari when I found out Firefox was using 900MB on my machine; now I know the real reason behind that.
spain 2 days ago 1 reply      
Another (related) popular extension that I've noticed negatively affecting Firefox's performance is HTTPS Everywhere.
Vanayad 2 days ago 0 replies      
Same issue appears in Chrome as well. Got the site they linked up to ~2.4GB before the browser stopped responding.
hokkos 2 days ago 1 reply      
LastPass is a victim of the same fate.
ColbieBryan 2 days ago 0 replies      
Editing hosts files, installing Privoxy or GlimmerBlocker, using Ghostery, Bluhell Firewall - I've tried all of these suggestions and unfortunately ABP is the only way to go if blocking all unwanted pop-ups is a priority.
vladtaltos 1 day ago 0 replies      
If 19 million people use it, why hasn't it become a feature Firefox offers itself? Doesn't this mean people want some option of blocking ads? I'm guessing a native-integrated version would require a lot less memory...
nly 2 days ago 1 reply      
So is there any insight as to why? My guess, without looking at the code, would be the regex engine allocating millions of NFA graphs.
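The per-rule overhead guessed at above can be sketched roughly; this is a hypothetical illustration in Python (the patterns are placeholders, not real ABP filter rules, and Python's regex internals differ from Gecko's):

```python
import re
import sys

# Sketch of why thousands of filter rules get expensive: each rule compiled
# to its own regex carries its own object (and its own compiled program).
# Placeholder patterns, not real ABP filters.
patterns = [re.compile(rf"https?://ads{i}\.example\.com/") for i in range(1000)]

# Shallow object size only; the compiled program data adds more on top.
overhead = sum(sys.getsizeof(p) for p in patterns)
print(f"{len(patterns)} compiled patterns, over {overhead} bytes before program data")

# A single combined alternation amortizes that per-object cost:
combined = re.compile("|".join(rf"https?://ads{i}\.example\.com/" for i in range(1000)))
assert combined.search("https://ads42.example.com/banner.js")
```

Whether a blocker pays this cost once per rule or once per combined automaton (and once per process, or per page) is exactly the kind of architectural detail that would explain the numbers in the article.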
Donzo 2 days ago 0 replies      
You guys: you've got to stop using this.

You're missing important messages from sponsors.

Introducing the WebKit FTL JIT webkit.org
453 points by panic  3 days ago   95 comments top 18
hosay123 3 days ago 4 replies      

    > Profile-driven compilation implies that we might invoke an optimizing
    > compiler while the function is running and we may want to transfer the
    > function's execution into optimized code in the middle of a loop; to our
    > knowledge the FTL is the first compiler to do on-stack-replacement for
    > hot-loop transfer into LLVM-compiled code.
Reading this practically made my hair stand on end; it is one hell of a technical feat, especially considering they had no ability to pre-plan so that both LLVM and their previous engine would maintain identical stack layouts in their designs. It's really insane they got this to work at all.

I was reminded of the old story about a search engine that tried to supplant JS with a Java bytecode-clone because at the time it was widely believed Javascript had reached its evolutionary limit. How times change!

nadav256 3 days ago 2 replies      
This project is such a great technological achievement for both the WebKit and LLVM communities. There are so many 'first times' in this project. This is the first time profile-guided information is used inside the LLVM _JIT_. This is the first time the LLVM infrastructure supported self-modifying code. This is one of the very few successful projects that used LLVM to accelerate a dynamic language. This is the first time WebKit integrated their runtime (JITs and garbage collector) with an external JIT. This is the first JavaScript implementation that has advanced features such as auto-vectorization. Congrats guys!
rayiner 3 days ago 1 reply      
The Bartlett mostly-copying collector is just a really neat design. Even if your compiler gives you precise stack maps, conservative root scavenging still has the major advantage of giving the optimizer the most freedom. It's the basis of SBCL's collector, which is probably 20 years old now. Good to see it still has legs.

This is the patch point intrinsic documentation: http://llvm.org/docs/StackMaps.html. This is a really significant addition to LLVM, because it opens up a whole world of speculative optimizations, even in static languages. Java, for example, suffers on LLVM for want of an effective way to support optimistic devirtualization.

tomp 3 days ago 1 reply      
They use a conservative GC, which I understand, as they were using it before FTL JIT, and it required minimal changes for integration with LLVM-based JIT. However, in the blog post, they mention several times that they wanted to avoid stack maps because that would require spilling pointers from registers to stack, which they say is undesirable for performance reasons.

I wonder, however, how slow register spilling really is. I will test it when I have time, but logically, it shouldn't take up much time. Under the x64 ABI, 6 registers are used for argument passing [1], and the rest of the arguments are passed on the stack. So, when the runtime calls into GC functions, all but at most 6 pointers are already in the stack, at (in theory) predictable locations. Those 6 registers can be pushed to stack in 6 instructions that take up 8 bytes [2], so the impact on the code size should be minimal, and performance is probably also much faster than most other memory accesses. Furthermore, both OCaml and Haskell use register spilling, and while not quite at C-like speeds, they are mostly faster than JS engines and probably also faster than FTL JIT.

Of course, predicting the stack map after LLVM finishes its optimisations is another thing entirely, but I sincerely hope the developers implement it. EDIT: it seems that LLVM includes some features [3] that allow one to create a stack map, though I wonder if it can be made as efficient as the GHC stack map, which is simply a bitmap/pointer in each stack frame, identifying which words in the frame are pointers and which aren't.

[1] http://en.wikipedia.org/wiki/X86_calling_conventions#x86-64_...

[2] tested using https://defuse.ca/online-x86-assembler.htm#disassembly

[3] http://llvm.org/docs/GarbageCollection.html
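The byte-count arithmetic above can be sanity-checked; a quick sketch (x86-64 `push` encodes in one byte for the legacy registers, e.g. 0x57 for rdi, and in two bytes for r8-r15 because of the REX.B prefix, e.g. 0x41 0x50 for r8):

```python
# Pushing the six System V argument registers: six instructions, eight bytes,
# matching the comment's claim. One byte per legacy register, two per r8/r9.
push_sizes = {"rdi": 1, "rsi": 1, "rdx": 1, "rcx": 1, "r8": 2, "r9": 2}

assert len(push_sizes) == 6           # six instructions
assert sum(push_sizes.values()) == 8  # eight bytes of code total
```

So the static code-size cost of a spill sequence really is tiny; the open question in the comment is the dynamic cost of the extra memory traffic, not the encoding size.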

tambourine_man 3 days ago 1 reply      
I'm so glad to see that Webkit is not dead after the Blink fork. I still use Safari as my main browser, but its developer tools and optimizing compiler lag behind.
simcop2387 3 days ago 2 replies      
I wonder how difficult it would be to take the hints from asm.js and, after validating that the code meets the contracts it provides, push it all through the FTL JIT to get a huge speed boost. With the ability to do hot transfers into LLVM-compiled code it should be possible to do without any real noticeable issues to the user.
InTheArena 3 days ago 2 replies      
This is a much bigger deal than people are giving credit for, because of the other thing that Apple uses LLVM for. It's the primary compiler for Objective-C, and thus Cocoa (Mac apps) and CocoaTouch (iOS apps) as well. If Apple has JavaScript compiling on-the-fly at this speed, this also means that it would be pretty trivial to expose the Objective-C runtime to JavaScript, and mix and match C & JavaScript code.

This is going to be a very very big deal.

leeoniya 3 days ago 0 replies      
> Note that the FTL isn't special-casing for asm.js by recognizing the "use asm" pragma. All of the performance is from the DFG's type inference and LLVM's low-level optimizing power.

doesnt "use asm" simply skip the initial profiling tiers that gather type stats etc? most of the benefit of compiling to asm.js comes from fast/explicit type coersion.

ksec 2 days ago 1 reply      
I am glad WebKit is thriving. I was worried that the Blink fork with Google and Opera would mean WebKit gets no love.

Hopefully the next Safari on iOS and OS X will get many more improvements.

aaronbrethorst 3 days ago 2 replies      

    dubbed the FTL - short for "Fourth Tier LLVM"
Is it still a backronym if it redefines an existing acronym? (even a fictional one?)


lobster_johnson 3 days ago 2 replies      
Is this architecture generic enough that one could, say, build a Ruby compiler on top of it?

I imagine even writing a Ruby -> JS transpiler that used the WebKit VM would provide a speedup, similar to how JRuby works on the JVM, but native compilation would be even better.

cromwellian 3 days ago 3 replies      
Would be interesting to see Octane numbers.
jwarren 2 days ago 0 replies      
I adore threads like this. They really bring to focus exactly how much more I have to learn.
otikik 2 days ago 0 replies      
It's called FTL, but I am not actually sure it's Faster Than LuaJIT.
nivertech 3 days ago 2 replies      
no benchmarks versus V8?
jongraehl 2 days ago 0 replies      
wonder how this compares to node - specifically https://github.com/rogerwang/node-webkit
harichinnan 3 days ago 1 reply      
ELI5 version please?
Theodores 3 days ago 1 reply      
Much like how hairdressers don't necessarily have the most promising haircuts, it would seem that companies that make the finest web browsers don't necessarily have the greatest web pages! I don't think there is an ounce of JavaScript on the WebKit website, yet that article goes waaay over the heads of mere mortals on mega-speedy JavaScript.
Is it possible to apply CSS to half of a character? stackoverflow.com
433 points by gioele  4 days ago   85 comments top 17
habosa 4 days ago 5 replies      
I've never seen such polished and thorough answers to a question like this. One answer (not even the accepted answer) made a plugin and a beautiful website to go along with it: http://emisfera.github.io/Splitchar.js/

Pretty amazing that our tools are getting so good that someone can quickly whip up an open-source plugin and splashy, hosted website for a SO answer.

metastew 4 days ago 3 replies      
I wonder if this designer is the one behind the recently unveiled 'Halifax' logo? The X looks remarkably similar.

Link for visual evidence: https://twitter.com/PaulRPalmeter/status/456165443363827712/...

vinkelhake 4 days ago 1 reply      
Those are some imaginative solutions. One problem[1] with drawing over half the character is anti-aliasing: the border gets a blend of both colors.

[1] http://i.imgur.com/5KspGyc.png

wymy 4 days ago 5 replies      
The 'why' should not matter. Who really cares whether it should be done? or why they would want to do it?

If someone has an interesting problem, let's try and figure out a way to do it. Usually the why comes up along the way, but it isn't relevant.

Thankfully, some great folks stepped in and gave quality responses.

JacksonGariety 4 days ago 3 replies      
So this is just the ::before pseudo-element? I feel like I'm missing something.
m1117 4 days ago 1 reply      
That's cheating :) CSS is applied to the whole character... I'm sure you could avoid the JavaScript by using content:attr(letter) in :before and :after, like <span letter="X" class="half-red-half-green"></span>
MrQuincle 4 days ago 1 reply      
And now apply it to half a character diagonally. :-)
origamimissile 4 days ago 1 reply      
http://jsfiddle.net/CL82F/14/ did it with just CSS pseudoselectors
frik 4 days ago 1 reply      
relevant code snippet:

  .halfStyle:before {
    width: 50%;
    color: #f00;
  }

kbart 2 days ago 0 replies      
My eyes hurt just seeing these examples. I hope this technique will not get much attention outside of SO and HN.
brianbarker 4 days ago 1 reply      
It's cool seeing the solution for this, but I never would have imagined such a random request.

Now, go make striped characters.

paulcnichols 4 days ago 0 replies      
Not looking forward to when this catches on.
BorisMelnik 4 days ago 0 replies      
thinking of a ton of new use cases for this example. would be nice to see this implemented in pure CSS in the next revision perhaps. is that even possible?
joeheyming 3 days ago 0 replies      
yes, but why?
frozenport 4 days ago 5 replies      
This looks repugnant. Just because it can be done, doesn't mean you should do it! :-)
ape4 4 days ago 3 replies      
Just because you can do something...
omegote 4 days ago 1 reply      
It's sad to see that these kinds of questions gather so much attention, while other questions closer to the real world receive NONE. StackOverflow has reached its max hipster level.
FSF condemns partnership between Mozilla and Adobe to support DRM fsf.org
407 points by mikegerwitz  1 day ago   276 comments top 27
jeswin 1 day ago 11 replies      
If we should train our guns somewhere it should be at the W3C; the guardians of web standards. W3C shouldn't have legitimized this feature by bringing it into standards discussions. The media companies would have had to comply eventually. They had no future without distribution over the internet. Now of course, they have hope.

Mozilla had no chance once Google, MS, Apple and everybody else decided to support EME. Most users don't care if they fought for open standards. They are probably just going to say that Firefox sucks.

If you ask me, Mozilla could be the most important software company in the world. The stuff they are building today is fundamental to an open internet for the future. It is important that they stay healthy for what lies ahead.

cs702 1 day ago 2 replies      
The key insight for me is this one: "Popularity is not an end in itself. This is especially true for the Mozilla Foundation, a nonprofit with an ethical mission."

Even though non-profit organizations like Mozilla do not seek to maximize financial gain (by definition), they often seek to maximize their relevance in the world. As a result, they ARE susceptible to corruption: most if not all are willing to "compromise" -- that is, sacrifice their mission and values -- in order to remain "important" in the eyes of society.

The folks running Mozilla are sacrificing the organization's mission and values because they're afraid of losing market share. They do not want Firefox to become a niche platform.

Pxtl 1 day ago 3 replies      
Honestly, I think the w3c should've just told Netflix et al to get the heck out of the browser.

Really, that's what this is all about... but those companies are already building fully native applications for every platform other than win32+Web. Telling them to go make a native application (or keep dealing with Silverlight/Flash) for that one last platform would be completely appropriate.

The world of software has changed - now we have major companies building applications for multiple different platforms instead of "just windows" or "just web". The web doesn't need to do everything.

It doesn't need to do this.

valarauca1 1 day ago 4 replies      
The FSF refuses to compromise their principles. They refuse to negotiate. I respect them for that; morally it's nice to have a fixed point that holds the line and refuses to change. It gives you a benchmark against which to judge yourself, even if sometimes you think the old guard ate a bit too much paste.
Rusky 1 day ago 3 replies      
Yes, it's disappointing that Mozilla is adding DRM to Firefox. No, that does not mean they hold "misguided fears about loss of browser marketshare".

People have the freedom to disagree with you, FSF. Just because they do doesn't make them misguided, especially on a future prediction.

How is this any different from the Flash/Silverlight plugins we already have?

blueskin_ 1 day ago 4 replies      
The FSF would have good points, but then they ruin them with things like "or the issues that inevitably arise when proprietary software is installed on a user's computer.". Yes, DRM is bad, but not everything has or needs to be open source to treat its users ethically, and some people do need to make a living from their software.

Not everything needs to be GPL to respect people's rights to do what they want with something they bought, not everything needs to be open source just because they like it that way, and above all, people should have a right to choose to install whatever they want, and distros should have the same right to choose to tell the user about closed source software when it would be helpful to them. If the end user didn't want to hear that, they can either ignore it, or use a FSF-endorsed linux distro like Trisquel. The fact that so few people do shows to me how most people are completely fine with having the ability to install what they want.

Freedom may include giving others the freedom to do things you personally don't like, but the FSF tends to think a single, ironically restricted, set of freedoms matching their philosophy is all that everyone needs.

sanxiyn 1 day ago 0 replies      
Mozilla is Serving Users. A great Orwellian phrasing.


couchand 1 day ago 3 replies      
Does anybody know what Brendan Eich's stance on DRM is? I can't help but wonder if this would have turned out differently had he still been in charge.

Eich helped found Mozilla back when it was just contributions to Netscape, and then helped break off as a fully-fledged project. My guess is that he understood the loss here. On the other hand, Gal wrote PDF.js which replaced the proprietary PDF reader, so you'd expect him to get it, too.

frik 1 day ago 1 reply      

  Write to Mozilla CTO Andreas Gal and let him know that you oppose DRM.

jpadkins 1 day ago 2 replies      
Has Mozilla really changed its policy? At a certain abstraction level, they already had a plugin system that allowed for DRM binaries embedded in the browser. So what if the plugin system is a bit different?

You could already watch DRM netflix in firefox. If they were going from no-DRM plugin policy to allowing DRM in plugins, that would be cause for uproar. But Mozilla has always allowed DRM via plugins.

general_failure 1 day ago 0 replies      
Mozilla is very much trying to be a technology company these days with profit in mind (realize that there are two Mozillas: one a nonprofit org and another a for-profit inc).

They are not like the fsf. They care about user share, market and all that. Idealists cannot afford to think that way.

lazyjones 18 hours ago 0 replies      
Can we please ignore the W3C and start all over with a new "HTML" definition without all the vendor-specific and compatibility clutter, without loose parsing and redundant "featuritis" tags and attributes?

Build a (much simpler) GPL browser for this new "HTML" syntax and implement compatibility plugins for legacy browsers (perhaps server-side fallback solutions to simple HTML). Put the FSF and EFF in a strong position for future development to ensure we can keep this technology isolated from the corporate interests and patent trolls.

Any takers?

mikhailt 1 day ago 2 replies      
I can't find the information to answer my question, so don't downvote me because it's a stupid question. I admit it is, I just want to know for curiosity.

I don't understand why Adobe has to be used here. Why didn't Mozilla partner with Apple, Google, and Opera on a standard implementation for this? After that's done, then Mozilla can try to sneak in one last question for all partners: can we do it better than this?

stcredzero 1 day ago 1 reply      
If there was some way we could verify DRM was "what it says on the tin," it could be a tremendous tool for ensuring our privacy and freedom online. When big companies DRM content, it limits our freedom, but if we could DRM our own data, it limits big company and government abuses.

However, there is admittedly a big caveat here. I don't know of a workable way to know that DRM is "what it says on the tin." Big business and governments could place back doors into such mechanisms, which would put us in an even worse position than where we are now.

sutro 1 day ago 3 replies      
Here's hoping that a viable non-Mozilla group emerges that will offer a DRM-disabled version of Firefox, one that is addon-compatible and which pulls in all non-DRM-related upstream changes. Mozilla has lost my support over this decision.
edwintorok 1 day ago 1 reply      
"Use a version of Firefox without the EME code".Well I already use a fork of Firefox called Iceweasel, so I'm curious what Debian will decide to do with Iceweasel.
belorn 1 day ago 0 replies      
Mozilla could, and in my opinion should, do much more in order to live up to their fundamental principles and stated goals. They could inform the user about each website that uses DRM without preventing the user from viewing the content.

It's not even a revolutionary concept, as they already require a click-to-accept with self-signed certificates. It puts the responsibility on the website if the black box called DRM causes problems, locks up, or causes general havoc for the user. It highlights that the website is demanding to take control of the user's device, and gives the user an option to say no.

It is easy to speak about fundamental principles in PR announcements, but code speaks louder. The only bright spot is that if Mozilla doesn't do more for the users, add-ons and forks will try to carry the principles for them.

thefreeman 1 day ago 1 reply      
I really don't know much about the specifics of this DRM proposition, so I accept that my assumptions may be invalid. But just based on the history of DRM and the internet... does anyone really doubt that someone will be able to defeat this DRM?
kumar303 1 day ago 1 reply      
"The decision compromises important principles in order to alleviate misguided fears about loss of browser marketshare"

misguided, as in, Firefox wants people to actually use its browser? I'm seriously surprised at some of these idealists failing to understand that normal people just want to watch House of Cards (or whatever) and that's pretty much it. Mozilla can't turn their back on those users.

ZenoArrow 1 day ago 0 replies      
Why are people attacking Mozilla? Go after the real culprits in this fiasco (you know who they are), not the reluctant consenter. Kick up a fuss with users of the competing browsers. It's still possible to salvage something from this.
budu3 1 day ago 2 replies      
A very sad day for the Open Web. What can we as users do?
jasonlotito 1 day ago 0 replies      
I don't see why the FSF is up in arms about this. Mozilla is essentially doing the same thing that the FSF does with the GNU C library by releasing it under the LGPL.

They even spell out the case when they should adopt the lesser License[0], despite the fact that it goes against the FSF's core values and they advise not using it[1].

At the end of the day, I see this as Mozilla's LGPL.

http://www.gnu.org/licenses/why-not-lgpl.html

0. The most common case is when a free library's features are readily available for proprietary software through other alternative libraries.

1. But we should not listen to these temptations, because we can achieve much more if we stand together.

CmonDev 1 day ago 0 replies      
"Open" web.
bttf 1 day ago 0 replies      
A victory for the giants.
camus2 1 day ago 0 replies      
Don't worry, Mozilla already betrayed Adobe once with the whole Tamarin/ES4 fiasco; with a little luck they'll change their mind for the best this time too.
judk 1 day ago 0 replies      
I await the resignation of Mozilla's CEO, who clearly has shown an inability to represent the community on this issue of human freedoms that form the cornerstone of the Mozilla Foundation.
judah 1 day ago 0 replies      
Between this and forcing Eich out of a job over a political issue, I've lost a lot of love for Mozilla in the last month.
Alien creator H.R. Giger is dead swissinfo.ch
394 points by lox  3 days ago   67 comments top 24
tluyben2 3 days ago 2 replies      
RIP. He was a nice guy and a great artist, but in his last years he was quite disabled by (I think) a stroke and it was almost impossible to talk to him. My friend used to visit him to discuss work they did together and I went along one time; he talked with him a few times after that stroke but it was never the same. I was a big fan right after I saw Alien in the early 80s, and it was nice to meet him while he was still producing art.
ThePhysicist 3 days ago 1 reply      
That's really sad. I just recently watched "Jodorowsky's Dune" (http://jodorowskysdune.com/), a documentary on the planned but never realized "Dune" movie by Alejandro Jodorowsky, for which H.R. Giger did a lot of artwork and which features an interview with him in his home in Switzerland. If you look at the designs that Giger did for this movie, you can already see the "Alien" style all over it.
mgw 3 days ago 4 replies      
If you're ever in Switzerland and a fan of H.R. Giger, you should check out his great museum in Gruyère. [1] Additionally, the idyllic mountain village is well worth a visit on its own.

[1] http://www.hrgigermuseum.com/

elecengin 3 days ago 2 replies      
My favorite H.R. Giger story was from when he met the rock band Emerson Lake and Palmer and agreed to do the album art for Brain Salad Surgery. The album name - innuendo for a sex act - inspired an equally sexual album cover. [1] The original image was a futuristic woman with a penis covering her mouth.

The band loved it, but the record company refused to release the album. The band, placed in a difficult position, petitioned Giger to adjust the artwork. Giger refused to bow to the band's and record company's demands, and in the end the record company had to hire an airbrush artist to remove it as much as possible... leading to the "shaft of light" along the neck.

[1] http://images.coveralia.com/audio/e/Emerson,_Lake_y_Palmer-B... SFW)

ChuckMcM 3 days ago 1 reply      
I got to meet the artist when his 'Alien' creation was on display at the California Science Center in Exposition Park. They had a number of sets from the movie on display and the full-size creature from which they built the CGI and other latex models. I remember thinking, "Wow, this guy seems completely normal for someone who has the ability to envision something so twisted." It is a rare gift to be able to think about impossible things.
sbirchall 3 days ago 4 replies      
A truly unique talent. In memory, you should check out Aphex Twin's "Windowlicker", directed by Chris Cunningham (HN will probably be most familiar with Björk's "All is Full of Love" music video). A whole host of talent came together there to make one of the most fucked up things you'll ever witness. Suffice it to say a big red NSFW warning goes out on this one!



etfb 3 days ago 0 replies      
I'm amazed he wasn't dead years ago. Seventy-four is a good twenty years younger than I kind of assumed he would be. He was only forty when he did the design for Alien? I know his artistic style was already famous before that, meaning he must have been a wee tacker when he started out. Amazing.

Also: vale. A talented artist with a distinctive voice.

JabavuAdams 3 days ago 0 replies      
So long, and thanks for all the nightmares.
coolandsmartrr 3 days ago 0 replies      
I was always haunted and fascinated by Giger's imagination. What first came to mind was the album cover for Emerson, Lake and Palmer's "Brain Salad Surgery". By synthesizing Thanatos and Eros, both primordial in human nature, Giger created intrinsically appealing artworks. A great loss to the world.
textminer 3 days ago 0 replies      
For those interested in his work, I really recommend viewing the recent documentary Jodorowsky's Dune, a failed film project Giger and several other proto-luminaries worked on (inspiring much of the iconic imagery in Alien, Star Wars, and Indiana Jones). Giger appears throughout the documentary. I believe it's still playing in the Bay Area.
mysteriousllama 3 days ago 1 reply      
I remember picking up an Omni magazine when I was a prepubescent tadpole. The cover had this amazing art that caught my eye. Guess who had drawn it?

Only later did I read it and become fascinated with science. Guess what I do now?

It's amazing how much this man did for the world through his work. Very influential to many people in many ways.

He will be missed.

joel_perl_prog 3 days ago 1 reply      
What a great genius. What a great loss.

Celebrate his life today by watching Alien!

backwardm 3 days ago 0 replies      
I know this won't add much to the discussion, but I really hope he designed his own casket in his signature style; that would be really fun to see and a great way to show one last piece of artwork.
mililani 3 days ago 1 reply      
Wow. For some reason, I thought he was dead a long time ago. RIP
wiz21 3 days ago 1 reply      
Although well known for Alien, Giger actually made tons of other stuff (including several bars!). Here's a good book about him that I've read:


igorgue 3 days ago 0 replies      
Sad to see him go, great artist!

If you have a chance, check "Alejandro Jodorowsky's Dune". They have one of his last interviews talking about how he got started and how that failed movie was the seed for his ideas for Alien with Jean Giraud.

logfromblammo 3 days ago 0 replies      
Is it wrong of me to hope that he designed his own casket and mausoleum?
doctornemo 3 days ago 0 replies      
Ah, what a loss.

I remember being astonished by Giger's vision in Alien. For years I hunted down posters, calendars, and books, which weren't always available or affordable. Like others here, I relished the Dark Seed game for its tribute to Giger.

This takes me back to an earlier stage of my life, and makes me very sad. What a vision!

outworlder 3 days ago 0 replies      
He deserves a black bar.
Cowicide 3 days ago 0 replies      
Ironically, his futuristic work will be incredibly influential far into the future. Terrible news.

RIP H.R. Giger


ihenriksen 3 days ago 0 replies      
Very sad. I was a huge fan even before the Alien movies.
bussiere 3 days ago 0 replies      
Darkseed was a shock when I was young. I still remember the game. RIP
camus2 3 days ago 0 replies      
great artist!
DENIKUTA 3 days ago 0 replies      
Up The great site
Woman's cancer killed by measles virus in trial washingtonpost.com
389 points by arunpjohny  1 day ago   131 comments top 19
zaroth 1 day ago 8 replies      
Oncolytic virus, or OV. Pushing science fiction. But did the Washington Post forget to mention the 2nd patient they released data on, who didn't have any kind of prolonged response?

The paper presents 2 cases, selected because they were the first 2 cases to be tested at maximum viral load. There are additional people in the trial, and they will release full results once they are available.

It included two slides showing before/after blood levels and imaging. They talk about how they modified the virus to emit a tracking signal, and how they modified it to target the cancer cells. Really, really mind blowing and impressive work. I would love a tour of that lab.

These are end-stage patients for whom everything else has stopped working. One of the patients had already undergone several experimental treatments. There is some really exciting research going on for MM (multiple myeloma) treatments, and maybe even cures.

I think this is one example of the free market working well. Typical MM treatment runs about $60k/year, and with recent developments, patients are living 10+ years. The total number of MM patients is increasing, partly because the disease is becoming more prevalent, but mostly because people are living so much longer with MM. In short, it's a large and growing market. But it's not a cancer you can treat and have it go into remission: you get on treatment, and you stay on it and keep those levels down. The typical treatment is biweekly therapy.

But these OVs are one-time deals, so a single-dose treatment is a very interesting alternative. The only problem is, MM is extremely resilient, and the cells are everywhere. It's so hard to eradicate that, unless the OV is a cure, it's just another tool in the box to manage MM and extend lives.

Weird, the PDF of the actual paper was freely downloadable a couple hours ago, but now it seems the paywall is up? http://www.sciencedirect.com/science/article/pii/S0025619614...

lifeisstillgood 1 day ago 6 replies      
OK, this is a silly question but I have to ask

Is there any likelihood that measles, or a similar virus we have since held at bay with vaccination, was actively fighting cancer 200 years ago? That would bear on the incidence rates that have apparently gone up since, and that get dismissed with "well, we weren't dying of cancer because we were dying of $INSERT_DISEASE_HERE"

bambax 1 day ago 7 replies      
Is this real? Can anyone with actual knowledge of the history of fighting cancer with modified virus provide input?

From my completely uninformed point of view it seems that if it's real, it changes everything...

nabla9 1 day ago 1 reply      
Using virus infections against cancer has a long history.


Gatsky 1 day ago 0 replies      
Phase III study of virotherapy here (not published yet):



Not, it would seem, a panacea. The approach is interesting in that aspects of cancer biology make the cells more vulnerable to viral infection, e.g. suppressed interferon production. It's also a possible platform for immunotherapy, i.e. getting the immune system to attack a virally infected cancer cell might wake up a more generalised immune response. But medical-grade virus is expensive to produce, and it's hard to see how a viral infection could eradicate 100% of the billions of cancer cells present in advanced disease. Also, humans become immune to viruses after infection.

darkFunction 1 day ago 9 replies      
I am curious about the timescale of treatments for terminal diseases, and how trials can be morally randomised.

It seems to me that a very high percentage of people would opt for a potentially fatal, completely untested course of action as opposed to imminent death. So who gets to try these treatments, who tells dying patients they are not allowed them, and is there a black market or large amounts of money changing hands for experimental procedures?

Ekianjo in this thread quoted 7 years at the earliest for a treatment to become available. Surely with hundreds of thousands of desperate, dying, last chance sufferers, it is better to go to extreme measures and offer the most promising yet dangerous treatments to everyone. Is it simply a side effect of the way pharmaceutical companies have to do business? If so, it's sad, and maybe a larger share of cancer research money should be put towards 'out there' attempts to cure terminal patients.

Genuinely curious.

baldfat 1 day ago 2 replies      
Cancer = the worst word: a multitude of diseases, most of which are not related to each other except by uncontrolled cell growth. Wish they could just not use it anymore.

Dad of a child who died from cancer here, and the word cancer doesn't mean squat; you need to know what type of cancer it is. Is it sarcoma or what? http://www.cancer.gov/cancertopics/types/commoncancers

cromulent 1 day ago 0 replies      
The immune system is so complex that this is difficult to understand. I imagine there is some link to the Abscopal effect, where radiation treatment of one tumor in one part of the body kills the other metastatic tumors.


NKCSS 1 day ago 2 replies      
I am always happy to see advances in treating cancer. Lost my dad to cancer a few years ago, it would be great if people in the future have a better chance. I know there will always be new diseases, but nipping this one in the bud would be awesome.
anon4 1 day ago 1 reply      
Currently, there are no do-overs since the body's immune system will recognize the virus and attack it

Can't they circumvent this by injecting more of the virus than the body can fight at once? Though it's starting to sound like regular expressions...

ck425 1 day ago 1 reply      
From what the article says I think the original virus works by attacking tumors which then explode and spread the virus all around the body. If they use a version of the virus that is safe or that the person is immune to then it would target the cancer and cause it to explode but afterwards be harmless.

That's just the impression I get from the article. I know literally nothing about this, so if anyone with actual knowledge can explain properly, please do!

Stately 1 day ago 2 replies      
This is too similar to how I Am Legend begins.
j_s 1 day ago 0 replies      
Thought this was the same as discussed 2 weeks ago, but that was polio going after a brain tumor: https://news.ycombinator.com/item?id=7686853
majkinetor 1 day ago 0 replies      
Nobody has underlined that the woman got a very high temperature. It could be the reason for the results. Coley's toxins were used a century ago for that effect. In fact, the Sensei Mirai clinic in Japan recently reported remission in ~370 terminal cancer patients using a combination of immunotherapy and high-dose vitamin C & D, along with thermotherapy. Cancer patients usually haven't had a fever for a long time before being diagnosed.


kevin818 1 day ago 3 replies      
Anyone else worried that using virotherapy may result in those viruses building up resistance, similar to what's happening now with antibiotics and superbugs?
programmer_dude 1 day ago 1 reply      
Yay for science! Though I am not quite sure why the virus only attacks cancerous cells. Has it been modified to identify cancer cells?
zenbuzzfizz 1 day ago 0 replies      
I don't see any discussion of the M-count (monoclonal protein) response. My understanding is this is the key measure of myeloma.

It would be really great if this turns out to be a real treatment option because, harsh as it appears from the article, this woman's treatment sounds way easier than the current therapies for myeloma.

jameshk 1 day ago 0 replies      
This is progress! Happy to see some success!
svyft 1 day ago 1 reply      
Some Indian ayurveda recipes use poison to cure poison.
Maze Tree ocks.org
389 points by yaph  2 days ago   38 comments top 13
couchand 2 days ago 0 replies      
One of the coolest things about this gist is how little code is used for the animation; basically just these lines:

    d3.selectAll(nodes).transition()
        .duration(2500)
        .delay(function() { return this.depth * 50; })
        .ease("quad-in-out")
        .tween("position", function() {
          var d = this, i = d3.interpolate([d[0], d[1]], [d.y, d.x]);
          return function(t) { var p = i(t); d.t = t; d[0] = p[0]; d[1] = p[1]; };
        });
The tree layout auto-magically gives every node a depth property, we delay the transition by an amount proportional to the depth, and then over the course of two-and-a-half seconds tween the line segment into the new position. Simple and effective. The hard part is generating the maze.
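For that hard part, here is a minimal sketch of one common generator, randomized depth-first search (the "recursive backtracker"), which carves a spanning tree over a grid of cells, the same kind of structure the tree layout then animates. The cell indexing and parent-array output are illustrative assumptions, not code from the gist:

```javascript
// Randomized depth-first search ("recursive backtracker") maze generator.
// Returns, for each cell, the cell it was carved from, i.e. a spanning
// tree of the grid; this is the structure the tree layout animates.
function generateMaze(width, height) {
  var parent = new Array(width * height).fill(-1);
  var visited = new Array(width * height).fill(false);
  var stack = [0];           // start carving from cell 0 (top-left)
  visited[0] = true;

  while (stack.length > 0) {
    var cell = stack[stack.length - 1];
    var x = cell % width;
    var y = Math.floor(cell / width);

    // Collect unvisited orthogonal neighbours.
    var next = [];
    if (x > 0 && !visited[cell - 1]) next.push(cell - 1);
    if (x < width - 1 && !visited[cell + 1]) next.push(cell + 1);
    if (y > 0 && !visited[cell - width]) next.push(cell - width);
    if (y < height - 1 && !visited[cell + width]) next.push(cell + width);

    if (next.length === 0) {
      stack.pop();           // dead end: backtrack
    } else {
      var chosen = next[Math.floor(Math.random() * next.length)];
      parent[chosen] = cell; // knock down the wall between cell and chosen
      visited[chosen] = true;
      stack.push(chosen);
    }
  }
  return parent;             // parent[i] === -1 only for the root cell
}
```

Each non-root entry of `parent` is one carved passage, so the array doubles as both the list of removed walls and the tree handed to the layout.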

roberthahn 2 days ago 3 replies      
It's not clear to me whether the tree is mapping the paths of the maze or the walls - the transformation makes it appear as though the walls are being mapped but that doesn't make sense.

I wonder if this works backwards - given a tree could you construct a maze? efficiently?

d0m 2 days ago 0 replies      
And that is how you get undergrad students interested in graph theory.
pmontra 1 day ago 0 replies      
I did something similar more than 20 years ago using Prim's algorithm for the minimal spanning tree. Spanning trees are not the most efficient way to generate a maze, but I was studying CS and the maze generation was a good incentive to actually translate the textbook pseudocode into actual C (without the ++). I didn't do the fancy tree animation, but you'll excuse me, as all I had was an 80x25 ASCII terminal, so it was probably a 40x12 maze :-)

However, I added a nethack-style @ character and the hjkl keys to move from the entrance to the exit in the shortest time, plus a leaderboard shared across all the users of our Unix system. Our terminals had a repeat key (i.e. keys didn't autorepeat; you had to press the repeat key as if it were a shift to make a key repeat) and that added to the dexterity required to go through the maze quickly. The fastest time was in the 5-6 second range. I'm afraid the source has been lost on some tape long ago.
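Since the original C is long gone, a rough JavaScript sketch of the randomized-Prim approach described above; the grid representation and cell indexing are my own assumptions:

```javascript
// Randomized Prim's algorithm: grow the maze's spanning tree by repeatedly
// carving to a random "frontier" cell (an unvisited cell next to the tree).
function primMaze(width, height) {
  function neighbours(cell) {
    var x = cell % width, y = Math.floor(cell / width), out = [];
    if (x > 0) out.push(cell - 1);
    if (x < width - 1) out.push(cell + 1);
    if (y > 0) out.push(cell - width);
    if (y < height - 1) out.push(cell + width);
    return out;
  }

  var parent = new Array(width * height).fill(-1);
  var inTree = new Array(width * height).fill(false);
  inTree[0] = true;
  // frontier holds [candidateCell, treeCellItWouldConnectTo] pairs
  var frontier = neighbours(0).map(function (n) { return [n, 0]; });

  while (frontier.length > 0) {
    var i = Math.floor(Math.random() * frontier.length);
    var edge = frontier.splice(i, 1)[0];
    var cell = edge[0], from = edge[1];
    if (inTree[cell]) continue;  // already reached via another edge
    inTree[cell] = true;
    parent[cell] = from;         // carve the wall between from and cell
    neighbours(cell).forEach(function (n) {
      if (!inTree[n]) frontier.push([n, cell]);
    });
  }
  return parent;                 // spanning tree rooted at cell 0
}
```

Unlike depth-first search, Prim grows the tree from a frontier, which tends to produce mazes with many short dead ends rather than long winding corridors.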
ShardPhoenix 1 day ago 1 reply      
Looks cool, but I'm a bit confused about the colorized examples. It seems like in some of the examples, there are blocks that are colored red that are further away in maze-distance than other blocks that are green, etc. Do the colors roll-over after a certain distance?
kin 2 days ago 1 reply      
the things people visualize w/ D3 never cease to amaze me
dazmax 1 day ago 0 replies      
I'd like to see the tree starting from the other end of the maze too.
ars 2 days ago 2 replies      
Does this maze have a single unique path through it?
xabi 1 day ago 0 replies      
Old simple maze generator algorithms: http://imgur.com/a/5miDZ
Donch 2 days ago 0 replies      
Frankly, that is true art. Inspired.
soheil 1 day ago 0 replies      
Justen 2 days ago 0 replies      
That animation is really freakin' sweet
icefox 2 days ago 1 reply      
It would be even cooler if rather than it being all white it was colorized.
iMessage purgatory adampash.com
343 points by mortenjorck  3 days ago   170 comments top 49
jc4p 3 days ago 5 replies      
I went through this last month when I switched to the Nexus 5; I had no clue I wasn't getting messages until someone with an iPhone tweeted at me asking why I was ignoring them.

However, all it took from me was a call to Apple's customer service, I told them I had just switched off my iPhone and no longer got texts from people with iMessage and they immediately sent me to a tech that fixed the problem for me.

Have you been explaining it correctly when you call? All I said was "I had an iPhone until last week, switched to another phone but I'm still registered for iMessage"

Edit: According to my phone I called 1-800-692-7753 (Which is just 1-800-MY-APPLE) and my call took 8 mins 25 seconds total. Not too bad of an experience.

saurik 3 days ago 1 reply      
What is interesting to me always is that my experience trying to send iMessages to people is usually the opposite: if someone's phone runs out of batteries or they start a phone call (on CDMA, which can't do voice and data simultaneously) I nearly instantly am forced to send them text messages. I also have found the "Delivered" notices very reliable: AFAIK they require a device to actually receive the message. Note however that it is "a device": if you have iMessage associated with another random device, it might be receiving your messages for you; I would be very interested in knowing if these people cataloged all devices they have from Apple (including Macs) and logged out of all of them. (It could, of course, just be a bug; but it at least doesn't seem to be some fundamental aspect of the design that it permanently hijacks messages.)
gdeglin 3 days ago 1 reply      
Mashable wrote about this problem months ago: http://mashable.com/2013/09/16/imessage-problem/

A lot of my friends have complained about this problem as well.

I'm surprised Apple hasn't moved faster to come up with a solution. Seems like a lawsuit waiting to happen.

mwsherman 3 days ago 0 replies      
IM and SMS are different ideas. SMS moves a message from one device to another. SMS also knows nothing about the receiving device or whether the message got there.

IM (under which I include iMessage) is user-to-user, not device-to-device. It can know about the recipient and whether messages are received.

Each of these things has advantages.

SMS works because the phone network is always tracking a device. It is very addressable. The receiving device is mobile, rarely changes, and is singular.

IM has a notion of sessions. The user signs off and on. It can travel over any IP connection. The device on which the user is addressable changes a lot. There may be multiple devices, making the definition of delivered a bit less deterministic.

Conflating these two makes for a confusing mental model for the user, and for failures like this.

fossuser 3 days ago 2 replies      
Even more frustrating, reading this just reminds me how much better the google voice solution to this problem is and it predated iMessage by several years. Google has just let it atrophy - then they introduced hangouts late (relative to other chat apps in the market) and still have not integrated the google voice features or pushed them with android.

Does anyone who works at/worked at Google know why this happened?

Were they trying to turn the telecoms into dumb pipes with the original Nexus and gizmo5 purchase, but when that failed just abandoned the idea? You'd think the success of whatsapp and facebook chat would make chat a priority. If people communicate using your platform they're more likely to use your account for other things.

benstein 3 days ago 0 replies      
I went through this a few months ago and discussed on HN: https://news.ycombinator.com/item?id=7166955

Here's my update: It's been about 4 months or so since I switched.

Nothing I was able to do or Apple was able to do fixed the problem. I was able to put Messages into debug mode and I sent Apple a full debug log (Apple bug report #15966535). They marked the ticket as "Duplicate" and I was no longer able to view any updates.

After about 3 months, most of the issue has resolved.

The majority of group texts work now; iPhones now send the whole thread as MMS, not iMessage. It's still not 100%, but pretty good.

Most of my friends can send SMS without failures, but quite a few still get "iMessage failed" and have to "resend as SMS".

I've completely given up trying to fix the problem. Just hoping the remaining iOS devices resolve themselves at some point, or Apple fixes in next update.

<rant>Everyone thinks this is an Android problem that they can't message me anymore. Really tough to explain to the world that it's _their_ phone that's buggy.</rant>

steven2012 3 days ago 1 reply      
Wow, this has been going on for at least a year. I can't believe that Apple hasn't already fixed this problem, it really calls into question their commitment to doing the right thing.
izacus 3 days ago 0 replies      
Well, it seems that Apple has little incentive to fix this and has been dragging their feet: they're hoping that a brand new Android user will blame non-working SMS messages on their phone and return it for another iPhone.

Just another lock-in behaviour from them.

mrcwinn 3 days ago 1 reply      
For what it's worth, I switched to Android and did not have to pay Apple $20 to disable my iMessages, despite not having an active support contract.

I did, however, have to do a quick Google search, log in to my iTunes account through the web, and de-register my Apple devices. The problem was solved relatively quickly and for free.

This should be simpler, but I'm not sure how much easier Apple can make it. They can't make an iOS app to help you because you got rid of the iPhone. It seems strange to make an Android app to help you. That leaves the web. Better luck to you!

benhansenslc 3 days ago 0 replies      
I switched to an Android phone 3 weeks ago and I am also still not receiving texts from iPhones. Apple's customer support said the same thing to me as they did to you. They told me that one customer had to wait 40 days before they were fully removed from the system. I am just hoping that it gets fixed for me by the time 40 days has come around.

I filled a complaint with the FCC at https://esupport.fcc.gov/ccmsforms/form2000.action?form_type.... It is a form for complaints against wireless carriers for number portability issues.

pmorici 3 days ago 2 replies      
The easiest way to avoid this problem is to make sure you turn off iMessage on your iPhone before you switch devices; then you don't run into it in the first place.


edgesrazor 3 days ago 0 replies      
I ran into this issue 2 years ago when I dropped iPhone for Android - I seriously can't believe it's still a problem. Even following Apple's official KB article, it will still take a few days for all of your messages to start going through again.
greggman 2 days ago 0 replies      
A friend found this


TL;DR: Deactivate iMessage on your iPhone before switching, or go here if you don't have access to your phone: http://supportprofile.apple.com/MySupportProfile.do

Sounds like that's not enough, though, from reports below.

hert 3 days ago 0 replies      
Even more of a disaster with iMessage group threads. I had a thread w/ two friends, and when I switched to my Moto X, I didn't realize that I was no longer receiving messages on the thread from ONE of them.

Turns out, one of their iPhones recognized that it should start texting me, while the other's iPhone kept iMessaging me w/out delivering failure reports. So frustrating that I forced them to get WhatsApp!

Scorpion 3 days ago 0 replies      
I briefly switched from an iOS phone to an Nexus 5 and had the same issue. For other reasons, I switched back. A colleague of mine liked the Nexus a lot and made the switch after I did. He has been fighting this for months. Everything works properly for a while. Sometimes, several weeks will go by. Then, out of the blue, my phone tries to send the message to him as an iMessage. It's bizarre and frustrating.
prutschman 3 days ago 2 replies      
Google Hangouts on my Android phone keeps bugging me to integrate SMS into Hangouts, as well as to "confirm" my mobile number. My fear is that something analogous to the iMessage "purgatory" might happen, though I haven't heard of anyone experiencing it.
boqeh 3 days ago 1 reply      
I had this same issue. Apple wouldn't help me, unfortunately. If I recall correctly, I had to disassociate iMessage from my AppleID completely, which seems to have worked. Although I still can't be sure.

I have a weird feeling the messages still aren't going through and a lot of my friends think I'm being an asshole.

joshstrange 3 days ago 2 replies      
There was a story posted to HN that was very similar to this semi-recently; in fact, I thought it was a repost until I saw this was published today. I can't seem to find the previous post; does anyone else remember that/have a link? Thanks!
mwill 3 days ago 0 replies      
I encountered this problem recently on a smaller scale. I disable iMessage for various reasons, but recently had to factory reset my iPhone, which led to iMessage being enabled. I completely forgot about it and after I remembered to disable it, I suddenly could no longer receive text messages from my friends with iPhones.

I gave Apple a call and initially the only response I could get from them was "Just turn on iMessage" and general confusion about why I had it turned off in the first place.

Eventually someone I talked to said they could fix it, and shortly after I started receiving messages again.

ahassan 3 days ago 0 replies      
My friends and I have never run into this issue switching from iOS to Android. The main thing to do is to disable iMessage on your old iPhone before you get rid of it; that should unregister it on Apple's servers. If you do that, then you should be unregistered unless you have another device hooked up (i.e. a Mac).
harmonicon 3 days ago 0 replies      
This happened to me when I got my Android phone with a new number. When my friends with iPhones texted me, the text always showed up as iMessage and I would never receive it.

The thing is, I had not owned ANY smartphone before this one. My guess was that the previous owner of this number had an iPhone and registered for the iMessage service. iMessage routes texts through its own servers, so they never reached my carrier's network. I tried to get help from an Apple store technician, but since I am not an Apple customer, past or present, the employee did not see a need to help.

Problem is I really liked that number. After 2 months of struggle I gave up and changed to a new number.

I will admit I never liked Apple and do my best to purge iProducts from my life. But I guess you just cannot avoid being screwed, anyway.

cstrat 3 days ago 0 replies      
I have read about these issues plaguing people. It is strange because whenever I have roamed overseas or have disabled data for whatever reason. People still can text me, albeit I am sure there is a delay between when they hit send and when I got the message.

Friends of mine have moved from iPhone to Android, when I send them a message it tries with iMessage - and I get the message failure exclamation mark. It then resends as a text and doesn't try iMessage again for some time. Haven't really had the black hole experience yet...

K0nserv 3 days ago 0 replies      
How can the engineering team be clueless on how to fix this? Now I admit that I don't know the inner workings of the iMessage protocol and servers, but presumably all that needs to be done is to disassociate the number with the Apple ID. If I were to guess this would involve dropping a row in a table somewhere.
jamra 3 days ago 1 reply      
There is (or at least used to be) an option on your iPhone that forces the messages to be sent and received over SMS rather than iMessage. I wonder if one could turn on that option on their old phone before switching numbers.

There was a fairly recent change in how iMessages are handled. In one of the iOS updates, you can receive iMessage messages on numerous devices tied to your Apple ID such as your iPad. I wonder if that's where the bug comes from.

The other option is to switch to Android at home and get new friends.

dangoldin 3 days ago 0 replies      
I ran into this too and ended up calling Apple. Their solution was to tell everyone who had my number to erase their iMessage history with me.

Somewhat odd - I can receive individual texts from two people that have iPhones but if one of them sends a group text to both of us, I do not receive it.

kevinherron 3 days ago 0 replies      
Unfortunately I'm in the same boat right now. Tried calling them and having the number removed, etc...

Been going on like this for over a month.

ironghost 3 days ago 0 replies      
Easy (yet long) fix:

on iPhone:

- Disable iMessage from the settings menu.
- Go back to Messages and send a standard text message to the phone number.
- Enable iMessage from the settings menu.


X-Istence 2 days ago 0 replies      
Had a friend recently go through this. She called up Apple, had her number deregistered and about a day or two later everything started flowing correctly again...
JimmaDaRustla 3 days ago 1 reply      
Sounds like it would be easier to find new friends.

Seriously though, iMessage should have some sort of interoperability on other devices, even if it's just a web interface you can log into to make configuration changes, including the deletion/deactivation of an account associated with a mobile phone number.

Edit: Or even monitoring iPhones associated with a number and disabling iMessage if said phone is no longer online with that number? Could possibly even forward unread messages, etc.

enscr 3 days ago 0 replies      
Whenever I look at the iMessage icon on my iPhone/iPad, I feel it had so much potential when it came out, but Apple just squandered it like a brat. If only they had opened the gates on interoperability... sigh!

Sometime back they were arrogant and brilliant, not just the former.

e79 2 days ago 0 replies      
Their support page makes it sound like they can just de-register iMessage with your account.


A few comments here seem to suggest that this is a carrier or cellular infrastructure issue. It isn't! iMessage doesn't route over SMS; that's the whole point. It routes to Apple's servers, which should be capable of doing a lookup to see if the number still has an associated iCloud or iMessage account.

NeliX4 1 day ago 1 reply      
What's wrong with iMessage on Android app? http://imessageonandroid.com/

Why does this exact same issue keep popping up on HN every now and then...

vasundhar 3 days ago 0 replies      
1. Validation seems to happen when you send the first message, to check if the given number is associated with iMessage.

2. From the second time onward, it only checks whether the sender is on a data network or not.

3. There is an option in the iMessage settings (Settings > Messages > "Send as SMS"). If this option is not selected, then once the device/account knows the other device is an iPhone and you are on data, it just sends an iMessage.

Turn on "Send as SMS" so that it falls back to SMS if the destination is not available for iMessage.
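The fallback behaviour described above can be sketched as a small decision function. This is purely an illustrative model; Apple's actual routing logic is not public, and every name and field here is invented:

```javascript
// Illustrative model of the send-path decision described above. Apple's real
// routing logic is not public; all names and fields here are invented.
function chooseTransport(sender, recipient) {
  // 1. First contact: look the number up and cache the result.
  if (!(recipient.number in sender.cache)) {
    sender.cache[recipient.number] = recipient.registeredForIMessage;
  }
  // 2. Later sends consult only the cache and the data connection,
  //    which is why a stale registration keeps swallowing messages.
  if (sender.cache[recipient.number] && sender.onDataNetwork) {
    return "imessage";
  }
  // 3. "Send as SMS" governs the fallback when iMessage can't be used.
  return sender.sendAsSMS ? "sms" : "undelivered";
}
```

In this model, a stale `true` entry in the sender's cache is exactly the "purgatory" case: messages keep going out as iMessage to a number that can no longer receive them.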

cek994 3 days ago 0 replies      
I had a very similar problem when I drowned my iPhone and switched my SIM to an old Windows Phone 7 I had lying around. If you have your old phone, you can disable iMessage while the SIM is still in it, which apparently works -- but if you don't, you're basically up a creek. I ended up changing the email address on my iCloud account.

It baffles me that online iCloud doesn't have a dashboard for controlling this. Doesn't seem it should be that hard to unlink phone numbers from iMessage.

justizin 3 days ago 1 reply      
Frustrating, indeed. The short answer is: if you are in the know and you switch from iPhone to Android, disable iMessage on your iPhone first.

It would be great to see an interoperable solution replace iMessage, but for now, it is (purportedly) secure and often more reliable than text messaging. I still pay for an unlimited sms plan.

rnovak 3 days ago 0 replies      
I had the same issue, but I was able to still retrieve the messages via another apple device that was still connected to the iMessage service. I was then able to disassociate my number with the service.

When I had my iPhone, I had originally linked both my email and my phone number to the same iMessage account, so fortunately I never lost messages.

If it was tied to your email as well, you might be able to disable the service via another apple device.

softinio 3 days ago 0 replies      
I've been having the same issue and it ruined part of my vacation as people I was meeting up with on vacation thought I was ignoring their texts and we never met up.

What adds insult to injury is that all iOS devices ship by default with the setting set to not send by SMS when the user is not found on iMessage.

Apple should own up to this problem publicly and compensate users.

lurien 3 days ago 0 replies      
It's even worse if you want to keep an iDevice registered for facetime/iMessage use. You can't toggle it on and off on demand.
JacksonGariety 3 days ago 0 replies      
Why aren't they making sure there's an active iPhone number associated before delivering any iMessages?

Obvious solution.

sturmeh 2 days ago 1 reply      
Is it conceivable that Apple are deliberately ignoring this issue as it does exactly what they would want?

It punishes people who move away from their platform with social isolation.

It's easy for them to overlook this issue and not put any effort into fixing it, because the investment would result in a better experience for everyone who switches away from Apple.

jms703 3 days ago 0 replies      
The only reliable fix I've seen for this is to have your friends remove and re-add your mobile number from their contacts.
vhost- 3 days ago 0 replies      
Same story here. Switched to an Android device and no one could text me for months. Months! It's almost unbelievable.
george_ciobanu 3 days ago 0 replies      
I have similar issues with an iPhone and iPad synced to the same account. Stuff is always out of sync.
JohnHaugeland 3 days ago 0 replies      
I've been here for almost a year.

It's not clear why this seems okay. "We've stolen contact for a year. We're working on it."

Seems like anti-competitive behavior. Stopped buying Apple 100% immediately once I found out.

_Simon 3 days ago 1 reply      
This again? FFS RTFM...
VLM 3 days ago 3 replies      
Is the destruction of SMS as a technology necessarily bad? I don't think so. Like ripping off a band aid, get it over with and move on.
headShrinker 3 days ago 2 replies      
> save the green vs. blue bubbles, which are in their own way a sort of weird social/status indicator

Save your opinionated anti-Apple rhetoric. The color-coded indicator allows people to know which features are included in the service, or whether your text was delivered and read, or, in your case, not delivered...

shurcooL 3 days ago 2 replies      
I find it interesting how so many people still find it acceptable in 2014 to be using a "phone number" as their id.

It's a number you can't even pick yourself: you _pay_ to get randomly assigned digits, at best with the ability to reroll (also not always free).

To me, it feels like someone using an `@aol.com` email in 2014. Or a rotary phone.

uptown 3 days ago 3 replies      
This site has a solution:

1. Reset your Apple ID password and do not log back in on your device(s)

2. Send a text to 48369 with the word STOP

It won't happen immediately, but over a 12-hour period you should start receiving texts on your Android device that are sent from iPhone users.


Can This Web Be Saved? Mozilla Accepts DRM, and We All Lose eff.org
336 points by DiabloD3  2 days ago   364 comments top 47
suprgeek 2 days ago 10 replies      
Mozilla had to be dragged into this acceptance kicking and screaming (metaphorically).

They were faced with a hard choice: don't implement EME (HTML5 DRM) and risk users moving to other browsers (user loss), or implement EME and risk looking like they are contradicting their core mission (trust loss).

They figured a little loss of trust is worth keeping most of the users on the Mozilla platform - which in my view is the correct decision. If users start to abandon Mozilla (Firefox) in droves then they lose their power to influence the development of the open web.

ep103 2 days ago 3 replies      
There are so many people here claiming this is the wrong choice, and yet I wonder what percentage of the commenters here are using Chrome? By most sources I've seen, Chrome has 2x the marketshare, and actively pushed FOR EME. Perhaps if FF had Chrome's current marketshare, they would have been in a position to say no, but it's the users who made that impossible. Mozilla should be commended for fighting as far as they did. And if you don't like this decision, make sure you switch off Chrome before commenting.
Daiz 2 days ago 2 replies      
Oh, so the web has given up and is now genuflecting at the altar of video DRM. Next up: picture DRM, because since we're protecting videos we should naturally protect still pictures too. You know what? We also have all this professional writing on the web, and anyone can just copy & paste that! That clearly shouldn't fly in our brand new DRM-protected world - authors should be able to control exactly who can view and read their texts, and copying is strictly forbidden. Screenshots should be blocked too. Browser devtools will naturally have to be disabled on the World Wide Web, as they are capable of hurting our benevolent protector, the almighty DRM. Eventually, we'll arrive at The Right To Read[1].

Or we could just not give the devil our little finger.

[1] http://www.gnu.org/philosophy/right-to-read.html

Also, a reminder about the nature of this beast that everyone should be aware of:

HTML DRM will not give you plugin-free or standardized playback. It will simply replace Flash/Silverlight with multiple custom and proprietary DRM black boxes that will likely have even worse cross-platform compatibility than the existing solutions. In other words, giving in to HTML DRM will only make the situation worse than it currently is. Especially since it paves the way to an even more closed web.

pdeuchler 2 days ago 3 replies      
Seriously, who is building this DRM software/hardware? If you are a software developer you have no excuse but ignorance, and as someone who makes a living on a computer (where the Internet resides) that excuse is wearing extremely thin. I harbor a hair's breadth more grace for the hardware engineers designing locked-down chips, but that's no more than a rhetorical nicety due to the fact that I am not very familiar with the intricacies of their work.

I honestly don't get it, you could make just as much, if not more, money doing something that's not 100% ethically wrong... especially in this job market! It's easy to work remotely, so you can't claim geographic entrapment. I'm sure that if those who are especially financially encumbered made a Kickstarter page, people would literally pay them to quit their jobs!

As much as this stings, it stings even more that it's "our own" selling us out, that it's the people who should know better that are killing everything so many of us have worked so hard for.

themoonbus 2 days ago 10 replies      
Can some one explain to me why having, say, an HTML 5 based video player with DRM would be worse than one implemented in a closed platform like Silverlight or Flash? I'm genuinely curious, and not trying to make an argument here.
kator 2 days ago 4 replies      
DRM is a fantasy that uninformed media executives cling to with a dream that it will put the genie back in the bottle. It's sad to see this stuff but totally understandable considering the divide between the technology people who understand the reality and the people in charge, dreaming of a fantasy that gets them back to the '80s.

Yes I know.. I was an executive at a major record label; trust me, it's hard to be on both sides of this argument and it's not as simple as everyone makes it out to be...

I can't count how many times I tried to help executives understand that the path between the media and the human eye or ear was vulnerable to so many attacks that it was clearly a fruitless goal to protect media in that way. They hear some bright young person tell them they can protect their media like it was in the good ol' days, and they have the need to believe, because without that belief they are out of a job..

And artists are on their own to figure out how to make money on their work...

13throwaway 2 days ago 3 replies      
If all major browsers support EME, every website will use it. Say goodbye to youtube-dl. Maybe next year EME will be updated to "protect" HTML. Soon the entire web may be a closed system. It doesn't matter what the FCC decides on Friday; today is the day the web dies, at Mozilla's hands.

There is always a choice mozilla, please make the right one.

rectangletangle 2 days ago 0 replies      
Reading between the lines here, I have a strong suspicion this and the smear campaign directed against Eich are related events. Only a few short months ago Mozilla was staunchly opposed to this. Then Eich gets forcefully removed, and Mozilla's stance does a 180. In retrospect the "outrage" against Eich felt very artificial. It could have been a deliberate attempt at character assassination in order to further someone's goals of destroying our internet freedoms. The motive of course being heavy financial incentives. This likely had nothing to do with LGBT* rights (which I'm a proponent of, FYI). Instead it was all about someone lining their greedy pockets, at the personal expense of Eich and us to a much lesser extent. Keep in mind the same people who have a vested interest in strengthening DRM are the same people who own the media outlets which propagated this story.
27182818284 2 days ago 1 reply      
I must be missing something. I read the article and clicked the attached link to Mozilla's blog, and nothing seems to radically change for users other than a move sideways. Though I'm a little disappointed that there isn't a move forward, it certainly doesn't feel like a step backwards. Even Mozilla writes, "At Mozilla we think this new implementation contains the same deep flaws as the old system." (emphasis mine)

Right now if you want to lock something down, like watching Netflix on your browser, you install Silverlight. In the future, Silverlight is replaced and Netflix uses XYZ technology but maybe with DRM-in-HTML or whatever. And as a user, it doesn't matter because most people I know today use a tablet with the native app, a streaming device such as the Roku player, or a SmartTV.

aestetix 2 days ago 3 replies      
I am curious whether this would have happened if Eich were still the CEO.
DigitalSea 2 days ago 2 replies      
This is most certainly the wrong choice, but people need to understand they essentially had no choice. Their options were rather limited:

Option 1) Stick to your guns and refuse to implement DRM. Other browser vendors implement DRM, certain parts of the web become inaccessible via Firefox as DRM is implemented into more and more web services (think YouTube, Vimeo, Netflix, Hulu). Firefox's lack of DRM means its users are being disadvantaged.

Option 2) Implement DRM. Accept temporary defeat, don't lose browser share to Chrome and continue fighting from within against DRM.

Which option do you think sounds more appealing to Mozilla? Die on your sword, keep the trust of your dwindling user base or implement DRM and retain most of your user base (minus the people that will leave because of this decision). I think someone needs to create a fork and build a DRM-less browser, that's the beautiful thing about open source, don't like something, change it.

a3_nm 2 days ago 1 reply      
I would hope that rebranded versions of Firefox such as Iceweasel will strip the DRM support, so I guess it is not like it will be forced on people who don't want it.

Of course, this is still bad news, because it means there is no more pressure on content owners against this DRM, which can eventually become painful for people who want to avoid the DRM.

acak 2 days ago 1 reply      
I liked Ubuntu's approach where they asked you on startup if you want to install/enable proprietary packages like Flash or some graphics drivers.

Is it tough for Mozilla to prompt the user to do something similar with DRM stuff on first run? I.e. telling them that these are features not supported by Mozilla in principle but are a) implemented to be in compliance with standards and b) required if you choose to use services like Netflix.

That way I would still have the option to run a DRM free browser (and voluntarily not use websites that require DRM).

bsder 2 days ago 0 replies      

When the plugin fails (and it will fail ... anything which is default deny will have lots of mysterious fails), more and more people who paid for the content will switch to pirated versions.

Personally, I can't wait. Piracy was roughly at a balance point recently with all the mobile consumption. Now, the lazy-ass social media generation is going to discover the need for it.

Welcome to Popcorn Time.

snird 2 days ago 2 replies      
Technology should never force its users (the site creators) to use one technology or another. Not implementing DRM is idiotic for a couple of reasons:

1. The decision of whether or not to use DRM should be the site creators'. That's as "free" as it gets. Forcing them otherwise by not giving them the option is bad.

2. Even without implementing it, DRM is available through Flash or Silverlight or any other third-party plugin. The only result of not implementing DRM is hurting the HTML5 video component.

dredmorbius 1 day ago 0 replies      
Debian GNU/Linux ships with Iceweasel, not Firefox. Iceweasel is based on Firefox, but differs in some particulars (with which I'm unfamiliar).

The question: will Iceweasel implement the DRM which Mozilla is implementing into Firefox?

EduardoBautista 2 days ago 3 replies      
How about just ignore the websites that implement DRM and let the free market decide if those websites will survive? People treat DRM as if it was going to turn us into North Korea when it comes to internet access.
Spooky23 2 days ago 1 reply      
The EFF stance is a little shrill. Remember how FairPlay and Zune were going to create proprietary music forever? Didn't quite work out that way.

End of the day, content is a commodity, and like any other commodity, prices are falling as the supply expands. Unless it's a Pixar movie, most films on DVD/Blu-ray are in the $10 range within weeks. Digital rights for new releases are as little as $3 when you buy with physical media. Access to Netflix's catalog costs less than basic cable.

So I don't care about this, I do care about the trolls under the bridge (ISPs) who want to extract a toll for transit.

wolfgke 2 days ago 0 replies      
Just an idea that came to my mind: what about refusing support to any Firefox user who has installed a CDM component (for the same reason the Linux kernel developers don't give any support to users of "tainted" Linux kernels, i.e. kernels that have loaded non-open-source modules)?
skylan_q 2 days ago 1 reply      
First gender identity/equality is more important than free speech, and now DRM is more important than freedom from submitting to vendor demands.

What's the point of Mozilla?

CHY872 2 days ago 1 reply      
So I'm guessing that this will backfire. Mozilla say that Adobe will create the software, which will presumably be bundled with the browser. This means that if Adobe's DRM gets cracked, suddenly every site using that DRM is vulnerable. At the moment, one just updates the client to obfuscate a bit better, which is then downloaded on next launch of the software. If it's all client side, surely then the DRM will have to be updated every few days - which would be a nightmare for sysadmins etc.
prewett 2 days ago 0 replies      
I dislike DRM as much as the next person, but what do you suggest as an alternative way that content creators can protect their work? Movies cost tens of millions of dollars to make; I expect that you would not spend $10 million of your money to make something that everyone could just freely download. Would you invest a year writing a great novel if everyone could read it for free without paying you? Some people will, of course, say "yes" but I think most of us would not, and we would end up with less art.

I think the tech community needs to come up with a better alternative, rather than just complaining.

jevinskie 2 days ago 0 replies      
Has anyone found where you can get the Adobe CDM binary blob?
tn13 2 days ago 1 reply      
The article is written as if Mozilla is making a mistake. I think Mozilla's stand is perfectly reasonable and pragmatic. Mozilla must adhere to all the web standards.

I think the war for an open web was lost when EME became W3C standard. We should have fought it at that time.

chris_wot 2 days ago 3 replies      
So time to fork Firefox?
chimeracoder 2 days ago 3 replies      
From a comment by a Mozilla employee on another thread, it seems that the UI for this has not yet been determined[0]. It's possible that this may be presented to the user in a way similar to a plugin installation, except that the plugin happens to be provided by Mozilla (not a third-party).

This isn't great, but to the end user, it looks the same as Flash and Silverlight.

Especially if Mozilla were to add click-to-play for all such plugins, along with an explanation of what they are (think of the warnings that are currently shown for self-signed certs), they may still have an opportunity to do good with this yet.

I'd really love for Mozilla to remain as true to its mission as possible. On the other hand, Mozilla's power to do good in the world is intrinsically linked to its marketshare[1]. If Mozilla ends up being the lone holdout, it's possible that they will just lose marketshare as DRM content becomes more widespread - that would be quite a Pyrrhic victory[2].

I share in the EFF's disappointment at the situation, though (sadly) this has been inevitable for some time.

[0] https://news.ycombinator.com/item?id=7744954

[1] Perhaps not 100%, but it's a major component of it.

[2] https://en.wikipedia.org/wiki/Pyrrhic_victory

hughw 2 days ago 0 replies      
DRM just won't be that useful in a web browser. I can share a link with you, but it won't work for you. If Mozilla omitted this feature, users would just fire up the Netflix app. I honestly doubt it would cause users to switch away.
swetland 1 day ago 0 replies      
I don't get the hand-wringing over this. Don't install the plugin and only view non-DRM-encumbered video content. Seems simple enough. Just like you don't install flash because you don't want to support flash-based DRM, right?
Mandatum 2 days ago 0 replies      
I'm not sure why they even bother trying, it'll be broken within a week. They're pointlessly adding to a spec which has enough bloat and hacked-together features.
dangayle 2 days ago 0 replies      
I'm just sad that DRM is in the HTML5 spec at all. That's the real loss.
whyenot 2 days ago 1 reply      
I wonder how Mozilla would have handled this if they still were dependent on donations from users for funding instead of Google.
hmsimha 2 days ago 1 reply      
What does this mean for the TOR project? Will they have to bundle an old version of Firefox without the proprietary DRM component?
mkhalil 2 days ago 1 reply      
They should keep their browser open. Most people that use Mozilla know why they're using it. They would also understand how to install an official Firefox plugin to get crappy EME websites to work.
pjc50 2 days ago 0 replies      
Remember, the closed-source component is there because there has to be a place for the deliberately inserted NSA vulnerabilities (note the proposed use of Adobe). It's likely that this bizarre and unpopular decision is the result of some behind-the-scenes arm-twisting.
chris_mahan 2 days ago 0 replies      
The web is finished.

On to the next thing!

(I'm sooo glad my web writings are in plain text files rather than in other people's mysql databases)

spoiler 2 days ago 0 replies      
Maybe I'm missing something, but what's so bad about EME? I think it's a great idea that copyrighted material will be protected. I understand that it becomes "closed web" in a way, but it's not a big deal for me. Frankly, I can think of a few places where it could be useful, even.
enterx 1 day ago 1 reply      
/me thinks it's about time for these digital rights fucks to be taxed for exploiting our common good
knodi 2 days ago 0 replies      
Corporate attacks on all fronts on users. Revolt we must.
dalek2point3 2 days ago 0 replies      
Can someone explain how DRM on HTML5 would work?
hidden-markov 2 days ago 2 replies      
Maybe it can be disassembled? Like it what was done with some proprietary blobs of Linux kernel.
higherpurpose 2 days ago 0 replies      
> eliminates your fair-use rights

This is true. I'd like to see a case in court where the law about fair use is in conflict with the law that allows something that has been DRMed to be completely protected, legally. I assume fair use would win, but I'd still like to see that case, because then it could become a lot easier to break DRM in order to "exercise your fair use right", and circumventing DRM might become legal again, in effect killing DRM for good. Then even companies like Mozilla could implement DRM unlockers in their browsers and so on, since it would be perfectly legal to do so.

camus2 2 days ago 0 replies      
Fascinating how people accept handcuffs so easily. Because it's better than Flash... not. You'll still need a plugin for the DRM, and you're going to have to download a few different plugins because of course different vendors will have different DRM schemes. So, back to 1999: RealPlayer, Windows Media plugin, etc...
jasonlingx 2 days ago 0 replies      
Fork it.
dviola 2 days ago 2 replies      
Time to fork Firefox maybe?
paulrademacher 2 days ago 1 reply      
TLDR: Market share > Principles

> We have come to the point where Mozilla not implementing the W3C EME specification means that Firefox users have to switch to other browsers to watch content restricted by DRM.

felipebueno 2 days ago 1 reply      
I'm done with Mozilla but I think the problem is not just the browser we choose anymore. The whole thing is compromised. We need a new internet, "new" ways to share and consume information. There are many people who think so as well.

e.g.: http://electronician.hubpages.com/hub/Why-We-Need-a-New-Inte...

discardorama 2 days ago 0 replies      
Mozilla gets most of their money from Google. When Google says "jump", Baker says "how high?". They've become so addicted to the funds from Google, that they can't live without it.
Octotree: the missing GitHub tree view (Chrome extension) chrome.google.com
336 points by yblu  3 days ago   106 comments top 37
jburwell 3 days ago 4 replies      
To me, the lack of a fast tree browser has been one of the biggest weaknesses of the GitHub interface. This plugin solves that problem exceedingly well. GitHub should hire the author and officially fund his efforts to make it a first-class feature that does not require a plugin.
yblu 3 days ago 6 replies      
I built this to scratch my own itch, as somebody who frequently reads GitHub code and feels annoyed having to click countless links to navigate through a large project. Hope it is useful for other people too.
jxf 3 days ago 3 replies      
This is a fantastic extension! Browsing is fast and efficient, and creating the token for my private repos was painless.

A "search for files/folders named ..." feature would be a nice bonus, too, so that you can quickly get to the right spot in a big hierarchy.

To the author (https://twitter.com/buunguyen): please add a donation link somewhere so I can send you a thank-you (or you can just e-mail me with your PayPal/other address; my e-mail's in my profile).

manish_gill 3 days ago 1 reply      
Fantastic! You planning to add Bitbucket support? That would be really nice. :)
bnj 3 days ago 0 replies      
Wow, giving it a quick try I can't believe how fast it is. This is one of those things that I've always desperately needed, and I never knew until now.

Be sure to tweet it at some of the GitHub engineers. They should bring this into their core product.

whizzkid 2 days ago 1 reply      
Great work, but I want to point out one small feature that GitHub has that is not known to everyone.

Press 't' when you go to a repository and it will activate the file finder. From there you can just start typing the file/folder name you want to see and it will filter the repo instantly.

I wonder why this feature is not popular yet.

ahmett 2 days ago 1 reply      
Here's an idea: automatically expand the tree until a level contains more than one item.

e.g. src->com->twitter->scalding->scala->test (in this example, these are all folders in the hierarchy, and each is the only entry until 'test', so expanding them automatically all the way through makes sense).

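The expansion rule described above can be sketched as a small function. The `{ name, children }` node shape here is hypothetical, for illustration only, not Octotree's actual data model:

```javascript
// Sketch of the auto-expand idea: starting from a folder node, follow
// the chain of levels that contain exactly one entry (where that entry
// is itself a folder), collecting the whole path to expand in one step.
// Node shape (assumed): folders have a `children` array, files do not.
function singleChildChain(node) {
  const path = [node.name];
  let cur = node;
  while (cur.children && cur.children.length === 1 && cur.children[0].children) {
    cur = cur.children[0];
    path.push(cur.name);
  }
  return path; // e.g. ['src', 'com', 'twitter', 'scalding', 'scala', 'test']
}
```

The UI would then render that chain as a single expanded branch, stopping at the first level with more than one entry.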
granttimmerman 3 days ago 2 replies      
You can also press `t` on any repo on github to find files/filetypes quickly.
Jonovono 3 days ago 0 replies      
This is awesome. Much better than my similar project! : http://gitray.com.
xwowsersx 3 days ago 1 reply      
This is great. It would be even better if you could resize the tree. Some projects have really deep trees and at a certain point you can't see the names of the files.
dewey 3 days ago 0 replies      
I'd love to see something like this being on the site by default. Maybe just a button next to the repository title where you'd be able to toggle between the current view and the tree view. Both of these options have their advantages for different use cases.

In the meantime that's a great solution. Thanks!

jhspaybar 3 days ago 1 reply      
I've been using Firefox almost exclusively for months. This may very well make me go back to Chrome. Looks amazing!
ubercow 3 days ago 1 reply      
I'd love to see a setting that makes the tree view collapsed by default. If I have some time later I might whip up a pull req.
mzahir 2 days ago 0 replies      
Github also has a file finder similar to the Command-T plugin for vim - https://github.com/blog/793-introducing-the-file-finder

This extension is great for exploring but if you know what you're looking for, cmd+t will save you more time.

gknoy 3 days ago 1 reply      
Is there an easy way to extend this so that it can also be used when accessing Enterprise Github installations, e.g. `github.mycompany.com`?
ntoshev 2 days ago 0 replies      
I don't really find the tree view useful. But I wish there was a way to see the code weight by individual files and whole repos: as KLOCS, size, anything. Is there such an extension?
Chris911 3 days ago 0 replies      
vdm 2 days ago 0 replies      
@creationix's Tedit mounts git repos directly; it will melt your brain. http://www.youtube.com/watch?v=U4eJTBXJ54I https://github.com/creationix/tedit-app
spullara 3 days ago 1 reply      
Press 't' and search the filenames in repo instantly. Very useful.
houshuang 3 days ago 0 replies      
Brilliant - it's often quite slow to change between directories in the web view, this is blazingly fast. Especially useful for deeply nested (templated) projects.
nilkn 3 days ago 1 reply      
Is there a way to use this for Github Enterprise repos?
dustingetz 3 days ago 2 replies      
Great extension, except in private repos, every time I click on a file (from github proper) the extension animates outward while telling me that it doesn't work with private repos. Extremely annoying and resulted in uninstall :(

edit: i'm not willing to give extension access to private repos, that would defeat the point of being private

mrdmnd 3 days ago 1 reply      
Did you get API rate limited, by any chance?
GowGuy47 3 days ago 0 replies      
I had the same idea a couple weeks ago but never finished it: https://github.com/Gowiem/GitHubTree. Crazy to see this. Glad somebody got around to it. Thanks man!
bshimmin 3 days ago 0 replies      
This is seriously excellent.

I bet Github have had this feature on their issue tracker for years - and I suspect it probably just got bumped a good few places up the list.

Dorian-Marie 3 days ago 0 replies      
Good idea. Having nicer icons and aligning the icon with the text would be even more awesome.
StepR 3 days ago 1 reply      
Hacker News never ceases to amaze me. You guys are the best. Is this going to be open sourced?
piratebroadcast 3 days ago 0 replies      
Epic. So fucking cool.
cmancini 3 days ago 0 replies      
Brilliant work. This will be a huge timesaver for me. Thanks!
ika 2 days ago 0 replies      
That wasn't a lacking feature for me, but still, good job! Also, it would be nice if the author used a GitHub-like design instead of the Windows-ish one.
mitul_45 3 days ago 0 replies      
What about enterprise GitHub support?
cdelsolar 3 days ago 0 replies      
Wow, you rock.
chadhietala 3 days ago 0 replies      
Thank you for this!
dorolow 3 days ago 0 replies      
This is incredible. Thank you.
dud3z 2 days ago 0 replies      
Wow, great work!
sideproject 3 days ago 0 replies      
soooooo good!! Thanks!
Demiurge 3 days ago 0 replies      
Tremor-cancelling spoon for Parkinson's tremors liftlabsdesign.com
335 points by mhb  1 day ago   89 comments top 18
97s 1 day ago 0 replies      
Things like this are just a total blessing to people who need them, and it's awesome to see affordable technology like this emerging for people who can't even take part in one of the most important functions of life. A lot of people are talking about whether there is a big enough market, etc.; we should probably assume the creator only wants to cover his costs, since this invention was probably created by people who had family suffering from such a thing. Any profits would probably make the inventor(s) delighted.
caublestone 1 day ago 0 replies      
My brother has Down syndrome and struggles with muscle control. When he eats he tremors quite a bit, losing food off his plate, forcing him to take a desperate eat-fast approach to his meals. He has gone through quite a bit of muscle therapy to help. Needless to say, I will be buying this for him and can't wait to see what other types of people find an improvement in their lives with this product.
gottebp 22 hours ago 2 replies      
I'm a little late commenting, but I developed a Windows app [1] that does exactly this for the mouse. It basically applies some fancy FIR filters to the x and y deltas. [1] www.steadymouse.com
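A minimal sketch of that idea, assuming a simple moving-average FIR filter over recent pointer deltas (this is an illustration of the technique the comment names, not SteadyMouse's actual code; the class and parameter names are made up):

```javascript
// Tremor smoothing via a moving-average FIR filter on pointer deltas.
// High-frequency tremor shows up as rapid alternating-sign deltas that
// average toward zero, while slow intentional motion passes through
// nearly unchanged.
class DeltaSmoother {
  constructor(taps = 8) {
    this.taps = taps; // number of recent samples to average
    this.histX = [];
    this.histY = [];
  }
  filter(dx, dy) {
    this.histX.push(dx);
    this.histY.push(dy);
    if (this.histX.length > this.taps) {
      this.histX.shift();
      this.histY.shift();
    }
    const avg = a => a.reduce((s, v) => s + v, 0) / a.length;
    return { dx: avg(this.histX), dy: avg(this.histY) };
  }
}
```

More taps smooth harder but add lag, which is presumably the central trade-off a real driver-level filter has to tune.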
srean 1 day ago 2 replies      
I think some will remember the news story that Sergey Brin has the Parkinson's mutation. He is keen to fund technologies that make progress in this field, which would indeed be welcome. Diagnosing Parkinson's with confidence via a specific authoritative test is no easy task. For my dad, the doctors cannot decide if it is essential tremor or Parkinson's.

I recall a post on HN which showed that our vocal signals have enough information to help with such diagnosis. Voice as carried by phones today doesn't have the spectral resolution for this. However it might have enough bandwidth if voice were coded for that purpose. Seems like a worthy problem to take a stab at.

I think the corresponding NPR story https://news.ycombinator.com/item?id=7752627 was posted a few days ago but did not make the front page. So I am glad this made it.

enjo 7 hours ago 1 reply      
I would like to "sponsor" a few of these for folks. Does anyone know of a resource, non-profit, or...something that can help facilitate that?
stevesearer 1 day ago 1 reply      
My grandfather has Parkinson's. My dad has slightly noticeable hand tremors - no diagnosis at this point. I haven't read up on the likelihood that I'll get it, but I assume that I'm at high risk.

That said, this looks pretty neat and seems like basically a spoon version of Canon AF lenses. Will definitely look into these for my grandpa :)

jmadsen 21 hours ago 4 replies      
This is wonderful!

I have to wonder - why is it that it took so long to come up with such a simple idea for such an obvious problem?

I'd like to blame it on the "get rich, web app startup" mentality, but really, that's a fairly recent thing.

Perhaps this is a harder problem than it seems? Or is it true that all our new inventors are chasing riches?

fletchowns 1 day ago 0 replies      
Heard about this on All Things Considered this week: http://www.npr.org/blogs/health/2014/05/13/310399325/a-spoon...
Jim_Neath 15 hours ago 0 replies      
As a sufferer of early onset parkinson's, it's great to see products like this being developed. Hats off.
brianbreslin 15 hours ago 0 replies      
My grandfather had Parkinson's and this would have been a true blessing to him. I watched in frustration many times as he had a hard time doing tasks like this.

Seeing this kind of GOOD being invented makes me happy. This made my day.

caycep 1 day ago 8 replies      
This is interesting, but IMHO, will not be as useful in most cases of Parkinson's disease.

1) Parkinsonian tremor is typically at rest or with distraction. When you actually engage in purposeful action, such as using a spoon, the tremor usually dampens or goes away.

2) The main problem in Parkinson's is actually "bradykinesia" or slowness of movement. A more accurate term, in my opinion, is slowness of motor planning, that is, the brain systems cannot process the information required to, and then generate, a plan of movement for the limbs fast enough. The spoon won't fix that.

3) Parkinsonian and other forms of tremor do have relatively safe and effective treatments in medications or deep brain stimulation implantation.

My thoughts are - this spoon, if it works and isn't a mechanical nightmare, would be useful in a limited subset of cases where the tremor resembles another condition known as essential tremor, where the tremor instead is an intention or action tremor, where purposeful activity amplifies the tremor. Even so, these would only be in those patients who cannot get deep brain stimulation surgery for some reason. The reason being that DBS, while brain surgery, once done, is the more elegant solution. I.e. your tremor goes away, rather than requiring a superficial "hack" like this spoon.

asgard1024 12 hours ago 1 reply      
I like the idea, but I think the spoon is a bit shallow. It would be hard to eat soup with it.
jablan 18 hours ago 0 replies      
I wonder why wouldn't be possible to use some kind of passive method for tremor cancellation, like a miniature version of a steadicam. http://en.wikipedia.org/wiki/Steadicam
bigmattystyles 1 day ago 1 reply      
This sort of filtering should also be put in a mouse driver
maddisc2 19 hours ago 0 replies      
Well done, Really great work!
logicallee 1 day ago 7 replies      
Let's entertain the notion, prevalent here, that this idea itself (like all ideas themselves) is worth absolutely nothing, and that any one of the teams in the world who have access to gyroscopes, servos, and microchips, should have the right to reproduce this hack at unit cost and drive these guys out of business. Its cost is listed at $295.00[1] and this is their only product. It's huge compared to what the best facilities and teams in the world can produce to the same specifications, relatively unaesthetic as a utensil in its present form as compared to what a better-funded team can produce in a matter of weeks, and is clearly first-generation. They have no brand in the medical space. It is an eating utensil, yet it is not waterproof.

It is, however, patented.[2] This single fact allows this company (in its present form) to exist, to have done its research, to have raised its investment, and to bring their prototype in the form we currently see to market, at the price that it is currently listed at.

But patents are "wrong" and "stifle innovation".

Granted, it's pretty obvious that this makes a huge difference in people's lives. So perhaps we should leaven our desire that their margin disappear with hoping that more teams are somehow magically and irrationally in a position to do the world's R&D before losing their shirts as their margins disappear from under them. The size of this thing, the fact that it's not waterproof, the lack of an existing trusted brand name behind the product, and the lack of any distribution except their web site (see their FAQ) show that this is by no means the best that any team in the world could build now that they've seen how it's done. Therefore, lacking protection, this team would no longer be competitive in a short number of weeks/months.

We can, however, agree that it is good that this has been made. So, how shall we reconcile this? Well, perhaps we should simply hope that funding, such as they have raised, would magically continue to be available even without any margins that are guaranteed in the results (should they succeed in embodying their claims), which would allow the investors to recoup their investment. In short, we need investors to be crazy enough to keep funding innovations such as this one, while we remove the protections that would justify that craziness. Basically, we would have to hope that investors never catch on.

This is crucial for our purposes, as otherwise ideas such as this would not exist. Someone could have had this idea in 1999 or 1989. But without the investment, it might have stayed at the "worth-nothing" idea stage, rather than what we might call the "worth-nothing-but-has-now-actually-been-developed-and-actually-been-built-and-proven-and-it's-no-longer-a-pipe-dream" worth-nothing stage. Which the rest of us deserve to access for free without investing anything.

There is an analogy to be made here with Tesla, who died nearly penniless: his meager earnings had come from his patent royalties. Imagine if we had the ability to rewrite history and take even this away from him, along with the food from his stomach and equipment from his labs, so that Edison and the rest of the world's teams would have full access to all of his inventions, working or impractical, without any protection. Imagine what progress that would have led to.

In summary:

1- ideas like this deserve no patent protection

2- teams should prove incredibly speculative techniques such as this for free and with no compensation or protection

3- investors should continue to invest in such ideas forever, even after it becomes abundantly clear that there is no way to recoup the investment in such a fledgling idea.


Just kidding!! Phew. I hope we can all agree how ridiculous the above position would be. It's great that these guys brought their patented, high-margin product to the world. All this could have been done in 1984 - 30 years ago - if someone had had the 'worthless' idea then -- and we would all have access to it today.

It took a team's phenomenal genius and dedication today to bring this to market, and the whole world will have it soon - unlike any of the Idea Sunday ideas that have no such protection, were never fully developed, and for which the entire program was scrapped.

Here's to sane patents and to the progress they bring!

[1] http://store.liftlabsdesign.com/ $295.00. Note my estimated margin at this price (for a copycat): 98.64%, excluding the assumed "worthless" idea, assuming a COGS at scale of $4, which is enough for several batteries, gyroscopes, microchips, cases, what have you. If we assume a $20 cost of goods, that margin shrinks to 93.22% gross profit, again excluding the idea (which we assume is "worthless") or its development (which we assume the world should have access to for free, now that they've proven it).

[2] http://www.cnet.com/news/smart-spoon-helps-stabilize-parkins...

logicallee 1 day ago 1 reply      
I made a point about the importance of patents for this invention, but it is better to receive it straight from the source: http://imgur.com/Q99L8V3

(the bottom just quotes my letter, already visible)

This product and company would not exist without patents on what amounts to an idea.

rubyn00bie 22 hours ago 0 replies      
Holy crap was I confused for a minute, I misread the title as "Tremor-canceling spoon for Pakistan's tremors." I was just thinking "there must be A LOT of earthquakes in Pakistan."

... Anyway this is a great invention, even if it's not intended to be used during earthquakes ;-)

Realistic terrain in 130 lines of JavaScript playfuljs.com
317 points by hunterloftis  4 days ago   53 comments top 27
gavanwoolery 4 days ago 6 replies      
Just a small note, not to sound snooty, just to educate people on what realistic terrain looks like...

This is what midpoint displacement looks like as a heightmap: http://imgur.com/ksETpO0,7gykFEV#0

This is what realistic terrain looks like (this is based on real-world heightmap data): http://imgur.com/ksETpO0,7gykFEV#1

That said, midpoint displacement, perlin/simplex noise, etc. are good for modeling terrain at a less macroscopic scale and are plenty sufficient for the needs of most games.
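
For readers who haven't seen it, the midpoint displacement technique this thread discusses, in its 2D diamond-square form, can be sketched in a few lines. This is a from-scratch illustration under my own naming (`diamondSquare`, `roughness`), not the article's actual code; edge handling here just wraps indices, which real implementations usually treat more carefully:

```javascript
// Minimal diamond-square heightmap on a (2^n + 1)-sized grid.
function diamondSquare(n, roughness, rand = Math.random) {
  const size = (1 << n) + 1;
  const h = Array.from({ length: size }, () => new Float64Array(size));
  const at = (x, y) => h[(y + size) % size][(x + size) % size]; // wrap out-of-range reads

  // seed the four corners (here at height 0)
  h[0][0] = h[0][size - 1] = h[size - 1][0] = h[size - 1][size - 1] = 0;

  for (let step = size - 1; step > 1; step = step / 2) {
    const half = step / 2;
    // diamond step: center of each square = average of its 4 corners + random offset
    for (let y = half; y < size; y += step)
      for (let x = half; x < size; x += step)
        h[y][x] = (at(x - half, y - half) + at(x + half, y - half) +
                   at(x - half, y + half) + at(x + half, y + half)) / 4 +
                  (rand() * 2 - 1) * roughness;
    // square step: midpoint of each edge = average of its 4 diamond neighbors + offset
    for (let y = 0; y < size; y += half)
      for (let x = (y / half) % 2 === 0 ? half : 0; x < size; x += step)
        h[y][x] = (at(x - half, y) + at(x + half, y) +
                   at(x, y - half) + at(x, y + half)) / 4 +
                  (rand() * 2 - 1) * roughness;
    roughness /= 2; // smaller features get smaller displacements
  }
  return h;
}
```

Halving `roughness` each pass is what produces the self-similar, fractal look; keeping it constant gives pure noise, and decaying it faster gives smoother, blob-like hills.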

colonelxc 4 days ago 1 reply      
zhemao 4 days ago 0 replies      
There was a Clojure example of this algorithm posted a few months back. Funnily enough, it's been in my "read later" bookmarks for a while now and I just got around to reading it this morning before I saw this post.


pheelicks 3 days ago 1 reply      
Nice demo. I made a terrain rendering engine/demo in WebGL a few months back, that used Perlin noise: http://felixpalmer.github.io/lod-terrain/

If anyone wants to play around with Hunter's algorithm in WebGL, it should be pretty straightforward to swap out the Perlin Noise implementation for his. Note the shaders do a fractal sampling of the height map, so you may want to disable this.

huskyr 3 days ago 0 replies      
What I like most about this demo is that the code is actually very readable, and the blog article explains it very well. Most of the time the code for these kinds of demos looks like line noise :)
fogleman 4 days ago 1 reply      
Perlin noise is another good algorithm for terrain generation.
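
Perlin noise proper interpolates pseudo-random gradient vectors; a shorter cousin with a similar terrain-like look is value noise, which interpolates pseudo-random values fixed at lattice points and sums octaves. A minimal sketch (the hash constants and function names are my own choices, not taken from the article or any linked demo):

```javascript
// 2D value noise: pseudo-random values at integer lattice points,
// smoothly interpolated between them.
function valueNoise2D(x, y) {
  // deterministic lattice hash -> [0, 1); the constants are arbitrary
  const hash = (ix, iy) => {
    let h = (Math.imul(ix, 374761393) + Math.imul(iy, 668265263)) | 0;
    h = Math.imul(h ^ (h >>> 13), 1274126177);
    return ((h ^ (h >>> 16)) >>> 0) / 4294967296;
  };
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const fx = x - x0, fy = y - y0;
  const fade = t => t * t * (3 - 2 * t); // smoothstep: slopes match at lattice lines
  const lerp = (a, b, t) => a + (b - a) * t;
  const top = lerp(hash(x0, y0), hash(x0 + 1, y0), fade(fx));
  const bot = lerp(hash(x0, y0 + 1), hash(x0 + 1, y0 + 1), fade(fx));
  return lerp(top, bot, fade(fy));
}

// fractal Brownian motion: sum octaves at doubling frequency, halving amplitude
function fbm(x, y, octaves = 4) {
  let sum = 0, amp = 0.5, freq = 1;
  for (let o = 0; o < octaves; o++) {
    sum += amp * valueNoise2D(x * freq, y * freq);
    amp /= 2;
    freq *= 2;
  }
  return sum; // stays in [0, 1)
}
```

Summing octaves at doubling frequencies and halving amplitudes is what turns the smooth single-octave blobs into terrain-like detail, much as halving the displacement each pass does for midpoint displacement.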


elwell 4 days ago 1 reply      
blahpro 3 days ago 0 replies      
It'd be interesting to see an animation of the diamond/square iteration progressing in 3D, starting with a flat surface and ending with the finished terrain :)
twistedpair 3 days ago 1 reply      
Reminds me of the results easy to achieve with Bryce3D back in the mid 90's. They had a pretty great terrain engine. I don't think they're making Bryce any more. It would be great if they could release some of that code.
callumprentice 3 days ago 0 replies      
I made a quick first pass at an interactive WebGL version this evening. http://callum.com/sandbox/webglex/webgl_terrain/ - ground realism needs a bit of work :) but it was a lot of fun. Thanks for sharing your code Hunter.
the_french 4 days ago 1 reply      
Can this algorithm be run lazily? I.e., can you continue to generate continuous terrain using this technique, or do you need to generate the whole map ahead of time?
namuol 3 days ago 1 reply      
Brings me back to a ray casting experiment I did a while ago [1]. I always wanted to revisit it to include a terrain generation step (it uses a pregenerated height map). Now I have an excuse! ;)

[1] http://namuol.github.io/earf-html5

happywolf 3 days ago 0 replies      
For those who only want to look at the result(s)


Refreshing the page will generate a new terrain

rgrieselhuber 4 days ago 0 replies      
Reminds me of T'Rain.
nitrogen 4 days ago 0 replies      
This midpoint displacement algorithm is also how a lot of the "plasma" effects from 1990s-era PC demos were created.
galapago 3 days ago 0 replies      
zimpenfish 3 days ago 0 replies      
I remember implementing this on a Sam Coupe from the description in (either BYTE or Dr Dobbs, I forget) back in ~1987. Somewhat slower and lower resolution, of course...
good-citizen 3 days ago 1 reply      
After thinking about this one for a while, it occurred to me that this really helps illustrate the point of 'Life May Be A Computer Simulation'. Take this world creation a step further, and rather than teaching a computer how to create rocks, each one slightly different, imagine creating humans, each one slightly different. If you think about 'God' as just some alien programmer dude, it helps make so much sense of the world. How can a caring God let so many terrible things happen to us humans? Well, how much empathy do you feel about each rock in this program? When you click refresh, and create a whole new world, do you stop and think about all the existing rocks you are 'killing'? If we are living in a computer simulation, perhaps our creator doesn't even realize we are sentient?
hixup 3 days ago 0 replies      
I was playing with something similar a while ago. It's a procedurally generated overlay for Google Maps: http://dbbert.github.io/secondworld/
SteveDeFacto 4 days ago 0 replies      
Some of you might find this algorithm I created a few years ago interesting: http://ovgl.org/view_topic.php?topic=91JL96IHFS
sebnukem2 3 days ago 0 replies      
I think implementing parallel computing using webworkers would be a good item for the "What's Next" list of suggestions.
nijiko 4 days ago 0 replies      
You can simplify this even further by using frameworks like lodash / underscore or ES6 native methods.
brickmort 4 days ago 0 replies      
This is awesome!! nice job!
good-citizen 4 days ago 0 replies      
stuff like this makes me remember why I love programming
snodgrass23 4 days ago 0 replies      
Great tutorial on a fun topic!
CmonDev 3 days ago 0 replies      
Had to put "JavaScript" into the title - typical HN... It was about the algorithm rather than the language.
TheyCalledHimBo 4 days ago 0 replies      
I may just be a prick, but promoting this as a variant of the midpoint displacement algorithm for terrain generation would seem far less gimmicky than "X done in Y lines of Z." Whoop-dee freakin' do.

Still, cool algorithm.

I'm About as Good as Dead: The End of Xah Lee ergoemacs.org
315 points by craftsman  15 hours ago   289 comments top 52
wfjackson 14 hours ago 6 replies      
Edit: He seems to have a lot of interest in, and has done a lot of work on, good documentation, which is the bane of the typical OSS project. Why doesn't Red Hat/Google etc. throw some money at him to write docs for underdocumented OSS stuff? Sounds like a win-win for all. It's hard to find really smart developers who are interested in writing documentation. [End edit]

If you don't really know him, his LinkedIn profile shows more details about his work.


Xah Lee's Summary

Full stack web site development. Heavy backend experience + unix sys adimn. Seek startupish, small team engineers environment.

Accomplishment highlights:

Autodidact. High school dropout. No degree. Taught graduate math students at National Center for Theoretical Sciences, Taiwan. Invited speaker to Geometry And Visualization workshop, Tokyo Metropolitan University. Work cited in US Patent. Well-known open source contributor in emacs and LISP communities. Expert in: JavaScript, Perl, Python, PHP, Emacs Lisp, Mathematica, MyLinux, SQL, Second Life Linden Scripting Language. Each with at least ten thousand lines of code. (working knowledge Java)

Specialties: Design, code, entire system. Understand language, protocols, raw. Do not depends on frameworks/libs when unnecessary.


Xah's JavaScript Tutorial http://xahlee.info/js/js.html
Xah's {Python, Perl, Ruby} Tutorial http://xahlee.info/perl-python/index.html
Xah Emacs Lisp Tutorial http://ergoemacs.org/emacs/elisp.html
Xah's Java Tutorial http://xahlee.info/java-a-day/java.html
Xah Linux Tutorial http://xahlee.info/linux/linux_index.html
Xah's HTML5 Tutorial http://xahlee.info/js/index.html
Xah's CSS Tutorial http://xahlee.info/js/css_index.html
Programing Language Design http://xahlee.info/comp/comp_lang.html

Xah Lee's Experience

Author and Webmaster, XahLee.info, January 2007 to Present (7 years 5 months), San Francisco Bay Area

Creator and author of award-winning website http://xahlee.info/ , since 1997.

8 thousand visitors per day. 240 thousand visitors per month. 5 thousand HTML pages. Frequently cited in academic journals as well as online sites such as StackOverflow, Hacker News, Reddit, Wikipedia. Also cited by Microsoft TypeScript publication. (see list of citations below.)

Published more than 50 software. Have a look at

Xah's JavaScript Tutorial http://xahlee.info/js/js.html
Xah's {Python, Perl, Ruby} Tutorial http://xahlee.info/perl-python/index.html
Xah Emacs Lisp Tutorial http://ergoemacs.org/emacs/elisp.html
Xah's Java Tutorial http://xahlee.info/java-a-day/java.html
Xah Linux Tutorial http://xahlee.info/linux/linux_index.html
Programing Language Design http://xahlee.info/comp/comp_lang.html

Xah Lee's Projects

Programming Tutorial: Python, June 2006 to Present. Team Members: Xah Lee

andywood 12 hours ago 3 replies      
I am willing to 'out' myself as a human being who has struggled with serious mental illness my whole life, if there's a chance it could help this person. I have also had a long and prosperous career, including 5 years as a senior engineer and lead at Microsoft. These two things are very nearly orthogonal. I.e. my medical history has very little to do with my employment history.

But being compassionate does not require you to analyze this person's merits. The only thing to 'analyze' is that he begged you for help.

Also, please remember the hackers, family, and friends we have lost to suicide. People say "I wish we'd known. Maybe we could have helped." Well, you know; and there is a chance you could help.


mercer 13 hours ago 8 replies      
Cases like this, when the talented person in question is a geek who "doesn't interview well," always frustrate me.

Most of my professional success is a result of social skills and a lot of experience with wildly different social environments and cultures. While I do work hard at being actually good at what I do (because that's just more fun!), I could get pretty far as a cruddy developer.

And that just feels unfair, and frustrating. I do understand that social skills matter when you work for a company, in a team, but it often feels like they're way too important (evidence: incompetent people at top positions in companies who primarily excel at manipulation or are extremely socially skilled).

Of course, there's a difference between a geek who just lacks social skills, and a geek who is just not a nice person and also lacks social skills to hide this (or lacks social skills because of this). I've met my share of those.

But even then, it's hard to (fully) blame the person in question. I've met my fair share of arrogant, abrasive, unsympathetic, or misogynistic geeks who seem to have mostly become that as a reaction to having been bullied or ostracized. Often, it's self-defense or just inability, and quite frankly the main reason I didn't turn out that way was because I was in the right place at the right time, and I had wonderful peers who dragged me out of my isolation.

It just sucks. And I wish I could fix that. I can't imagine how frustrating it must be to know that you are competent, or even well above average, and still not get the jobs that downright incompetent people seem to have no trouble getting.

(I don't mean to justify 'bad' behavior, by the way, but to a degree understanding where it comes from allows me to still sympathize with, say, a racist.)

Beanis 12 hours ago 2 replies      
I interviewed/phone screened Xah Lee about 2 years ago.

I'd never heard of him, so I skimmed over his resume and checked out his site an hour or two before the call. I remember when I was looking at his site I was seeing various articles around math and programming, and then articles about things like "2 girls 1 cup". The breadth of articles completely threw me off. He seemed to have this unfocused interest in EVERYTHING. After looking over everything I could I had no idea how we'd be able to use him, but I was really interested in talking with him.

I went into the call expecting a lot of tangents about various topics he was passionate about and thinking I'd constantly have to refocus the conversation. Instead the conversation was pretty boring and not really going anywhere. He had breadth, but the depth was not there. At least not for the things I brought up. I couldn't understand how someone who seemed to be interested in everything could have no real interest in anything.

For the last couple of minutes we talked about his site, and how he maintains/updates it. He finally seemed to light up, and we hit on something he was really interested in talking about. The basics of the site looked like it was just a couple of scripts, a lot of static text files and some emacs. The setup might have been impressive in the mid-to-early 90's, but it wasn't relevant to anything we would have wanted/needed.

The call lasted maybe 20 minutes, and then we wrapped it up. Every topic, other than talking about his site, was a dead-end. My internal feedback at the end was: "No! Maybe... if we were trying to hire encyclopedia writers".

I think his main problem with interviews is that his real-life personality doesn't even come close to his online personality. If I would have gone in expecting the standard slightly-awkward developer interview, things might have gone better. I still would have said no, but it would have been a weaker no.

terhechte 14 hours ago 2 replies      
Xah, if you're reading this: Based on the comments on your page it is not clear whether you can or can't accept money via Paypal. Your Emacs resources have always been very helpful to me. I'd be willing to send you some money. Do you happen to have a Bitcoin address that I could send something to?
louhike 14 hours ago 0 replies      
At the bottom of the page:

"if you can help, paypal to xah@xahlee.org

buy my tutorial:

Buy Xah Emacs Tutorial (http://ergoemacs.org/emacs/buy_xah_emacs_tutorial.html)

Xah's JavaScript Tutorial (http://xahlee.info/js/js.html)

Xah's {Python, Perl, Ruby} Tutorial (http://xahlee.info/perl-python/index.html)

or buy my entire xahlee.info site content for $50. see Xah Code

Xah Lee's Resume

or, send paypal to this my previous effort to something similar to a kickstart. https://pledgie.com/campaigns/19973

i won't actually be able to draw money from paypal, due to bank/IRS problem. So

better, send your check to:

Huynh Kang at huynh.kang facebook (https://www.facebook.com/huynh.kang) or huynh-kang linkedin (http://www.linkedin.com/pub/huynh-kang/24/8b/535)

professor Richard Palais, home page at University of California Irvine http://vmm.math.uci.edu/, Wikipedia Richard Palais, richard-palais on linkedin"

brudgers 12 hours ago 2 replies      
Writing is thinking and one person's philosophical pondering can be another's trolling, e.g. this post.

Sure, trolling is sometimes just being an asshat. But it's also a way of creating diversity. An attempt to deprogram members of a cargo cult. It can be fighting groupthink and ideological tribalism. I've come to think about trolling as often being an expression of a desire to write - its very essence is writing something that didn't need to be written and doing so in a way that's tailored to one's audience. Just as sarcasm can be a low form of wit, trolling can be a low form of literature.

If everyone who trolled the Usenet or a mailing list or an online forum was condemned for it, who would be left, and would anyone want to read what they had to say?

Xah Lee is a person whose particular genius doesn't fit well into a prefabricated category. But his website expresses a genuine desire to help others by sharing what he knows - and what he knows, he knows really well. There but for the grace of god, go I.

throwaway283719 14 hours ago 5 replies      
He could start by rewriting his resume. This in particular stands out -

  Accomplishment highlights:
  * Autodidact. High school dropout. No degree.
To almost any employer, that is not an accomplishment, much less a highlight! If you're a high school dropout who is very accomplished since then, just silently drop any mention of your education from your resume. Everyone will assume that you left it off because it's irrelevant given all your experience since then.

If the first paragraph of your resume calls attention to the fact that you didn't even graduate high school, it sets a bad tone for the rest. Many people will throw your resume away without reading further.

saalweachter 12 hours ago 0 replies      
Reading the blog/comments I don't think a development job is what he needs/wants. I think what he really needs is a patron.

So why not Patreon?

Some people find value in the content he creates, and his needs are modest. It is not implausible that enough people are willing to chip in to keep him going, and Patreon provides a way to do that.

rando289 14 hours ago 0 replies      
Xah's website has been an amazing resource for emacs users, me included. I really hope someone can help him out a bit.
muyuu 12 hours ago 1 reply      
What in the f*.

I've been following this guy since a bit before Orkut/Tribe days when he had full hair (~2003?). He's been very inspirational. At first I thought he was a bit of a jerk and a troll (not the right place or time to elaborate on this, but well this was long ago so it doesn't matter) but when I saw him take on his education in his 30s I was quite impressed with what he achieved.

I think this guy is top drawer and seems hard working. I have no idea what happened to him, although I do guess his peculiar character has a lot to do with it.

rafekett 12 hours ago 1 reply      
Xah needs to seek help for his mental illness. Look at his posts on comp.lang.emacs for evidence. He's been spending all these years on $3 a day trying to figure out a more "ergonomic" keyboard for emacs, but many of his suggestions are really just based on what he wants and are less ergonomic than standard emacs.

EDIT: comp.lang.emacs doesn't exist, it's been many years. gnu.emacs.BLAH

userbinator 13 hours ago 2 replies      
Did he just post a screenshot of his bank account with the session key visible in the URL!?

(I know it's probably not too significant and there are other checks in place, but I found it rather ironic when the text right above it happens to be "I try to be VERY VERY careful".)

cognivore 14 hours ago 2 replies      
Is there context here I'm missing? The screen shot with the legal fee leads me to believe so, but I can't sort it.
moistgorilla 14 hours ago 0 replies      
Props for asking for help.
thegeomaster 14 hours ago 2 replies      
I don't think anyone should judge him based on the small amount of information available. I understand that his situation is mostly his fault, but there may be other things at play. Why be so fast to condemn someone when an unfortunate sequence of events could've put some of us into a similar situation? I'm not defending Xah Lee's irresponsibility nor I am defending people who condemn him, I just want to say that we are all imperfect, after all, and these kinds of things can and do happen because of that fact.
BadassFractal 8 hours ago 0 replies      
The problem with someone like Xah is that you don't quite know what you're getting yourself into.

Why has he not worked for a decade, and what reassurances are there that he will be able to be productive? There's no chain of trust that would prove that you're not hiring some kind of a ticking timebomb. There's no proof that he's up to speed with any tech since the 90s. There's no proof that he'd not be potentially very quirky and difficult to work with in a team environment. It's a very difficult position to be hired from, and it mostly has to do with the fact that he "let himself go" for a long time.

If you have the option between hiring someone good and "someone potentially good, but also very unpredictable, high-risk and likely to be a long-term project", you'd go with the former every single time, it's a no-brainer.

partisan 14 hours ago 1 reply      
I can empathize and I feel bad for anyone in what feels like a helpless and hopeless situation. I sometimes feel like I am there as well despite having "enough" money for a rainy season or two.

Some people need to hit rock bottom to even begin to understand that there is light at the end of the tunnel. Manual labor, however humiliating and seemingly beneath him, may give him the proper motivation he seems to be lacking.

seansoutpost 14 hours ago 1 reply      
Xah, it really sounds like you need a bitcoin address. Coinbase.com (YC S12) is a great place to start. Once you have some coins, you will find no shortage of people in the bay area willing to buy them off of you.

I would even be willing to run a btc fundraiser on your behalf. I don't have a lot of rep on here, but it's pretty solid in the bitcoin space. Google Jason King Bitcoin or Sean's Outpost

Best of luck, man. Stay strong.

deadghost 14 hours ago 2 replies      
I've come across his blog multiple times when I started with emacs and he seems like a cool guy(anyone that can spoon raw oatmeal into his mouth like it's the greatest thing ever is cool by my book).

It's great he's asking for help instead of showing up on the front page as another dead hacker.

kayyyl 13 hours ago 0 replies      
If you really think you can code, Fiverr (http://www.fiverr.com/) is the place to go. There are many little jobs for you. Do 5 per day; let's do it now, mate.
ww520 3 hours ago 0 replies      
It's saddening to see fellow hackers falling on hard times. I'm going to Palo Alto next week. I'll swing by Mt. View to see how I can help out.
auganov 13 hours ago 0 replies      
Wow, that's pretty shocking. As an emacs user I'd often stumble upon his articles. Even bought an emacs autohotkey-mode from him. It's so strange to see a person you thought was this amazing genius that surely did well for himself struggle like that. Hope you take care of your problems, Xah.
ThinkBeat 7 hours ago 0 replies      
Standing at the precipice, about to lose your home, your possessions, and the things you have tried to hold on to, is an extremely traumatic affair.

It's very hard for someone honest and hard-working in our world to admit to having problems and ask for help from the general population.

He has done good things, and he is a fellow human who needs help. Leave it at that.

Who cares if he has upset some people on usenet. You think it's some kinda karma? Wut? Troll the net, lose your house?

If there are trolls on the net that should be taken care of, it is the assholes who humiliate and denigrate a man in his hour of need, when he is down and humiliated. That takes a lot more of an asshole than having spirited discussions regarding technical topics on usenet.

peterwwillis 14 hours ago 4 replies      
He needs to find a social services office that can help him get organized and take care of his personal responsibilities. It's completely his fault that he's in this mess, but he probably has some kind of personal/mental health problem that requires assistance.

The one thing nobody should do is simply send him money or give him a job, since there's nothing on this page that indicates he won't go back to exactly the same thing he's been doing for the past 10 years.

(Also, 1600 rent? what the fucking fuck? dude needs to go sleep in a shelter and use that money to get a thrift store interview suit and pay off his bills)

nkozyra 14 hours ago 0 replies      
I don't know that this is necessarily a good idea when the IRS has a lock on your account(s), either:

>> better, send your check to: [snip]

chaired 12 hours ago 0 replies      
1 ) Perhaps this could be the start of Xah Lee learning some social skills. If so, good on him. He'll go a lot further with them.

2 ) Surely someone in this community can offer him a job writing documentation, at the least? Think of it as charity if you must, but he will probably deliver good value, and he would probably be happy to work cheap, if you're into that sort of thing.

3 ) I will donate $, since that's what I can do from where I am.

cenazoic 13 hours ago 0 replies      
Sacha Chua recently did one of her "Emacs Chats" with Xah Lee:


danso 14 hours ago 2 replies      
Almost sent the OP money through PayPal, but I saw this at the bottom of the post:

> i won't actually be able to draw money from paypal, due to bank/IRS problem.

Which is confusing because in a few lines above, this is written:

> if you can help, paypal to xah@xahlee.org

Can someone who knows the OP provide more context to all of this?

garretraziel 13 hours ago 0 replies      
This is most unfortunate. As an Emacs user, I have read a lot of his tutorials and they are very good; I will consider buying one of them (but I don't understand; does or doesn't he accept money through paypal?).

On a side note, he should really edit that bank account screenshot. These images really look like they were uploaded to imgur. And if you type those filenames into imgur...

octopus 11 hours ago 0 replies      
The best thing someone can do for this guy is to help him file his tax returns retroactively and create a fund for him to write an Emacs book, for example. He seems quite competent writing about Emacs, so it should be right up his alley.

Maybe someone from Apress, Pragmatic or Packt Publishing can contact him about an Emacs book.

Globz 12 hours ago 0 replies      
From the python mailing list:

Xah Lee wrote:

> What does a programer who wants to use regex gets out from this piece of motherfking irrevalent drivel?

> Any resume that ever crosses my desk that includes 'Xah Lee' anywhere in the name will be automatically trashed.

-rbt at athop1.ath.vt.edu

driverdan 14 hours ago 2 replies      
> why i didn't seek job all these years? well, i can only say i procrastinate and is ok living on a dime.

Why would anyone give this guy money? It sounds like his money woes are all his fault. He's just begging online rather than begging in the street.

fharper1961 11 hours ago 0 replies      
Am I the only one who thinks that the IRS should not be able to make someone who is mentally fragile homeless, just because he hasn't filed his taxes? It seems very inhumane to me.
pawelkomarnicki 13 hours ago 1 reply      
WTF? "why i didn't seek job all these years? well, i can only say i procrastinate and is ok living on a dime."
methehack 12 hours ago 0 replies      
I sent him $10.
BryanBigs 12 hours ago 0 replies      
I feel for him - he seems to be suffering both mentally and physically. But I have a hard time believing he hasn't legally had to file his taxes for the past 10 years. At the very least, if he really has only had $1k in income (which isn't credible - he's showing $700 of income on that bank statement alone) he would have gotten a few hundo from the EITC - which, when you only make $1k, is a big deal. And admitting you didn't file for 10 years isn't going to help him going forward either. Hate to see someone in pain - but this really looks self-inflicted.
geetee 13 hours ago 0 replies      
I couldn't help but see what tqOSxqI.jpg is on imgur. NSFW
arjn 12 hours ago 0 replies      
What about an Indiegogo campaign to help this guy out?
logfromblammo 13 hours ago 1 reply      
Sounds like the guy needs a friend more than he needs a job.
stefap2 10 hours ago 0 replies      
"well, i can only say i procrastinate"

Now there is my motivation to end the lunch break earlier.

chj 13 hours ago 0 replies      
I read some of his emacs related posts before, not that pleasant to read, but can be very helpful.
fantomass 12 hours ago 0 replies      
If you look at Xah's website with the eyes of a business owner, it could be a problem that it covers a lot of exotic stuff that is almost never a subject in small to medium IT companies, but is written and fostered with great passion and care.

So there might be concerns about whether he would fit into a company with all its mundane day-to-day problems.

I think it would be helpful if he could do an internship or some sort of program where he can prove his capability to do "boring" stuff, follow the orders of his boss, and work in teams.

Once this is proven, he might have better chances.

yiedyie 12 hours ago 0 replies      
If anything, this story tells how much hackers need to adapt to business as usual. So much for the power to change.
roghummal 9 hours ago 0 replies      
Someone get this man some vim, stat.
itsameta4 11 hours ago 0 replies      
Go. On. Welfare.


graycat 14 hours ago 0 replies      
There may be some government and/or privately funded social services in the area that could help him, e.g., emergency rent money, housing, food, counseling.

His Web site with 240 K visitors a month, with some ads, should be enough to help him significantly.

yung_ether 14 hours ago 6 replies      
Who would hire a 45 year old programmer? Certainly good as dead. Have fun at the laundry.
jqm 12 hours ago 1 reply      
This kind of case is why I don't think guaranteed income will work. Some people won't even be able to manage that money. We have to find a way to ensure people have the basics without money (in their hands anyway). The salvation army type/soup kitchen programs aren't it. No dignity, 5000 other guys (a significant percentage of which would slit your throat for a rock), an underlying "join the cult" theme.... no, that environment doesn't make things better. Basic privacy and dignity are needs too. And if someone is going to become productive they need access to basic tools like a computer also. If people like Lee could forget about money and work on what they love I think the world would be the better for it. Or else a lot more trolling. IDK...

I think what we are seeing here is similar to heroin addiction or alcoholism. But this is an internet 24-7/ trolling rush/ porn addiction that makes it hard to live in the real world. I bet his isn't the only case either. I recommend a three month cold turkey session in the woods of Canada this summer. No net. Up at dawn chopping wood. Real face to face interaction most of the day. Fix this poor fellow right up and give him a new lease on life. I do feel for his case... in the same way someone who drinks a little more than they should might feel for homeless drunk passed out on the sidewalk.

phkahler 14 hours ago 3 replies      
So you claim to be a decent programmer, but haven't been working a traditional job in years because why? You claim to be smart but don't know the value of maintaining a basic financial safety net. It won't be hard to bail you out, but this should not be happening in the first place. Let it be a lesson to others. Or is this just a fake?
bestest 14 hours ago 6 replies      
I simply can't resist it, but alas, see what will happen to you if you code in emacs!

On a serious note, though: he did admit he enjoys procrastinating. Why would I help someone who never had, and still does not have, any motivation?

Computers are fast jvns.ca
296 points by bdcravens  3 days ago   153 comments top 23
nkurz 3 days ago 5 replies      
1/4 second to plow through 1 GB of memory is certainly fast compared to some things (like a human reader), but it seems oddly slow relative to what a modern computer should be capable of. Sure, it's a lot faster than a human, but that's only 4 GB/s! A number of comments here have mentioned adding some prefetch statements, but for linear access like this that's usually not going to help much. The real issue (if I may be so bold) is all the TLB misses. Let's measure.

Here's the starting point on my test system, an Intel Sandy Bridge E5-1620 with 1600 MHz quad-channel RAM:

  $ perf stat bytesum 1gb_file
  Size: 1073741824
  The answer is: 4

  Performance counter stats for 'bytesum 1gb_file':
        262,315 page-faults               #    1.127 M/sec
    835,999,671 cycles                    #    3.593 GHz
    475,721,488 stalled-cycles-frontend   #   56.90% frontend cycles idle
    328,373,783 stalled-cycles-backend    #   39.28% backend  cycles idle
  1,035,850,414 instructions              #    1.24  insns per cycle
    0.232998484 seconds time elapsed
Hmm, those 260,000 page-faults don't look good. And we've got 40% idle cycles on the backend. Let's try switching to 1 GB hugepages to see how much of a difference it makes:

  $ perf stat hugepage 1gb_file
  Size: 1073741824
  The answer is: 4

  Performance counter stats for 'hugepage 1gb_file':
            132 page-faults               #    0.001 M/sec
    387,061,957 cycles                    #    3.593 GHz
    185,238,423 stalled-cycles-frontend   #   47.86% frontend cycles idle
     87,548,536 stalled-cycles-backend    #   22.62% backend  cycles idle
    805,869,978 instructions              #    2.08  insns per cycle
    0.108025218 seconds time elapsed
It's entirely possible that I've done something stupid, but the checksum comes out right, and the 10 GB/s read speed is getting closer to what I'd expect for this machine. Using these 1 GB pages for the contents of a file is a bit tricky, since they need to be allocated off the hugetlbfs filesystem that does not allow writes and requires that the pages be allocated at boot time. My solution was to run one program that creates a shared map, copy the file in, pause that program, and then have the bytesum program read the copy that uses the 1 GB pages.

Now that we've got the page faults out of the way, the prefetch suggestion becomes more useful:

  $ perf stat hugepage_prefetch 1gb_file
  Size: 1073741824
  The answer is: 4

  Performance counter stats for 'hugepage_prefetch 1gb_file':
            132 page-faults               #    0.002 M/sec
    265,037,039 cycles                    #    3.592 GHz
    116,666,382 stalled-cycles-frontend   #   44.02% frontend cycles idle
     34,206,914 stalled-cycles-backend    #   12.91% backend  cycles idle
    579,326,557 instructions              #    2.19  insns per cycle
    0.074032221 seconds time elapsed
That gets us up to 14.5 GB/s, which is more reasonable for a a single stream read on a single core. Based on prior knowledge of this machine, I'm issuing one prefetch 512B ahead per 128B double-cacheline. Why one per 128B? Because the hardware "buddy prefetcher" is grabbing two lines at a time. Why do prefetches help? Because the hardware "stream prefetcher" doesn't know that it's dealing with 1 GB pages, and otherwise won't prefetch across 4K boundaries.

What would it take to speed it up further? I'm not sure. Suggestions (and independent confirmations or refutations) welcome. The most I've been able to reach in other circumstances is about 18 GB/s by doing multiple streams with interleaved reads, which allows the processor to take better advantage of open RAM banks. The next limiting factor (I think) is the number of line fill buffers (10 per core) combined with the cache latency in accordance with Little's Law.

exDM69 3 days ago 2 replies      
I posted the following as a comment to the blog, I'll duplicate it here in case someone wants to discuss:

This program is so easy on the CPU that it should be entirely limited by memory bandwidth and the CPU should be pretty much idle. The theoretical upper limit ("speed of light") should be around 50 gigabytes per second for modern CPU and memory.

In order to get closer to the SOL figure, try adding hints for prefetching the data closer to the CPU. Use mmap and give the operating system hints to load the data from disk to memory using madvise and/or posix_fadvise. This should probably be done once per big chunk (several megabytes) because the system calls are so expensive.

Then try to make sure that the data is as close to the CPU as possible, preferably in the first level of the cache hierarchy. This is done with prefetching instructions (the "streaming" part of SSE that everyone always forgets). For GCC/Clang, you could use __builtin_prefetch. This should be done several cache lines ahead, because the time to actually process the data should be next to nothing compared to fetching stuff from the caches.

Because this is limited by memory bandwidth, it should be possible to do some more computation for the same price. So while you're at it, you can compute the sum, the product, a CRC sum, a hash value (perhaps with several hash functions) at the same cost (if you count only time and exclude the power consumption of the CPU).

personalcompute 3 days ago 4 replies      
I particularly enjoyed the writing style in this article, largely because of the extent to which the author provided unverified and loose figures - CPU-time distributions etc. In my experience, people are usually extremely hesitant to publish any uninformed, fast, and incomplete conclusions despite them being, in my opinion, still extremely valuable. It may not be perfectly correct, but a small conclusion like that is often much better than the practically nonexistent data I start off with, and it lets me read the article far faster than slowing down to make those minor fuzzy conclusions myself. There is a misconception that when writing you can do only two things: state a fact or say something false. In reality it is a gray gradient, and when the reader starts off knowing nothing, that gray is many times superior. Anyways, awesome job - I really want to see more of this writing style in publications like personal blogs.

[In case it isn't clear, I'm referring to statements like "So I think that means that it spends 32% of its time accessing RAM, and the other 68% of its time doing calculations.", and "So we've learned that cache misses can make your code 40 times slower." (comment made in the context of a single non-comprehensive datapoint)]

krick 3 days ago 6 replies      
Pretty naïve; I'm surprised to see it here. Not that this is a pointless study, but it's pretty easy to guess these numbers if you know roughly how long it takes to use a register, L1, L2, RAM, a hard drive (and you should). And exactly how long it would take is a task-specific question, because it depends on what optimization techniques can and cannot be used for the task, so unless you are interested specifically in summation mod 256, this information isn't of much use, as "processing" is much broader than "adding modulo 256".

But it's nice that somewhere somebody else understood that computers are fast. Seriously, no irony here. Because it's about time for people to realize what a disastrous world modern computing is. I mean, your home PC processes gigabytes of data in a matter of seconds; the amount of computation (relative to its cost) it is capable of would have driven some scientist 60 years ago crazy, and it gets wasted. It's the year 2014 and you have to wait for your computer. It's so much faster than you, but you are waiting for it! What an irony! You don't even want to add up a gigabyte of numbers, you want to close a tab in your browser or whatever, and there are quite a few processes running in the background that actually have to be running right now to do something useful - unfortunately the OS doesn't know which. Unneeded data is cached in RAM and you wait while the OS fetches a memory page from the HDD. But, well, after 20 layers of abstraction it's pretty hard to do only useful computations, so you make your user wait while you finish some computationally simple stuff.

About every time I write code I feel guilty.

chroma 3 days ago 0 replies      
For an in-depth presentation on how we got to this point (cache misses dominating performance), there's an informative and interesting talk by Cliff Click called A Crash Course in Modern Hardware: http://www.infoq.com/presentations/click-crash-course-modern...

The talk starts just after 4 minutes in.

dbaupp 3 days ago 1 reply      
Interesting investigation!

I had an experiment with getting the Rust compiler to vectorise things itself, and it seems LLVM does a pretty good job automatically, e.g. on my computer (x86-64), running `rustc -O bytesum.rs` optimises the core of the addition:

  fn inner(x: &[u8]) -> u8 {
      let mut s = 0;
      for b in x.iter() {
          s += *b;
      }
      s
  }

  .LBB0_6:
      movdqa  %xmm1, %xmm2
      movdqa  %xmm0, %xmm3
      movdqu  -16(%rsi), %xmm0
      movdqu  (%rsi), %xmm1
      paddb   %xmm3, %xmm0
      paddb   %xmm2, %xmm1
      addq    $32, %rsi
      addq    $-32, %rdi
      jne     .LBB0_6
I can convince clang to automatically vectorize the inner loop in [1] to equivalent code (by passing -O3), but I can't seem to get GCC to do anything but a byte-by-byte tranversal.

[1]: https://github.com/jvns/howcomputer/blob/master/bytesum.c

userbinator 3 days ago 2 replies      
I wrote a new version of bytesum_mmap.c [...] and it took about 20 seconds. So we've learned that cache misses can make your code 40 times slower

What's being benchmarked here is not (the CPU's) cache misses, but a lot of other things, including the kernel's filesystem cache code, the page fault handler, and the prefetcher (both software and hardware). The prefetcher is what's making this so much faster than it would otherwise be if each one of those accesses were a full cache miss. If cache misses only made code 40 times slower, performance profiles would be very different than they are today!

Here are some interesting numbers on cache latencies in (not so) recent Intel CPUs:


I'm also kind of amazed by how fast C is.

For me, one of the points that this article seems to imply is that modern hardware can be extremely fast, but in our efforts to save "programmer time", we've sacrificed an order of magnitude or more of that.

ChuckMcM 3 days ago 0 replies      
Nice. I remember the first time I really internalized how fast computers were, even when people claimed they were slow. At the time I had a "slow" 133MHz machine, but we kept finding things it was doing that it didn't need to, and by the time we had worked through those, there it was, idling a lot while doing our task.

The interesting observation is that computers got so fast so quickly, that software is wasteful and inefficient. Why optimize when you can just throw CPU cycles or memory at the problem? What made that observation interesting for me was that it suggested the next 'era' of computers after Moore's law stopped was going to be about who could erase that sort of inefficiency the fastest.

I expect there won't be as much time in the second phase, and at the end you'll have approached some sort of limit of compute efficiency.

And hats off for perf, that is a really cool tool.

mrb 3 days ago 2 replies      
The author's SSE code is a terribly overcomplicated way of summing up every byte. The code is using PMADDW (a multiply and add?!), and is strangely trying to interleave hardcoded 0s and 1s into registers with PUNPCKHBW/PUNPCKLBW, huh?

All the author needs is PADDB (add packed bytes).

bane 3 days ago 3 replies      
It's pretty clear that we're wasting unbelievably huge amounts of computing power with the huge stacks of abstraction we're towering on.

So let's make this interesting, assuming a ground up rewrite of an entire highly optimized web application stack - from the metal on up, how many normal boxes full of server hardware could really just be handled by one? 2? a dozen?

I'd be willing to bet that a modern machine with well written, on the metal software could outperform a regular rack full of the same machines running all the nonsense we run on today.

Magnified over the entire industry, how much power and space are being wasted? What's the dollar amount on that?

What's the developer difference to accomplish this? 30% time?

What costs more? All the costs of potentially millions of wasted machines, power and cooling or millions of man hours writing better code?

cessor 3 days ago 3 replies      
I like the "free" style of the article. Here is another conclusion: In my professional life I have heard many, many excuses in the name of performance. "We don't need the third normal form, after all, normalized databases are less performant, because of the joins". Optimizing for performance should not mean to make it just as fast as it could possibly run, but to make it just fast enough.

Julia's article shows a good example of this. Of course, the goal appears to be to develop a feeling for what tends to make a program fast or slow, and for how slow it will be or how fast it can get; yet I'd like to point out that this...


... might be 0.1 seconds faster than the original code when started as "already loaded into RAM", which she claims runs at 0.6 seconds. Yet this last piece of code is way more complicated and harder to read. Code like this

Line 11: __m128i vk0 = _mm_set1_epi8(0);

might be idiomatic, fast, and give you a great sense of mastery, but you can't even pronounce it, and its purpose does not become clear in any way.

Writing the code this way may make it faster, but that makes it 1000x harder to maintain. I'd rather sacrifice 0.1 seconds running time and improve the development time by 3 days instead.

chpatrick 3 days ago 2 replies      
It's 1.08s on my computer for one line of Python, which is respectable:

  $ python2 -m timeit -v -n 1 -s "import numpy" "numpy.memmap('1_gb_file', mode='r').sum()"
  raw times: 1.08 1.09 1.08

sanxiyn 3 days ago 3 replies      
I wonder why GCC does not autovectorize the loop in bytesum.c even with -Ofast. With autovectorizer, GCC should make the plain loop as fast as SIMD intrinsics. Autovectorizer can't handle complex cases, but this is as simple as it can get.

Anyone have ideas?

infogulch 3 days ago 1 reply      
Nice writeup! I like how even simplistic approaches to performance can easily show clear differences! However! I noticed you use many (many!) exclamation points! It gave me the impression that you used one too many caffeine patches! [1]

[1]: https://www.youtube.com/watch?v=UR4DzHo5hz8

zokier 3 days ago 1 reply      
> So I think that means that it spends 32% of its time accessing RAM, and the other 68% of its time doing calculations

Not sure you can draw such a conclusion, actually, because of pipelining etc. I'd assume that the CPU is doing memory transfers simultaneously with the calculations.

I also think that only the first movdqa instruction is accessing RAM, the others are shuffling data from one register to another inside the CPU. I'd venture a guess that the last movdqa is shown taking so much time because of a pipeline stall. That would probably be the first place I'd look for further optimization.

On the other hand, I don't have a clue about assembly programming or low-level optimization, so take my comments with a chunk of salt.

userbinator 3 days ago 1 reply      
One of the things I've always wanted is autovectorisation by the CPU - imagine if there was a REP ADDSB/W/D/Q instruction (and naturally, repeated variants of the other ALU operations.) It could make use of the full memory bandwidth of any processor by reading and summing entire cache lines the fastest way the current microarchitecture can, and it'd also be future-proof in that future models may make this faster if they e.g. introduce a wider memory bus. Before the various versions of SSE there was MMX, and now AVX, so the fastest way to do something like sum bytes in memory changes with each processor model; but with autovectorisation in hardware, programs wouldn't need to be recompiled to take advantage of things like wider buses.

Of course, the reason why "string ALU instructions" haven't been present may just be because most programs wouldn't need them and only some would receive a huge performance boost, but then again, the same could be said for the AES extensions and various other special-purpose instructions like CRC32...

cgag 3 days ago 0 replies      
The rest of her blog is great as well, I really like her stuff about os-dev with rust.
enjoy-your-stay 3 days ago 0 replies      
The first time I realised how fast computers could be was when I first booted up BeOS on my old AMD single core, probably less than 1Ghz machine.

The thing booted in less than 10 seconds and performed everything so quickly and smoothly - compiling code, loading files, playing media and browsing the web (dial up modem then).

It performed so unbelievably well compared to Windows and even Linux of the day that it made me wonder what the other OSes were doing differently.

Now my 4 core SSD MacBook pro has the same feeling of raw performance, but it took a lot of hardware to get there.

thegeomaster 3 days ago 0 replies      
Anyone notice how the author is all excited? Got me in a good mood, reading this.
tejbirwason 3 days ago 0 replies      
Great post. If you want to dig in even deeper, you can learn certain nuances of the underlying assembly language: loop unrolling, reducing the number of memory accesses, or cutting the number of branch instructions per loop iteration by rewriting the loop, rearranging instructions, or changing register usage to reduce the dependencies between instructions.

I took a CPSC course last year and for one of the labs we improved the performance of fread and fwrite C library calls by playing with the underlying assembly. We maintained a leader board with the fastest times achieved and it was a lot of fun to gain insight into the low level mechanics of system calls.

I dug up the link to the lab description - http://www.ugrad.cs.ubc.ca/~cs261/2013w2/labs/lab4.html

hyp0 3 days ago 1 reply      

  I timed it, and it took 0.5 seconds!!!  So our program now runs twice as fast,
Minor typo above: the time is later stated as 0.25. Super neat!

okso 3 days ago 1 reply      
Naïve Python 3 is not as fast as NumPy, but pretty elegant:

  def main(filename):
      d = open(filename, 'rb').read()
      result = sum(d) % 256
      print("The answer is: ", result)

sjtrny 3 days ago 0 replies      
But not fast enough
Glenn Greenwald: The NSA tampers with US-made routers theguardian.com
284 points by not_dirnsa  4 days ago   136 comments top 18
perlpimp 4 days ago 6 replies      
So RMS was right after all: OpenSource gives you visible security, where proprietary products are encumbered with all sorts of unwanted and even dangerous "features".

my 2c

slacka 4 days ago 2 replies      
I am not surprised by the hypocrisy of the US government here, but where is the proof? He doesn't directly link to the June 2010 report to back his claims. While I trust him, the critical thinker in me despises not being able to check sources.

> Yet what the NSA's documents show is that Americans have been engaged in precisely the activity that the US accused the Chinese of doing.

Only points to the generic page http://www.theguardian.com/world/the-nsa-files - couldn't he be more specific?

middleclick 4 days ago 5 replies      
Is anything safe? I mean, at this point, would it be too much to assume, given that the NSA has so much brainpower (mathematicians) working for them, that they have already cracked most encryption schemes we trust? I am not being a conspiracy theorist, I am genuinely curious.
suprgeek 4 days ago 2 replies      
"The NSA has been covertly implanting interception tools in US servers heading overseas..."

Which is somewhat OK, given the NSA's charter.

The more interesting question: is this limited to "US servers heading overseas"? I mean, we already know that the NSA intercepts laptops, keyboards, and such routinely for special "people of interest" within the US. Does it do the same, i.e. routinely and indiscriminately bug routers, even within the US?

resu 4 days ago 8 replies      
So stay away from routers that are Made in China and Made in USA - what's left?

Is there a country small enough not to have a world-domination agenda, yet large enough not to be swayed by bullying from the U.S., China, etc.? It's time to start a router manufacturing business there...

xacaxulu 4 days ago 0 replies      
The NSA continues to undermine US businesses, further isolating us from the rest of the world.
SeanDav 4 days ago 4 replies      
Perhaps software and virtual routers are the way to go, especially if any are open source. It would be great if someone with knowledge in this domain could comment on this.
backwardm 4 days ago 3 replies      
I'm curious to know if using a different firmware would be a valid way to secure a (potentially compromised) router, or is this kind of tampering done at the hardware levelin some hidden part of a microprocessor?
brianbarker 4 days ago 0 replies      
So essentially the NSA warned us about China tampering with hardware because they knew how it could be done. They just forgot to mention they'd been doing it already.
Htsthbjig 4 days ago 1 reply      
Remove the "Patriot Act", or the fascist legal obligation forcing any American to collaborate with three-letter agencies.

It turns any American worker into a spy for the government.

mschuster91 4 days ago 2 replies      
Well, the NSA tampering here at least doesn't happen in the factories...
Sami_Lehtinen 4 days ago 0 replies      
When you register a WatchGuard firewall it asks all kinds of questions which are absolutely strategic: what kind of data it is used to protect, whether you are in the tech or military business, etc. And you won't even be able to use it without registration. And they call it a security appliance. Lol. How about honestly calling it a spy appliance.
cheetahtech 4 days ago 1 reply      
Just read something else he pushed.

He used some pretty strong words against the politicians.

He calls Hillary a neocon and corrupt, but guesses she will win the next election. Page 5. http://www.gq.com/news-politics/newsmakers/201406/glenn-gree...

strgrd 3 days ago 0 replies      
I can't help but think Intel has something to do with this mission.

I mean think about how many hundreds of thousands of consumer computers come with Intel AMT vPro by default.

angersock 4 days ago 1 reply      
I'm watching to see if CSCO takes a hit from this--so far, doesn't seem to be a big issue.

It's not like this is surprising, as such; it's just really bad that these chucklefucks got caught doing it.

(Yes, it's arguably morally wrong and so on, but just from a purely economic perspective, bad show.)

zby 4 days ago 0 replies      
"surveillance competition"!
Zigurd 4 days ago 0 replies      
If you wanted to build an Internet product that could be trusted internationally where and how would you build it?

Unfortunately it looks like one part of the answer that's known is "not in the US."

We have only begun to feel the effects of this massive violation of trust. Unless trust can be restored, the US will become techno-provincial and only trustable with unimportant technologies like entertainment products.

jrockway 4 days ago 2 replies      
Greenwald is back at the Guardian? I thought he left to do his own thing.
Creative Cloud outage leaves Adobe users unable to work macuser.co.uk
278 points by danso  1 day ago   215 comments top 27
nnq 23 hours ago 6 replies      
My recipe for dealing with "cranky" proprietary software like this:

Step 1. Buy CC subscription and install what you need.

Step 2. Look for a good patch/crack that makes everything work offline, and that still allows you to update.

Step 3. Make peace with the risk of having installed some possible malware on your machine with the patch/crack (i.e. do the sensible thing of doing your shopping and e-banking on the other dedicated machine you only use for this).

Step 4. Stop caring that step 2 is illegal and get on with your life, you paid for the damn thing and nobody will really sue you for using it in a way that breaks the damn EULA anyway...

e12e 1 day ago 6 replies      
And this is why I won't trust SaaS that doesn't provide a viable self-host solution (which for practical purposes tends to mean Free software, although I suppose "binary" only self-contained jars might be a realistic alternative). And also why I can't see myself selling/working on such a solution without providing some form of viable/realistic exit strategy/alternative.

With traditional apps, you run the risk of eg: your laptop crashing/being stolen -- but if you need to work, you can just go and pick up a new laptop, burn an hour or so reinstalling your application(s) -- and hopefully get your work done by the deadline. With a self-hostable SaaS, you can spin up a vps/dedicated server and install, maybe even in less time -- but with "closed" SaaS -- you have no option.

Of course, with all (high-bandwidth) SaaS-solutions, network access becomes a single point of failure.

peterkelly 21 hours ago 1 reply      
"A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable."

- Leslie Lamport

bitL 19 hours ago 0 replies      
I find it funny that my graphics card is now faster than the world's fastest supercomputer in 1997, yet I am forced to "offload" stuff to cloud.

I bought the CS6 suite as it became clear Adobe was moving full steam ahead with CC. I think it is one of those examples where the move serves only the company's interests and doesn't make much sense for users (except for continuous updates, which might accidentally break things as well). The initial pricing might have been competitive for some packages; for many users, however, it caused a substantial increase if they used to skip one cycle for upgrades. I understand Adobe needs a predictable revenue stream, though I consider this a flop.

aeberbach 1 day ago 4 replies      
And this is why proprietary lock-in is wrong.

Adobe seem to do all they can to screw up the computer their software is installed on. On a Mac you are supposed to drag an app into the Applications folder; to delete it, drag it to trash. Adobe software doesn't work like this. To install a trial of their software you have to install multiple apps that can't be easily removed, and put up with their stupid Adobe logo in the title bar even when you're not using their software. You can't even use a trial without signing up for a "Creative Cloud" account.

Adobe's "Creative Cloud" is a great example of how to alienate and annoy your customers.

(Don't get me started on Flash)

rgrieselhuber 1 day ago 2 replies      
As a result of this outage today, I found myself switching to Pixelmator and Sketch. So far, so good.
abandonliberty 20 hours ago 0 replies      
Could this be grounds for a class action?

> Adobe had categorically assured users and journalists, when replacing Creative Suite with Creative Cloud in May 2013, that apps only needed to check in with the server every 30 days, telling MacUser in a written reply that products would continue to work for 99 days in the absence of a server connection.

kbatten 1 day ago 3 replies      
I have a hard time understanding why anyone would lease a service that is vital to their job, at least not without a backup. If your livelihood depends on something, then spend your money appropriately. Even if you did sign up for CC, you can still have an older version ready to go in a pinch.
ISL 1 day ago 3 replies      
Does anyone have access to GIMP download statistics for the past ~10 days?

Edit: Answering my own question... Not the main repository, but it's something.


edj 7 hours ago 0 replies      
Here's a list of Mac OSX alternatives to everything in the Creative Cloud: http://mac.appstorm.net/roundups/graphics-roundups/the-best-...
codeshaman 20 hours ago 1 reply      
Cloud computing is a great theoretical idea, but extremely fragile in the face of serious crisis, like economic meltdown, wars, sanctions or even natural disasters.

All the websites, always-on apps, mobile operating systems, etc will be worthless if (when) the shit hits the fan. In case of global economic meltdown, the companies would be unable to pay for the huge data-centers, the providers will go belly up and it would be next to impossible to restore or recover the data stored in the 'cloud'.

Imagine waking up one day and not having access to the Internet. Try it and see how much you can do with your computer.

And with Russia under KGB dictatorship (read: insane, evil people), that shit can hit the fan as soon as this year.

That's why it is imperative to create an offline database of important things which would be available even when the clouds evaporate.

By 'important' things, I mean open source code (eg. offline github), wikipedia and other encyclopaedias, scientific works, books, music, movies. From this angle, I consider thepiratebay to be the most important archive of art that humanity has collectively produced. I've even started working on some sketches of a distributed read-only filesystem based partly on the concepts in the bitcoin blockchain, but I guess a simple solution using torrents plus a distributed offline index can do the job just fine. Anyone else thought about this?

cageface 21 hours ago 0 replies      
Bohemian Coding has finally fixed the grid snapping bug that plagued all previous versions of Sketch, so I will definitely not be renewing my CC subscription when it expires. CC feels like it has much more to do with Adobe's needs than mine.
thejosh 1 day ago 0 replies      
Was apparently just restored an hour ago, with no compensation for the downtime for the people affected....
girvo 23 hours ago 2 replies      
I'm a developer, and I only use Adobe's products to pull images and the like out of Photoshop/Illustrator files given to me by my design team. I don't want to use Adobe's software, as it's entirely wasted on me. But I don't think any of the other apps handle PSDs (a horrid file format) well enough to allow me to replace them :( Anyone in my position that has replaced them? What should I look at?
ShaneOG 14 hours ago 1 reply      
What should happen due to this outage:

Adobe Management: Oh no, our users cannot work. Let's remove the requirement for our software to phone home just to open/work.

What will probably happen:

Adobe Management: Hmm, maybe let's try to make our servers more available?

bane 15 hours ago 0 replies      
One of the issues with the move to the cloud with formerly desktop software is exactly this. I used to use the analogy of an army using guns, except instead of a magazine holding bullets they had a long hose feeding them rounds. It sounded great to the generals, because instead of carrying around 20lbs of munitions, the soldiers just needed to carry around their guns. Of course, all it took was for somebody to put a kink in the hose to put them out of the fight completely.
vetrom 22 hours ago 1 reply      
It really is the moments like this when I look at my stack of Gimp, Inkscape, Scribus, and Darktable, and am glad that I invested in the alternate solution.
Mandatum 1 day ago 0 replies      
And this is the sticking point for cloud-focussed systems. It's too early to implement an "always-on" attitude in so many parts of the world. I hope the backlash to this 24-hour outage (which some companies will lose clients over) acts as the poster-child for would-be cloud-only services.
theFletch 14 hours ago 0 replies      
I have CC and was able to work fine all day. I wasn't able to sign in but that has no bearing on me doing my job. Was this something that affected the licensing servers and I just happened to be lucky?
blueskin_ 17 hours ago 1 reply      
Meanwhile, sensible people who don't rent their software are laughing from CS6.
derengel 23 hours ago 1 reply      
I just bought acorn and idraw but everyone here is recommending pixelmator and sketch :(
lotsofmangos 20 hours ago 3 replies      
Complete non-hacked photoshop CS2 is freely available with serial - http://www.techspot.com/downloads/3689-adobe-photoshop-cs2.h...

It works pretty well in Wine as well, if you fiddle with the settings.

pasbesoin 8 hours ago 0 replies      
7.5 hours' worth of repeated tech support calls (and I mean repeatedly explaining the situation to each new contact) over a licensing issue that was, ultimately, a 10-minute fix (allowing for the included administrivia), one that was already sitting available for access/correction on the computer in front of a "supervisor".

That left me swearing Adobe would never see another dime from me or people I advise. And it leaves me repeating this anecdote every time the news surfaces another story about their cr-p administration systems and support.

And these are the people who are going to be involved in e.g. EME DRM in our browsers? I sincerely hope not.

jacquesm 18 hours ago 0 replies      
When I had to go out to buy licenses a few months ago for an intern at our company, I found out that it is no longer possible to buy the regular Adobe licenses for the latest products. We solved this by using competitor products and open source projects to give us a patched-together set of tools.

The money was not an issue; what was an issue is that I think that tools should not be shoehorned against all logic into a pay-to-play model, they should just work. Imagine your C compiler or your editor failing to work because some third party service is down. To me that is not an option.

I hope Adobe learns their lesson and re-instates the licensing model they used in the past and gets rid of their 'Creative Cloud' nonsense asap.

And if they don't then I hope some competitor will realize this is a huge opportunity and will jump into the gap opened up here.

Adobe is good, but they can be beaten, especially if they shoot themselves in the foot (repeatedly).

camus2 22 hours ago 0 replies      
That's what you get when a business has a monopoly on a market. You're now free to go to the competition... not. But the industry itself is responsible for this.
l0stb0y 1 day ago 4 replies      
I wasn't a fan of CC at first but it's turned into an incredibly good deal for me at $30 per month. A little downtime from time to time can't be avoided. People just love to complain.
jawngee 22 hours ago 4 replies      
You can still use CC software, you just can't update or have access to their shitty sync software.

But all the apps work, they don't require an internet connection (for 30 days at least), so it's not the disaster everyone is making it out to be.

Passwords for JetBlue accounts cannot contain a Q or a Z jetblue.com
276 points by alexdmiller  3 days ago   209 comments top 34
lvs 3 days ago 5 replies      
Looks like it has to do with the venerable Sabre system (scroll to bottom):


dredmorbius 2 days ago 5 replies      
As several people have noted, the Q/Z restriction likely arises from inputting passwords from a telephone keypad.

What I haven't seen is a statement as to why this would have been a problem. The reason is that Q and Z were mapped inconsistently across various phone keypads. The present convention of PQRS on 7 and WXYZ on 9 wasn't settled on until fairly late in the game, and as noted, the airline reservation system, SABRE, is one of the oldest widely-used public-facing computer systems still in existence, dating to the 1950s.


The 7/9 standard, by the way, comes from the international standard ITU-T E.161, also known as ANSI T1.703-1995/1999 and ISO/IEC 9995-8:1994.


Other keypads may not assign Q or Z at all, or assign it to various other numbers, 1 for Australian Classic, 0 for UK Classic and Mobile 1.


Similarly, special characters can be entered via numerous mechanisms on phone keyboards.

My suspicion is that there's a contractual requirement somewhere to retain compatibility with an existing infrastructure somewhere.

eli 3 days ago 4 replies      
I'd caution against making assumptions about the competence of the developers based only on what you can see from the outside. More likely than not there are good reasons to maintain interoperability with legacy systems. This may well be the most elegant way to solve a complex problem.

I've certainly written my share of code that would look weird to an outsider who didn't know the backstory and the constraints and the evolution.

seanmccann 3 days ago 2 replies      
They use Sabre (like others), and it's an archaic holdover from when phones didn't have Qs or Zs.
skizm 3 days ago 6 replies      
Actually this kind of gives me an idea: what if modern systems decided to just tell people they can't use "p", so that people stop using the word "password" or variants as their password?

Hell, for that matter, tell users they can't use vowels so they can't make words. They might do leet speak or whatever, which is pretty easy to crack given time, but it stops things like password re-use attacks (people are less likely to have the same password as in their other apps) and simple guessing attacks (try the top 3 most popular passwords on all known emails/accounts).

For such a simple rule set (no vowels) it forces a decent level of password complexity.

theboss 3 days ago 4 replies      
That's nothing.... A friend of mine forwarded some emails she's gotten from JetBlue.

First, this screenshot: http://i.imgur.com/oKKpFM1.png

Followed by the money screenshot: http://i.imgur.com/DlAlQPt.png

She redacted some of the information before she sent it (obviously). This is from Jan 21 of this year. It's just so sad... It's incredible people still have plaintext passwords server-side....

phlo 2 days ago 1 reply      
As many sources have pointed out, this is very likely related to Sabre. Interestingly, there is another reason why such a restriction might be useful:

There are three popular key arrangements. English/US QWERTY, French AZERTY, and German QWERTZ. Apart from switching around A, W, Y, Z, and most special characters, they are mostly identical.

If your goal is to ensure successful password entry even if a user is unexpectedly using an unfamiliar keyboard scheme, all you need to do is replace all instances of A or Q by one value; and all instances of W, Y, Z by another. Or you could, of course, disallow these characters.

I hear Facebook had a similar approach to coping with input problems in the early days of mobile access: for each passWord1, three hashes were stored: "PassWord1" (uppercase first letter), "PASSwORD1" (caps lock) and "passWord1" (unchanged). As far as I remember, they didn't deal with i18n issues -- or publish the results of their approach.

Edit: This would, of course, weaken password security significantly. If my very rough back-of-the-envelope calculation is correct, by a bit less than 50%.
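phlo's description of the multiple-hash trick can be sketched as follows; the variant rules here (as typed, auto-capitalized first letter, caps lock) are a guess based on the comment, and SHA-256 stands in for whatever hash Facebook actually used:

```python
import hashlib

def h(password):
    # Illustrative only: a real system would use a slow, salted hash.
    return hashlib.sha256(password.encode()).hexdigest()

def login_ok(typed, stored_hash):
    # Accept the password as typed, with the first letter's case flipped
    # (mobile auto-capitalization), or with every letter's case flipped (caps lock).
    variants = {
        typed,
        typed[:1].swapcase() + typed[1:],
        typed.swapcase(),
    }
    return any(h(v) == stored_hash for v in variants)

stored = h("passWord1")
print(login_ok("PassWord1", stored))   # True: auto-capitalized first letter
print(login_ok("PASSwORD1", stored))   # True: caps lock
print(login_ok("password1", stored))   # False
```

Each guess now matches up to three passwords instead of one, which is the modest security cost the comment is estimating.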

jfoster 3 days ago 2 replies      
If they were OK with applying more duct tape, why not map Q and Z to characters (eg. A and B) that can be part of passwords? (eg. a password of "quiz" would become "auib")

It would make their password system slightly weaker perhaps, since freq(a) then becomes more like freq(a)+freq(q) and freq(b) more like freq(b)+freq(z). I'm not sure that's much weaker than just excluding Q and Z, though. The user experience is improved. The major downside would be in technical debt.
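A minimal sketch of the duct-tape remapping jfoster proposes; the q→a / z→b fold is taken straight from the comment and is purely hypothetical:

```python
def fold_qz(password):
    # Hypothetical remapping: fold Q into A and Z into B (case-preserving)
    # before hashing, so the backend never sees a Q or Z.
    table = str.maketrans("qzQZ", "abAB")
    return password.translate(table)

print(fold_qz("quiz"))  # auib, as in the comment's example
```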

slaundy 3 days ago 1 reply      
I just changed my Jetblue password to contain both a Q and a Z. Seems the support documentation is out of date.
Iterated 3 days ago 2 replies      
Question to all those saying this is because of Sabre:

How? Does the TrueBlue password somehow go through Sabre's systems? The truly old business unit of Sabre that everyone is referencing is Travel Network. I'm not sure why an airline's loyalty program would intersect with Travel Network other than through the back end of a booking tool.

stephengillie 3 days ago 1 reply      
When I saw the Sabre password requirements, I couldn't help but imagine that passwords are stored entirely numerically - "badpass" would be entered (hashed?) as "2237277", as in dialing a phone. So the password "abesass" would collide with "badpass" and grant access.

Has Sabre at least upgraded their storage mechanism, or do (did?) they reduce entropy on passwords?
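The numeric reduction stephengillie is imagining can be sketched like this; note it is speculation about Sabre, not documented behavior, and it assumes the modern PQRS/WXYZ keypad layout:

```python
# Letters mapped to digits on a modern phone keypad (PQRS on 7, WXYZ on 9).
KEYPAD = {c: d for d, letters in {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}.items() for c in letters}

def to_digits(password):
    # Digits (and anything unmapped) pass through unchanged.
    return "".join(KEYPAD.get(c, c) for c in password.lower())

print(to_digits("badpass"))  # 2237277
print(to_digits("abesass") == to_digits("badpass"))  # True: the two collide
```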

amichal 3 days ago 1 reply      
guessing... Touch-tone phone keypads don't always show Q and Z. I suspect that some older JetBlue system allows you to use your password via a touch-tone system (with a vastly reduced keyspace)
dragonwriter 3 days ago 0 replies      
They also can't contain symbols (so apparently just digits and letters except Q and Z). The combination suggests to me the horrible possibility that they actually reduce the password to just digits for storage, and to support entry on devices that look like old touchtone phones [1] (I say "old" because newer ones usually have "PQRS" instead of "PRS" and "WXYZ" instead of "WXY"):

[1] Like: http://www.cs.utexas.edu/users/scottm/cs307/utx/assignment5....

jedberg 2 days ago 0 replies      
One of my bank accounts has the same restriction, so that you can enter you password through the phone system. It's stupid, but at least it has a reason.
eigenrick 2 days ago 0 replies      
Everyone in the conversation seems to be pointing out the fact that this is due to integration with legacy software. That's not an acceptable reason.

In the broader sense, there is a great irony in making password "strength" restrictions, like "must include" and "must not include" because they often end up making passwords easier to brute force.

If you start with the restriction that all passwords must have > 8 characters, you have basically an infinite number of possibilities; smart users will use a passPHRASE that is easy to remember, while dumb users will try to hit the bare minimum characters. When you add a 20-character maximum, it reduces the chance that a person's favorite passphrase will fit and guarantees that the set of all passwords is 8-20 characters, which means that the set of all passwords is smaller still.

They disallow special chars, which probably includes space, which further reduces the likelihood that someone will pick a passphrase.

Disallow repeating characters and you've further reduced the entropy.

Disallow Q and Z and it's reduced it further still.

I can't be arsed to do the math, so I'll reference XKCD http://xkcd.com/936/

But Sabre would do well to correct this; the optimal case is simply making a single requirement: passwords must be greater than 8 characters. The "don't use your last N passwords" requirement isn't bad, but people usually find hacky ways around it.
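Rough numbers behind this argument, assuming for simplicity a 36-symbol alphabet (lowercase letters plus digits) and uniformly random passwords:

```python
import math

def bits(alphabet_size, length):
    # Entropy of a uniformly random password: length * log2(alphabet size).
    return length * math.log2(alphabet_size)

print(round(bits(36, 8), 1))   # 41.4 bits: 8 chars, letters + digits
print(round(bits(34, 8), 1))   # 40.7 bits: same, with Q and Z banned
print(round(bits(36, 20), 1))  # 103.4 bits: length helps far more than symbol bans hurt
```

Banning individual characters shaves off fractions of a bit, but capping length (or discouraging passphrases) costs tens of bits.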

kirab 2 days ago 0 replies      
For everyone who designs password rules: Please do not require the password to contain uppercase letters, lowercase letters, numbers and so on, because this actually makes passwords statistically easier to guess. The only thing you should require is a minimum length; I recommend at least 10, better 12 characters. Even 12 digits are more secure than, say, "Apple1".
r0m4n0 11 hours ago 0 replies      
Looks like they removed this requirement recently. We have JetBlue in our presence :O
sp332 2 days ago 0 replies      
Have you tried it? This person says it works just fine. https://twitter.com/__apf__/status/466327291027804160 And it doesn't make sense that it's a holdover from phones, because then it wouldn't be case-sensitive.
rjacoby5 2 days ago 0 replies      
I think everyone is completely missing the reason behind the omission of Q and Z.

Due to the database storage engine they chose, it was necessary to put a limitation on the number of Scrabble points that a password would award.

Q and Z are both 10-pointers, so passwords with them frequently blew past the limit. You can use J and X, but that's really pushing it.

And the "cannot contain three repeating characters" rule is due to that being the trigger for the stored procedure that implements 'triple word score'.

tn13 2 days ago 0 replies      
There might be some very good reasons why such a policy exists. For example, the system may involve telling the password to someone over the phone, or entering it with a TV remote or some keypad other than QWERTY.
manojit 3 days ago 1 reply      
Why are people still restricting password complexity, as long as passwords are carefully and cryptographically processed (read: hashed with an individual salt)? I recently designed a system where the only password policy is the length (8 char minimum), and passwords are stored hashed, with the salt being a specially encoded user id (thus unique for each user).

I'll also contradict myself a little: password complexity policies are needed to make social engineering less feasible. A strong and secure system where people are using 'password1234' is still a very bad situation.
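A sketch of per-user salted hashing along these lines, using PBKDF2 from the Python standard library; it uses a random per-user salt (the usual recommendation) rather than the encoded user id the comment describes:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A random per-user salt is the standard recommendation; the comment's
    # encoded-user-id salt is unique per user but predictable to an attacker.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("password1234", salt, digest))                  # False
```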

jamieomatthews 3 days ago 5 replies      
Can anyone explain why this is? I've never heard a security reason for this.
GrinningFool 3 days ago 3 replies      
That's ok, here's a better one.

etrade - yeah, THAT etrade? Yeah. They make your passwords case-insensitive.

Sami_Lehtinen 3 days ago 1 reply      
My bank allows only passwords which are six digits, like 123456. Nothing longer, and no other characters or symbols.
gt21 3 days ago 1 reply      
Here's a pic of when phone keypads don't have Q and Z: http://www.dialabc.com/words/history.html
bgia 3 days ago 1 reply      
Why didn't phones have Q and Z? Everyone is mentioning that they did not have them, but I can't find a reason for that.
jrockway 3 days ago 0 replies      
Shouldn't this mean that the OUTPUT FROM THE HASH FUNCTION can't contain Q or Z!? Certainly no system other than the web frontend would be looking at the password itself...
brianlweiner 2 days ago 0 replies      
for Bank of America customers, you might notice your mobile app requires you to use a password < 21 characters. There is no such restriction for desktop browsers.

Attempting to log in to my mobile app requires me to DELETE characters from my password until the overall length is less than 21. I'm then able to log in.

What does this tell us about BoA's password storage?

codexon 3 days ago 0 replies      
why not hash the password and encode it in base34? (36-2)
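A sketch of that idea; the 34-symbol alphabet is the digits plus the 24 letters that aren't Q or Z, and the big-endian base conversion is just one possible encoding:

```python
import hashlib

# 10 digits + 24 letters (A-Z minus Q and Z) = 34 symbols.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPRSTUVWXY"

def base34(data):
    # Treat the digest as one big integer and convert to base 34.
    n = int.from_bytes(data, "big")
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 34)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

encoded = base34(hashlib.sha256(b"hunter2").digest())
print("Q" in encoded or "Z" in encoded)  # False: safe for a no-Q/Z backend
```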
DonHopkins 2 days ago 0 replies      
How can people that stupid be allowed to operate airplanes?
maxmem 3 days ago 0 replies      
Also no special characters.
guelo 3 days ago 0 replies      
My guess, some kind of harebrained master password scheme for support.
codezero 3 days ago 1 reply      
My guess is that this is just a rule to force people to read the rules.
Source code of ASP.NET github.com
268 points by wfjackson  3 days ago   99 comments top 13
skrebbel 3 days ago 4 replies      
Folks, not much of this is new. Both Entity Framework and ASP.NET MVC were already open source for quite some time [0][1]. All the other repos are nearly empty.

The only real news here is that, indeed, ASP.NET vNext is going to be developed in the open, or at least to some extent. But right now, not a lot of code seems to be released that wasn't already out there (although I did not go through all the repos).

I don't think you should expect to find many current/legacy parts of ASP.NET that aren't open yet: this seems to be mostly for new stuff.

Finally, don't forget that "ASP.NET" doesn't seem to mean a lot (anymore): it's basically Microsoft actively shipping the org chart. Anything that's web-related and from MS appears to get "ASP.NET" tacked onto the front of it. Cause really, what does ASP.NET MVC, basically a pretty uninspired Rails port to C# (and just an open source library like any other), have to do with "active server pages"?

[0] https://aspnet.codeplex.com/wikipage?title=MVC

[1] https://entityframework.codeplex.com/

moskie 3 days ago 1 reply      
The URL of that link is a pretty surprising thing, in and of itself.
daigoba66 3 days ago 1 reply      
One should note that this is the "new" ASP.NET. The old version, the one explicitly tied to IIS, is not and will probably never be open source software.

They're building a new stack from the ground up. Which is the only way, really, to make it "cross platform".

d0ugie 3 days ago 0 replies      
For those curious, Microsoft went with the Apache 2 license: http://www.asp.net/open-source
ellisike 3 days ago 1 reply      
Scott Guthrie is amazing. He's behind all the open source projects. Some of them are even taking pull requests. Entity Framework, MVC, ASP.NET, etc are all popular and open source.
WoodenChair 2 days ago 6 replies      
Is this snippet of code bad? I was just randomly browsing https://github.com/aspnet/KRuntime/blob/dev/src/Microsoft.Fr...

        private bool IsDifferent(ConfigurationsMessage local, ConfigurationsMessage remote)
        {
            return true;
        }

        private bool IsDifferent(ReferencesMessage local, ReferencesMessage remote)
        {
            return true;
        }

        private bool IsDifferent(DiagnosticsMessage local, DiagnosticsMessage remote)
        {
            return true;
        }

        private bool IsDifferent(SourcesMessage local, SourcesMessage remote)
        {
            return true;
        }

turingbook 2 days ago 0 replies      
Some clarification: This seems to be only for demos and samples, not really the home of source code to cooperate on.

The Home repository is the starting point for people to learn about ASP.NET vNext, it contains samples and documentation to help folks get started and learn more about what we are doing. [0]

The GitHub issue list is for bugs, not discussions. If you have a question or want to start a discussion you have several options:

- Post a question on StackOverflow

- Start a discussion in our ASP.NET vNext forum or JabbR chat room [1]

ASP.NET vNext includes updated versions of MVC, Web API, Web Pages, SignalR and EF... Can run on Mono, on Mac and Linux. [2]

MVC, Web API, and Web Pages will be merged into one framework, called MVC 6. MVC 6 has no dependency on System.Web. [3]

[0] https://github.com/aspnet/Home

[1] https://github.com/aspnet/Home/blob/master/CONTRIBUTING.md

[2] http://blogs.msdn.com/b/dotnet/archive/2014/05/12/the-next-g...

[3] http://blogs.msdn.com/b/webdev/archive/2014/05/13/asp-net-vn...

dev360 3 days ago 2 replies      
Is this an admission that CodePlex is dead?
githulhu 3 days ago 2 replies      
Not all of ASP.NET though... notably absent: Web Forms.
V-2 2 days ago 0 replies      
ICanHasViewContext :) (Mvc / src / Microsoft.AspNet.Mvc.Core / Rendering / ICanHasViewContext.cs)
MrRed 2 days ago 1 reply      
But why are they checking whether their code runs on Mono [1]?

[1] https://github.com/aspnet/FileSystem/blob/dev/src/Microsoft....

badman_ting 3 days ago 0 replies      
Oh, whoever did this is gonna be in big trouble. Heads are gonna roll.

Hmm? What do you mean "they meant to do that"?

lucidquiet 3 days ago 10 replies      
Too little, too late (imo). I'll think about it again once they have Visual Studio and all the good things running on *nix.

It's too much of a pain to get anything to work with a .net project, and then deploy on anything other than IIS.

Big Cable says investment is flourishing, but their data says it's falling vox.com
265 points by luu  4 days ago   53 comments top 9
meric 4 days ago 5 replies      
"The industry is acting like a low-competition industry, scaling back investment and plowing its profits into dividends and share buybacks and merger efforts."

Most US industries are in a similar state (plowing profits into dividends and share buybacks and mergers). What's happening is companies are seeing there will be more benefit to their shareholders in borrowing money against their existing capital and paying it out as dividends than in risking that borrowed capital on new investments. This is happening because the Federal Reserve has pushed the interest rate to near zero while, at the same time, people are overleveraged (since money has been so cheap for so long) and don't have the money to increase their spending in the future, which reduces the chance new investments will pay off.

EDIT: This website tends to be very pessimistic, but I found the following article informative and would illustrate my point well: http://www.zerohedge.com/news/2014-05-12/writing-wall-and-we...

Strilanc 3 days ago 1 reply      
Also, on top of being cumulative and using different periods, the chart just directly visually lies.

The pixel height difference between the 78.2 and 148.8 bars is ~110px for 70.6B$. But between 148.8 and 210 it's 198px for 61.2B.

So the pixel difference increases despite the money difference decreasing. I have no idea how this can be justified. It makes the right side of the chart look steeper than the rest instead of less steep (except the left-most part).
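Working through those measurements, the pixels-per-dollar rate roughly doubles between the two gaps:

```python
# Pixel heights and dollar gaps taken from the comment's measurements.
gap1_px, gap1_dollars = 110, 70.6  # 78.2B -> 148.8B
gap2_px, gap2_dollars = 198, 61.2  # 148.8B -> 210B

print(round(gap1_px / gap1_dollars, 2))  # 1.56 px per billion dollars
print(round(gap2_px / gap2_dollars, 2))  # 3.24 px per billion dollars
```

So the smaller dollar increase is drawn more than twice as tall per dollar, which is the visual lie being described.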

dba7dba 3 days ago 3 replies      
I am just amazed at the lies and stupidity these people are pushing, all so that a few people at the top can buy a few mansions and private jets. Little do they realize what kind of damage they are doing to the competitiveness of America's economy.

US flourished as it did partly because of open/affordable road system. Physical goods and people and ideas were able to move about freely and hence the economy grew.

Now it's all about the internet access. The goods people buy are often sent over internet connection and people/ideas flow the best when internet is working.

And here we are, with the few cable companies that we have doing their best to hamper flow of idea over the internet, the lifeblood of our economy.

coldcode 4 days ago 2 replies      
Oldest tricks in the chart book. Why do people lie in such an obvious manner and think no one will notice?
EricDeb 4 days ago 2 replies      
I love the grand total of one option I have for broadband internet at my apartment.
sirdogealot 3 days ago 1 reply      
> in the years that broadband service has been subjected to relatively little regulation, investment and deployment have flourished

Perhaps they are referring to the majority of the years on the graph, between 1997 and 2008? If they were, that would make the statement true.

Even by saying that investment has increased overall between 1997 and 2013 would be true imho.

jsz0 3 days ago 0 replies      
This is cable industry trade data, not really something intended for the general public. Dollar amounts aren't going to provide the context required to understand this data. For example, over the last 5 years most cable MSOs have gone mostly/all digital, which has reclaimed hundreds of megahertz of spectrum. As a result, spending on plant/infrastructure upgrades has slowed. The costs of the digital migrations wouldn't be classified as broadband investments even though they're directly related. Also in this time span most cable providers completed their transition to DOCSIS 3. Big upfront cost, but less expensive to scale out over time. Soon they will have another big upfront cost for the DOCSIS 3.1 transition.
nessup 4 days ago 1 reply      
why is this not getting upvotes? awareness about telecom/broadband bullshit needs to be going up these days, if anything.
727374 3 days ago 0 replies      
Least controversial HN post... EVER.
The shock of playing the Ouya, one year later wololo.net
252 points by dumpsterkid  17 hours ago   126 comments top 22
lhnz 15 hours ago 8 replies      
The problem with calling it the perfect "party gaming" console is that it completely doesn't work for that when "it took me almost an hour to update the firmware and configure the 4 controllers."

I'm not saying that to hurt OUYA, I'm just saying that if they want to find this niche they should focus some effort on fixing that.

Edit: I'm not implying that other consoles are better fitted to this. I'm implying that engineering a console so that setup time is always fast even when you've not touched it for weeks could be a valuable feature in the "party gaming" market.

dpcan 14 hours ago 4 replies      
My take-away from this article was that if every OUYA shipped with a $10-$25 gift card or starting account balance, people would get into it a lot easier, and not box it up after a week.

Most consoles ship with full games. If the OUYA is only shipping with free trials, they aren't really competing.

Side-note, the home page of their website, what are they thinking? It looks like a stop sign. Or a "come back later". I literally went to their site earlier this year and thought it wasn't live yet, didn't bother going any further - until today. Shoot, it might as well be one of those giant red circles with a line through it saying "go away!"

hahainternet 16 hours ago 2 replies      
This is exactly how I feel about my Ouya. It sits on the side of my TV stand completely unobtrusively, and if friends come over within 30 seconds we can be playing frantic, immersive games. Not desperately struggling to figure out how we get a second player online without a second xbox live account.

It's a shame that not enough people realised this from the start, but $150 is not a lot of outlay for what I got in return, and when the games eventually move on to Ouya 2, I will just put XBMC on it and it's still a great device.

KVFinn 8 hours ago 1 reply      
The trend the author is seeing is not just on Ouya. 'Sportsfriends' on PS4 (and PC soon) has four unique games as good as any he played for local multiplayer. Towerfall is better on PS4. And many more like these are in the pipe! I heartily recommend Nidhogg when it comes out for PS4 later.

If you have a PS4 and more than one other local person to play with occasionally, you must try Sportsfriends right away!

ch0wn 15 hours ago 0 replies      
Wired Xbox 360 controllers work out of the box, as well. I feel like they missed an opportunity in promoting the compatibility more, especially as they got so much negative press about their controllers.
arrrg 15 hours ago 2 replies      
Local multiplayer is also coming back on other platforms, by the way.

For example, the recently released Sportsfriends (a collection of four local multiplayer games): https://www.youtube.com/watch?v=7zh5EXf4rpo

sehugg 13 hours ago 1 reply      
These are all fun games, but the Ouya's problem is that these are all the same games the author would have been playing a year ago.
programminggeek 13 hours ago 0 replies      
The Ouya has some fun games and it's a cool little console. It's not even remotely perfect, but I'm glad it exists. Between Ouya, Kindle TV, and maybe someday Apple TV or other smart TV's, there's going to be a nice place to put fun little indie games in more places, which is good for developers.
TruthSHIFT 2 hours ago 0 replies      
His favorite Ouya game, Hidden in Plain Site, is also available for Xbox 360 and it's totally excellent.
bovermyer 16 hours ago 0 replies      
This article put the Ouya back on my radar. I'll have to look into it.
lingoberry 15 hours ago 1 reply      
Very interesting, I'm surprised to see that Ouya is still alive. This is sort of what I want in a console today. I don't have time to play lengthy AAA games and prefer social games, but there are few alternatives for that type of gaming.
reidmain 15 hours ago 1 reply      
Towerfall is also on the PS4.
ebbv 15 hours ago 4 replies      
> I have terrible memories of coming to a gamers place to spend the afternoon playing Fifa, Street Fighter 4, etc in all these AAA titles, the guy who owns the game basically beats your ass so hard that all the fun is gone.

This is not a failure of those games, this is a failure of the person who owns the game to not be a dickhead.

Street Fighter 4, for example, has handicapping. You can tilt the game wildly in the novice's favor, to the point where if they land a few lucky hits they win.

I've never played FIFA, but I'd be surprised if it didn't have some way to skew the balance of the game in favor of the novice.

notlisted 12 hours ago 0 replies      
Do not own an ouya, but I have been very pleasantly surprised by both performance of the device and quality of the games on my FireTV. Expect great things going forward.
trustyhank 13 hours ago 0 replies      
A lot of his arguments (good party console, hardware doesn't always matter, etc.) also apply to Nintendo consoles. I've always loved the Wii for similar reasons (the fact that it is easy to softmod doesn't hurt either :P)
dclowd9901 10 hours ago 1 reply      
Is this the same wololo who hacks Vitas and PSPs? If so, this guy's a legend in the handheld hacking community.
raldi 14 hours ago 2 replies      
Is anyone else finding this article's pale-gray-on-white color scheme very difficult to read?
Rayne 10 hours ago 2 replies      
I probably would have actually enjoyed the Ouya, but it had such terrible input lag that it was effectively unplayable. Couldn't find any way to fix it, so it just sits on a shelf in my apartment now.
NicoJuicy 16 hours ago 0 replies      
I also bought an Ouya, but for some reason.. I only use it for Plex...

Haven't gamed with it, but perhaps I should after reading this.

nebulous1 14 hours ago 0 replies      
I just can't see the OUYA surviving the Fire TV
higherpurpose 16 hours ago 0 replies      
I know this is exactly the opposite point the article is trying to make, but I'll wait until they're selling a version with a Denver CPU and a Maxwell GPU, before I even consider buying one. However, I'm not sure they'll survive that long. Maybe releasing one with Tegra K1 this year would sustain them a bit longer. I don't think you can even do 1080p games on OUYA, unless they are 2D. Maybe that's fine for kids or something, but not for me.
everyone 16 hours ago 6 replies      
I wish people would stop using the term 'AAA'. It's meaningless, imo, as any other members of the implied scale (AAA, AAB, AAC, whatever) are never referred to. I find it one of the more annoying Americanisms.

edit: What people actually mean when they say " an AAA game" is "a game with a very large development budget"

Introducing ASP.NET vNext hanselman.com
235 points by ragesh  4 days ago   205 comments top 17
slg 4 days ago 7 replies      
As a .Net developer, I find all of the recent announcements from Microsoft really exciting. I just wonder if these types of things are enough to sway people's opinions regarding the platform. There is just so much baggage in the developer community when you say .Net or Microsoft (edit: as one of the three comments at the time of this posting proves). Are these moves just going to stave off a potential exodus of .Net developers, or will they actually lead to new developers picking up the language?
Goosey 4 days ago 3 replies      
This is extremely exciting. The lack of a 'No-compile developer experience' has been one of the biggest annoyances for me and my team. It has actually influenced our coding patterns: since we can "refresh and see new code" for anything that is in the view templates (Razor *.cshtml in our case), we have become increasingly in favor of putting code there (or in javascript frontend 'thick client' code) to take advantage of not needing to recompile. It's not like recompiling is slow (maybe 5sec in our case), but it still breaks your flow and, more importantly, requires stopping the debugger if it is in use. In some ways the code has improved, in some ways it hasn't, but in either case it feels like the tail wagging the dog when you are changing how you structure code based on your tool's inadequacies.

I'm equally excited for the intentional mono support and "Side by side - deploy the runtime and framework with your application". ASP.NET MVC and Web API are really pleasant and mature frameworks, but configuring IIS has always been really unpleasant and clunky.

Xdes 4 days ago 4 replies      
"ASP.NET vNext (and Rosyln) runs on Mono, on both Mac and Linux today. While Mono isn't a project from Microsoft, we'll collaborate with the Mono team, plus Mono will be added to our test matrix. It's our aspiration that it 'just work.'"

I wonder whether we will be seeing a .NET web server for mac and linux. Hosting a C# MVC app on linux will be sweet.

konstruktor 4 days ago 0 replies      
I can hardly imagine a more effective developer advocate than Scott Hanselman. He seems to be doing more good for Microsoft's reputation among developers than anybody else. Of course he out-HNed the official MSDN article. For those not familiar with his name, here is some of his other stuff:
http://www.hanselman.com/blog/MakingABetterSomewhatPrettierB...
http://www.hanselman.com/blog/ScottHanselmans2014UltimateDev...
troygoode 4 days ago 6 replies      
Finally switching away from the horrible XML-based CSPROJ files to a more sane JSON format (that hopefully doesn't require you to list every. single. file. individually) is the feature I'd be most excited about if I was still using .NET.

I recall CSPROJ files being the primary source of pain for me as I started to transition out of the Microsoft world and into the open source world, as it prevents you from using editors like vim & emacs if you're working in a team environment.
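For context, the vNext announcement replaces the CSPROJ with a JSON project file where source files are compiled by convention instead of being enumerated. The sketch below is illustrative only; the package names, version strings, and schema keys are my assumptions, not the exact vNext format:

```json
{
    "version": "1.0.0-*",
    "dependencies": {
        "Microsoft.AspNet.Mvc": "6.0.0-*"
    },
    "frameworks": {
        "net45": {},
        "k10": {}
    }
}
```

Because every .cs file under the project directory is picked up implicitly, adding a file from vim or emacs no longer requires touching (and merge-conflicting) the project file.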

kr4 4 days ago 0 replies      
> ... your choice of operating system,

> we'll collaborate with the Mono team, plus Mono will be added to our test matrix. It's our aspiration that it "just work."

This. is. superb! I love developing on VS with ASP.NET, and I love *nix tooling (ssh is pure fun); I was secretly hoping for this to happen.

daviding 4 days ago 3 replies      
What is a 'cloud-optimized library'? Does it mean 'small' or have I underestimated it?
malaporte 4 days ago 1 reply      
Seems pretty interesting. And official MS support for running the whole thing on Mono, right now, isn't that pretty big?
bananas 4 days ago 5 replies      
I've been through EVERY ASP.net update on every version of .net and every MVC update from CTP2 onwards, dealt with WWF being canned and rewritten, moved APIs between old SOAP stuff (asmx), WCF and WebAPI, rewritten swathes of ASP VB and C++ COM code, ported EF stuff to later versions and worked around piles of framework bugs including the MS11-100 fiasco. That and been left royally in the shit with Silverlight.

Not one of the above has actually improved the product we produce and are all reactionary "we might get left in the shit again" changes.

I'm really tired of it now.

robertlf 4 days ago 0 replies      
So glad I'm no longer a .NET developer. Every year it's a new razor blade.
cuong 4 days ago 1 reply      
How realistic is it to use a self-hosted OWIN server running ASP.NET vNext on Mono? What can we expect in terms of performance? I was always under the impression it was pretty far away from being a viable option, Microsoft help or not.
TheRealDunkirk 4 days ago 9 replies      
Yet another piece of the mature web-development puzzle that Microsoft is trying to emulate. That's great, and good luck to them, but my recent efforts trying to use Entity Framework suggest that this may not be a viable solution for a long time to come.

I'm typing this to delay the effort of ripping EF out of my project, and do ADO.NET Linq-to-SQL. (I guess. Maybe it'll just be raw SQL at this point.) Unless someone here can answer this question? It's worth a shot... http://stackoverflow.com/questions/23528335/how-can-i-implem...

I miss Rails.

adrianlmm 4 days ago 1 reply      
I'd really like the next ASP.NET MVC to come with full OWIN support.
slipstream- 4 days ago 4 replies      
Does anyone else spot the irony of an MS guy using Chrome?
chris_wot 4 days ago 0 replies      
When will they be releasing ASP.NET vNext Compact Enterprise MVC Edition?
mountaineer 4 days ago 0 replies      
Tomorrow is my last day as a professional .NET developer, nothing here to make me think twice about saying goodbye.
li2 4 days ago 3 replies      
If you are serious about your career path as a software engineer, stay away from Windows technologies.
Xeer wikipedia.org
228 points by mazsa  3 days ago   116 comments top 16
tokenadult 3 days ago 2 replies      
"The life of the law has not been logic; it has been experience." -- Oliver Wendell Holmes, Jr., The Common Law (1881) page 1. In other words, the Anglo-American system of common law is a system that has developed by generalizing from particular cases as they come up, and not by thinking from the top down about what kind of rules would be ideal.

It's rich with deeper meaning that there are a number of comments here about the development of rules and laws as we comment on an article posted on Wikipedia. I am one of thousands of volunteer editors on Wikipedia (since May 2010) years after having been (1) an editor of a glossy monthly bilingual publication about a foreign country as an expatriate resident of that country, (2) an editor of a series of English-language trade magazines about manufactured products from that same country, and (3) a student-editor (usually the only kind of editor such a publication has) of a law review. I started editing Wikipedia as late as I did, years after Wikipedia was founded, because when I first heard about Wikipedia I thought its editorial policies were madness--and, sure enough, the articles that resulted from the original policy included a lot of cruft. As Wikipedia has continued in existence, it has not been able to continue an Ayn Rand anarchy of bullies but has gradually had to develop rules and procedures and (a little bit of) hierarchy and organization. Most of the articles on the topics I do most of my professional research in are still LOUSY, and I have been interviewed twice by the Wikipedia Signpost in the last several months about what needs to be done to improve articles on Wikipedia for various WikiProjects. The article kindly submitted here illustrates the problem, with its incoherent presentation of facts and speculation from a mixture of good and poor sources.

I live among the largest Somali expatriate community in the world outside Somalia (Minneapolis and its suburbs--we have been able to listen to Somali-language local radio here since the 1990s) and have a new client for my supplementary mathematics classes whose family is from Somalia. That country's internal conditions during my adult life have been HARSH, and I don't envy any Somali patriot's task in trying to build up a country with peace, stability, and justice for all Somali citizens. I do wish all Somalis well in adapting customary legal systems to the modern world.

cup 3 days ago 7 replies      
I must admit I was confused to see Xeer being posted to HN. It's interesting to contemplate the unique history of Somalia and the Somali people and how it fits into the greater African jigsaw puzzle.

I think the article is slightly misinformed, however; the Sharia legal and judicial instrument, which was adopted by the Somali people after the growth of the Muslim faith in the region, was another system of justice and social order that arrived well before attempted European colonisation.

On a tangent, interesting things are happening with the Somali federal government now with respect to the telecommunications industry. Not only does Somalia now have its own top-level domain (.so) but fiber optic lines are slowly being rolled out in the capital.

I find it ironic to think that in Australia the government is singing praises for copper network lines (after repealing the NBN) yet war-torn, anarchic Somalia is pushing in the other direction. Somalia's and Africa's future really does look interesting.

gbog 3 days ago 3 replies      
There is this angle that says that natural laws are good, better than "artificial" laws. It seems trendy nowadays and is emphasised in the article.

But another angle, that seem to describe more closely the long term evolution and progress of human societies, is that laws and ethics have been slowly built by human societies against the law of nature. The direst way to express this is that in a natural environment, the weak and the disabled are left aside and die quickly, which we humans have decided to try hard to avoid.

So maybe a softer, more informal, "stateless" society like this Xeer could be valuable. But if it was, it would be because it would better protect us from the law of nature.

antirez 3 days ago 1 reply      
That may seem strange, but in Europe there are places where a similar "juridical" system was used too, that is, in Sicily. It was common for Mafia bosses and other respected older people to act as a third party in order to judge disputes between people.
616c 3 days ago 3 replies      
I also find the name somewhat ironic.

Xeer clearly comes from خير, or khayr, which is Arabic for good. It is good, but in the higher moralistic and religious sense in addition to the normal sense. So I wonder if it goes back to original interaction with the Arabs, 7th century as noted or prior. The general idea, consensus-based law as I see it, seems similar in basic principle to Ijma'[0] in Islam, or consensus-based formation of jurisprudence. There are varying views, but the idea is that Islamic law (despite outside views of it) is not controlled by one but must be agreed upon by popular approval of jurisprudence scholars (of course this is loosely defined, but what can you do).

Xeer is definitely from the Arabic, as are many loanwords in Somali (as an Arabic speaker who sat in linguistics courses where Somali speakers presented, I could be wrong). So I am not sure where the "no foreign loanwords" comment in the Wikipedia article came from.

Then again, maybe I am just reading too much into this name/book cover.

[0] https://en.wikipedia.org/wiki/Ijma

johnzim 3 days ago 1 reply      
From a jurisprudential point of view it's interesting to see how it evolved - the law in England moved out of the church and Xeer appears to have been born out of the reigning power in Somalia (elders) and remained therein.

I'll take the English common law and equity any day of the week - flexible where it needs to be so it's capable of applying concepts of natural justice constrained by well established principle, while still providing vital certainty as to the law. This passage in the wikipedia article makes the legal scholar in me shiver:

"The lack of a central governing authority means that there is a slight variation in the interpretation of Xeer amongst different communities"

Dealing with conflict of laws without prejudicing parties in an international setting is hard enough: imagine having to pursue justice according to discrepancies between individual communities! Better have some cast-iron choice-of-law clauses in those trade agreements!

fiatjaf 3 days ago 1 reply      
For people interested in common law and the problems of the State law system, I recommend the articles on the topic by John Hasnas:

THE MYTH OF THE RULE OF LAW: http://faculty.msb.edu/hasnasj/GTWebSite/MythWeb.htm
HAYEK, THE COMMON LAW, AND FLUID DRIVE: http://faculty.msb.edu/hasnasj/GTWebSite/NYUFinal.pdf

neotrinity 3 days ago 1 reply      
How is it different from http://en.wikipedia.org/wiki/Local_self-government_in_India#... ??

which has been practised since way before the 7th century?

[The tone of the question is curiosity, not flame-bait, please.]

disputin 3 days ago 0 replies      
"Court procedure..... In a murder case, the offender flees to a safe place including outside the country to avoid prosecution or execution "Qisaas." "
vacri 3 days ago 3 replies      
Several scholars have noted that even though Xeer may be centuries old, it has the potential to serve as the legal system of a modern, well-functioning economy.

This makes no sense, given the remainder of the article, as a modern, well-functioning economy (of which Somalia certainly does not have one) requires diversity. Xeer relies heavily on ingrained cultural norms, and is discriminatory against minorities and women. Lack of impartiality is also a question, given that you are assigned a judge at birth.

It might work well in Somalia, but I can't see what is described as being translatable elsewhere. There are some elements that aren't Xeer-specific (like reducing focus on punitive measures), but as a whole, I can't see it working somewhere else that doesn't have the same social structure.

blueskin_ 3 days ago 2 replies      
>stateless society

Sounds to me like a nicer way of saying failed state, which is what Somalia is.

noiv 3 days ago 2 replies      
Very interesting. I wasn't aware of alternatives to the Western legal system that are hundreds of years old and actually widely accepted.
mcguire 3 days ago 1 reply      
"People who have migrated to locations far removed from their homes can also find themselves without adequate representation at Xeer proceedings."

That kinda sounds like a problem.

nighthawk24 3 days ago 0 replies      
Gram Panchayat in Indian villages often meet under trees too https://en.wikipedia.org/wiki/Gram_panchayat
anubiann00b 3 days ago 0 replies      
This won't work for large societies (unfortunately).
dr_faustus 3 days ago 0 replies      
And everybody knows: Somalia is paradise! You can even become a pirate! Arrrrrr!
Europe's top court: people have right to be forgotten on Internet reuters.com
227 points by kevcampb  3 days ago   205 comments top 21
buro9 3 days ago 3 replies      
I could and would argue that there are times in which a person should have the right to not be found.

An example scenario: Alice is a victim of a crime, reports the crime and Bob is arrested and goes on trial. Bob pleads not guilty and Alice participates in the trial as a witness. Bob is sentenced, the court record is made. The Daily News (fictional paper) reports on the court records of the day and has a reporter who attends the more interesting cases, and mentions Bob's sentence and gives some of Alice's statements as quotes.

In that scenario, the court record should always be a matter of public record, a statement of fact. The newspaper certainly has the right to access public record and to make a news story of the set of facts that are in the public record.

But here is where the problems start... Alice applies for a job and the employer Googles her name and comes across the news article. There are many types of crimes in which the public have great difficulty accepting a victim is a victim. For example, rape. It isn't too much of a stretch to say that the culture of victim blaming means that a matter of public record has just had the effect of defaming Alice.

Alice as a victim is never given the opportunity to move on with her life when every person that ever searches for her will find the story very quickly. She has been sentenced too by participating in the justice system, which is an open book.

The newspaper, just as in this case, will argue this is public record and cannot be silenced. Sure, I agree... but that doesn't mean that it's in the victim's interest that the information be extraordinarily easy to find.

And Google are a better place in which to attempt to stop the information being found, given that they (and only 1 or 2 other search engines) cover the vast majority of searches made about someone.

Alice certainly does have the right to ask that information she didn't explicitly choose to make public, and that can cause her harm, not be found so easily, even when that information is a matter of fact and public record.

She has the right to not be found (by that method - Google).

PS: I know a girl experiencing almost exactly that scenario, who cannot get a news story off of the front page results for her name. This isn't even a stretch scenario. The local newspaper just hasn't bothered responding to requests.

babarock 3 days ago 4 replies      
A couple of questions pop to mind:

- Will that affect the work of archive.org and the wayback machine?

- Is it okay for a politician to "erase" something he/she said 10 years ago?

dasil003 3 days ago 4 replies      
I sincerely think it's a good thing for the courts to look out for individuals' rights, but they are overestimating the power of the law. A thing can't be removed from the Internet once published, and forcing Google to remove it from their index is at best a middling measure that may slightly limit the exposure of said material.

I wish the court would grant me the right to fly as well, but it's beyond their power. I guess they just need a few more decades for the judges to die off and for the new old men to have a better intuitive understanding of the way the digital world works.

jerguismi 3 days ago 2 replies      
One quite important fact is forgotten there: publishing information is basically an irreversible action. Even if Google removed the information from their search engines, other search engines probably won't. And of course decentralized solutions to search engines are coming also, where information can't be removed even theoretically (for example, yacy.de)
hartator 3 days ago 1 reply      
Weirdly, I think it's more for politicians to forget their past mistakes and their past actions than for the average citizen.

Taking France as an example, a lot of content (a good example would be an old racist video of our current prime minister, past corruption of the mayor of one of our major cities, stupid tweets...) is going to be censored and removed from the internet. And this is going to happen. Don't ever think for one minute that the first thousand "forgottenness" requests will be for citizens and not for politicians.

I think that's one of the stupidest backward law ever. Thanks for fucking up the internet.

Karunamon 3 days ago 0 replies      
I am not looking forward to how this will impact discussion forums like the one we're on. Someone wants to be forgotten; therefore we must remove all the posts they made and destroy the context for everyone who may come along afterwards?

Just ick. Ick ick ick. More ill-thought-out "feel good" legislation like the cookie law.

buro9 3 days ago 1 reply      
So how does one go about asking Google to remove a front page search result about yourself that you do not wish to exist?

Google are famed for having virtually no way of contacting them; does it require the individual to jump through hoops to do so?

And no, not thinking of myself... but wondering just whether there are mechanisms available already to those who will now seek to exercise their right.

stuki 3 days ago 0 replies      
I guess the takeaway is: Don't operate Big Data companies out of Europe... Pack up your bags, apply for YC and move to SV instead...

All that harassing publicly famous entities will achieve is to make obtaining available information more difficult for regular people. While those with deeper pockets and better connections will simply pay niche providers for deeper searches and indexing.

From a privacy POV, you would WANT this kind of White Hat demonstration of where your privacy weak points are. That way, you are aware of them and can make accommodations, while third-party services can spring up to address the most widespread concerns. Rather than showing up for a job interview and having the interviewer "know" something about you that you have no idea is available to them at all.

fixermark 3 days ago 0 replies      
"Dearest Max, my last request: Everything I leave behind me ... in the way of diaries, manuscripts, letters (my own and others'), sketches, and so on, [is] to be burned unread." ~Franz Kafka

... and I wonder how much of the work of a genius would have been lost forever if his wishes had been honored.

brador 3 days ago 3 replies      
Why didn't he ask the newspaper to remove his information?

Is Google to remove the search results (the link) or just their cache?

pekko 3 days ago 3 replies      
The decision rules that it would be Google's responsibility to filter search results, rather than the actual page's responsibility to remove the private data. So you can find the data if you know where to look; just don't use Google?
nissehulth 3 days ago 0 replies      
Not just a can of worms, more like a full barrel. Shouldn't the publisher of the data be the one you turn to in the first place? I hope there is more to this story than is being told by Reuters.
aquadrop 3 days ago 1 reply      
So where does this sensitive information start? If I write on my blog something like "Today I went to the zoo and saw John Doe talking to giraffes", will John Doe have the right to force me to delete this text?
justinpaulson 3 days ago 1 reply      
I am not sure how most countries in the EU handle the press, but without digging into this too much, it seems like this ruling greatly limits the freedom of the press. What if a scandal is uncovered regarding a political leader or someone closely related to them? Does that person have the "right" to kill the right that the free press has to go public with the information? I really don't think something like this would stand up in the US at all, but I'm unfamiliar with press laws in most of Europe.
aerophilic 3 days ago 1 reply      
Question: Assuming for a moment that there is a right to be "forgotten", should that right be permanent? I would argue that while it is relevant during a person's lifetime, it would actually hurt the public good if we made it permanent. My thought process goes out to, say, 100 years from now, when there may be researchers/family members who want to know more. Should they still be restricted well after I am dead? Thoughts?
krisgenre 3 days ago 0 replies      
The reason why most applications don't have an undo operation is that it is something that needs to be designed from the ground up. It's really too late for the Internet to have an undo.
ozim 3 days ago 1 reply      
"The company says forcing it to remove such data amounts to censorship."

Don't they see that personal censorship is something good, as opposed to government censorship?

cyberneticcook 3 days ago 2 replies      
The biggest issue is that we don't own our data. It's stored on Google, Facebook, Twitter, LinkedIn, etc. servers. It should work the other way around: every individual should keep his own data and provide permissions to external services and other people to access it. Is there any project looking in this direction? How do we reverse this situation?
D9u 3 days ago 0 replies      
The NSA, etc, never forgets...
beejiu 3 days ago 5 replies      
And they wonder why so many Brits want to leave the EU.
A Windows 7 deployment image was accidently sent to all Windows machines emory.edu
227 points by slyall  7 hours ago   86 comments top 33
perlgeek 6 hours ago 9 replies      

"To make error is human. To propagate error to all server in automatic way is #devops."

Frankly, I'm surprised things like this don't happen more often. Kudos for the incident management. Also a big plus for having working backups, it seems.

miles 6 hours ago 6 replies      
Snark and sarcasm aside, I am impressed with the level of detail that the IT department is sharing; it is refreshing to see such a disaster being discussed so openly and honestly, while at the same time treating customers like adults.
beloch 6 hours ago 1 reply      
This reminds me of my undergrad CPSC days. The CPSC department had their own *nix-based mainframe system that was separate from the rest of the University. The sysadmin was a pretty smart guy who was making less than a third of what he could get in industry. Eventually he got fed up and left. About a week or two later the servers had a whole cascade of failures that resulted in everyone losing every last bit of work they'd done over the weekend (This was a weekend near the end of the semester when everyone was in crunch mode).

Long story short, the sysadmin was hired back and paid more than most of the profs. Academia may tend to skimp on salaries for certain positions, but sysadmins probably shouldn't be one of them.

Fuzzwah 6 hours ago 2 replies      
I've just been hired to run a project using SCCM to upgrade ~5000 PCs from XP to Win7.

This was amazing reading. Reading such a detailed wrap up of an IT team going through my worst possible nightmare was enlightening.

Fomite 5 hours ago 0 replies      
Reminds me of some emails that went out at my old university during a cluster outage, and got progressively more informal as the night went on, detailing people leaving dinners with extended families, a growing sense of desperation, etc. The last email might as well have ended with "Tell my wife I love her."

It was both direct and funny enough that I was only mildly annoyed that the cluster was down.

randlet 4 hours ago 0 replies      
Reading that just made me feel sick to my stomach, and my heart goes out to the poor gal/guy that pushed "Go" on that one. Shit happens, but a screw-up that big can be devastating to one's psyche.
jonmrodriguez 6 hours ago 2 replies      
Forgive my beginner question:

Since a reformat was done to the affected machines, does this mean that researchers' datasets, drafts of papers, and other IP were lost? Or were researchers' machines not affected?

Fuzzwah 6 hours ago 0 replies      
I was just watching the "What's New with OS Deployment in Configuration Manager and the Microsoft Deployment Toolkit" session from TechEd and hit the section on the "check readiness" option which MS have added to SCCM 2012 in R2. It sounds like having this as part of the task sequence at Emory would have (at the very least) stopped this OS push from hosing all the servers.


chromaton 1 hour ago 0 replies      
Reminds me of The Website Is Down, episode 4: https://www.youtube.com/watch?v=v0mwT3DkG4w
facorreia 7 hours ago 1 reply      
> A Windows 7 deployment image was accidently sent to all Windows machines, including laptops, desktops, and even servers. This image started with a repartition / reformat set of tasks.

Wow. That is very unfortunate, to say the least...

rfrey 3 hours ago 0 replies      
My nomination of the top bullet point of 2014:

* As soon as the accident was discovered, the SCCM server was powered off; however, by that time, the SCCM server itself had been repartitioned and reformatted.

8ig8 3 hours ago 1 reply      
Mistakes are made. In related news...

Lawn care error kills most of Ohio college's grass


mehrdada 3 hours ago 0 replies      
As soon as the accident was discovered, the SCCM server was powered off; however, by that time, the SCCM server itself had been repartitioned and reformatted.

I guess that's how the robot apocalypse is gonna look.

stark3 6 hours ago 1 reply      
There was a similar catastrophe at Jewel-Osco stores many years ago. Nightly, items added to the store POS were merged back with the main item file at each store location. The format of the merged data was exactly the same as loading a new file, except the first statement would be /EDIT instead of /LOAD.

One of the programmers decided to eliminate some code by combining the two functions, with a switch to control whether /LOAD or /EDIT was used for the first statement.

There was a bug in the program, and the edits were sent down as loads.

A guy I knew, Barry, was the main operator that night. He started getting calls from the stores after around 10 of them had been reloaded with 5 or 6 items.

Barry said that day was the first time he got to meet the president of the company.
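The failure mode described above can be sketched in a few lines. Everything below (the function name, the statement format, the item fields) is hypothetical and only illustrates the design hazard: once /LOAD and /EDIT share one code path behind a flag, a single wrong boolean turns a routine merge into a destructive reload.

```python
def build_batch(items, full_load=False):
    """Build a nightly item batch for a store POS.

    The first statement controls everything downstream:
    /EDIT merges the items into the existing item file,
    /LOAD replaces the whole file with just these items.
    """
    header = "/LOAD" if full_load else "/EDIT"
    body = [f"ITEM {sku} {price}" for sku, price in items]
    return "\n".join([header] + body)

# Intended nightly merge: a handful of changed items.
merge = build_batch([("1234", "1.99"), ("5678", "0.49")])
assert merge.splitlines()[0] == "/EDIT"

# The bug: the same small edit set goes out with the wrong flag,
# and each store's item file is wiped down to a few items.
wipe = build_batch([("1234", "1.99")], full_load=True)
assert wipe.splitlines()[0] == "/LOAD"
```

Keeping the two batch types as separate code paths, or requiring an explicit validated header, would have made the bug a loud failure instead of a silent wipe.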

keehun 1 hour ago 0 replies      
I asked my friend attending Emory right now, and he didn't even realize anything was going on. He says the Emory IT department has a notorious reputation on campus for being regularly terrible, mostly for an unreliable internet connection.

However, it looks like they handled this accident the best they could! Perhaps this accident would not have happened at a more reliable IT department.

imgur 7 minutes ago 0 replies      

> As soon as the accident was discovered, the SCCM server was powered off; however, by that time, the SCCM server itself had been repartitioned and reformatted.
That made me laugh. Poor SCCM server :)

pling 5 hours ago 0 replies      
Not quite as disastrous, but when I was at university the resident administrators configured the entire site's tftp server (everything was netbooted Suns) to boot from the network. This was fine until there was a site-wide power blip and it was shut down. When it came back it couldn't tftp to itself to boot because it wasn't booted yet (feel the paradox!). Cue 300 angry workstation users descending on the computer centre with pitchforks and torches because their workstations couldn't boot either...

Bad stuff doesn't just happen to Windows networks.

ww520 3 hours ago 0 replies      
Disasters, like mistakes, are unavoidable; such is life. A hallmark of a competent organization is how it handles the situation and recovers from disasters or mistakes.

So far all the signs have indicated they are doing great in recovering. I just hope there won't be onerous processes and restrictions afterward, driven by a "make sure it won't happen again" stance.

rfolstad 6 hours ago 0 replies      
On the bright side they are no longer running XP!
smegel 6 hours ago 3 replies      
Automation can also mean automated disaster.
zacharycohn 5 hours ago 0 replies      
I thought this "accident" may have been on purpose... until they mentioned the servers.

In my days of university tech support.

svec 4 hours ago 0 replies      
With great power comes great responsibility.
sergiotapia 6 hours ago 1 reply      
Isn't this more the fault of the system architect than the guy who accidentally fired the bad deploys?

It's similar to a database firehose: if you accidentally start deleting all your data, you should have a working backup ready to bring the dead box back up to production quickly.

grumblepeet 5 hours ago 0 replies      
I _very_ nearly did this whilst working for a university back in the early noughties. Luckily I managed to get to the server before the "advert" activated and wiped out everything. It was so easy to do that I am surprised it is still possible. I feel their pain, but it does sound like they are doing a good job of mopping up. I did allow myself a snort of laughter when I read the bit about the server being re-imaged as well. Carpet-bombing the entire campus is pretty darn impressive.
sorennielsen 6 hours ago 0 replies      
This happened at a former workplace too. Only the Solaris and Linux servers were untouched.

It "mildly" amused the *nix operations guys to see all the "point and click" colleagues panic.

mantrax5 4 hours ago 0 replies      
You know how in movies you need at least two people to bring their special secret keys, plug them in, and turn them at once to enable a self-destruct sequence?

That is a real principle in interface design - if something would be really, really bad to activate unintentionally, make it really, really hard to activate.

If you design a nuclear missile facility, you don't put the "launch nukes" button right next to "check email" and "open facebook".

Same way it shouldn't be easy for users to delete or corrupt their data by accident due to some omnipotent action innocently shoved right in between other trivial actions.

I wouldn't blame the person who triggered this re-imaging process. I'd blame those who designed the re-imaging interface, to allow it to happen so easily by accident.
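That "two keys turned at once" principle can be sketched in code: a destructive operation refuses to run unless the caller supplies an explicit, deliberately awkward confirmation. A minimal sketch, with the function name and confirmation phrase invented for illustration:

```python
def reimage_fleet(targets, confirm_phrase=None):
    """Destructive action guarded by an explicit confirmation -- the
    software equivalent of two keys turned at once. (Function and
    phrase are made up for illustration.)"""
    expected = f"reimage {len(targets)} machines"
    if confirm_phrase != expected:
        raise RuntimeError(
            f"refusing to proceed: pass confirm_phrase={expected!r} to continue"
        )
    return [f"reimaged {t}" for t in targets]

# Fails loudly by default:
try:
    reimage_fleet(["host1", "host2"])
except RuntimeError as err:
    print(err)

# Only a deliberate, hard-to-trigger-by-accident phrase unlocks it:
print(reimage_fleet(["host1", "host2"], confirm_phrase="reimage 2 machines"))
```

The point is that the dangerous path is never the default path: doing nothing special always stops, and proceeding requires typing something no stray click or copy-paste would produce.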

k_sze 2 hours ago 1 reply      
Funny how they mention iTunes as one of the "key components" that are restored first, whereas Visio, Project, and Adobe applications are relegated to a second round.
gojomo 1 hour ago 0 replies      
"... to the cloud!"


"Yay, cloud!"

tbyehl 5 hours ago 0 replies      
I've built a few systems for deploying Windows... and the last thing that every one of them did before writing a new partition table and laying down an image was to check for existing partitions and require manual intervention if any were found.
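That check-existing-partitions-first step can be sketched as pure parsing logic. The `lsblk -ln -o NAME,TYPE` output format below is a real Linux convention, but the function names and the guard's exact behavior are assumptions for illustration, not any particular deployment system's API:

```python
def partitions_from_lsblk(output):
    """Parse `lsblk -ln -o NAME,TYPE <dev>` output and return the names
    of any existing partitions (rows whose TYPE column is 'part')."""
    parts = []
    for line in output.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[1] == "part":
            parts.append(fields[0])
    return parts

def guard_before_imaging(device, lsblk_output):
    """Refuse to image a disk that already has partitions; a human must
    clear it first. Meant to run before writing a new partition table."""
    parts = partitions_from_lsblk(lsblk_output)
    if parts:
        raise RuntimeError(
            f"{device} already has partitions {parts}; "
            "manual intervention required before imaging"
        )

sample = "sda disk\nsda1 part\nsda2 part"
print(partitions_from_lsblk(sample))  # → ['sda1', 'sda2']
```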
lucio 2 hours ago 0 replies      
reads like a short dystopian novel
CamperBob2 4 hours ago 1 reply      
stark3, you seem to be hellbanned.
leccine 5 hours ago 1 reply      
We accidentally re-imaged all of the Windows servers with Linux the other day. Nobody noticed though...
filmgirlcw 7 hours ago 1 reply      
I've never been prouder of my alma mater. /s
Why Van Halen's tour contract had a "no brown M&M's" clause snopes.com
227 points by magsafe  20 hours ago   153 comments top 26
Monkeyget 16 hours ago 4 replies      
It reminds me of the orange juice test. You organize an annual convention for hundreds of people.

You tell the banquet manager of the hotel you are considering that the morning breakfast must include a large glass of freshly squeezed orange juice for every one of the attendees. It must be squeezed no more than two hours before the breakfast.

It is not possible to do so. Squeezing that much orange juice in such a short amount of time would be prohibitively expensive.

If the manager says yes, he is either lying or incompetent, and you'd better find someone else who will tell you it's not possible.

JonnieCache 19 hours ago 9 replies      
This seems like a good opportunity to post RMS' rider again:


Fabulous stuff. I wonder how often he has/gets to hang out with random parrots since this document became widely known. In my mind he is surrounded constantly by sandal-wearing acolytes wielding exotic birds of every variety.

emiliobumachar 15 hours ago 0 replies      
Engineer's version: Write-only memory

"Out of frustration with the long and seemingly useless chain of approvals required of component specifications during which no actual checking seemed to occur, an engineer at Signetics once created a specification for a write-only memory and included it with a bunch of other specifications to be approved. This inclusion came to the attention of Signetics management only when regular customers started calling and asking for pricing information. Signetics published a corrected edition of the data book and requested the return of the 'erroneous' literature."

bane 16 hours ago 0 replies      
One of the interesting things about tech work is that it's almost all "brown M&Ms". It's amazing how important attention to detail is in this field and how quickly something will simply not work if the details aren't sweated.

We see it time and again when things go into production where the "brown M&Ms" haven't been looked into and we end up with things like enterprise class websites that cost millions of dollars to produce crumbling under the load of a dozen simultaneous users.

exDM69 18 hours ago 1 reply      
I recall reading that the "no brown M&M's" clause was added after a near-fatal accident on stage where a member of the Van Halen band got electrocuted because of bad wiring on the stage.
JunkDNA 15 hours ago 0 replies      
See previous HN discussion (360 points, 1,744 days ago) here: https://news.ycombinator.com/item?id=743860
Nanzikambe 19 hours ago 5 replies      
A contract "poison pill" or litmus test. Pretty ingenious, is that sort of thing common practice in contracts?
spingsprong 18 hours ago 1 reply      
An episode of the TRC podcast covered this, and came to a different conclusion than Snopes.


dvanduzer 13 hours ago 0 replies      
You can listen to Ira Glass and John Flansburgh of They Might Be Giants talk about it in the prologue: http://www.thisamericanlife.org/radio-archives/episode/386/f...

"there's 30 people the promoter's going to hire on our behalf ... but in only half of them did we require that they be sober"

gnyman 17 hours ago 1 reply      
There are quite a few applications that do something similar: they leave a "disabled=1" or similar in the config to make sure people look at the config before trying to run the software. I remember the eggdrop IRC bot doing this (http://cvs.eggheads.org/viewvc/eggdrop1.6/eggdrop.conf?view=... , look for the lines starting with "die"), and I'm sure there are more.
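The "die line" trick could be sketched like this; the key=value config format and function name below are invented for illustration, not eggdrop's actual syntax:

```python
def load_config(text):
    """Refuse to start while the shipped guard line is still present,
    forcing the operator to actually read and edit the file (the same
    trick as eggdrop's `die` lines). The key=value format here is a
    made-up example."""
    for lineno, line in enumerate(text.splitlines(), start=1):
        if line.strip().startswith("die "):
            raise RuntimeError(
                f"config line {lineno}: {line.strip()!r} -- "
                "edit the config before running"
            )
    # Only reached once every guard line has been removed by hand.
    return dict(
        line.split("=", 1) for line in text.splitlines()
        if "=" in line and not line.lstrip().startswith("#")
    )

shipped = "nick=mybot\ndie Please read and edit this file first\nport=6667"
edited = "nick=mybot\nport=6667"
print(load_config(edited))  # → {'nick': 'mybot', 'port': '6667'}
```

Like the brown M&M clause, the guard costs an attentive operator a few seconds and stops an inattentive one cold.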
ctdonath 10 hours ago 1 reply      
I've read thru a bunch of other nit-picky riders. Strikes me that an under-discussed factor is that these high-value stars (contracts running into the $millions) are under extreme pressure, which is severely aggravated by so much change on a daily/hourly basis; something as "trivial" as wrong-temperature or brand drinks (I dislike Poland Spring water, and prefer Mt Dew in cans not bottles), uncomfortable seats, or even brown M&Ms (hey, everyone has a pet peeve) can be an unnerving "last straw". Having a few "perfect" arrangements everywhere gives them something to center on for mental stability.

ETA: I realize this is a tangent. Methinks it's relevant.

YesThatTom2 11 hours ago 0 replies      
Once I buried a crazy request in a list of "you need to agree on these points or the book won't make the deadline" email to my publisher. My editor flat out agreed to them all.

That's how I knew she was lying about having read them and I had to escalate to the production editor.

It saved the book.

salehenrahman 10 hours ago 0 replies      
I'm stealing this tactic when interviewing a QA guy, if I ever do end up looking to hire QA guys, that is.

Me: "So here are a set of instructions are programmers were asked to follow. Can you see anything wrong"

QA candidate: "Why yes. They forgot to remove the brown M&M's"

Me: "You start tomorrow."

Roonerelli 18 hours ago 3 replies      
I've heard similar stories regarding developers and IT Services.

Where the devs weren't allowed access to the Production environment, so they'd have to leave written instructions on how to deploy the software they'd written. And convinced that IT Services weren't reading their instructions, they would write something really offensive in there and see if they complained.

Possibly just a myth, but amusing all the same.

rhizome 19 hours ago 2 replies      
Contrary to Snopes' last-updated, this is at least 10 years old.

Summary/spoiler: It was to ensure the contract was read thoroughly.

sgdread 10 hours ago 0 replies      
I've seen the same kind of test at a firing range. You have to read the safety rules, and one of the points was to put an X mark on the 2nd page to show you'd read them.
yp_all 14 hours ago 0 replies      
Maybe a more interesting question is whether they ever exercised their right to terminate for brown M&M's.

Is there any notion of material breach, major vs minor breach, etc. in "tour contracts"?

bttf 14 hours ago 0 replies      
This reminds me of some IRCd configurations, in which the server will not function properly unless you've thoroughly read through the conf file and found the single commented line which disables the entire process.
supergeek133 15 hours ago 1 reply      
HERRbPUNKT 16 hours ago 1 reply      

Interview with Eddie Van Halen, telling the story firsthand. :)

quotha 11 hours ago 0 replies      
You have to remember, it was the 80s!
harryb 12 hours ago 0 replies      
Interesting lesson about getting confidence by adding bugs. This was mentioned at a recent tech talk about Java Mutation Testing and PIT http://pitest.org/ - video here http://vimeo.com/89083982
circa 14 hours ago 0 replies      
How is there no mention of Wayne's World in these comments?
pjbrunet 19 hours ago 3 replies      
I'm not a lawyer, but as I understand contract law, delivering something "close enough" is all that's required to satisfy a contract. Let's pretend everything's perfect except for one brown M&M. I'm sure a lawyer can explain it better, but if everything else is in order, I think Van Halen would have to perform their end of the deal.
bjourne 18 hours ago 3 replies      
Maybe a mathematician can explain whether this "trick" works or not? Intuitively, I can't see that knowing whether the M&M demand was filled makes it more probable that the other demands are filled.

Say you have a pile with five black or white marbles. You want them all to be black. So you check that the first marble in the pile is black (i.e. no brown M&M's). Is it now more probable that the other four marbles are also black?

Because you are just checking one specific marble instead of sampling a number of randomly chosen marbles (which of course would increase the probability), I don't see how it can work.
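The marble analogy assumes the clauses are independent, but they're correlated through a latent variable: whether the promoter read the contract carefully. Observing the M&M clause updates your belief about that variable, and hence about every other clause. A small simulation, with all probabilities being illustrative made-up numbers:

```python
import random

random.seed(0)

def simulate(trials=100_000):
    """Latent-variable model: a promoter is either careful or sloppy
    (50/50 prior). Careful promoters satisfy each clause with prob 0.99,
    sloppy ones with prob 0.7. All clauses share the same latent
    'carefulness', so observing one clause is informative about the rest."""
    ok_given = {"careful": 0.99, "sloppy": 0.7}
    others_ok_all, others_ok_given_mm = [], []
    for _ in range(trials):
        kind = "careful" if random.random() < 0.5 else "sloppy"
        p = ok_given[kind]
        mm_ok = random.random() < p                       # the canary clause
        others_ok = all(random.random() < p for _ in range(4))  # four more clauses
        others_ok_all.append(others_ok)
        if mm_ok:
            others_ok_given_mm.append(others_ok)
    base = sum(others_ok_all) / len(others_ok_all)
    conditioned = sum(others_ok_given_mm) / len(others_ok_given_mm)
    return base, conditioned

base, conditioned = simulate()
print(f"P(other clauses ok)           ~ {base:.2f}")
print(f"P(other clauses ok | M&Ms ok) ~ {conditioned:.2f}")
```

Under these assumed numbers the conditional probability comes out noticeably higher than the unconditional one, which is exactly why the canary clause works: the marbles in the analogy should all be drawn from the same urn, chosen once per promoter.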
