hacker news with inline top comments - 16 May 2014, Best
1
Syncthing: Open Source Dropbox and BitTorrent Sync Replacement syncthing.net
580 points by ushi  3 days ago   180 comments top 50
1
abalone 3 days ago 2 replies      
"Replacement" is too strong a word here. P2P sync requires at least 2 peers to be online at once. For the simple case of syncing your work and home computers or sharing with coworkers that is not always a reliable assumption.

It's only a replacement for a centralized service like Dropbox if you have an always-connected peer (a de facto central server).

2
stinos 3 days ago 7 replies      
Since we're listing alternatives here: I set up SeaFile (http://seafile.com/) a couple of months ago and I'm loving it so far. Mainly because it has client-side encryption and allows a single client to sync with different servers and to selectively choose which 'libraries' (basically directories that are under sync) to use. A typical use case is having a personal server for personal files and another one at the office for work-related stuff.
3
XorNot 3 days ago 6 replies      
Can we please, please, please make it a standard that synchronization tools spell out on the front page how they handle conflicts?

Bittorrent Sync just overwrites files based on last mod time (terrible option). What does this do? Does it support backups? Versioning?
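By way of contrast, here is a hedged sketch of the "conflicted copy" policy some tools use instead of last-write-wins; everything here (the file objects, save, accept) is hypothetical, purely to illustrate the policy:

  // Instead of last-write-wins, keep both versions on conflict.
  // `save`, `accept`, and the file-object fields are hypothetical.
  function resolve(local, remote) {
    if (local.hash === remote.hash) return;  // same content: no conflict
    if (local.mtime >= remote.mtime) {
      save(remote, conflictName(remote));    // newer local wins; keep remote as a copy
    } else {
      save(local, conflictName(local));      // newer remote wins; keep local as a copy
      accept(remote);
    }
  }

  function conflictName(f) {
    return f.name + ' (conflicted copy ' + new Date().toISOString() + ')';
  }

Nothing is silently lost under this policy, which is exactly the property the front page ought to spell out.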

4
pjkundert 3 days ago 1 reply      
Has anyone else taken a look at Ori (http://ori.scs.stanford.edu):

> Ori is a distributed file system built for offline operation and empowers the user with control over synchronization operations and conflict resolution. We provide history through light weight snapshots and allow users to verify the history has not been tampered with. Through the use of replication instances can be resilient and recover damaged data from other nodes.

It seems well thought out, and competitive with many of the other approaches mentioned here. It uses Merkle trees (as Git does) that encompass the file system structure and full history.

5
frabcus 3 days ago 1 reply      
Fancy being interviewed for http://redecentralize.org/ ?

If so, email me francis@redecentralize.org! (I couldn't see an email or contact form for you on the syncthing site)

6
sinkasapa 3 days ago 1 reply      
One of my favorite open source tools of this kind is unison. It works great. I set it up to go and I don't even notice it is there. It is quick, seems to have been around for a while and is packaged for most Linux distros. It has a GUI but you don't need it.

http://www.cis.upenn.edu/~bcpierce/unison/

7
alyandon 3 days ago 3 replies      
I can't really seem to find this information in the documentation.

Does this support delta/block-level sync for large files (e.g.: does mounting a 100 GB truecrypt container, modifying a file inside the container and unmounting it cause the entire 100 GB container to be uploaded)?

Does it utilize the native OS platform APIs for detecting file modification (e.g. inotify on linux) as opposed to scanning/polling large directories looking for modified date changes?
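For concreteness, here is a minimal Node.js sketch of the two approaches being contrasted, not anything from the Syncthing docs; the /data/sync path and the compareMtime helper are made up, and fs.watch wraps the native change APIs (inotify on Linux):

  var fs = require('fs');

  // Event-driven: the OS pushes change notifications to us
  // (inotify on Linux, FSEvents on OS X, ReadDirectoryChangesW on Windows).
  fs.watch('/data/sync', function (event, filename) {
    console.log('changed:', filename); // rescan/rehash only this file
  });

  // Polling: re-stat everything on a timer and compare modification times.
  setInterval(function () {
    fs.readdir('/data/sync', function (err, files) {
      if (err) return;
      files.forEach(function (f) {
        fs.stat('/data/sync/' + f, function (err, st) {
          if (!err) compareMtime(f, st.mtime); // hypothetical helper
        });
      });
    });
  }, 60 * 1000);

The event-driven version reacts immediately and touches only changed files; the polling version burns stat calls on large trees, which is what the question is getting at.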

8
freework 3 days ago 6 replies      
One thing I've never gotten about these "syncing" apps...

Let's say I install this software on my phone, my desktop, and my work computer. I have 100+ GB free on my work computer and my home desktop, but I only have 16 GB on my phone. If I add 20 GB worth of movies to my sync folder, it's going to fill up my phone.

9
simbolit 3 days ago 1 reply      
I use http://owncloud.org and am quite happy. But I'm also happy for more competition :-)
10
taterbase 3 days ago 3 replies      
Git-Annex is a great existing option (made by joeyhess): https://git-annex.branchable.com/
11
JonAtkinson 3 days ago 2 replies      
I just set up Sparkleshare (http://sparkleshare.org/) this weekend. I wanted something which wasn't Dropbox, and preferably open source, and while Sparkleshare has a slightly clunky pairing mechanism, it works beautifully.

Syncthing looks similar, and LAN sync'ing is a killer feature for those of us in offices with poor bandwidth.

12
Wilya 3 days ago 0 replies      
That looks like a promising project in a space which definitely needs improvement. Owncloud and Sparkleshare are okay, but they are far from perfect, and there is still plenty of room to do better.
13
r0muald 3 days ago 0 replies      
A better title would be "Syncthing, an open source Dropbox replacement written in Go".

But seriously, it seems promising.

14
rsync 3 days ago 1 reply      
Here is the original from a year or so ago:

https://raymii.org/s/articles/Set_up_your_own_truly_secure_e...

"Then all current commercial services drop off, including SpiderOak, Bittorrent Sync and git-annex. This resulted in a clever combination of EncFS and dvcs-autosync. Because, in this day and age, you cannot trust any "cloud" provider with your unencrypted data."

15
aw3c2 3 days ago 4 replies      
This looks very promising. But the documentation is not good. I have not managed to find a "1 minute" friendly overview of how it works. I mean: what data gets sent, how, where, and why.
16
Karunamon 3 days ago 0 replies      
How good is this at traversing firewalls? AFAIK, Dropbox will do some manner of HTTP trickery to allow syncing when behind overly-restrictive firewalls (so it just goes out the usually-provided web proxy), but the documentation here references forwarding ports + UPNP, so I'm guessing that doesn't apply here?
17
marcamillion 2 days ago 2 replies      
I have large media files, multiple TBs.

I deal with a constant stream of these and want to have a distributed network - connected via the inet - that allows me to sync the drives in all locations.

i.e. I would like to setup a server in my home office, one in my co-founder's home office and another in my editor's home office.

Whenever my editor runs off a few hundred GB of data to a specific folder or to their drive, I would love for that to be auto-synced to both my server and that of my co-founder.

Will Syncthing allow me to do this easily and will it be appropriate for an application like that?

18
doctoboggan 3 days ago 1 reply      
A few months ago I looked into using Syncthing for my decentralized browser, Syncnet[0]. At that time it did not seem ready for primetime. Does anyone have a good feel for its maturity as of late? For example, is there an API? Syncthing looks very promising and I would love to integrate Syncnet with it.

[0]: http://jack.minardi.org/software/syncnet-a-decentralized-web...

19
grey-area 2 days ago 0 replies      
This project's aims seem very similar to those of the earlier Camlistore project, also written in Go:

https://camlistore.org

Anyone know how it compares?

20
davidjhall 2 days ago 3 replies      
Does this need to use the web GUI? I tried setting this up on a Digital Ocean server and it spawns a webserver on 8080 that I can't reach from my machine. Is there a "headless" mode for client-less servers? Thanks
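One generic workaround, assuming plain SSH port forwarding rather than anything Syncthing-specific (the hostname is made up):

  ssh -L 8080:localhost:8080 user@your-server
  # then browse to http://localhost:8080 on your local machine

This tunnels the GUI port to your local machine without exposing it publicly.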
21
popey 2 days ago 0 replies      
I've been using Syncthing for some months now and it's working well for my use case of keeping laptop/desktop and home server files in sync. I had one occasion when I lost everything as I'd brought up syncthing on my server without the "sync" directory mounted. It happily deleted all files from my synced laptop as a result. That's now fixed, but it was a buttock clenching moment. Yay backups, and a third machine (desktop) which was suspended and thus out of sync, so still had my data.

Upstream developer is very friendly and attentive & seems happy to discuss new features and use cases.

22
chrisBob 3 days ago 1 reply      
I am very happy with the Time Machine backup on my Mac, but I have been looking for a good offsite backup solution so that I can trade storage with my family in case something happens to my house. This might finally be the right option. BT Sync seemed OK, but was more than I wanted my parents to try to set up.
23
akumen 2 days ago 0 replies      
We love to throw around the words "alternative" and "replacement". It is neither until it is as easy to use/deploy as X for the average Joe. You know, the 90% of people out there who wouldn't be able to put the words 'git', 'deploy' and 'heroku' in the right order as their eyes glaze over in confusion.
24
interg12 3 days ago 2 replies      
What's wrong with BitTorrent Sync? The fact that it's a company?
25
bankim 2 days ago 1 reply      
An alternative would be AeroFS (https://aerofs.com/), which also does P2P file sync.
26
zyngaro 3 days ago 2 replies      
"Each node scans for changes every sixty seconds". There isn't any portable way to get notfications about file changes instead of polling? I know about jnotify in Java but well it's in java.
27
nl 2 days ago 1 reply      
Is there any mobile support?

I use Dropbox pretty frequently to share stuff between mobile devices and desktops. If Syncthing can't do that it isn't as useful.

28
ertdfgcb 3 days ago 0 replies      
Unrelated, but this is one of the best open source project landing pages I've ever seen.
29
desireco42 3 days ago 0 replies      
Since I installed BitTorrent Sync, my need for such software has stopped, as it works really well and provides all I need from it.

I couldn't quite understand the advantages, and why I would replace BTSync, which, BTW, already works really well and does all these nice things. Plus it works on my phone and iPad and Nexus.

To clarify one thing, I have a home server which obviously hosts BTSync repos with ample space. The ability to share parts of it in a fine-grained way is invaluable.

30
Fede_V 3 days ago 0 replies      
This looks incredibly interesting, and I would very much like to move from Dropbox to something open source. Thanks, will definitely play with it.
31
nvk 3 days ago 0 replies      
That's great news; I have been looking for an OSS sync app for quite some time.
32
ReAzem 2 days ago 0 replies      
I would also like to point out https://www.syncany.org/

Syncany can work with any backend (like AWS S3) and is encrypted.

It is more of a Dropbox replacement, while Syncthing is a BTSync replacement.

33
mark_l_watson 3 days ago 0 replies      
I really like the idea but one thing is stopping me: portability to iOS and Android devices, and mobile apps that work with Dropbox. Dropbox has a first-mover advantage.

This is mostly a problem for people like me who use both Android and iOS devices so alternatives need to support both platforms.

34
binaryanomaly 3 days ago 1 reply      
Let's hope this becomes what it is promising and relieves me of Dropbox and the likes... ;)
35
Sir_Cmpwn 3 days ago 1 reply      
I would like to see something like this that does not place trust in the server hosting the files.
36
Lucadg 2 days ago 0 replies      
Another alternative: http://www.filement.com/
I don't use it but friends do and are pretty happy with it. From their home page:

- Combine devices and cloud services into a single interface.
- Transfer data between computers, smartphones, tablets and clouds.
- Manage and use data directly on the device or cloud it is stored on.

37
scrrr 2 days ago 0 replies      
Is there a paper / doc explaining how the synchronisation works in detail?
38
orblivion 3 days ago 1 reply      
Does somebody fund projects like this? Or is it just that the people in charge of them understand something about UI and marketing? Seems like a nice trend, if so.
39
twosheep 3 days ago 2 replies      
So this may be as good a thread as any to ask for assistance:

My small business is looking for a combined file collaboration / file backup service that doesn't cost an excessive amount of money (we're a non-profit on a budget). Is there a good service for this? For example, Dropbox is mainly for sharing files, whereas Carbonite is mostly for backing up your computer. Is there a solution for both?

40
emsy 2 days ago 0 replies      
Yet another sync app is Pyd.io. The Web UI is super neat, and you can choose between various backends for storage. Pyd.io offers its own sync app, which I found to be horribly slow. I'd suggest using Pyd.io as a frontend and BTSync/Seafile/Syncthing as a backend.
41
Joona 3 days ago 2 replies      
I'm looking for a replacement for Dropbox, but it seems that none support direct links, like in Dropbox's public folder (example: https://dl.dropboxusercontent.com/u/38901452/fox2.jpg ) Is there one?
42
dead10ck 3 days ago 1 reply      
This looks very promising. And it's written in Go! The only major feature I think it's missing is file versioning.

I am curious, though: what do people use to get their files remotely? And what's the cheapest solution for hosting your own central server? Would a simple AWS instance work fine?

43
jms703 2 days ago 0 replies      
++ this effort. I'm looking forward to replacing my current BitTorrent Sync (btsync) setup with Syncthing.
44
haxxorfreak 3 days ago 0 replies      
I don't see a Solaris build on the download page, but it's listed next to the download button on the home page. Am I just missing something?
45
biocoder 2 days ago 2 replies      
Have you checked Hive2Hive? Something similar, but not there yet. https://github.com/Hive2Hive/Hive2Hive
46
scragg 3 days ago 0 replies      
I would have liked the name "synctank" better. :)
47
chris123 3 days ago 0 replies      
Can we get a "Bitcoin meets Dropbox meets Airbnb" already? Thks :)
48
hellbreakslose 2 days ago 0 replies      
Cool, I always like it when things are open source!
49
sixothree 3 days ago 2 replies      
It appears HN readers are terrible at self-organizing. Threads for articles like this should include by default a top level node for:

  "Here's the alternative I use"  "Important question about the technology"  "Pertinent question about the article"

50
downstream1960 3 days ago 1 reply      
So it's basically pirating, but it saves across all platforms?
2
FCC approves plan to consider paid priority on Internet washingtonpost.com
576 points by jkupferman  10 hours ago   303 comments top 50
1
sinak 9 hours ago 10 replies      
The title and post are both quite misleading. The commissioners didn't approve Tom Wheeler's plan (to regulate the Internet under Section 706), but voted to go ahead with the Notice of Proposed Rulemaking and commenting period. Tom Wheeler stated multiple times that Title II classification is still on the table.

There'll now be a 120 day commenting period; 60 days of comments from companies and the public, and then 60 days of replies to those comments from the same. After that, the final rulemaking will happen.

It's likely that the docket number for comments will continue to be 14-28, so if you want to ask the FCC to apply common carrier rules to the Internet under Title II, you can do so here: http://apps.fcc.gov/ecfs/upload/display?z=r8e2h and you can view previous comments here: http://apps.fcc.gov/ecfs/comment_search/execute?proceeding=1...

It's probably best to wait until the actual text of the NPRM is made public though, which'll likely happen very soon.

Edit: WaPo have now updated the title of the article to make it more accurate: "FCC approves plan to consider paid priority on Internet." Old title was "FCC approves plan to allow for paid priority on Internet."

2
ColinDabritz 10 hours ago 7 replies      
"And he promised a series of measures to ensure the new paid prioritization practices are done fairly and don't harm consumers."

I have a measure in mind that won't harm consumers. Don't allow ISPs to discriminate against users regarding their already paid for internet traffic based on what they request. (Gee that sounds a lot like net neutrality.)

Anything less is open for abuse.

Perhaps "Discrimination" is a good word to tar this with, because it is. It's discrimination against companies, but it's also discrimination against users based on their tastes, preferences, and possibly socioeconomic status.

To say nothing of de-facto censorship issues.

3
todayiamme 9 hours ago 1 reply      
In my mind one of the key questions to ask in this debate is, if the eventual rise of a more closely controlled internet destroys this frontier, what's next?

Right now, thanks to a close confluence of remarkable factors, the barriers associated with starting something are almost negligible. The steady march of Moore's law combined with virtualisation has given us servers that cost fractions of a penny to lease per hour. No one has had to beg or pay middlemen to use that server and reach customers around the world. At the other end, customers can finally view these bits, often streamed wirelessly, on magical slabs of glass and metal in their hands, or what would have passed for a super-computer in a bygone age... All of this combined with a myriad of other factors has allowed anyone to start a billion dollar company. If this very fragile ecosystem is damaged and it dies out, where should someone ambitious go next to strike out on their own?

4
hpaavola 10 hours ago 10 replies      
I don't get this whole net neutrality discussion that is going on in the US (and maybe somewhere else, I just haven't paid attention).

Consumers pay based on speed of their connection. If ISP feels like the consumers are not paying enough, raise the prices.

Service providers (not ISPs, but the ones who run servers that consumers connect to) pay based on speed of their connection. If the ISP feels like service providers are not paying enough, raise the prices.

Why in the earth there is a need for slow/fast lanes and data caps?

I'm four years old. So please keep that in mind when explaining this to me. :)

5
altcognito 10 hours ago 5 replies      
I'm confused by this headline (and a bit by the proceeding).

After watching the FCC hearing, it seemed like all of the people who were "for" open internet, and spoke of it from the consumer level (including Wheeler) voted for the proposal. The commissioners that said the FCC didn't have jurisdiction to regulate and to leave the market alone, voted against the proposal.

Isn't it the case that if they had voted against this, we would have been in the exact same boat we are in now, and therefore the agreement that Netflix signed would continue unabated?

In that case, it really didn't matter how they voted.

6
corford 8 hours ago 0 replies      
If Comcast gets their way, the FCC will have effectively ended up sanctioning the balkanisation of the US's internet users into cable-company-controlled fiefdoms.

Each cable company will then assume the role of warlord for their userbase and proceed to dictate the terms and agreements under which their users will experience the internet. All of which guided solely by their desire to maximise profits.

If people aren't worried yet, they should be. Serfs didn't enjoy medieval Europe for a reason.

The only two viable routes out of this nightmare are:

1. Enshrine net-neutrality / common carrier status in law

or

2. Radically break up the US ISP/cable market so that real competition exists. This way Comcast is free to try and milk every teat they can find. If users or content providers don't like the result, Comcast can wither on the vine and die while competitors pick up their fleeing users.

7
DigitalSea 1 hour ago 0 replies      
There is no way in hell this can go ahead. Also, minor nitpick, but this is a rather misleading post. Nobody approved anything; the vote was merely a green light to go ahead with the proposal. Nothing has been approved just yet; it's not that easy.

Some of my "favourite" takeaways:

"He stressed consumers would be guaranteed a baseline of service" Just like your internet provider says they don't throttle torrent traffic, but a few major ISPs have been caught out doing just that. The same is going to happen if this proposal goes ahead. Unless people breaking the rules are reported, they won't be caught, and where will the resources for reporting infringers come from?

"Wheeler's proposal is part of a larger "net neutrality" plan that forbids Internet service providers from outright blocking Web sites" I have no doubt in my mind that the reform Wheeler is pushing for is merely a door, and there are definitely bigger things in store once the flood gates have been opened. The pressure will be too great to close them again.

"The agency said it had developed a "multifaceted dispute resolution process" on enforcement and would consider appointing an "ombudsman" to oversee the process." The FCC has a shady history of resolving disputes; this is merely hot air to make the reforms not sound so bad. What happens when the resolution process breaks or is overwhelmed and can't cope with the number of infringements taking place?

As for a handful of key entities controlling what happens with the pipeline, China is a classic example of what happens when you let a sole entity dictate something like the Internet and even then, the great firewall doesn't stop everything.

Then there are questions about conflicts of interest. What happens when, say, a company like Comcast owns a stake in a company like Netflix and conspires to extort a competitor like Hulu (asking for exorbitant amounts of cash for speed)? Who sets the price of these fast lanes, and will prices be capped to prevent extortion? Too flawed to work.

8
mgkimsal 10 hours ago 2 replies      
"Even one of the Democratic commissioners who voted yes on Thursday expressed some misgivings about how the proposal had been handled.

"I believe the process that got us to rulemaking today was flawed," she said. "I would have preferred a delay.""

---------------------------------

But... she voted yes anyway. WTF?

9
DevX101 10 hours ago 4 replies      
> approved in a three-to-two vote along party lines,

Why the fuck are there party lines in the FCC? Or any other regulatory body for that matter?

10
adamio 10 hours ago 3 replies      
The internet is slowly being transformed into cable television
11
Alupis 9 hours ago 0 replies      
Wait a minute! You mean my ever-increasing ISP fees at my home are not for the ISP to build a better network? You mean to tell me the ISP is now going to charge content providers for the ability to provide me with content that I'm already paying my ISP to deliver? You mean to tell me my content providers are now going to likely increase their fees to cope with this "fast lane"?

This sounds an awful lot like extortion, and double billing.

ISP's... you have one (1) job. Deliver packets.

12
dragonwriter 10 hours ago 0 replies      
It's not a plan to allow paid priority on the Internet -- that's already allowed without any restriction since the old Open Internet order was struck down by the D.C. Circuit. It's a plan to, within the limits placed by the court order striking down the old plan, limit practices that violate the neutrality principles the FCC has articulated as part of its Open Internet efforts, including paid prioritization.
13
coreymgilmore 9 hours ago 0 replies      
Simply put, this is absolutely terrible. How are start-ups and small web companies supposed to compete when their reach to consumers will automatically be slowed compared to larger competitors who pay for faster pipes?

And who is to govern the rates (and tiers) of faster speeds? I can only assume ISPs will determine a cost based on aggregate bandwidth. But who is to say there can't be a fast lane, a faster lane, and a fastest lane? Sounds anti-competitive to me (even the big-name companies are against this!).

Last: "The telecom companies argue that without being able to charge tech firms for higher-speed connections, they will be unable to invest in faster connections for consumers" > Google Fiber is cheaper, for one. Second, the telecom giants have all increased subscriptions, so there is more money there. And, as time goes along, shouldn't these providers become more efficient and costs decrease anyway? Must be nice to have a pseudo-monopoly in some markets.

14
dethstar 10 hours ago 0 replies      
Most important quote since the title is misleading:

"The proposal is not a final rule, but the three-to-two vote on Thursday is a significant step forward on a controversial idea that has invited fierce opposition from consumer advocates, Silicon Valley heavyweights, and Democratic lawmakers."

15
isamuel 5 hours ago 0 replies      
The actual notice of proposed rulemaking (or "NPRM," as ad-law nerds call it): http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-14-61A...

I haven't read it in full yet, but I've read the introduction, and the press coverage (surprise!) does not seem quite right to me.

17
Lewisham 9 hours ago 0 replies      
"After weeks of public outcry over the proposal, FCC Chairman Tom Wheeler said the agency would not allow for unfair, or "commercially unreasonable," business practices. He wouldn't accept, for instance, practices that leave a consumer with slower downloads of some Web sites than what the consumer paid for from their Internet service provider."

Because they've done such a bang-up job of that thus far..? It's no secret that at comparable advertised speed, Netflix on Comcast was far worse than Netflix on other ISPs.

I'm not sure if they're really so deluded to think their enforcement is super great, or if they're just delivering placating sound bites.

18
joelhaus 10 hours ago 1 reply      
Can anyone make a serious argument on behalf of the carriers? Given the court decisions, the only way to protect the American people and the economy is to reclassify ISP's under Title II.

For the skeptics, it appears to come down to the question: which route offers better prospects for upgrading our internet infrastructure? Choice one is relying on a for-profit corporation with an effective monopoly that is beholden to shareholders; Choice two is relying on elected politicians beholden to the voters.

If you think there is a different argument that can be made on behalf of the carriers or if you can make the above one better, I would be very interested in hearing it.

19
jqm 10 hours ago 0 replies      
People having the freedom to look at whatever they choose on a level playing field may not be in the interests of all concerned.

The consolidation of media companies possibly served interests other than profits. Look at what Putin is allegedly doing with the internet. Maybe in a way the eventual intent of this is the same. And for the same purposes. I don't think we should let it get started just in case.

20
trurl 9 hours ago 0 replies      
We truly have the best government money can buy.
21
kenrikm 10 hours ago 1 reply      
Great! The FCC has officially sanctioned ISPs to be trolls, demanding some gold to cross their bridge. This guarantees that there will always be multiple levels of peering speed, even if the connections are upgraded and are able to easily handle the load. They won't want to give up their troll gold. That's just peachy; thanks for letting us get screwed over even more. Go USA! </Sarcasm>
22
zacinbusiness 8 hours ago 0 replies      
I don't understand the ISP's point of view on this issue. Please correct me if I'm wrong. But it seems that ISPs are saying "Hey, we offer this great service. But bandwidth hungry applications like Netflix are just using too much data. And we need to throttle their data usage, or they need to pay us more money."

The ISPs, then, are claiming to be victims, when in reality they simply promise services that they can't cost-effectively deliver.

If I make contracts to give all of you a new pair of shoes every month. And you pay in advance. And then I run out of shoes before I can deliver on my promise...doesn't that mean that I don't know how to effectively run my business? Isn't that my fault for promising a service that I can't provide? Why would anyone feel sorry for me?

23
lazyloop 10 hours ago 1 reply      
And now Comcast is planning data limits for all customers, what a coincidence. http://money.cnn.com/2014/05/15/technology/comcast-data-limi...
24
mariusz79 9 hours ago 2 replies      
It really is time to decentralize and move forward with mesh networking.
25
ryanhuff 10 hours ago 1 reply      
The investment in Obama by tech luminaries must be a huge disappointment.
26
couchand 8 hours ago 0 replies      
"If a network operator slowed the speed of service below that which the consumer bought, it would be commercially unreasonable and therefore prohibited," Wheeler said.

I find this quote very interesting. Currently the trend seems to be that the sticker speed on a connection bears little resemblance to the actual speed. I wonder if he has a plan to change that or if this was just an offhand remark.

27
xhrpost 10 hours ago 2 replies      
So what happened? It seems like just yesterday that the FCC was the one creating the rules around net neutrality. A federal court overturns this, and all of a sudden the FCC decides to go in the complete opposite direction?
28
hgsigala 9 hours ago 0 replies      
At this point everyone is officially invited to comment on the proposal. In around 60 days, the FCC will respond to your comments and redraft a proposal. Please comment! http://www.fcc.gov/comments
29
spacefight 10 hours ago 0 replies      
"What a nice internet connection you have there. It would be a real shame if something happened to it...".

So we had a good time, didn't we...

30
rjohnk 9 hours ago 0 replies      
I know all the basic ins and outs of bandwidth. But why is this so complicated? I pay x amount for access to the Internet at x speed. I use internet. I pay the access fee.
31
markbnj 8 hours ago 0 replies      
This portion of the piece is interesting to me: "He wouldn't accept, for instance, practices that leave a consumer with slower downloads of some Web sites than what the consumer paid for from their Internet service provider." Definitions are tricky, but since we all pay for more bandwidth from our ISPs than we utilize from any one site (or almost all of us, I think), sticking to this rule would mean ISPs would not have the power to throttle individual data sources. Is that not a correct interpretation?
32
Orthanc 9 hours ago 1 reply      
This doesn't sound good:

"6. Enhance competition. The Commission will look for opportunities to enhance Internet access competition. One obvious candidate for close examination was raised in Judge Silbermans separate opinion, namely legal restrictions on the ability of cities and towns to offer broadband services to consumers in their communities."

http://www.fcc.gov/document/statement-fcc-chairman-tom-wheel...

33
forgotAgain 10 hours ago 0 replies      
The fix is in. Now what are you going to do about it?
34
ozh 10 hours ago 3 replies      
I hope there will be companies who, upon being asked by an ISP to pay more for higher priority on its network, will tell them to get the f*k off and advocate the use of VPNs and anonymisers so their users aren't identified as US residents.
35
jon_black 8 hours ago 1 reply      
Assuming the plan were to be approved, and given that the FCC is an American government organisation, are there any implications for those in other countries?

Also, how can an American government organisation consider paid priority on The (global) Internet? Isn't it better to say that "FCC approves plan to consider paid priority on Internet for those who connect to it via a US telecoms provider"?

36
markcampbell 10 hours ago 0 replies      
Just making it easier for other countries. Shoot yourself in the foot, USA!
37
knodi 6 hours ago 0 replies      
No one I know in the public wants this, only ISPs. Why the fuck are we even having a commenting period on this? Fucking knock it down.
38
pushedx 6 hours ago 0 replies      
You can't offer bandwidth at a premium, without reducing the bandwidth available to others. That's (physically) how the Internet works. No matter what Wheeler says, there's no way that paid prioritization of traffic can be done fairly.
39
shna 6 hours ago 0 replies      
The mistake will be to allow even a tiny hole in net neutrality. Once they get hold of something, it will be only a matter of time before they make it larger. However harmless it sounds, any dent to net neutrality should be fought against fiercely.
40
devx 10 hours ago 2 replies      
As a European I probably should be glad about this, since this combined with all the NSA spying issues and implementing backdoors into US products [1] should increasingly force innovation out of the US and bring it to Europe, but somehow I'm not.

All the ISPs will slow down all the major companies' services unless they pay up. There is no "faster" Internet. It's just "paying to get normal Internet back", like they've already done with Netflix:

http://knowmore.washingtonpost.com/2014/04/25/this-hilarious...

[1] - http://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa-...

41
carsonreinke 8 hours ago 2 replies      
Maybe I am missing something, but what is the argument ISPs have for this?
42
shmerl 9 hours ago 0 replies      
I don't really understand why it's divided by partisan membership.
43
phkahler 9 hours ago 0 replies      
Who nominated this former lobbyist for the FCC spot? And who voiced/voted their approval? Voters should know.
44
mc_hammer 9 hours ago 0 replies      
Anywhere that the internet can be routed via paid priority is a spot where snooping can be installed.
45
rgumus 10 hours ago 2 replies      
Well, this is no coincidence. ISPs have been working on this for years.
46
JimmaDaRustla 10 hours ago 0 replies      
There should be a fast lane, it should also be the only lane.
47
wielebny 9 hours ago 1 reply      
If that passes, wouldn't this be a great opportunity for European hosting companies to seize the hosting market?
48
thekylemontag 7 hours ago 0 replies      
G_G america.
49
graycat 8 hours ago 0 replies      
Okay, from all the public discussion so far, NYT, WaPo, various fora, etc., I totally fail to 'get it'. Maybe I know too much or too little; likely a mixture of both.

Help! More details anyone?

To be more clear, let's consider: I pay my ISP, a cable TV company, so much a month for Internet service with speeds -- Mbps, million bits per second -- as stated in the service, maybe 25 Mbps upload (from me to the Internet) speed and 101 Mbps download speed.

Now those speeds are just between my computer and my ISP. So, if I watch a video clip from some server in Romania, maybe I only get 2 Mbps for that video clip because that is all my ISP is getting from the server in Romania.

And I am paying nothing per bit moved. So, if I watch 10 movies a day at 4 billion bytes per movie, even then I don't pay more.

Now, to get the bits they send me, my ISP gets those from some connection(s) to the 'Internet backbone' or some 'points of presence' (PoP) or some such at various backbone 'tiers', 'peering centers', etc.

Now, long common in such digital communications have been 'quality of service' (QoS) and 'class of service' (CoS). QoS can have to do with latency (how long do you have to wait until the first packet arrives?), 'jitter' (the time between packets varies significantly?), dropped packets (TCP notices and requests retransmission), out of order packets (to be straightened out by the TCP logic or just handled by TCP requesting retransmission), etc. Heck, maybe with low QoS some packets come with coffee stains from a pass by the NSA or some such! And CoS might mean, if a router gets too busy (the way the Internet is designed, that can happen), then some packets from a lower 'class' of service can be dropped.

But my not very good understanding is that QoS and CoS, etc., don't much apply between my computer and my ISP and, really, apply mostly just to various parts of the 'Internet backbone' where the really big data rates are. And there my understanding is that QoS and CoS are essentially fixed and not adjusted just for me or Netflix, etc. E.g., once one of the packets headed for me gets on a wavelength on a long haul optical fiber, that packet will move just like many millions of others, that is, with full 'network neutrality'.

So, I ask for some packets from a server at Netflix, Google, Facebook, Yahoo, Vimeo, WaPo, NYT, HN, Microsoft's MSDN, etc. Then that server connects to essentially an ISP but with likely a connection to the Internet at 1, 10, 40, 100 Gbps (billion bits per second). And, really, my packets may come from Amazon Web Services (AWS), CloudFlare, Akamai, some colocation facility by Level3 or some such; e.g., the ads may come from some ad server quite far from where the data I personally was interested in came from.

Note: I'm building a Web site, and my local colocation facility says that they can provide me with dual Ethernet connections to the Internet at 10 Gbps per connection.

Note: Apparently roughly at present it is common commercial practice to have one cable with maybe 144 optical fibers, each with a few dozen wavelengths of laser light (dense wavelength division multiplexing -- DWDM) with a data rate of 40 or 100 Gbps per wavelength.

So, there is me, a little guy, getting the packets for, say, a Web page. Various servers send the packets, they rattle around in various tiers of the Internet backbone, treated in the backbone like any other packets, arrive at my ISP, and are sent to me over coax to my neighborhood and to me.

So, with this setup, just where could, say, Netflix be asked to pay more and for what? That is, Netflix is already paying their ISP. That ISP dumps the Netflix packets on the Internet backbone, and millions of consumer ISPs get the packets. My ISP is just a local guy; tough to believe that Netflix will pay them. Besides, there is no need for Netflix to pay my ISP since my ISP is already doing what they say, that is, as I can confirm with Web site

http://www.speedtest.net

I'm getting the speeds I paid my ISP for.

Netflix is going to pay more to whom for what?

Now, maybe the issue is: If the Netflix ISP and my ISP are the same huge company, UGE, that, maybe, also provides on-line movies, then UGE can ask Netflix to pay more or one or the other of the UGE ISPs will throttle the Netflix data. Dirty business.

But Netflix is a big boy and could get a different ISP at their end. Then the UGE ISP who serves a consumer could find that the UGE ISP still throttles data from Netflix but not from the UGE movie service? Then the consumer's ISP would be failing to provide the data rate the consumer paid for.

Or, maybe, the UGE ISP that serves me might send the movies from the UGE movie service not as part of the, say, 101 Mbps download speed from my ISP to me and, instead, provide me with, say, 141 Mbps while the UGE movie is playing. This situation would be 'tying', right? Then if Netflix wants to be part of this 141 Mbps to a user who paid for only 101 Mbps, then Netflix has to pay their UGE ISP more; this can work for UGE because they have two ISPs and 'own both ends of the wire'.

I can easily accept that a big company with interests at several parts of the Internet and of media more generally may use parts of their business to hurt competition. Such should be stopped.

But so far the public discussions seem to describe non-problems.

50
kirualex 10 hours ago 1 reply      
Yet another blow to Net-Neutrality...
3
Removing User Interface Complexity, or Why React is Awesome jlongster.com
560 points by jlongster  2 days ago   218 comments top 33
1
tomdale 2 days ago 5 replies      
This is a really thoroughly researched post and jlongster has my gratitude for writing it up.

I have two concerns with this approach. Take everything I say with a grain of salt as one of the authors of Ember.js.

First, as described here and as actually implemented by Om, this eliminates complexity by spamming the component with state change notifications via requestAnimationFrame (rAF). That may be a fair tradeoff in the end, but I would be nervous about building a large-scale app that relied on diffing performance for every data-bound element fitting in rAF's 16ms window.

(I'll also mention that this puts a pretty firm cap on how you can use data binding in your app, and it tends to mean that people just use binding from the JavaScript -> DOM layer. One of the nicest things about Ember, IMO, is that you can model your entire application, from the model layer all the way up to the templates, with an FRP-like data flow.)
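For readers who haven't seen the pattern, here is a toy sketch of an rAF-batched render loop of the kind being described (not Om's or React's actual code; the 'count' element id is made up):

  var dirty = false;
  var state = { count: 0 };

  function setState(patch) {
    for (var k in patch) state[k] = patch[k];
    if (!dirty) {                      // coalesce many writes into one frame
      dirty = true;
      requestAnimationFrame(render);
    }
  }

  function render() {
    dirty = false;
    // Everything here (diffing and patching) must fit the ~16ms frame budget.
    var el = document.getElementById('count');
    if (el.textContent !== String(state.count)) {
      el.textContent = state.count;
    }
  }

However many times setState is called within a frame, only one render runs; that batching is exactly what the 16ms concern above is about.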

My second concern is that component libraries really don't do anything to help you manage which components are on screen, and in a way that doesn't break the URL. So many JavaScript apps feel broken because you can't share them, you can't hit the back button, you can't hit refresh and not lose state, etc. People think MVC is an application architecture, but in fact MVC is a component architecture: your app is composed of many MVCs, all interacting with each other. Without an abstraction to help you manage that (whether it's something like Ember or something you've rolled yourself), it's easy for the complexity of managing which components are on screen and what models they're plugged into to spin quickly out of control. I have yet to see the source code for any app that scales this approach out beyond simple demos, which I hope changes, because I would love to see how the rubber hits the pavement.

It's always interesting to see different approaches to this problem. I don't think it's as revolutionary as many people want to make it out to be, but I've never been opposed to borrowing good ideas liberally, either. Thanks again, James!

2
nostrademons 2 days ago 6 replies      
I think this post is missing something in its description of Web Components: the fundamental difference between a JS-based framework like React and a Web Components-based framework like Polymer is that the former takes JS objects as primitives and the DOM as an implementation artifact, while the latter takes the DOM as a primitive and JS as an implementation artifact. You cannot wrap your head around Web Components and give both it and JS frameworks a fair shake until you can make this mental shift in perspective fluently.

The line in the post where "You can't even do something as basic as that with Web Components.":

  var MyToolbar = require('shared-components/toolbar');
In fact has a direct analogue with HTML imports:

  <link rel="import" href="shared-components/toolbar.html">
And that's key to understanding Web Components. The idea of the standard is that you can now define your own custom HTML elements, and those elements function exactly like the DOM elements that are built into the browser. This is a key strategic point: they function exactly like the DOM elements that are built into the browser because Google/Mozilla/Opera/et al hope to build the popular ones into the browser eventually, just like we've gotten <input type=date> and <details>/<summary> based on common web usage patterns.

A number of the other code samples in the article also have direct analogues in Polymer as well. For example, the App/Toolbar example halfway down the page would be this:

  <polymer-element name="Toolbar" attributes="number">    <template>      <div>        <button value="increment" on-click="{{increment}}">        <button value="decrement" on-click="{{decrement}}">      </div>    </template>    <script>      Polymer('toolbar', {        number: 0,        increment: function() { this.number++; }        decrement: function() { this.number--; }      });    </script>  </polymer-element>  <polymer-element name="App">    <template>      <div>        <span>{{toolbar.number}}</span>        <Toolbar number=0 id="toolbar"></Toolbar>      </div>    </template>    <script>      Polymer('App', {        created: function() {          this.toolbar = this.$.toolbar;        }      });    </script>  </polymer-element>
You can decide for yourself whether you like that or you like the Bloop example more - my point with this post is to educate, not evangelize - but the key point is that you can define your own tags and elements just like regular DOM elements, give them behavior with Javascript, make them "smart" through data-binding so you don't have to manually wire up handlers, and then compose them like you would compose a manual HTML fragment.

3
rdtsc 2 days ago 5 replies      
As mostly an outsider to the web front end development, React.js is probably the easiest one for me to understand among the typical "frameworks", especially Angular and Ember.

After all the excitement about Angular, for example, I went to learn about it and just got lost with new concepts: DOM transclusion, scopes, services, directives, ng-apps, controllers, dependency injection and so on. I can use it but need someone to hold my hand. It reminded me of Enterprise Java Beans.

But so far just learning how React is put together and looking at the tutorials it seems like less of a framework and easier to understand altogether. I suspect this might become the new way to build web applications.

Well anyway, don't take this too seriously, I as said, I am an outsider to this.

4
NathanKP 2 days ago 5 replies      
I really like the core concepts of React, especially the way it is designed to help you organize your code into reusable components.

I think the key to making React take off is building a centralized repository for components that are open source. Then building your webapp would be as easy as importing the components you need:

  bower install react-navbar
  bower install react-signup-form
  bower install react-sso-signin-form
I think this is definitely the future of how front end web development will be done one day.

5
derwildemomo 2 days ago 2 replies      
As a recommendation to the author, it would make sense to show the example/demo area the whole time, not only once I scroll down. It confused me. A lot.
6
malvosenior 2 days ago 1 reply      
For those that haven't tried it, David Nolen's Om for ClojureScript is an excellent React framework.

https://github.com/swannodette/om

I've not used vanilla React, but Om is certainly fantastic and apparently adds a bunch of stuff that's not in the JS version.

Also, a web framework written by the guy that wrote most of the language you're using? Win!

7
ufo 2 days ago 3 replies      
I experimented with React a bit but I was a bit bugged by how large it was. The basic idea of rendering to a virtual DOM and having unidirectional data flow is really simple but I had trouble actually diving in to React's source code and seeing things under the hood (for example, I had to find a blog to see how the diffing algorithm worked).

What are the other libraries out there that we can use for this virtual DOM pattern right now? I only found Mithril[1], which similarly does the template rendering with JavaScript, but I still don't know how different it is from React in the end. Is the diffing algorithm similar? Do they handle corner cases the same way (many attributes need to be treated specially when writing them to the DOM)?

Simplifying it a bit: other than the virtual DOM, is the rest of React also the best way to structure apps? What would the ideal "barebones" virtual DOM library look like?

[1] http://lhorie.github.io/mithril/
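As a hedged stab at what "barebones" might mean, here is a toy virtual DOM in plain JavaScript; this is not React's or Mithril's actual algorithm (both also diff attributes and handle keyed reordering, which this skips):

  function h(tag, props, children) {        // build a virtual node
    return { tag: tag, props: props || {}, children: children || [] };
  }

  function create(v) {                      // realize a virtual node as real DOM
    if (typeof v === 'string') return document.createTextNode(v);
    var el = document.createElement(v.tag);
    for (var k in v.props) el.setAttribute(k, v.props[k]);
    v.children.forEach(function (c) { el.appendChild(create(c)); });
    return el;
  }

  function patch(parent, el, oldV, newV) {  // naive recursive diff
    if (oldV === undefined) return parent.appendChild(create(newV));
    if (newV === undefined) return parent.removeChild(el);
    var replace = typeof oldV !== typeof newV ||
        (typeof oldV === 'string' && oldV !== newV) ||
        (oldV.tag && oldV.tag !== newV.tag);
    if (replace) return parent.replaceChild(create(newV), el);
    if (typeof oldV === 'string') return;   // identical text node
    // Snapshot childNodes so removals don't shift later indices.
    var dom = Array.prototype.slice.call(el.childNodes);
    var len = Math.max(oldV.children.length, newV.children.length);
    for (var i = 0; i < len; i++) {
      patch(el, dom[i], oldV.children[i], newV.children[i]);
    }
  }

Each re-render builds a new tree with h() and calls patch() against the previous one; only the nodes that actually differ get touched.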

8
etrinh 2 days ago 0 replies      
Not to distract from the topic, but jlongster's posts should be a case study into how to make an effective demo/tutorial on the web. The side by side code/demo format is very well done and should be the de facto way to do code + demo. There have been so many times when I've been reading a tutorial and click on a demo link that opens a new tab. This makes me completely lose context as I switch back and forth between the tutorial and various demo links.

For another example of a post that takes advantage of the dynamic nature of the web page, check out jlongster's sweet.js tutorial[1]. It's a tutorial on writing macros with JS that you can actually interact with (you can make modifications to the example code snippets and see your macros expand on the fly). Very cool.

[1]: http://jlongster.com/Writing-Your-First-Sweet.js-Macro

9
Flimm 2 days ago 3 replies      
Please don't break the back button (Firefox and Chrome).

In Firefox 29.0 on Ubuntu 14.04, the left sidebar with the text of the blog disappears and is replaced with a white space. I do not experience this on Chrome.

10
adamors 2 days ago 3 replies      
Would you recommend using React instead of Angular for JS heavy areas of a website that is built with a server side framework (like Rails, Django etc.)?

I developed a rather complex SPA with Angular recently and I cannot go back to the ghetto that is jQuery when using server side rendering.

11
jdnier 1 day ago 1 reply      
Leo Horie, author of Mithril, has written a blog post where he explains how to re-implement some of the article's examples using Mithril (React-like client-side Javascript MVC framework): http://lhorie.github.io/mithril-blog/an-exercise-in-awesomen...
12
roycehaynes 2 days ago 1 reply      
Great post. I recently started a project using React, and I have nothing but good things to say. The unidirectional data flow, the declarative nature, and the virtual DOM make it powerful and very easy to like.

The best resource is to follow the tutorial (link below). The tutorial explains everything you may have a question about when comparing it to Backbone, Angular, or Ember.

http://facebook.github.io/react/docs/tutorial.html

I also found the IRC channel to be very, very helpful.

The only downside is that you still have to rely on other tools, like routing, to make a true SPA.

13
mrcwinn 2 days ago 0 replies      
Curious if anyone has experimented with Go+React - specifically rendering on the server side as well. Similar to Rails / react_ujs (react-rails gem), seems like you would need to provide Go with access to a v8 runtime and a FuncMap helper in the template to call for the necessary component (JSX -> markup). I've really enjoyed React and I've enjoyed Go in my spare time, but I still find myself using npm for a lot of the, um, grunt work.
14
valarauca1 2 days ago 0 replies      
It was really cool, until I realized that scrolling broke the back button.

I thought one of the cardinal sins of web design was don't break the back button.

15
IanDrake 2 days ago 5 replies      
Tester: The UI is wrong right here...

Developer: Hmm...I wonder how long it's going to take me to figure out where that HTML was generated in my javascript.

16
iamwil 2 days ago 2 replies      
Has anyone tried to use a different template engine with React? I was just wondering, since I didn't want to use JSX inline, and writing out html with React.DOM isn't appealing either.

I just wanted a way to put templates in <script> tags that get loaded by React Components. That way, I won't be mixing templates and the behavior of the components. Has anyone done this before?
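One hedged way to do what's described, using React's dangerouslySetInnerHTML escape hatch; the component and template names here are made up, and note that React won't diff or manage anything inside the injected markup:

  // Assumes the page contains:
  //   <script type="text/template" id="box-template"> ...markup... </script>
  var TemplateBox = React.createClass({
    render: function () {
      var html = document.getElementById('box-template').innerHTML;
      // Inject the template verbatim; React treats it as an opaque blob.
      return React.DOM.div({ dangerouslySetInnerHTML: { __html: html } });
    }
  });

  React.renderComponent(TemplateBox(), document.body);

You keep templates out of your JS this way, but you give up the virtual DOM's diffing for that subtree, which may defeat the purpose.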

17
platz 2 days ago 2 replies      
I was at a meetup where the speaker suggested React is great for business-like apps, but for things with an insane amount of DOM objects, like HTML games, it tends to get bogged down.

Since React claims to be super fast, has anyone done a performance comparison to see in what situations and by how much React performs better in certain cases, compared to, say, Angular.js or more vanilla frameworks?

(Also, I hear that there is a really great speedup that using Om gives you, but I haven't seen any comparisons with Om either.)

18
zenojevski 2 days ago 0 replies      
For those who are only interested in the render loop, I made a library[1] around this abstraction.

I plan to expand it with a toolkit to allow thinking in terms of batches, queues and consumers, à la bacon.js[2].

[1]: https://github.com/zenoamaro/honeyloops
[2]: https://github.com/baconjs/bacon.js

19
cellis 1 day ago 0 replies      
Needs a catchy buzzword. So, can we all agree to name this ... NoHTML?
20
__david__ 2 days ago 2 replies      
So I'm curious how one would implement something like drag and drop in a React app?

Would you model the drag in the main data model somehow? Or would you do that all externally (with traditional DOM manipulation) and then update the model when the drag is complete?

21
gooserock 2 days ago 2 replies      
I like the state and property features of React, but I still don't understand why more people aren't using Chaplin instead. Because quite honestly, the syntax of every other framework - React, Ember, and especially Angular - is complete gobbledygook by comparison.

Example: in Chaplin, components are views or subviews (because it's still an MVC framework, which is another discussion for another time). The views by default render automatically without you having to call anything. But if you did, you'd write @render() (because hey, Coffeescript saves keystrokes and sanity). That automatically renders the component in the place you've already specified as its container attribute, or if you haven't it renders to the body by default.

Whereas in React, you have to write this garbage: Bloop.renderComponent(Box(), document.body);

WHY. Can't we write a framework that intuits some of this crap? Shouldn't we use a framework that reduces the time we spend writing code?

22
mushishi 2 days ago 1 reply      
The awkward right panel that changed abruptly from time to time, especially the first transition, made me skip the content.

Using an uncommon UI idiom is risky. The presentation and the topic clash in a way that left this reader (i.e. me) without enough patience for what you might actually have to say.

23
1986v 2 days ago 0 replies      
If school books had online versions like this page, it would make reading so much more fun.

Great read, by the way.

24
zawaideh 2 days ago 2 replies      
Instead of having React altogether, wouldn't it make sense to have the browser keep track of a virtual DOM, and only repaint whatever changes on its own, while managing its own requestAnimationFrame and paint cycles?

P.S. I haven't looked into this in depth, just throwing an idea out there.

25
cnp 2 days ago 0 replies      
This is definitely a "Sell this to your boss" type of post :) Great work.
26
mattdeboard 2 days ago 0 replies      
This is great except the button is broken.
27
outworlder 2 days ago 0 replies      
I was half-expecting a gambit Scheme post :)
28
ulisesrmzroche 2 days ago 1 reply      
I'm honestly really, really wary of minimalistic front-end frameworks after two years of working on single page web apps.

All that ends up happening in practice is untested, messy code with deadlines up your ass and 0 developer ergonomics. Zero.

29
__abc 2 days ago 1 reply      
back button behavior is atrocious ....
30
gcb0 2 days ago 0 replies      
My gripe with these is that you remove UI complexity but shove usability and accessibility down some dark orifice.

All those things are cool for social apps (a.k.a. fart apps), but for business-ready platforms this is just silly.

For example, a link that I can middle-click or bookmark or send to someone would be much more useful, even if not as spiffy as those scrolls.

31
badman_ting 2 days ago 1 reply      
I recently watched a presentation about React's approach (I think from a recent JSConf) and it sold me, at least enough to try. The approach makes total sense to me, and I agree with many of its criticisms of Angular in particular. I really loved the reconsidering of our idea of "separation of concerns", that if we reconsider the scope of the concern, we can devise an approach where templating and logic go together. I'm excited by these ideas.
32
jbeja 2 days ago 0 replies      
I hate the OP's website. Why do I have to click the back button 10 times to come back here?
33
jafaku 2 days ago 1 reply      
Damn, Javascript is only becoming messier. I think I'll just watch it from the distance and wait until someone figures out the best way to deal with it.
4
The purpose of DRM is not to prevent copyright violations (2013) plus.google.com
522 points by adrianmsmith  8 hours ago   164 comments top 29
1
jbk 7 hours ago 4 replies      
This is one of the most important points about DRM.

DRM is marketed to users (and society, including politicians) and to artists as a way to prevent copies. Most engineers implementing DRM think so too. And all the discussions we've seen on HTML5 are around this. People have few arguments against this because it "sounds morally good" to help artists "live off their creations".

I am the de facto maintainer of libdvdcss, and have been involved in libbluray (and related projects) and a few other libraries of the same style; I've given several conference talks on this precise subject and I've fought the French High-Authority-on-DRM in legal questioning about an unclear piece of law... Therefore, I've studied DRM quite closely...

The truth is that if you consider the main goal of DRM to be preventing copies, no DRM actually works. ALL of them got defeated in one way or another. Indeed, GoT-broadcast-to-top-of-TPB time is counted in a couple of hours; so why do they still try to push these technologies?

The answer is probably because the main goal of DRM is to control distribution channels, not copy-prevention. Copy-prevention is a side goal.

This post of Ian is excellent to explain this.

PS: You can see me speaking of the same point, in French, in June 2013 here: http://www.acuraz.net/team-videolan-lors-de-pas-sage-en-sein...

NB: I'm not discussing here whether DRM is good or bad.

2
programminggeek 7 hours ago 3 replies      
I was going to say, the purpose of DRM is to get you to pay for multiple licenses. It's the same reason why a lot of paid download software is now on a SAAS model. If you can buy 1 copy of something for $20 and use it on whatever devices you want, then the company has made $20. If you DRM that to be for just one device, and you have 5 devices, they make $100. If you are a SAAS operator, you are effectively doing the same thing.

Somehow people are more okay with paying an ongoing fee for software or some perceived notion of services, but the same doesn't yet apply to content in a larger way. The closest equivalent is probably the cable companies, and they are taking their huge sums of money and buying the media companies, so maybe eventually there will be just a flat $100/month fee for experiencing a company's content on whatever device/experience it's available on. Maybe even movie theaters.

3
couchand 7 hours ago 6 replies      
Had CDs been encrypted, iPods would not have been able to read their content, because the content providers would have been able to use their DRM contracts as leverage to prevent it.

Moreover, the iPod most likely would have never been invented. How about that for killing innovation?

4
jamesbrownuhh 6 hours ago 5 replies      
What DRM does is make the 'pirate' goods, the 'hacked' players, the illegitimate rips better, more usable, more flexible, and generally superior in every way to the officially released product.

Which I'm sure is not the intention.

Say I can't copy-and-paste a section from an eBook or run it through a speech reader? Tell me I can't skip the trailers before watching the DVD I have paid for? No. Fuck you. Bullshit like that is a red rag to a bull - you just created an army of people who'll bust off your "rights management" just to show you how wrong you are, and that YOU DO NOT GET TO DECIDE how people consume the things they own.

Sorry and all. But that's how it is.

5
noonespecial 5 hours ago 0 replies      
DRM is primarily used in practice to do market segmentation. The rest of this comment is not available in your region.
6
beloch 5 hours ago 2 replies      
Nothing makes me want to turn pirate quite like being forced to sit through unskippable anti-piracy ads preceding a movie I've paid for.
7
azakai 5 hours ago 1 reply      
This is very true, but also preaching to the choir. Probably most of an audience like HN already knows this.

The real question is what we can do to fight DRM. The only real option is to push back against the companies that promote it. For EME, the current DRM in the news, the relevant companies are Google, Microsoft and Netflix.

It's all well and good to talk about how DRM is pointless. Of course it is pointless. But unless we actually push back against those companies, DRM will continue to win.

8
crystaln 5 hours ago 3 replies      
There is zero evidence of this claim in this article.

DRM is, in fact, to prevent unauthorized usage and copies. In fact, even some of the examples in this article are exactly that.

What is more important is that DRM doesn't have to be perfect, it just has to make unauthorized usage very inconvenient - enough that a few dollars is worth the cost for most people.

9
josephlord 6 hours ago 0 replies      
This rings quite true to me. I had protracted arguments about the limitations the BBC wanted to impose on TVs supporting Freeview HD in the UK (copy protection flags and only encrypted local streaming), despite the fact that the content itself was being broadcast at high power across the country completely unencrypted. What is it the CE companies need to license? The Huffman compression tables for the guide data, which the license agreement makes you warrant are trade secrets and that you won't reveal. I did send the BBC a link to the MythTV source code, which contains this trade secret. If you work out who I was working for during this discussion, don't worry: the content arm of the company was (at least according to the BBC) pressuring them the other way as a supplier.

And the end result? We caved for the shiny Freeview HD sticker.

10
Karellen 7 hours ago 0 replies      
Previous discussion (421 days ago, 22 comments):

https://news.ycombinator.com/item?id=5406733

11
chacham15 1 hour ago 0 replies      
While everything said in the article is true, the end result is that the control the distributors want to have is circumvented by pirating. Therefore, by continuing to try to control the content more, what is actually being done is increasing the demand for pirated content. I know of many people who buy content legally and then, in addition, acquire the pirated version to use as they please. Therefore, as that process becomes easier (look up Popcorn Time to see how easy it can be), the purpose of the control becomes more meaningless.
12
shmerl 7 hours ago 0 replies      
Of course not. Reasons for demanding DRM can be different, but none of them are valid or good. As discussed here: https://news.ycombinator.com/item?id=7745009 common reasons are:

1. Monopolistic lock-in. DRM is more often than not used to control the market. It happened with Apple in the past, and was one of the key reasons music publishers realized that being DRM-free is actually better for them.

This reason also includes DRM derivatives like DMCA-1201 and the like. It's all about control (over markets, over users, etc.).

2. Covering one's incompetence. DRM is used to justify failing sales (i.e. when execs are questioned about why the product performs poorly, they say "Pirates! But worry not - we put more DRM in place").

3. Ignorance and / or stupidity (many execs have no clue and might believe that DRM actually provides some benefit). This type can be called DRM Lysenkoism.

13
jiggy2011 7 hours ago 2 replies      
Pretty much this. The people who will pirate are going to pirate regardless; you could offer all your movies DRM-free for $1 each and some people would still pirate them.

So the purpose of DRM is to make maximum revenue from those who won't pirate, for example by charging more for group viewings of the movie or viewing on multiple devices.

14
HackinOut 5 hours ago 0 replies      
"Sure, the DRM systems have all been broken [...]"

I have worked with MS PlayReady DRM (the "latest" one from Microsoft, the one used by Netflix) for some time and never stumbled upon any cracks. Not because it's impossible or even difficult, but probably just because nobody cares about cracking Netflix (which brings PlayReady its main source of "users")... Once you pay, you can watch as much as you like, so why bother. Netflix made it extremely simple and accessible. (Yes, some features like multicasting might be missing, but it's still way better than Plex or Popcorn Time. For now at least... The main problem is clearly that the film industry makes it too difficult to have all content in one place.) There are plenty of other "easier" sources (alternative VOD offerings with already-cracked/worse protections, Blu-rays) to get the copyrighted material from for underground channels.

I am sure other DRM systems have a clean record for the same reason: no major incentive to crack them.

15
jljljl 6 hours ago 1 reply      
Speaking of controlling distribution channels, does anyone know how I can share this post outside of Google+, or add it to Pocket so that I can reread in more detail later?
16
userbinator 3 hours ago 0 replies      
The purpose of DRM is to give content providers leverage against creators of playback devices.

One thing that's always seemed odd to me is that the DRM use case is presented as a battle with "content providers" on one side and everyone else on the other, but aren't these content providers also users? Do they also consume DRM'd content, and if so, are they perfectly fine with the restrictions? Do those who devise DRM schemes not realise that they may also be the ones who will have these schemes imposed on them?

17
tn13 5 hours ago 0 replies      
I do not think there is any problem with DRM as such. It is pretty much the right of the content providers to choose how they will distribute their content.

What really grinds my gears is seeing an open source browser like Firefox forced against its wishes to implement it because DRM has somehow reached a standard.

The job of W3C standards is to protect the interests of ordinary web users and not content providers.

18
Kudos 4 hours ago 0 replies      
Can someone explain to me how businesses can provide a subscription model without DRM?

I refuse to purchase anything with DRM, but I don't give a shit if it's a rental or subscription service.

19
gagege 7 hours ago 4 replies      
Why isn't screen capture software more widely used? It seems like a dead simple screen capture suite could make all these DRM worries go away.
20
nijiko 3 hours ago 0 replies      
Eh, at the end of the day there are thousands of ways to get around it, so why implement it in the first place?

People pay for things that are good, easy to pay for, appropriately priced, and not more of a burden or expense than they think they're worth (it comes down to pricing and roadblocks). DRM and poor delivery services are usually those roadblocks.

21
ingenter 3 hours ago 0 replies      
>Without DRM, you take the DVD and stick it into a DVD player that ignores "unskippable" labels, and jump straight to the movie.

>With DRM, there is no licensed player that can do this

So, by enforcing some rules (via DRM) on player manufacturing, the content provider makes my experience worse as a consumer.

Again, I am a consumer: what are the advantages of DRM for me? That the manufacturer can force me to watch ads?

22
10098 6 hours ago 1 reply      
I can still make "unauthorized" copies of DRM'ed media and play those back on non-drm devices. E.g. record sound from a locked-down music player using a microphone, convert that to MP3 and listen to it using a normal MP3 player. So it's not 100% bulletproof.
23
mfisher87 4 hours ago 0 replies      
Steam would have been a great example for his article. Steam does nothing to prevent you from copying games. In fact, some games on steam can be bought without DRM from other sources. Steam just forces you to use Steam or buy your games again.
24
pje 6 hours ago 4 replies      
> Had CDs been encrypted, iPods would not have been able to read their content, because the content providers would have been able to use their DRM contracts as leverage to prevent it.

What? Why? Nothing would have prevented people from recording the playback of an encrypted CD and putting that on their iPod.

25
wyager 7 hours ago 4 replies      
Interesting. I was unaware of this.

But if this is the case, why is there such a push to put DRM in HTML? Browsers aren't DVD players. Users are free to use software like ABP to circumvent any features like "unskippable ads" mentioned in the post. Pressure on browser makers seems much less valuable than pressure on device makers.

26
spacefight 6 hours ago 0 replies      
TL;DR

"DRM's purpose is to give content providers control over software and hardware providers, and it is satisfying that purpose well."

27
briantakita 4 hours ago 0 replies      
Anyone who commits doublespeak is not worthy of trust.
28
briantakita 5 hours ago 0 replies      
Good thing it's easier than ever to DIY.
29
webmaven 7 hours ago 1 reply      
Needs a [2013] label in the title.
5
OS X Command Line Utilities mitchchn.me
494 points by brianwillis  20 hours ago   214 comments top 40
1
Monkeyget 17 hours ago 17 replies      
This was supposed to be few lines of remarks. It expanded quickly in relation with my enthusiasm for this topic.

I've been investing some time in the command line on my Mac. I am moving from a dilettante who goes to the shell on a per-need basis to a more seasoned terminal native. It pays off handsomely! It's hard to convey how nice it is to have a keyboard-based unified environment instead of a series of disjointed mouse-based GUI experiences.

Here are some recommendations pertaining to mastering the command line on a Mac specifically:

-You can make the terminal start instantaneously instead of taking several seconds. Remove the .asl files in /private/var/log/asl/. Also remove the file /Users/<username>/Library/Preferences/com.apple.terminal.plist

- Install iterm2. It possesses many fancy features but honestly I hardly ever use them. The main reason to use it instead of the default Terminal application is that it just works.

-Make your terminal look gorgeous. It may sound superficial but it actually is important when you spend extended periods of time in the terminal. You go from this http://i.imgur.com/cx3zZL8.png to this http://i.imgur.com/MQbx8yK.png . You become eager to go to your terminal instead of reluctant. Pick a nice color scheme https://code.google.com/p/iterm2/wiki/ColorGallery . Use a nice font (Monaco, Source Code Pro, and Inconsolata are popular). Make it anti-aliased.

-Go fullscreen. Not so much for the real estate but for the mental switch. Fullscreen mode is a way to immerse yourself into your productive development world. No browser, no mail, no application notification. Only code.

-Install Alfred. It's the command line for the GUI/Apple part of your system. Since I installed it I stopped using the dock and Spotlight. Press ⌥+space, then type what you want and it comes up. In just a few keystrokes you can open an application, open gmail/twitter/imdb/..., make a web search, find a file (by name, by text content), open a directory,... It's difficult to describe how empowering it is to go from 'I want to check something out in directory x, which is somewhere deep deep in my dev folders' to having it displayed in 2 seconds flat.

-Make a few symlinks from your home directory to the directories you use frequently. Instead of doing cd this/that/code/python/project you just do cd ~/project (see the sketch at the end of this list).

-Learn the shell. I recommend the (free) book The Linux Command Line: http://linuxcommand.org/tlcl.php . It guides you gently from simple directory navigation all the way up to shell scripting.

-Use tmux. Essential if you want to spend some time in the terminal. You can split the window into multiple independent panes. Your screen will have multiple terminals displayed simultaneously that you can use independently. For example I'll have the code on one side and on the other side a REPL or a browser. You can also have multiple windows, each with its own set of panes, and switch from one to the other instantly. With multiple windows I can switch from one aspect of a project to another instantly. E.g.: one window for the front-end dev, a second one for the backend and another for misc file management/git/whatever.

-Pick an editor and work towards mastery. I don't care if you choose vi or emacs. You'll be surprised how simple features can make a big change in how you type. You'll be even more surprised at how good it feels.

The terminal is here to stay. It's a skill that bears a lot of fruit and depreciates slowly. The more you sow the more you reap.
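A minimal sketch of the symlink tip above; the deep path is hypothetical, substitute your own project directory:

    ln -s ~/dev/this/that/code/python/project ~/project   # create the shortcut once
    cd ~/project                                          # two words instead of six
    readlink ~/project                                    # shows where the link points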

2
cstross 17 hours ago 1 reply      
One command I can't live without: textutil.

Basically it's a command-line front end to Apple's TextKit file import/export library. Works with a bunch of rich text/word processor formats, including OpenDoc, RTF, HTML 5, and MS Word. Critically, the HTML it emits is vastly better than the bloated crap that comes out of Microsoft Word or LibreOffice when you save as HTML ...

Install pandoc and multimarkdown as well and you've got the three pillars of a powerful, easy-to-use multiformat text processing system.
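A couple of hedged examples of what that looks like in practice (the file names are made up):

    textutil -convert html report.docx                 # writes report.html next to the original
    textutil -convert txt notes.rtf -output notes.txt  # pick the output path explicitly
    textutil -cat rtf ch1.txt ch2.txt -output book.rtf # concatenate files into one RTF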

3
ggreer 17 hours ago 2 replies      
I didn't know about `screencapture`. That's a fun one.

The Linux equivalent of `open` is `xdg-open`. I usually alias it to `op`, since `/bin/open` exists.

Another bit of terminal-sugar for OS X users:

    alias lock='/System/Library/CoreServices/"Menu Extras"/User.menu/Contents/Resources/CGSession -suspend'
And most Linux users:

    alias lock='gnome-screensaver-command -l'
If you find yourself accidentally triggering hot corners, the lock command is your savior.

I've sorta-documented this stuff over the years, but only for my own memory. https://gist.github.com/ggreer/3251885 contains some of my notes for what I do with a clean install of OS X. Some of the utility links are dated, but fixing the animation times really improves my quality of life.

4
eschaton 18 hours ago 9 replies      
What always surprises me is that so many don't know or use the directory stack commands, pushd and popd. I'll admit I was also ignorant of them until something like 2005, but once I learned of them I switched and never looked back. Now I can't see someone write or type "cd" without a little bit of a cringe.
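For anyone who hasn't tried them, a quick sketch of the directory stack in action (bash/zsh builtins; the paths are arbitrary):

    pushd ~/projects/webapp   # cd there and remember where we came from
    pushd /etc                # hop somewhere else; the stack keeps growing
    dirs -v                   # print the stack with indices
    popd                      # back to ~/projects/webapp
    popd                      # back where we started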
5
greggman 18 hours ago 1 reply      
These are awesome. I didn't know about many of them.

One tiny thing though, at the bottom it says

> Recall that OS X apps are not true executables, but actually special directories (bundles) with the extension .app. open is the only way to launch these programs from the command line.

Actually, you can launch them in other ways. Example

    /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome  --user-data-dir=/Users/<username>/temp/delmechrome --no-first-run
This will start a new instance of Chrome with its own datastore in ~/temp/delmechrome. Add some URL to the end to have it auto-launch that webpage. Delete ~/temp/delmechrome to start over.

6
runjake 10 hours ago 0 replies      
9. /usr/sbin/system_profiler

10. /System/Library/CoreServices/Applications/Wireless Diagnostics (with built-in wifi stumbler)

11. /System/Library/CoreServices/Screen Sharing.app (Built-in VNC client with hardware acceleration)

12. /System/Library/PrivateFrameworks/Apple80211.framework/Versions/A/Resources (Command-line wifi configuration and monitoring tool)

Combine with sed, awk, and cut, and these tools can provide useful monitoring.
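For number 12, the binary buried in that Resources directory is `airport`; a hedged sketch of the usual setup (the symlink location is just a convention):

    sudo ln -s /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport /usr/local/bin/airport
    airport -I    # details of the current connection: SSID, RSSI, channel
    airport -s    # scan for nearby networks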

7
nicksergeant 6 hours ago 1 reply      
Why the hell would someone change the title from "Eight Terminal Utilities Every OS X Command Line User Should Know" to "OS X Command Line Utilities"?

The original title is clearly more accurate / useful / canonical. The overwritten title is ambiguous. This is indeed not a list of every OS X command line utility.

8
wink 14 hours ago 2 replies      
'open' is one of the things I long for most as a Linux user. There are several ways to achieve something similar, but they are all inferior or downright broken. Usually you don't have a huge problem, until you do. xdg-open, for example, could have solved this if it worked universally.

I wrote a related rant once[0] when I tried to debug an issue with a misconfigured default browser.

[0]: http://f5n.org/blog/2013/default-browser-linux/

9
barbs 17 hours ago 0 replies      
I use multiple POSIX environments (OS X at work, Linux Mint and Xubuntu at home), and I find it handy to create common aliases for differently implemented commands to keep the environments consistent.

For example, I set the letter 'o' as an alias for 'open' on OS X, and to "thunar" on Xubuntu and "nemo" on Linux Mint.
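A sketch of how that might look in a shared shell profile; picking the right file manager per distro is left as an assumption:

    case "$(uname -s)" in
      Darwin) alias o='open' ;;
      Linux)  alias o='xdg-open' ;;   # or thunar / nemo, depending on the desktop
    esac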

10
pirateking 14 hours ago 1 reply      
After years of living on the command line, OS X specifically, and learning its quirks and tricks, I am actually ready to move on.

Right now I am more interested in creating simple visual interfaces on top of UNIX-y tools, for my own personal use cases. The main benefit of this is the ability to better experiment with and optimize my workflows for different properties as needed through different combinations of single responsibility interfaces and single responsibility programs.

I am sensing that there is great promise in seeing much higher APMs (actions per minute) for many tasks, even compared to the all-powerful command line. Also, there are lots of interesting possibilities for better visual representations of data to improve comprehension and usability.

11
ansimionescu 13 hours ago 1 reply      
Additions:

* lunchy: wrapper over launchctl, written in Ruby https://github.com/mperham/lunchy

* brew cask: "To install, drag this icon... no more", as they say https://github.com/caskroom/homebrew-cask

* have fun with "say" https://github.com/andreis/different

12
nmc 18 hours ago 5 replies      
/usr/local is the default location for user-installed stuff, but I personally like to have my package manager do its stuff in a separate directory.

I like the way Fink [1] uses the /sw (software) directory.

Does anyone have a valuable opinion on the comparison between Fink and Homebrew or maybe MacPorts?

[1] http://www.finkproject.org

13
pling 18 hours ago 0 replies      
Another one that I can't live without:

   ssh-add -k keyfile
Integrates with Keychain, meaning you can have a passworded private key without having to play around with ssh-agent and shells and profiles. Put Keychain Access in the menu bar and you can lock the keychain on demand as well. Integration of ssh into the OS X workflow is absolutely awesome.

That and some of the examples in that article really make it a killer platform for Unix bits.

14
DCKing 12 hours ago 3 replies      
Could you imagine if Apple had gone with BeOS, or a custom-developed kernel with no significant terminal-based userland, when making OS X? It would probably still be used by many casual users and those doing graphical work, but I doubt it would be used by hackers at all.
15
salgernon 10 hours ago 1 reply      
pbpaste and pbcopy can specify multiple clipboards; one very handy thing I do is

"cmd-a" "cmd-c" (copy all)

double click on a word I'm looking for, "cmd-e" to enter it into the find clipboard

'pbpaste | fgrep --color `pbpaste -pboard find`'

I have that aliased as 'pbg'.
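The alias probably looks something like this in a shell profile (my guess at the quoting):

    alias pbg='pbpaste | fgrep --color "$(pbpaste -pboard find)"'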

16
RexRollman 3 hours ago 1 reply      
Personally, I was surprised that there is not a command line interface to OS X's Notification system. Seems like it would be handy for long running batch jobs.
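Since 10.9 there is at least a workaround via AppleScript; a sketch for the end of a long batch job (the job and message text are invented):

    ./long_batch_job.sh; osascript -e 'display notification "Job finished" with title "Terminal"'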
18
torrent-of-ions 17 hours ago 1 reply      
Another thing you can do to improve speed is learn the keybindings for readline. They are the same keybindings as emacs, and lots of other things use readline too like python shell, sqlite, etc. A very useful set of keys to have in your muscle memory. See the readline manual: http://tiswww.case.edu/php/chet/readline/rluserman.html#SEC3
19
smw 16 hours ago 0 replies      
Put this in your path somewhere, find files, links, directories instantly, with globbing. Makes mdfind actually useful.

  $ mdf "*invoice*.pdf"  /Users/smw/Downloads/Invoice-0000006.pdf
https://gist.github.com/smw/a21a9f675ed3358830da

20
_jsn 18 hours ago 0 replies      
mdfind / Spotlight can be a fairly powerful tool. Consider this query, which finds all Xcode projects I've tagged as "Active":

  ~$ mdfind tag:Active kind:xcode
  /Users/jn/Code/xyz/xyz.xcodeproj
  ...
Queries like this also work in the Cmd-Space UI, or as a Saved Search. By default each term is joined with AND, but you can specify OR too.

21
milla88 18 hours ago 4 replies      
My favorite command is 'say'. You can do all kinds of silly voices.

Try this out: say hello -v "Good News"
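To explore further (both flags are standard `say` options; Cellos is one of the stock singing voices):

    say -v '?'                        # list every installed voice
    say -v Cellos 'doo doo doo doo'   # sing it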

22
archagon 18 hours ago 1 reply      
Great list! Includes all the old favorites with clear explanations.

This is only tangentially related, but I recently wrote a little Automator Service to gather any selected file and folder paths from Finder. I very often need to grab the path of something for programming-related stuff, and doing it from the command line or with the mini-icon drag-and-drop takes way too long. Maybe somebody here will find it useful! http://cl.ly/1a3s3g1u2Q2w

23
stretchwithme 8 hours ago 0 replies      
The hot keys for screen capture are more useful for daily use. You can paste what you've captured directly into most email clients. Or go to Preview, where creating a new file uses what's on your clipboard if it's an image.
24
gotofritz 9 hours ago 0 replies      
Also worth mentioning are dotfiles (not specific to OS X). Basically, various well-known "power users" share their bash, homebrew, etc. settings on GitHub so that they can easily set up a new machine with a minimum of fuss. There are a lot of neat tricks in those boilerplate files.

http://dotfiles.github.io/

25
jpb0104 11 hours ago 0 replies      
Here is a very handy script that takes a screenshot, places it in Dropbox's public directory, shortens the public URL, then puts the short URL in your clipboard. Making for very quick screenshot sharing. It combines a few of these hints. https://gist.github.com/jpb0104/1051544
26
fuzzywalrus 9 hours ago 0 replies      
Notably, the screen capture terminal command, while neat, is sold as "more flexible". I think the author is unaware of Command+Shift+4 followed by tapping the spacebar, which gives you the window capture.

Otherwise good article.

27
hibbelig 17 hours ago 3 replies      
I want "remote pbcopy"! I'd like to be able to log in to any remote host (usually Linux in my case), then tack something onto the command line I'm typing to copy it into the pastebuffer.

    ssh somehost
    cd /some/dir
    grep -lr foo . | remote_pbcopy
I guess something like this is possible with GNU Screen or with Tmux, and perhaps the Tmux/iTerm interaction helps, but I've never figured it out.
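One common sketch uses a reverse SSH tunnel back to a listener that feeds pbcopy; the port number is arbitrary, and the nc flags differ between BSD and GNU netcat:

    # on the Mac: keep a listener that pipes whatever arrives into the clipboard
    while true; do nc -l 2224 | pbcopy; done

    # connect with a reverse tunnel so the remote host can reach that listener
    ssh -R 2224:localhost:2224 somehost

    # on somehost: anything piped at the tunnel lands in the Mac clipboard
    grep -lr foo . | nc localhost 2224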

28
chrisBob 11 hours ago 1 reply      
The biggest change I have found for my terminal was adding this to my .bash_profile:

  export CLICOLOR=1
  export LSCOLORS=GxFxCxDxBxegedabagaced

I thought that was one of the most amazing things when I used a Linux system, but OS X is black and white by default.

29
allavia88 18 hours ago 1 reply      
There have been a few of these lists over the past few years; the most recent one is https://news.ycombinator.com/item?id=7494100

It seems like a large portion of HN is less experienced re: sysadmin, but is interested in it nonetheless. Perhaps there's room to make a 'Codecademy for Unix' type course? Curious to see what folks think.

30
huskyr 18 hours ago 1 reply      
Awesome. I knew about `pbcopy`, but I never knew you could also pipe stuff into it. That saves a lot of time spent writing script outputs to temporary text files and copying!
31
conradev 14 hours ago 1 reply      
The one utility I can't live without is caffeinate, which prevents a Mac from sleeping.

It's super useful for keeping long running tasks running.
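Two typical invocations (the script name is hypothetical):

    caffeinate -i ./long_build.sh   # prevent idle sleep until the command exits
    caffeinate -d -t 3600           # keep the display awake for an hour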

32
chrismorgan 18 hours ago 0 replies      
I don't use a Mac, but I have used espeak-via-ssh to deliver a message to my sister, who was near my laptop, from the comfort of my bed. I could have (a) called out, or (b) gotten up, but where would the fun have been in that?
33
fmela 6 hours ago 0 replies      
The '-name' argument of mdfind makes it useful to find files with the query string in the name. E.g.: "$ mdfind -name resume".
34
guard-of-terra 16 hours ago 0 replies      
I wonder if it's possible to make OS X say "As you request, Stan" in Lexx's voice.

That alone might be sufficient reason to migrate from ubuntu.

35
RazerM 18 hours ago 3 replies      
It seems odd to have

  open /Applications/Safari.app/
as an example, when

  open -a safari
does the same thing.

36
cormullion 15 hours ago 0 replies      
If you work with images a lot, look up sips. I use it a lot, for converting images, rescaling and resizing, etc.
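A few representative invocations (the file names are invented):

    sips -Z 800 photo.jpg --out photo-small.jpg    # fit inside 800px, keeping aspect ratio
    sips -s format png photo.jpg --out photo.png   # convert JPEG to PNG
    sips -g pixelWidth -g pixelHeight photo.jpg    # query image properties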
37
vladharbuz 18 hours ago 1 reply      
The screencapture options are wrong.

    "Select a window using your mouse, then capture its contents without the windows drop shadow and copy the image to the clipboard:    $ screencapture -c W"
-c captures the cursor, W is not an option. The real command for this is:

    $ screencapture -C -o

38
mmaldacker 18 hours ago 5 replies      
no love for MacPorts?
39
nemasu 19 hours ago 0 replies      
This is neat. I'll be getting a Mac soon, and this is right up my alley.
40
lastofus 17 hours ago 1 reply      
The article doesn't mention the fun you can have with ssh + say.

My co-workers and I used to ssh into the iMacs of non-technical users in our office and have a good laugh from a nearby room.

6
Introducing the WebKit FTL JIT webkit.org
450 points by panic  2 days ago   94 comments top 18
1
hosay123 2 days ago 4 replies      

    > Profile-driven compilation implies that we might invoke an optimizing
    > compiler while the function is running and we may want to transfer the
    > function's execution into optimized code in the middle of a loop; to our
    > knowledge the FTL is the first compiler to do on-stack-replacement for
    > hot-loop transfer into LLVM-compiled code.
Reading this practically made my hair stand on end; it is one hell of a technical feat, especially considering they had no ability to pre-plan so that both LLVM and their previous engine would maintain identical stack layouts. It's really insane they got this to work at all.

I was reminded of the old story about a search engine that tried to supplant JS with a Java bytecode-clone because at the time it was widely believed Javascript had reached its evolutionary limit. How times change!

2
nadav256 2 days ago 2 replies      
This project is such a great technological achievement for both the WebKit and LLVM communities. There are so many 'first times' in this project. This is the first time profile-guided information is used inside the LLVM _JIT_. This is the first time the LLVM infrastructure supported self-modifying code. This is one of the very few successful projects that used LLVM to accelerate a dynamic language. This is the first time WebKit integrated their runtime (JITs and garbage collector) with an external JIT. This is the first JavaScript implementation that has advanced features such as auto-vectorization. Congrats guys!
3
rayiner 2 days ago 1 reply      
The Bartlett mostly-copying collector is just a really neat design. Even if your compiler gives you precise stack maps, conservative root scavenging still has the major advantage of giving the optimizer the most freedom. It's the basis of SBCL's collector, which is probably 20 years old now. Good to see it still has legs.

This is the patch point intrinsic documentation: http://llvm.org/docs/StackMaps.html. This is a really significant addition to LLVM, because it opens up a whole world of speculative optimizations, even in static languages. Java, for example, suffers on LLVM for want of an effective way to support optimistic devirtualization.

4
tomp 2 days ago 1 reply      
They use a conservative GC, which I understand, as they were using it before the FTL JIT and it required minimal changes for integration with the LLVM-based JIT. However, in the blog post they mention several times that they wanted to avoid stack maps because that would require spilling pointers from registers to the stack, which they say is undesirable for performance reasons.

I wonder, however, how slow register spilling really is. I will test it when I have time, but logically, it shouldn't take up much time. Under the x64 ABI, 6 registers are used for argument passing [1], and the rest of the arguments are passed on the stack. So, when the runtime calls into GC functions, all but at most 6 pointers are already in the stack, at (in theory) predictable locations. Those 6 registers can be pushed to stack in 6 instructions that take up 8 bytes [2], so the impact on the code size should be minimal, and performance is probably also much faster than most other memory accesses. Furthermore, both OCaml and Haskell use register spilling, and while not quite at C-like speeds, they are mostly faster than JS engines and probably also faster than FTL JIT.

Of course, predicting the stack map after LLVM finishes its optimisations is another thing entirely, but I sincerely hope the developers implement it. EDIT: it seems that LLVM includes some features [3] that allow one to create a stack map, though I wonder if it can be made as efficient as the GHC stack map, which is simply a bitmap/pointer in each stack frame, identifying which words in the frame are pointers and which aren't.

[1] http://en.wikipedia.org/wiki/X86_calling_conventions#x86-64_...

[2] tested using https://defuse.ca/online-x86-assembler.htm#disassembly

[3] http://llvm.org/docs/GarbageCollection.html

5
tambourine_man 2 days ago 1 reply      
I'm so glad to see that Webkit is not dead after the Blink fork. I still use Safari as my main browser, but its developer tools and optimizing compiler lag behind.
6
InTheArena 2 days ago 2 replies      
This is a much bigger deal than people are giving credit for, because of the other thing that Apple uses LLVM for. It's the primary compiler for Objective-C, and thus Cocoa (Mac apps) and CocoaTouch (iOS apps) as well. If Apple has JavaScript compiling on the fly at this speed, it would be pretty trivial to expose the Objective-C runtime to JavaScript and mix and match C & JavaScript code.

This is going to be a very very big deal.

7
simcop2387 2 days ago 2 replies      
I wonder how difficult it would be to take the hints from asm.js and, after validating that the code meets the contracts it provides, push it all through the FTL JIT to get a huge speed boost on that code. With the ability to do hot transfers into LLVM-compiled code, it should be possible without any real noticeable issues to the user.
8
leeoniya 2 days ago 0 replies      
> Note that the FTL isnt special-casing for asm.js by recognizing the "use asm" pragma. All of the performance is from the DFGs type inference and LLVMs low-level optimizing power.

doesnt "use asm" simply skip the initial profiling tiers that gather type stats etc? most of the benefit of compiling to asm.js comes from fast/explicit type coersion.

9
ksec 1 day ago 1 reply      
I am glad WebKit is thriving. I was worried that the Blink fork, with Google and Opera, would mean WebKit gets no love.

Hopefully the next Safari on iOS and OS X will get many more improvements.

10
aaronbrethorst 2 days ago 2 replies      

    dubbed the FTL - short for Fourth Tier LLVM
Is it still a backronym if it redefines an existing acronym? (even a fictional one?)

http://en.wikipedia.org/wiki/List_of_Battlestar_Galactica_ob...

11
lobster_johnson 1 day ago 2 replies      
Is this architecture generic enough that one could, say, build a Ruby compiler on top of it?

I imagine even writing a Ruby -> JS transpiler that used the WebKit VM would provide a speedup, similar to how JRuby works on the JVM, but native compilation would be even better.

12
jwarren 1 day ago 0 replies      
I adore threads like this. They really bring to focus exactly how much more I have to learn.
13
cromwellian 2 days ago 3 replies      
Would be interesting to see Octane numbers.
14
otikik 1 day ago 0 replies      
It's called FTL, but I am not actually sure it's Faster Than LuaJIT.
15
jongraehl 1 day ago 0 replies      
wonder how this compares to node - specifically https://github.com/rogerwang/node-webkit
16
nivertech 2 days ago 2 replies      
no benchmarks versus V8?
17
harichinnan 1 day ago 1 reply      
ELI5 version please?
18
Theodores 2 days ago 1 reply      
Much as hairdressers don't necessarily have the best haircuts, it seems that companies that make the finest web browsers don't necessarily have the greatest web pages! I don't think there is an ounce of JavaScript on the WebKit website, yet that article goes waaay over the heads of mere mortals on mega-speedy JavaScript.
7
AdBlock Pluss effect on Firefoxs memory usage mozilla.org
449 points by harshal  1 day ago   257 comments top 34
1
gorhill 1 day ago 12 replies      
It's not just memory overhead, it is also CPU overhead.

One approach is to write the filtering engine from scratch. It is what I did, without looking at ABP's code beforehand in order to ensure a clean slate mind.

I didn't get it right the first time, I did spend quite a large amount of time benchmarking, measuring, prototyping, etc.

Once I was satisfied that I finally had a solid code base, I went and benchmarked it against ABP to find out how it compared:

https://github.com/gorhill/httpswitchboard/wiki/Net-request-...

And for HTTPSB's numbers, keep in mind there were over 50,000 extra rules in the matrix filtering engine (something not found in ABP).

So I think this shows clearly ABP's code can be improved.

2
neals 1 day ago 6 replies      
The first thing people complain about with browsers is probably memory usage. I doubt that many people understand what a browser actually does with that memory. I sure don't.

100mb sounds like a lot of memory for a webpage. Where does all this memory go to?

3
maaaats 1 day ago 5 replies      
> Many people (including me!) will be happy with this trade-off they will gladly use extra memory in order to block ads.

Well, for me the whole point of blocking ads is that they are often big Flash things that hog CPU and memory. If ABP is no better, then most of the reason is gone. I'd actually like to view ads to support more sites.

4
mullingitover 1 day ago 4 replies      
This is like a commercial for forking Firefox to build highly efficient ad blocking into the browser.

It's sad that the most-demanded feature on every browser, as evidenced by plugin downloads, is ad blocking. However, all the major browsers are produced by companies with their hands in advertising, and this conflict of interest has resulted in this feature request going unfulfilled for over a decade.

Fork 'em.

6
graylights 1 day ago 2 replies      
Some sites have tried gimmicks to block adblock users. I wonder now if they'll make thousands of empty iframes.
7
chrismorgan 1 day ago 1 reply      
I would like to see a lighter blocking list, one that doesn't try to be absolutely comprehensive, but just focuses on the 95% of ads that can be fixed at 10% of the cost (actually I think it'd be more like 99% at 0.1% of the cost).
8
bambax 1 day ago 3 replies      
Wouldn't the solution involve allowing plugins to manipulate the content of a page before it is parsed by the browser?

There used to be a proxy adblocker that did that, but I don't think it works anymore.

The Kindle browser uses a proxy to pre-render pages on the server in order to lighten the load on the device.

Could AaaS (Adblock as a service) be a viable business? I think I'd pay for it.

9
BorisMelnik 1 day ago 0 replies      
I'm just gonna say this as a person who uses ABP and has not done any low level memory analysis: browsing using ABP saves me much more time and makes my browsing much quicker:

-ads take so long to load, and when ABP is not enabled they bog down my browser hard core

-when watching videos I have to wait 3-20 seconds for them to load; with ABP enabled I do not have to wait at all

It consistently saves me several minutes every day. If it adds a few extra milliseconds to load some style sheets I have never, ever noticed.

10
axx 1 day ago 5 replies      
It's 2014.

My computer has 12 Gigabytes of RAM.

I don't give a shit, as long as I don't have to view those terrible, stupid ads.

Advertisers, fix your practices and I will view your ads again.

11
ladzoppelin 1 day ago 1 reply      
Use Bluhell Firewall and NoScript for Firefox instead of ABP. https://addons.mozilla.org/en-US/firefox/addon/bluhell-firew...
12
bithush 1 day ago 1 reply      
My primary computer is a laptop from 2008 with a 2.5Ghz Core 2 Duo and other than a 1 second delay in startup I don't notice any performance degradation in general use compared to running without ABP.

Currently this machine has been up for 3 days and 17 hours, and Firefox has been running the whole time. It is using 303MB with 4 tabs open, one of which is the Daily Mail (don't judge!), an extremely busy page. This is perfectly acceptable in my opinion. I only have 4GB RAM, which by today's standards is not much either. Obviously reducing the memory footprint is great, but I can't say it has ever been a problem I have noticed.

On a side note the past few updates to Firefox have improved performance a lot. I am really impressed by how much quicker the browser is, especially with Chrome seeming to just get slower and slower with each update.

13
SixSigma 1 day ago 2 replies      
I would recommend Privoxy rather than AdBlock. I chose it because it can be used with any browser rather than needing to be a plugin, and I have had great results with it. As a bonus, I have it running on a virtual server and use an SSH tunnel on any system I end up using. This gets me around filters without installing anything other than ssh. So if I am using my Android phone on free wifi, I know I'm not being DNS-hijacked or under an HTTP MITM attack.

http://www.privoxy.org/
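The tunnel part might look like this; 8118 is Privoxy's default listen port, and the hostname is hypothetical:

    ssh -N -L 8118:localhost:8118 user@myserver.example
    # then point the browser's HTTP proxy at localhost:8118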

14
alipang 1 day ago 2 replies      
Does anyone know if this also affects Chrome? I have Chrome saying AdBlock is using 126MB of memory, but if there are giant stylesheets injected elsewhere, that might not be reported fully in the Chrome task manager.
15
nhebb 1 day ago 4 replies      
I'm surprised AdBlock is so popular. I tried it a few years ago and noticed the sluggishness immediately. I know it's not a feasible alternative in everyone's opinion, but if you're running a low end machine and want to block the most pernicious ads (flash, multiple external javascript, etc.) then Firefox + NoScript is the way to go.
16
dbbolton 1 day ago 0 replies      
If you are on a low-end machine and can't afford a larger memory footprint, you should probably be using a lighter browser in the first place. But if you really want to use Firefox and block ads, one option is to just block them through your hosts file; then the addon becomes largely unnecessary.

http://someonewhocares.org/hosts/

http://winhelp2002.mvps.org/hosts.htm

17
demi_alucard 1 day ago 0 replies      
For those who do not know, there is an EasyList without element hiding rules.

https://easylist-downloads.adblockplus.org/easylist_noelemhi...

With these rules, Firefox's memory usage only goes up from 290MB to 412MB for me, instead of to 1.5GB for the website mentioned in the article.

The downside is that this list has more limited coverage than the full version of the list.

18
nclark 1 day ago 0 replies      
i would rather have my browser crash and never open again for the rest of my life than not use adblock. take all the memory, CPU, fuck it GPU that you want, you fantastic addon.
19
SudoNick 1 day ago 1 reply      
I think this is the fourth or fifth time, in recent memory, that I've seen someone from Mozilla criticize Adblock Plus and call on its developers to make changes. ABP startup time and memory consumption were subjects I recall, and its general impact on page load times may have been as well.

I can understand Mozilla taking some interest in how addons behave, and constructive feedback on extensions is a good thing. However, ABP is the type of extension that is likely to have issues in those areas because of what it does, which is very important to users, especially those who rely upon it for its privacy- and security-enhancing capabilities. It is those users who should decide whether the performance and resource usage trade-offs are acceptable. Mozilla shouldn't make, or try to make, such decisions.

The situation with ABP 2.6 (https://adblockplus.org/development-builds/faster-firefox-st..., https://adblockplus.org/forum/viewtopic.php?t=22906) might not be a case of this, but that along with the wider pattern of platform developers being more controlling, does make me somewhat concerned about Mozilla taking too much interest in extensions. I hope my worries are for naught.

20
caiob 1 day ago 0 replies      
I'd love to hear the side-effects of using Disconnect.
21
hbbio 1 day ago 1 reply      
Very interesting.

There might be an architecture problem here. Another solution is to use a proxy to block ads, like GlimmerBlocker for OSX. But I didn't investigate memory usage, though I tend to think it will be lower (plus have the added benefit of working simultaneously for all browsers).

22
kmfrk 1 day ago 1 reply      
I use the program Ad Muncher for Windows, and I've found that it sometimes becomes a humongous bandwidth hog, when I'm watching video.

So it's not just memory that's at risk of being hogged.

23
sdfjkl 1 day ago 0 replies      
On the other hand AdBlock (edge of course, not the co-opted plus), saves quite a bit of network traffic. Nice if you're on metered (or slow) internet.
24
Shorel 1 day ago 0 replies      
My browser is fast again!

Thank you, really thank you.

25
chrismcb 1 day ago 0 replies      
Did I read that correctly? One website had 400 iframes? FOUR HUNDRED? Not 4, but 4 HUNDRED? Surely there is a better way.
26
mwexler 1 day ago 1 reply      
Ah, 3 words that always seem to show up together: firefox and memory and usage. I look forward to when I see that 4th word, "minimal", in there as well. Yes, I know original post is about an addin increasing memory usage, but I guess that doesn't surprise me anymore either, sadly, when thinking of firefox.

Even with the memory pain, I still (mostly) love FF.

27
linux_devil 1 day ago 1 reply      
I switched from Firefox to Safari when I found Firefox using 900 MB on my machine. Now I know the real reason behind that.
28
spain 1 day ago 1 reply      
Another (related) popular extension that I've noticed negatively affecting Firefox's performance is HTTPS Everywhere.
29
vladtaltos 15 hours ago 0 replies      
If 19 million people use it, why hasn't it become a feature Firefox offers itself? Doesn't this mean people want some option for blocking ads? I'm guessing a natively integrated version would require a lot less memory...
30
Vanayad 1 day ago 0 replies      
The same issue appears in Chrome as well. The site they linked got to ~2.4GB before the browser stopped responding.
31
hokkos 1 day ago 1 reply      
LastPass is victim of the same fate.
32
ColbieBryan 1 day ago 0 replies      
Editing hosts files, installing Privoxy or GlimmerBlocker, using Ghostery, Bluhell Firewall - I've tried all of these suggestions, and unfortunately ABP is the only way to go if blocking all unwanted pop-ups is a priority.
33
nly 1 day ago 1 reply      
So is there any insight as to why? My guess, without looking at the code would be the regex engine allocating millions of NFA graphs.
34
Donzo 1 day ago 0 replies      
You guys: you've got to stop using this.

You're missing important messages from sponsors.

8
Is it possible to apply CSS to half of a character? stackoverflow.com
431 points by gioele  3 days ago   85 comments top 17
1
habosa 3 days ago 5 replies      
I've never seen such polished and thorough answers to a question like this. One answer (not even the accepted answer) made a plugin and a beautiful website to go along with it: http://emisfera.github.io/Splitchar.js/

Pretty amazing that our tools are getting so good that someone can quickly whip up an open-source plugin and splashy, hosted website for a SO answer.

2
metastew 3 days ago 3 replies      
I wonder if this designer is the one behind the recently unveiled 'Halifax' logo? The X looks remarkably similar.

Link for visual evidence: https://twitter.com/PaulRPalmeter/status/456165443363827712/...

3
vinkelhake 3 days ago 1 reply      
Those are some imaginative solutions. One problem[1] with drawing over half the character is anti-aliasing: the border gets a blend of both colors.

[1] http://i.imgur.com/5KspGyc.png

4
wymy 3 days ago 5 replies      
The 'why' should not matter. Who really cares whether it should be done, or why they would want to do it?

If someone has an interesting problem, let's try to figure out a way to do it. Usually the why comes up along the way, but it's not relevant.

Thankfully, some great folks stepped in and gave quality responses.

5
JacksonGariety 3 days ago 3 replies      
So this is just the ::before pseudo-element? I feel like I'm missing something.
6
m1117 3 days ago 1 reply      
That's cheating :) The CSS is applied to the whole character... I'm sure you can avoid JavaScript by using content: attr(letter) in :before and :after, like <span letter="X" class="half-red-half-green"></span>
7
MrQuincle 3 days ago 1 reply      
And now apply it to half a character diagonally. :-)
8
origamimissile 3 days ago 1 reply      
http://jsfiddle.net/CL82F/14/ did it with just CSS pseudoselectors
9
frik 3 days ago 1 reply      
relevant code snippet:

  .halfStyle:before {
    width: 50%;
    color: #f00;
  }

10
kbart 1 day ago 0 replies      
My eyes hurt just seeing these examples. I hope such techniques will not get much attention outside of SO and HN.
11
brianbarker 3 days ago 1 reply      
It's cool seeing the solution for this, but I never would have imagined such a random request.

Now, go make striped characters.

12
paulcnichols 3 days ago 0 replies      
Not looking forward to when this catches on.
13
BorisMelnik 3 days ago 0 replies      
Thinking of a ton of new use cases for this example. It would be nice to see this implemented in pure CSS in the next revision, perhaps. Is that even possible?
14
joeheyming 2 days ago 0 replies      
yes, but why?
15
frozenport 3 days ago 5 replies      
This looks repugnant. Just because it can be done, doesn't mean you should do it! :-)
16
ape4 3 days ago 3 replies      
Just because you can do something...
17
omegote 3 days ago 1 reply      
It's sad to see that this kind of question gathers so much attention while other questions closer to the real world receive NONE. StackOverflow has reached its max hipster level.
9
Alien creator H.R. Giger is dead swissinfo.ch
393 points by lox  2 days ago   66 comments top 24
1
tluyben2 2 days ago 2 replies      
RIP. He was a nice guy and a great artist, but in his last years he was quite disabled by (I think) a stroke and it was almost impossible to talk to him. My friend used to visit him to discuss work they did together, and I went with him one time; he talked with him a few times after the stroke but it was never the same. I was a big fan from the moment I saw Alien in the early 80s, and it was nice to meet him while he was still producing art.
2
ThePhysicist 2 days ago 1 reply      
That's really sad. I just recently watched "Jodorowsky's Dune" (http://jodorowskysdune.com/), a documentary on the planned but never realized "Dune" movie by Alejandro Jodorowsky, for which H.R. Giger did a lot of artwork, and which features an interview with him in his home in Switzerland. If you look at the designs Giger did for this movie, you can already see the "Alien" style all over them.
3
mgw 2 days ago 4 replies      
If you're ever in Switzerland and a fan of H.R. Giger, you should check out his great museum in Gruyère. [1] Additionally, the idyllic mountain village is well worth a visit on its own.

[1] http://www.hrgigermuseum.com/

4
elecengin 2 days ago 2 replies      
My favorite H.R. Giger story is from when he met the rock band Emerson, Lake and Palmer and agreed to do the album art for Brain Salad Surgery. The album name - innuendo for a sex act - inspired an equally sexual album cover. [1] The original image was a futuristic woman with a penis covering her mouth.

The band loved it, but the record company refused to release the album. The band, placed in a difficult position, petitioned Giger to adjust the artwork. Giger refused to bow to the band's and record company's demands, and in the end the record company had to hire an airbrush artist to remove it as much as possible... leading to the "shaft of light" along the neck.

[1] http://images.coveralia.com/audio/e/Emerson,_Lake_y_Palmer-B... SFW)

5
etfb 2 days ago 0 replies      
I'm amazed he wasn't dead years ago. Seventy-four is a good twenty years younger than I kind of assumed he would be. He was only forty when he did the design for Alien? I know his artistic style was already famous before that, meaning he must have been a wee tacker when he started out. Amazing.

Also: vale. A talented artist with a distinctive voice.

6
sbirchall 2 days ago 4 replies      
A truly unique talent. In his memory, you should check out Aphex Twin's "Windowlicker", directed by Chris Cunningham (HN will probably be most familiar with Bjork's "All is Full of Love" music video). A whole host of talent came together there to make one of the most fucked up things you'll ever witness. Suffice to say, a big red NSFW warning goes out on this one!

http://www.youtube.com/watch?v=7MBaEEODzU0

[EDIT: I mean MOST DEFINITELY NOT SAFE FOR WORK!!!!]

7
ChuckMcM 2 days ago 0 replies      
I got to meet the artist when his 'Alien' creation was on display at the California Science Center in Exposition Park. They had a number of sets from the movie on display, and the full-size creature that the CGI and other latex models were built from. I remember thinking, "Wow, this guy seems completely normal for someone who has the ability to envision something so twisted." It is a rare gift to be able to think about impossible things.
8
coolandsmartrr 2 days ago 0 replies      
I was always haunted and fascinated by Giger's imagination. What first comes to mind is the album cover for Emerson, Lake and Palmer's "Brain Salad Surgery". By synthesizing Thanatos and Eros, both primordial in human nature, Giger created intrinsically appealing artworks. A great loss to the world.
9
JabavuAdams 2 days ago 0 replies      
So long, and thanks for all the nightmares.
10
textminer 2 days ago 0 replies      
For those interested in his work, I really recommend viewing the recent documentary Jodorowsky's Dune, a failed film project Giger and several other proto-luminaries worked on (inspiring much of the iconic imagery in Alien, Star Wars, and Indiana Jones). Giger appears throughout the documentary. I believe it's still playing in the Bay Area.
11
mysteriousllama 2 days ago 1 reply      
I remember picking up an Omni magazine when I was a prepubescent tadpole. The cover had this amazing art that caught my eye. Guess who had drawn it?

Only later did I read it and become fascinated with science. Guess what I do now?

It's amazing how much this man did for the world through his work. Very influential to many people in many ways.

He will be missed.

12
joel_perl_prog 2 days ago 1 reply      
What a great genius. What a great loss.

Celebrate his life today by watching Alien!

13
backwardm 2 days ago 0 replies      
I know this won't add much to the discussion, but I really hope he designed his own casket using his signature style; that would be really fun to see and a great way to show one last piece of artwork.
14
mililani 2 days ago 1 reply      
Wow. For some reason, I thought he was dead a long time ago. RIP
15
wiz21 2 days ago 1 reply      
Although well known for Alien, Giger actually made tons of other stuff (including several bars!). Here's a good book about him that I've read:

http://www.taschen.com/pages/en/catalogue/art/all/01777/fact...

16
igorgue 2 days ago 0 replies      
Sad to see him go, great artist!

If you have a chance, check "Alejandro Jodorowsky's Dune". They have one of his last interviews talking about how he got started and how that failed movie was the seed for his ideas for Alien with Jean Giraud.

17
logfromblammo 2 days ago 0 replies      
Is it wrong of me to hope that he designed his own casket and mausoleum?
18
doctornemo 2 days ago 0 replies      
Ah, what a loss.

I remember being astonished by Giger's vision in Alien. For years I hunted down posters, calendars, and books, which weren't always available or affordable. Like others here, I relished the Dark Seed game for its tribute to Giger.

This takes me back to an earlier stage of my life, and makes me very sad. What a vision!

19
outworlder 2 days ago 0 replies      
He deserves a black bar.
20
Cowicide 2 days ago 0 replies      
Ironically, his futuristic work will be incredibly influential far into the future. Terrible news.

RIP H.R. Giger

https://imgur.com/gallery/SsFg0Hu/

21
bussiere 2 days ago 0 replies      
Dark Seed was a shock when I was young. I still remember the game. RIP
22
ihenriksen 2 days ago 0 replies      
Very sad. I was a huge fan even before the Alien movies.
23
camus2 2 days ago 0 replies      
great artist!
24
DENIKUTA 2 days ago 0 replies      
Up The great site
10
Maze Tree ocks.org
386 points by yaph  1 day ago   38 comments top 13
1
couchand 1 day ago 0 replies      
One of the coolest things about this gist is how little code is used for the animation; basically just these lines:

    d3.selectAll(nodes).transition()
        .duration(2500)
        .delay(function() { return this.depth * 50; })
        .ease("quad-in-out")
        .tween("position", function() {
          var d = this, i = d3.interpolate([d[0], d[1]], [d.y, d.x]);
          return function(t) { var p = i(t); d.t = t; d[0] = p[0]; d[1] = p[1]; };
        });
The tree layout auto-magically gives every node a depth property, we delay the transition by an amount proportional to the depth, and then over the course of two-and-a-half seconds tween the line segment into the new position. Simple and effective. The hard part is generating the maze.

2
roberthahn 1 day ago 3 replies      
It's not clear to me whether the tree is mapping the paths of the maze or the walls - the transformation makes it appear as though the walls are being mapped but that doesn't make sense.

I wonder if this works backwards - given a tree, could you construct a maze? Efficiently?

3
d0m 22 hours ago 0 replies      
And that is how you get undergrad students interested in graph theory.
4
pmontra 18 hours ago 0 replies      
I did something similar more than 20 years ago using Prim's algorithm for the minimal spanning tree. Spanning trees are not the most efficient way to generate a maze, but I was studying CS and the maze generation was a good incentive to actually translate the textbook pseudocode into actual C (without the ++). I didn't do the fancy tree animation, but you'll excuse me as all I had was an 80x25 ASCII terminal, so it was probably a 40x12 maze :-) However, I added a nethack-style @ character and the hjkl keys to move from the entrance to the exit in the shortest time, plus a leaderboard shared across all the users of our Unix system. Our terminals had a repeat key (i.e. keys didn't autorepeat, you had to press the repeat key as if it were a shift to make a key repeat) and that added to the dexterity required to go through the maze quickly. The fastest time was in the 5-6 seconds range. I'm afraid the source code was lost on some tape long ago.
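
For the curious, here is a minimal JavaScript sketch of the spanning-tree approach described above: carve a maze by growing a randomized spanning tree over a grid, Prim-style. The grid size, the names, and the ASCII rendering are all illustrative choices, not a reconstruction of the lost original:

    function generateMaze(w, h) {
      // walls[y][x] -- true means the wall on that side is still standing
      const walls = Array.from({ length: h }, () =>
        Array.from({ length: w }, () => ({ N: true, S: true, E: true, W: true })));
      const inMaze = Array.from({ length: h }, () => Array(w).fill(false));
      const frontier = []; // candidate edges leading out of the growing tree
      const dirs = { N: [0, -1, 'S'], S: [0, 1, 'N'], E: [1, 0, 'W'], W: [-1, 0, 'E'] };

      function addCell(x, y) {
        inMaze[y][x] = true;
        for (const d of Object.keys(dirs)) frontier.push([x, y, d]);
      }

      addCell(0, 0);
      while (frontier.length) {
        // pick a random frontier edge -- the randomness is what makes it a maze
        const [x, y, d] = frontier.splice(Math.floor(Math.random() * frontier.length), 1)[0];
        const [dx, dy, opp] = dirs[d];
        const nx = x + dx, ny = y + dy;
        if (nx < 0 || ny < 0 || nx >= w || ny >= h || inMaze[ny][nx]) continue;
        walls[y][x][d] = false;    // knock down the wall between the two cells
        walls[ny][nx][opp] = false;
        addCell(nx, ny);           // the new cell joins the spanning tree
      }
      return walls;
    }

    // Crude ASCII renderer: '#' for walls, spaces for corridors.
    // (No entrance or exit is carved; that part is left out of the sketch.)
    function render(walls) {
      const h = walls.length, w = walls[0].length;
      let out = '#'.repeat(w * 2 + 1) + '\n';
      for (let y = 0; y < h; y++) {
        let row = '#', below = '#';
        for (let x = 0; x < w; x++) {
          row += ' ' + (walls[y][x].E ? '#' : ' ');
          below += (walls[y][x].S ? '#' : ' ') + '#';
        }
        out += row + '\n' + below + '\n';
      }
      return out;
    }

    console.log(render(generateMaze(10, 6)));

Because each step attaches exactly one new cell to the tree, the carved corridors form a spanning tree of the grid: exactly one path between any two cells, which is the tree/maze correspondence the visualization animates.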
5
ShardPhoenix 20 hours ago 1 reply      
Looks cool, but I'm a bit confused about the colorized examples. It seems like in some of the examples, there are blocks that are colored red that are further away in maze-distance than other blocks that are green, etc. Do the colors roll-over after a certain distance?
6
dazmax 10 hours ago 0 replies      
I'd like to see the tree starting from the other end of the maze too.
7
kin 1 day ago 1 reply      
the things people visualize w/ D3 never cease to amaze me
8
ars 1 day ago 2 replies      
Does this maze have a single unique path through it?
9
Donch 1 day ago 0 replies      
Frankly, that is true art. Inspired.
10
xabi 17 hours ago 0 replies      
Old simple maze generator algorithms: http://imgur.com/a/5miDZ
11
soheil 21 hours ago 0 replies      
'maze-balls!
12
Justen 1 day ago 0 replies      
That animation is really freakin' sweet
13
icefox 1 day ago 1 reply      
It would be even cooler if rather than it being all white it was colorized.
11
FSF condemns partnership between Mozilla and Adobe to support DRM fsf.org
379 points by mikegerwitz  13 hours ago   256 comments top 26
1
jeswin 12 hours ago 10 replies      
If we should train our guns anywhere, it should be on the W3C, the guardians of web standards. The W3C shouldn't have legitimized this feature by bringing it into standards discussions. The media companies would have had to comply eventually; they had no future without distribution over the internet. Now, of course, they have hope.

Mozilla had no chance once Google, MS, Apple and everybody else decided to support EME. Most users don't care if they fought for open standards. They are probably just going to say that Firefox sucks.

If you ask me, Mozilla could be the most important software company in the world. The stuff they are building today is fundamental to an open internet for the future. It is important that they stay healthy for what lies ahead.

2
cs702 12 hours ago 2 replies      
The key insight for me is this one: "Popularity is not an end in itself. This is especially true for the Mozilla Foundation, a nonprofit with an ethical mission."

Even though non-profit organizations like Mozilla do not seek to maximize financial gain (by definition), they often seek to maximize their relevance in the world. As a result, they ARE susceptible to corruption: most if not all are willing to "compromise" -- that is, sacrifice their mission and values -- in order to remain "important" in the eyes of society.

The folks running Mozilla are sacrificing the organization's mission and values because they're afraid of losing market share. They do not want Firefox to become a niche platform.

3
Pxtl 12 hours ago 3 replies      
Honestly, I think the w3c should've just told Netflix et al to get the heck out of the browser.

Really, that's what this is all about... but those companies are already building fully native applications for every platform other than win32+Web. Telling them to go make a native application (or keep dealing with Silverlight/Flash) for that one last platform would be completely appropriate.

The world of software has changed - now we have major companies building applications for multiple different platforms instead of "just windows" or "just web". The web doesn't need to do everything.

It doesn't need to do this.

4
valarauca1 12 hours ago 4 replies      
The FSF refuses to compromise their principles. They refuse to negotiate. I respect them for that; morally it's nice to have a fixed point that holds the line and refuses to change, as it gives you a benchmark against which to judge yourself. Even if sometimes you think the old guard ate a bit too much paste.
5
Rusky 12 hours ago 3 replies      
Yes, it's disappointing that Mozilla is adding DRM to Firefox. No, that does not mean they hold "misguided fears about loss of browser marketshare".

People have the freedom to disagree with you, FSF. Just because they do doesn't make them misguided, especially on a future prediction.

How is this any different from flash/silverlight plugins we already have?

6
blueskin_ 13 hours ago 4 replies      
The FSF would have good points, but then they ruin them with things like "or the issues that inevitably arise when proprietary software is installed on a user's computer.". Yes, DRM is bad, but not everything has or needs to be open source to treat its users ethically, and some people do need to make a living from their software.

Not everything needs to be GPL to respect people's rights to do what they want with something they bought, not everything needs to be open source just because they like it that way, and above all, people should have a right to choose to install whatever they want, and distros should have the same right to choose to tell the user about closed source software when it would be helpful to them. If the end user didn't want to hear that, they can either ignore it, or use a FSF-endorsed linux distro like Trisquel. The fact that so few people do shows to me how most people are completely fine with having the ability to install what they want.

Freedom may include giving others the freedom to do things you personally don't like, but the FSF tends to think a single, ironically restricted set of freedoms matching their philosophy is all that everyone needs.

7
sanxiyn 12 hours ago 0 replies      
Mozilla is Serving Users. A great Orwellian phrasing.

http://ebb.org/bkuhn/blog/2014/05/14/to-serve-users.html

8
couchand 12 hours ago 3 replies      
Does anybody know what Brendan Eich's stance on DRM is? I can't help but wonder if this would have turned out differently had he still been in charge.

Eich helped found Mozilla back when it was just contributions to Netscape, and then helped break off as a fully-fledged project. My guess is that he understood the loss here. On the other hand, Gal wrote PDF.js which replaced the proprietary PDF reader, so you'd expect him to get it, too.

9
jpadkins 11 hours ago 2 replies      
Has Mozilla really changed its policy? At a certain abstraction level, they already had a plugin system that allowed for DRM binaries embedded in the browser. So what if the plugin system is a bit different?

You could already watch DRM netflix in firefox. If they were going from no-DRM plugin policy to allowing DRM in plugins, that would be cause for uproar. But Mozilla has always allowed DRM via plugins.

10
frik 12 hours ago 1 reply      

  Write to Mozilla CTO Andreas Gal and let him know that you oppose DRM.
mailto:agal@mozilla.com

11
general_failure 11 hours ago 0 replies      
Mozilla is very much trying to be a technology company these days with profit in mind (realize that there are two Mozillas: one a nonprofit org and the other a for-profit inc).

They are not like the fsf. They care about user share, market and all that. Idealists cannot afford to think that way.

12
mikhailt 7 hours ago 2 replies      
I can't find the information to answer my question, so don't downvote me because it's a stupid question. I admit it is; I just want to know out of curiosity.

I don't understand why Adobe has to be used here. Why didn't Mozilla partner with Apple, Google, and Opera on a standard implementation for this? After that's done, Mozilla could try to sneak in one last question to all the partners: can we do it better than this?

13
stcredzero 10 hours ago 1 reply      
If there was some way we could verify DRM was "what it says on the tin," it could be a tremendous tool for ensuring our privacy and freedom online. When big companies DRM content, it limits our freedom, but if we could DRM our own data, it limits big company and government abuses.

However, there is admittedly a big caveat here. I don't know of a workable way to know that DRM is "what it says on the tin." Big business and governments could place back doors into such mechanisms, which would put us in an even worse position than where we are now.

14
sutro 9 hours ago 3 replies      
Here's hoping that a viable non-Mozilla group emerges that will offer a DRM-disabled version of Firefox, one that is addon-compatible and which pulls in all non-DRM-related upstream changes. Mozilla has lost my support over this decision.
15
ZenoArrow 3 hours ago 0 replies      
Why are people attacking Mozilla? Go after the real culprits in this fiasco (you know who they are), not the reluctant consenter. Kick up a fuss with users of the competing browsers. It's still possible to salvage something from this.
16
edwintorok 9 hours ago 1 reply      
"Use a version of Firefox without the EME code".Well I already use a fork of Firefox called Iceweasel, so I'm curious what Debian will decide to do with Iceweasel.
17
thefreeman 10 hours ago 1 reply      
I really don't know much about the specifics of this DRM proposition, so I accept that my assumptions may be invalid. But just based on the history of DRM and the internet... does anyone really doubt that someone will be able to defeat this DRM?
18
kumar303 8 hours ago 1 reply      
"The decision compromises important principles in order to alleviate misguided fears about loss of browser marketshare"

misguided, as in, Firefox wants people to actually use its browser? I'm seriously surprised at some of these idealists failing to understand that normal people just want to watch House of Cards (or whatever) and that's pretty much it. Mozilla can't turn their back on those users.

19
budu3 11 hours ago 2 replies      
A very sad day for the Open Web. What can we as users do?
20
jasonlotito 10 hours ago 0 replies      
I don't see why the FSF is up in arms about this. Mozilla is essentially doing the same thing that the FSF does with the GNU C library by releasing it under the LGPL.

They even spell out the case when they should adopt the lesser License[0], despite the fact that it goes against the FSF's core values and they advise not using it[1].

At the end of the day, I see this as Mozilla's LGPL.

http://www.gnu.org/licenses/why-not-lgpl.html

0. The most common case is when a free library's features are readily available for proprietary software through other alternative libraries.

1. But we should not listen to these temptations, because we can achieve much more if we stand together.

21
CmonDev 12 hours ago 0 replies      
"Open" web.
22
bttf 11 hours ago 0 replies      
A victory for the giants.
23
camus2 12 hours ago 0 replies      
Don't worry, Mozilla already betrayed Adobe once with the whole Tamarin/ES4 fiasco; with a little luck they'll change their mind for the best this time too.
24
judk 10 hours ago 0 replies      
I await the resignation of Mozilla's CEO, who clearly has shown an inability to represent the community on this issue of human freedoms that form the cornerstone of the Mozilla Foundation.
25
judah 12 hours ago 0 replies      
Between this and forcing Eich out of a job over a political issue, I've lost a lot of love for Mozilla in the last month.
26
belorn 10 hours ago 0 replies      
Mozilla could, and in my opinion should, do much more in order to live up to their fundamental principles and stated goals. They could inform the user about each website that uses DRM without preventing the user from viewing the content.

It's not even a revolutionary concept, as they already require a click-to-accept with self-signed certificates. It puts the responsibility on the website if the black box called DRM causes problems, locks up, or causes general havoc for the user. It highlights that the website is demanding to take control over the user's device, and gives the user an option to say no.

It is easy to speak about fundamental principles in PR announcements, but code speaks louder. The only bright spot is that if Mozilla doesn't do more for the users, add-ons and forks will try to carry the principles for them.

12
Woman's cancer killed by measles virus in trial washingtonpost.com
377 points by arunpjohny  17 hours ago   128 comments top 19
1
zaroth 16 hours ago 8 replies      
Oncolytic virus, or OV. Pushing science fiction. But did the Washington Post forget to mention the 2nd patient they released data on, who didn't have any kind of prolonged response?

The paper presents 2 cases, selected because they were the first 2 cases to be tested at maximum viral load. There are additional people in the trial, and they will release full results once they are available.

It included two slides showing before/after blood levels and imaging. They talk about how they modified the virus to emit a tracking signal, and how they modified it to target the cancer cells. Really, really mind blowing and impressive work. I would love a tour of that lab.

These are end-stage patients for whom everything else has stopped working. One of the patients had already undergone several experimental treatments. There is some really exciting research going on for MM (multiple myeloma) treatments, and maybe even cures.

I think this is one example of the free market working well. Typical MM treatment runs about $60k / year, and with recent developments, patients are living 10+ years. The total number of MM patients is increasing both because the disease is becoming more prevalent and, mostly, because people are living so much longer with MM. In short, it's a large and growing market. But it's not a cancer you can treat and have go into remission. You get on treatment, and you stay on it and keep those levels down. The typical treatment is biweekly therapy.

But these OVs are one-time deals. So a single dose treatment is a very interesting alternative. The only problem is, MM is extremely resilient, and the cells are everywhere. It's so hard to eradicate, unless the OV is a cure, it's just another tool in the box to manage MM and extend lives.

Weird, the PDF of the actual paper was freely downloadable a couple hours ago, but now it seems the paywall is up? http://www.sciencedirect.com/science/article/pii/S0025619614...

2
lifeisstillgood 13 hours ago 6 replies      
OK, this is a silly question but I have to ask

Is there any likelihood that measles, or a similar virus we have held at bay with vaccination, was actively fighting cancer 200 years ago? That would help explain the incidence rates that have apparently gone up since, a rise usually dismissed with "well, we weren't dying of cancer because we were dying of $INSERT_DISEASE_HERE"

3
bambax 17 hours ago 7 replies      
Is this real? Can anyone with actual knowledge of the history of fighting cancer with modified virus provide input?

From my completely uninformed point of view it seems that if it's real, it changes everything...

4
baldfat 14 hours ago 2 replies      
Cancer = the worst word for a multitude of diseases, most of which are not related to each other except by cell growth. I wish they could just stop using it.

As the dad of a child who died from cancer: the word "cancer" doesn't mean squat; you need to know what type of cancer. Is it sarcoma or what?

http://www.cancer.gov/cancertopics/types/commoncancers

5
nabla9 12 hours ago 1 reply      
Using virus infections against cancer has a long history.

https://en.wikipedia.org/wiki/Oncolytic_virus

6
Gatsky 15 hours ago 0 replies      
Phase III study of virotherapy here (not published yet):

http://www.marketwatch.com/story/amgen-provides-update-on-ph...

http://en.wikipedia.org/wiki/Talimogene_laherparepvec

Not, it would seem, a panacea. The approach is interesting in that aspects of cancer biology make the cells more vulnerable to viral infection, e.g. suppressed interferon production. It is also a possible platform for immunotherapy, i.e. getting the immune system to attack a virally infected cancer cell might wake up a more generalised immune response. But medical-grade virus is expensive to produce, and it's hard to see how a viral infection could eradicate 100% of the billions of cancer cells present in advanced disease. Also, humans become immune to viruses after infection.

7
darkFunction 16 hours ago 9 replies      
I am curious about the timescale of treatments for terminal diseases, and how trials can be morally randomised.

It seems to me that a very high percentage of people would opt for a potentially fatal, completely untested course of action as opposed to imminent death. So who gets to try these treatments, who tells dying patients they are not allowed them, and is there a black market or large amounts of money changing hands for experimental procedures?

Ekianjo in this thread quoted 7 years at the earliest for a treatment to become available. Surely with hundreds of thousands of desperate, dying, last chance sufferers, it is better to go to extreme measures and offer the most promising yet dangerous treatments to everyone. Is it simply a side effect of the way pharmaceutical companies have to do business? If so, it's sad, and maybe a larger share of cancer research money should be put towards 'out there' attempts to cure terminal patients.

Genuinely curious.

8
cromulent 14 hours ago 0 replies      
The immune system is so complex that this is difficult to understand. I imagine there is some link to the Abscopal effect, where radiation treatment of one tumor in one part of the body kills the other metastatic tumors.

http://en.wikipedia.org/wiki/Abscopal_effect

9
jameshk 34 minutes ago 0 replies      
This is progress! Happy to see some success!
10
NKCSS 16 hours ago 2 replies      
I am always happy to see advances in treating cancer. Lost my dad to cancer a few years ago, it would be great if people in the future have a better chance. I know there will always be new diseases, but nipping this one in the bud would be awesome.
11
anon4 16 hours ago 1 reply      
Currently, there are no do-overs since the body's immune system will recognize the virus and attack it

Can't they circumvent this by injecting more of the virus than the body can fight at once? Though it's starting to sound like regular expressions...

12
j_s 9 hours ago 0 replies      
Thought this was the same as discussed 2 weeks ago, but that was polio going after a brain tumor: https://news.ycombinator.com/item?id=7686853
13
majkinetor 9 hours ago 0 replies      
Nobody has underlined that the woman got a very high temperature. It could be the reason for the results; Coley's toxins were used a century ago for that effect. In fact, the Sensei Mirai clinic in Japan recently reported remission in ~370 terminal cancer patients using a combination of immunotherapy and high-dose vitamin C & D, along with thermotherapy. Cancer patients usually haven't had a temperature for a long time before being diagnosed.

http://en.wikipedia.org/wiki/Coley's_toxins

14
ck425 16 hours ago 1 reply      
From what the article says I think the original virus works by attacking tumors which then explode and spread the virus all around the body. If they use a version of the virus that is safe or that the person is immune to then it would target the cancer and cause it to explode but afterwards be harmless.

That's just the impression I get from the article. I know literally nothing about this, so if anyone with actual knowledge can explain properly, please do!

15
Stately 16 hours ago 2 replies      
This is too similar to how I Am Legend begins.
16
kevin818 10 hours ago 3 replies      
Anyone else worried that using virotherapy may result in those viruses building up resistance, similar to what's happening now with antibiotics and superbugs?
17
programmer_dude 17 hours ago 1 reply      
Yay for science! Though I am not quite sure why the virus only attacks cancerous cells. Has it been modified to identify cancer cells?
18
zenbuzzfizz 9 hours ago 0 replies      
I don't see any discussion of the M-count (monoclonal protein) response. My understanding is this is the key measure of myeloma.

It would be really great if this turns out to be a real treatment option because, harsh as it appears from the article, this woman's treatment sounds way easier than the current therapies for myeloma.

19
svyft 14 hours ago 1 reply      
Some Indian ayurveda recipes use poison to cure poison.
13
iMessage purgatory adampash.com
343 points by mortenjorck  2 days ago   170 comments top 49
1
jc4p 2 days ago 5 replies      
I went through this last month when I switched to the Nexus 5; I had no clue I wasn't getting messages until someone with an iPhone tweeted at me asking why I was ignoring them.

However, all it took from me was a call to Apple's customer service, I told them I had just switched off my iPhone and no longer got texts from people with iMessage and they immediately sent me to a tech that fixed the problem for me.

Have you been explaining it correctly when you call? All I said was "I had an iPhone until last week, switched to another phone but I'm still registered for iMessage"

Edit: According to my phone I called 1-800-692-7753 (Which is just 1-800-MY-APPLE) and my call took 8 mins 25 seconds total. Not too bad of an experience.

2
saurik 2 days ago 1 reply      
What is interesting to me always is that my experience trying to send iMessages to people is usually the opposite: if someone's phone runs out of batteries or they start a phone call (on CDMA, which can't do voice and data simultaneously) I am nearly instantly forced to send them text messages. I also have found the "Delivered" notices very reliable: AFAIK they require a device to actually receive the message. Note however that it is "a device": if you have iMessage associated with another random device, it might be receiving your messages for you; I would be very interested in knowing if these people cataloged all devices they have from Apple (including Macs) and logged out of all of them. (It could, of course, just be a bug; but it at least doesn't seem to be some fundamental aspect of the design that it permanently hijacks messages.)
3
gdeglin 2 days ago 1 reply      
Mashable wrote about this problem months ago: http://mashable.com/2013/09/16/imessage-problem/

A lot of my friends have complained about this problem as well.

I'm surprised Apple hasn't moved faster to come up with a solution. Seems like a lawsuit waiting to happen.

4
mwsherman 2 days ago 0 replies      
IM and SMS are different ideas. SMS moves a message from one device to another. SMS also knows nothing about the receiving device or whether the message got there.

IM (under which I include iMessage) is user-to-user, not device-to-device. It can know about the recipient and whether messages are received.

Each of these things has advantages.

SMS works because the phone network is always tracking a device. It is very addressable. The receiving device is mobile, rarely changes, and is singular.

IM has a notion of sessions. The user signs off and on. It can travel over any IP connection. The device on which the user is addressable changes a lot. There may be multiple devices, making the definition of delivered a bit less deterministic.

Conflating these two makes for a confusing mental model for the user, and for failures like this.
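
To make the contrast concrete, here is a toy, self-contained JavaScript sketch of the two addressing models described above. All the names and the in-memory "network" are invented for illustration; neither carriers nor iMessage expose anything like this:

    // SMS model: one number maps to exactly one handset; no receipts.
    const cellNetwork = new Map(); // phoneNumber -> device
    function sendSms(number, text) {
      const device = cellNetwork.get(number); // the network tracks the handset
      if (device) device.inbox.push(text);    // sender learns nothing either way
    }

    // IM model: one user maps to zero or more signed-in sessions.
    const imSessions = new Map(); // userId -> [device, device, ...]
    function sendIm(userId, text) {
      const sessions = imSessions.get(userId) || [];
      sessions.forEach(d => d.inbox.push(text));
      return sessions.length > 0; // "delivered" -- but to which device?
    }

    // The purgatory case from this thread: a stale session for a retired
    // iPhone means sendIm reports success while the new Android phone,
    // the only device the carrier knows about, never sees the message.
    const oldIphone = { inbox: [] }, newAndroid = { inbox: [] };
    cellNetwork.set('+15551234567', newAndroid);
    imSessions.set('user@example.com', [oldIphone]);
    console.log(sendIm('user@example.com', 'hello?')); // true, yet never read
    console.log(newAndroid.inbox.length);              // 0

The last few lines show why "delivered" is ambiguous in the user-addressed model: the service considers the message delivered as long as any registered session accepted it.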

5
fossuser 2 days ago 2 replies      
Even more frustrating, reading this just reminds me how much better the Google Voice solution to this problem is, and it predated iMessage by several years. Google has just let it atrophy; then they introduced Hangouts late (relative to other chat apps in the market) and still have not integrated the Google Voice features or pushed them with Android.

Does anyone who works at/worked at Google know why this happened?

Were they trying to turn the telecoms into dumb pipes with the original Nexus and gizmo5 purchase, but when that failed just abandoned the idea? You'd think the success of whatsapp and facebook chat would make chat a priority. If people communicate using your platform they're more likely to use your account for other things.

6
benstein 2 days ago 0 replies      
I went through this a few months ago and discussed on HN: https://news.ycombinator.com/item?id=7166955

Here's my update: It's been about 4 months or so since I switched.

Nothing I was able to do or Apple was able to do fixed the problem. I was able to put Messages into debug mode and I sent Apple a full debug log (Apple bug report #15966535). They marked the ticket as "Duplicate" and I was no longer able to view any updates.

After about 3 months, most of the issue has resolved.

The majority of group-texts work now; iPhones now send the whole thread as MMS, not iMessage. It's still not 100% but pretty good.

Most of my friends can send SMS without failures, but quite a few still get "iMessage failed" and have to "resend as SMS".

I've completely given up trying to fix the problem. Just hoping the remaining iOS devices resolve themselves at some point, or Apple fixes in next update.

<rant>Everyone thinks this is an Android problem that they can't message me anymore. Really tough to explain to the world that it's _their_ phone that's buggy.</rant>

7
steven2012 2 days ago 1 reply      
Wow, this has been going on for at least a year. I can't believe that Apple hasn't already fixed this problem, it really calls into question their commitment to doing the right thing.
8
izacus 2 days ago 0 replies      
Well, it seems that Apple has little incentive to fix this and has been dragging their feet: they're hoping that a brand new Android user will blame the non-working SMS messages on their phone and return it for another iPhone.

Just another lock-in behaviour from them.

9
mrcwinn 2 days ago 1 reply      
For what it's worth, I switched to Android and did not have to pay Apple $20 to disable my iMessages, despite not having an active support contract.

I did, however, have to do a quick Google search, log in to my iTunes account through the web, and de-register my Apple devices. The problem was solved relatively quickly and for free.

This should be simpler, but I'm not sure how much easier Apple can make it. They can't make an iOS app to help you because you got rid of the iPhone. It seems strange to make an Android app to help you. That leaves the web. Better luck to you!

10
benhansenslc 2 days ago 0 replies      
I switched to an Android phone 3 weeks ago and I am also still not receiving texts from iPhones. Apple's customer support said the same thing to me as they did to you. They told me that one customer had to wait 40 days before they were fully removed from the system. I am just hoping that it gets fixed for me by the time 40 days has come around.

I filed a complaint with the FCC at https://esupport.fcc.gov/ccmsforms/form2000.action?form_type.... It is a form for complaints against wireless carriers for number portability issues.

11
pmorici 2 days ago 2 replies      
The easiest way to avoid this problem is to make sure you turn off iMessage on your iPhone before you switch devices then you don't have this problem in the first place.

https://discussions.apple.com/thread/3392014?start=15&tstart...

12
greggman 1 day ago 0 replies      
A friend found this

http://support.vodafone.com.au/articles/FAQ/How-to-deactivat...

TLDR: Deactivate iMessage on your iPhone before switching, or go here if you don't have access to your phone: http://supportprofile.apple.com/MySupportProfile.do

Sounds like that's not enough from reports below tho

13
edgesrazor 2 days ago 0 replies      
I ran into this issue 2 years ago when I dropped iPhone for Android - I seriously can't believe it's still a problem. Even following Apple's official KB article, it will still take a few days for all of your messages to start going through again.
14
hert 2 days ago 0 replies      
Even more of a disaster with iMessage group threads. I had a thread w/ two friends, and when I switched to my Moto X, I didn't realize that I was no longer receiving messages on the thread from ONE of them.

Turns out, one of their iPhones recognized that it should start texting me, while the other's iPhone kept iMessaging me w/out delivering failure reports. So frustrating that I forced them to get WhatsApp!

15
Scorpion 2 days ago 0 replies      
I briefly switched from an iOS phone to a Nexus 5 and had the same issue. For other reasons, I switched back. A colleague of mine liked the Nexus a lot and made the switch after I did. He has been fighting this for months. Everything works properly for a while; sometimes, several weeks will go by. Then, out of the blue, my phone tries to send the message to him as an iMessage. It's bizarre and frustrating.
16
prutschman 2 days ago 2 replies      
Google Hangouts on my Android phone keeps bugging me to integrate SMS into Hangouts, as well as to "confirm" my mobile number. My fear is that something analogous to the iMessage "purgatory" might happen, though I haven't heard of anyone experiencing it.
17
boqeh 2 days ago 1 reply      
I had this same issue. Apple wouldn't help me, unfortunately. If I recall correctly, I had to disassociate iMessage from my AppleID completely, which seems to have worked. Although I still can't be sure.

I have a weird feeling the messages still aren't going through and a lot of my friends think I'm being an asshole.

18
joshstrange 2 days ago 2 replies      
There was a story posted to HN that was very similar to this semi-recently; in fact, I thought it was a repost until I saw this was published today. I can't seem to find the previous post, does anyone else remember that/have a link? Thanks!
19
mwill 2 days ago 0 replies      
I encountered this problem recently on a smaller scale. I disable iMessage for various reasons, but recently had to factory reset my iPhone, which led to iMessage being enabled. I completely forgot about it and after I remembered to disable it, I suddenly could no longer receive text messages from my friends with iPhones.

I gave Apple a call and initially the only response I could get from them was "Just turn on iMessage" and general confusion about why I had it turned off in the first place.

Eventually someone I talked to said they could fix it, and shortly after I started receiving messages again.

20
ahassan 2 days ago 0 replies      
My friends and I have never run into this issue switching from iOS to Android. The main thing to do is to disable iMessage on your old iPhone before you get rid of it; that should unregister it on Apple's servers. If you do that, then you should be unregistered unless you have another device hooked up (i.e. a Mac).
21
harmonicon 2 days ago 0 replies      
This happened to me when I got my android phone with a new number. When my friends with iPhones texted me, the text always shows up as iMessage and I would never receive it.

The thing is, I have not owned ANY smartphone before this one. My guess is the previous owner of this number had an iPhone and registered for the iMessage service. iMessage routed the texts through its own servers, so they never reached my carrier's network. I tried to get help from an Apple store technician, but since I am not an Apple customer, past or present, the employee did not see a need to help.

Problem is I really liked that number. After 2 months of struggle I gave up and changed to a new number.

I will admit I never liked Apple and do my best to purge iProducts from my life. But I guess you just cannot avoid being screwed, anyway.

22
cstrat 2 days ago 0 replies      
I have read about these issues plaguing people. It is strange, because whenever I have roamed overseas or disabled data for whatever reason, people could still text me, albeit I am sure there was a delay between when they hit send and when I got the message.

Friends of mine have moved from iPhone to Android, when I send them a message it tries with iMessage - and I get the message failure exclamation mark. It then resends as a text and doesn't try iMessage again for some time. Haven't really had the black hole experience yet...

23
K0nserv 2 days ago 0 replies      
How can the engineering team be clueless about how to fix this? Now, I admit that I don't know the inner workings of the iMessage protocol and servers, but presumably all that needs to be done is to disassociate the number from the Apple ID. If I were to guess, this would involve dropping a row in a table somewhere.
24
jamra 2 days ago 1 reply      
There is (or at least used to be) an option on your iPhone that forces the messages to be sent and received over SMS rather than iMessage. I wonder if one could turn on that option on their old phone before switching numbers.

There was a fairly recent change in how iMessages are handled. In one of the iOS updates, you can receive iMessage messages on numerous devices tied to your Apple ID such as your iPad. I wonder if that's where the bug comes from.

The other option is to switch to Android at home and get new friends.

25
dangoldin 2 days ago 0 replies      
I ran into this too and ended up calling Apple. Their solution was to tell everyone who had my number to erase their iMessage history with me.

Somewhat odd - I can receive individual texts from two people that have iPhones but if one of them sends a group text to both of us, I do not receive it.

26
kevinherron 2 days ago 0 replies      
Unfortunately I'm in the same boat right now. Tried calling them and having the number removed, etc...

Been going on like this for over a month.

27
ironghost 2 days ago 0 replies      
Easy (yet long) fix:

On iPhone:

- Disable iMessage from the settings menu.

- Go back to Messages and send a standard text message to the phone number.

- Enable iMessage from the settings menu.

Done.

28
NeliX4 14 hours ago 1 reply      
What's wrong with the iMessage on Android app? http://imessageonandroid.com/

Why does this exact same issue keep popping up on HN every now and then...

29
X-Istence 1 day ago 0 replies      
Had a friend recently go through this. She called up Apple, had her number deregistered and about a day or two later everything started flowing correctly again...
30
e79 1 day ago 0 replies      
Their support page makes it sound like they can just de-register iMessage with your account.

http://support.apple.com/kb/ts5185

A few comments here seem to suggest that this is a carrier or cellular infrastructure issue. It isn't! iMessage doesn't route over SMS-- that's the whole point. It routes to Apple's servers, which should be capable of doing a lookup to see if the number still has an associated iCloud or iMessage account.

31
enscr 2 days ago 0 replies      
Whenever I look at the iMessage icon on my iPhone/iPad, I feel it had so much potential when it came out, but Apple just squandered it like a brat. If only they had opened the gates on interoperability... sigh!

Some time back they were arrogant and brilliant, not just the former.

32
JimmaDaRustla 2 days ago 1 reply      
Sounds like it would be easier to find new friends.

Seriously though, iMessage should have some sort of interoperability on other devices, even if it's just a web interface you can log into to make configuration changes, including the deletion/deactivation of an account associated with a mobile phone number.

Edit: Or even monitoring iPhones associated with a number and disabling iMessage if said phone is no longer online with that number? Could possibly even forward unread messages, etc.

33
vasundhar 2 days ago 0 replies      
1. Validation seems to happen when you send the first message, to check if the given number is associated with iMessage.

2. From the second time onward it only checks whether the sender is on the data network or not.

3. There is an option in Settings > Messages > "Send as SMS". If this option is not selected, then once the device/account knows the other device is an iPhone and you are on data... it just sends an iMessage.

Turn on "Send as SMS" so that it falls back to SMS if the destination is not available for iMessage.

34
cek994 2 days ago 0 replies      
I had a very similar problem when I drowned my iPhone and switched my SIM to an old Windows Phone 7 I had lying around. If you have your old phone, you can disable iMessage while the SIM is still in it, which apparently works -- but if you don't, you're basically up a creek. I ended up changing the email address on my iCloud account.

It baffles me that iCloud online doesn't have a dashboard for controlling this. It doesn't seem like it should be that hard to unlink phone numbers from iMessage.

35
rnovak 2 days ago 0 replies      
I had the same issue, but I was able to still retrieve the messages via another apple device that was still connected to the iMessage service. I was then able to disassociate my number with the service.

When I had my iPhone, I had originally linked both my email and my phone number to the same iMessage account, so fortunately I never lost messages.

If it was tied to your email as well, you might be able to disable the service via another apple device.

36
justizin 2 days ago 1 reply      
frustrating, indeed. the short answer is, if you are in the know, and you switch from iphone to android, disable iMessage on your iPhone first.

It would be great to see an interoperable solution replace iMessage, but for now, it is (purportedly) secure and often more reliable than text messaging. I still pay for an unlimited sms plan.

37
softinio 2 days ago 0 replies      
I've been having the same issue and it ruined part of my vacation as people I was meeting up with on vacation thought I was ignoring their texts and we never met up.

What adds insult to injury is that all iOS devices ship by default with the setting set to not send by SMS when the user is not found on iMessage.

Apple should own up to this problem publicly and compensate users.

38
sturmeh 1 day ago 1 reply      
Is it conceivable that Apple are deliberately ignoring this issue as it does exactly what they would want?

It punishes people who move away from their platform with social isolation.

It's easy for them to overlook this issue and not put any effort into fixing it, because the investment would result in a better experience for everyone who is switching away from Apple.

39
lurien 2 days ago 0 replies      
It's even worse if you want to keep an iDevice registered for facetime/iMessage use. You can't toggle it on and off on demand.
40
JacksonGariety 2 days ago 0 replies      
Why aren't they making sure there's an active iPhone number associated before delivering any iMessages?

Obvious solution.

41
jms703 2 days ago 0 replies      
The only reliable fix I've seen for this is to have your friends remove and re-add your mobile number from their contacts.
42
vhost- 2 days ago 0 replies      
Same story here. Switched to an Android device and no one could text me for months. Months! It's almost unbelievable.
43
george_ciobanu 2 days ago 0 replies      
I have similar issues with an iPhone and iPad synced to the same account. Stuff is always out of sync.
44
JohnHaugeland 2 days ago 0 replies      
I've been here for almost a year.

It's not clear why this seems okay. "We've stolen contact for a year. We're working on it."

Seems like anti-competitive behavior. Stopped buying Apple 100% immediately once I found out.

45
_Simon 2 days ago 1 reply      
This again? FFS RTFM...
46
VLM 2 days ago 3 replies      
Is the destruction of SMS as a technology necessarily bad? I don't think so. Like ripping off a band aid, get it over with and move on.
47
headShrinker 2 days ago 2 replies      
> save the green vs. blue bubbles, which are in their own way a sort of weird social/status indicator

Save your opinionated anti-Apple rhetoric. The color-coded indicator allows people to know which features are included in the service, or whether your text was delivered and read, or in your case, not delivered...

48
shurcooL 2 days ago 2 replies      
I find it interesting how so many people still find it acceptable in 2014 to be using a "phone number" as their id.

It's a number you can't even pick yourself: you _pay_ to get randomly assigned digits, at best with the ability to reroll (also not always free).

To me, it feels like someone using an `@aol.com` email in 2014. Or a rotary phone.

49
uptown 2 days ago 3 replies      
This site has a solution:

1. Reset your Apple ID password and do not log back in on your device(s)

2. Send a text to 48369 with the word STOP

It won't happen immediately, but over a 12-hour period you should start receiving texts on your Android device that are sent from iPhone users.

http://www.techrepublic.com/article/how-to-keep-receiving-sm...

14
Octotree: the missing GitHub tree view (Chrome extension) chrome.google.com
336 points by yblu  2 days ago   106 comments top 37
1
jburwell 2 days ago 4 replies      
To me, the lack of a fast tree browser has been one of the biggest weaknesses of the GitHub interface. This plugin solves that problem exceedingly well. GitHub should hire the author and officially fund his efforts to make it a first-class feature that does not require a plugin.
2
yblu 2 days ago 6 replies      
I built this to scratch my own itch, as somebody who frequently reads GitHub code and feels annoyed having to click countless links to navigate through a large project. Hope it is useful for other people too.
3
jxf 2 days ago 3 replies      
This is a fantastic extension! Browsing is fast and efficient, and creating the token for my private repos was painless.

A "search for files/folders named ..." feature would be a nice bonus, too, so that you can quickly get to the right spot in a big hierarchy.

To the author (https://twitter.com/buunguyen): please add a donation link somewhere so I can send you a thank-you (or you can just e-mail me with your PayPal/other address; my e-mail's in my profile).

4
manish_gill 2 days ago 1 reply      
Fantastic! You planning to add Bitbucket support? That would be really nice. :)
5
bnj 2 days ago 0 replies      
Wow, giving it a quick try I can't believe how fast it is. This is one of those things that I've always desperately needed, and I never knew until now.

Be sure to tweet it at some of the GitHub engineers. They should bring this into their core product.

6
ahmett 1 day ago 1 reply      
Here's an idea: automatically expand the tree until there is more than one item at a level.

e.g. src->com->twitter->scalding->scala->test (in this example, these are all folders in the hierarchy and each is the only entry until 'test', so expanding them automatically all the way through makes sense).
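
A minimal JavaScript sketch of how that auto-expansion might look, assuming a simple recursive node shape ({ name, children }); Octotree's real internals surely differ:

    // Expand a node, then keep auto-expanding while there is exactly one
    // child and that child is itself a folder (i.e. it has children).
    function expandSingleChains(node, expand) {
      let current = node;
      expand(current);
      while (current.children && current.children.length === 1 &&
             current.children[0].children) {
        current = current.children[0];
        expand(current);
      }
      return current; // the first node with siblings or plain files
    }

    // Usage with a stub tree and a logging expander:
    const tree = { name: 'src', children: [{ name: 'com', children: [
      { name: 'twitter', children: [{ name: 'a.scala' }, { name: 'b.scala' }] },
    ]}]};
    expandSingleChains(tree, n => console.log('expand:', n.name));
    // -> expand: src, expand: com, expand: twitter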

7
whizzkid 1 day ago 1 reply      
Great work, but I want to point out one small feature that GitHub has that isn't known to everyone.

Press 't' when you go to a repository; it will activate the file finder. From there you can just start typing the file/folder name you want to see and it will filter the repo instantly.

I wonder why this feature is not popular yet.

8
granttimmerman 2 days ago 2 replies      
You can also press `t` on any repo on github to find files/filetypes quickly.
9
Jonovono 2 days ago 0 replies      
This is awesome. Much better than my similar project! : http://gitray.com.
10
xwowsersx 2 days ago 1 reply      
This is great. It would be even better if you could resize the tree. Some projects have really deep trees, and at a certain point you can't see the names of the files.
11
dewey 2 days ago 0 replies      
I'd love to see something like this being on the site by default. Maybe just a button next to the repository title where you'd be able to toggle between the current view and the tree view. Both of these options have their advantages for different use cases.

In the meantime that's a great solution. Thanks!

12
jhspaybar 2 days ago 1 reply      
I've been using Firefox almost exclusively for months. This may very well make me go back to Chrome. Looks amazing!
13
mzahir 1 day ago 0 replies      
Github also has a file finder similar to the Command-T plugin for vim - https://github.com/blog/793-introducing-the-file-finder

This extension is great for exploring but if you know what you're looking for, cmd+t will save you more time.

14
ubercow 2 days ago 1 reply      
I'd love to see a setting that makes the tree view collapsed by default. If I have some time later I might whip up a pull req.
15
gknoy 2 days ago 1 reply      
Is there an easy way to extend this so that it can also be used when accessing Enterprise Github installations, e.g. `github.mycompany.com`?
16
ntoshev 1 day ago 0 replies      
I don't really find the tree view useful. But I wish there was a way to see the code weight by individual files and whole repos: as KLOCS, size, anything. Is there such an extension?
17
Chris911 2 days ago 0 replies      
18
vdm 1 day ago 0 replies      
@creationix's Tedit mounts git repos directly; it will melt your brain. http://www.youtube.com/watch?v=U4eJTBXJ54I https://github.com/creationix/tedit-app
19
spullara 1 day ago 1 reply      
Press 't' and search the filenames in repo instantly. Very useful.
20
houshuang 2 days ago 0 replies      
Brilliant - it's often quite slow to change between directories in the web view, this is blazingly fast. Especially useful for deeply nested (templated) projects.
21
nilkn 2 days ago 1 reply      
Is there a way to use this for Github Enterprise repos?
22
dustingetz 2 days ago 2 replies      
Great extension, except in private repos, every time I click on a file (from github proper) the extension animates outward while telling me that it doesn't work with private repos. Extremely annoying and resulted in uninstall :(

edit: i'm not willing to give extension access to private repos, that would defeat the point of being private

23
mrdmnd 2 days ago 1 reply      
Did you get API rate limited, by any chance?
24
GowGuy47 2 days ago 0 replies      
I had the same idea a couple weeks ago but never finished it: https://github.com/Gowiem/GitHubTree. Crazy to see this. Glad somebody got around to it. Thanks man!
25
bshimmin 2 days ago 0 replies      
This is seriously excellent.

I bet Github have had this feature on their issue tracker for years - and I suspect it probably just got bumped a good few places up the list.

26
Dorian-Marie 2 days ago 0 replies      
Good idea. Having nicer icons and aligning the icons with the text would be even more awesome.
27
StepR 2 days ago 1 reply      
Hacker News never ceases to amaze me. You guys are the best. Is this going to be open sourced?
28
piratebroadcast 2 days ago 0 replies      
Epic. So fucking cool.
29
cmancini 1 day ago 0 replies      
Brilliant work. This will be a huge timesaver for me. Thanks!
30
ika 1 day ago 0 replies      
That wasn't a lacking feature for me, but still, good job! Also, it would be nice if the author used a GitHub-like design instead of the Windows-ish one.
31
mitul_45 2 days ago 0 replies      
What about enterprise GitHub support?
32
cdelsolar 2 days ago 0 replies      
Wow, you rock.
33
chadhietala 2 days ago 0 replies      
Thank you for this!
34
dorolow 2 days ago 0 replies      
This is incredible. Thank you.
35
dud3z 1 day ago 0 replies      
Wow, great work!
36
sideproject 1 day ago 0 replies      
soooooo good!! Thanks!
37
Demiurge 2 days ago 0 replies      
Niiice.
15
Can This Web Be Saved? Mozilla Accepts DRM, and We All Lose eff.org
331 points by DiabloD3  1 day ago   360 comments top 47
1
suprgeek 1 day ago 10 replies      
Mozilla had to be dragged into this acceptance kicking and screaming (metaphorically).

They were faced with a hard choice, Not implement EME (HTML5 DRM) and risk users moving to other browsers (user loss) or implement EME and risk looking like they are contradicting their core mission (trust loss).

They figured a little loss of trust is worth keeping most of the users on the Mozilla platform - which in my view is the correct decision. If users start to abandon Mozilla (FireFox) in droves then they lose their power to influence the development of the open web.

2
ep103 1 day ago 3 replies      
There are so many people here claiming this is the wrong choice, and yet I wonder what percentage of the commenters here are using Chrome? By most sources I've seen, Chrome has 2x the marketshare and actively pushed FOR EME. Perhaps if FF had Chrome's current marketshare, they would have been in a position to say no, but it's the users who made that impossible. Mozilla should be commended for fighting as far as they did. And if you don't like this decision, make sure you switch off Chrome before commenting.
3
Daiz 1 day ago 2 replies      
Oh, so the web has given up and is now genuflecting at the altar of video DRM. Next up: picture DRM, because since we're protecting videos we should naturally protect still pictures too. You know what? We also have all this professional writing on the web, and anyone can just copy & paste that! That clearly shouldn't fly in our brand new DRM-protected world - authors should be able to control exactly who can view and read their texts, and copying is strictly forbidden. Screenshots should be blocked too. Browser devtools will naturally have to be disabled on the World Wide Web, as they are capable of hurting our benevolent protector, the almighty DRM. Eventually, we'll arrive at The Right To Read[1].

Or we could just not give the devil our little finger.

[1] http://www.gnu.org/philosophy/right-to-read.html

Also, a reminder about the nature of this beast that everyone should be aware of:

HTML DRM will not give you plugin-free or standardized playback. It will simply replace Flash/Silverlight with multiple custom and proprietary DRM black boxes that will likely have even worse cross-platform compatibility than the existing solutions. In other words, giving in to HTML DRM will only make the situation worse than it currently is. Especially since it paves the way to an even more closed web.

4
pdeuchler 1 day ago 3 replies      
Seriously, who is building this DRM software/hardware? If you are a software developer you have no excuse but ignorance, and as someone who makes a living on a computer (where the Internet resides) that excuse is wearing extremely thin. I harbor a hair's breadth more grace for the hardware engineers designing locked-down chips, but that's no more than a rhetorical nicety due to the fact that I am not very familiar with the intricacies of their work.

I honestly don't get it, you could make just as much, if not more, money doing something that's not 100% ethically wrong... especially in this job market! It's easy to work remotely, so you can't claim geographic entrapment. I'm sure if those who were especially financially encumbered could make a kickstarter page people would literally pay them to quit their job!

As much as this stings, it stings even more that it's "our own" selling us out, that it's the people who should know better that are killing everything so many of us have worked so hard for.

5
themoonbus 1 day ago 10 replies      
Can some one explain to me why having, say, an HTML 5 based video player with DRM would be worse than one implemented in a closed platform like Silverlight or Flash? I'm genuinely curious, and not trying to make an argument here.
6
kator 1 day ago 4 replies      
DRM is a fantasy that uninformed media executives cling to with a dream that it will put the genie back in the bottle. It's sad to see this stuff, but totally understandable considering the divide between the technology people who understand the reality and the people in charge, dreaming of a fantasy that gets them back to the '80s.

Yes, I know... I was an executive at a major record label. Trust me, it's hard to be on both sides of this argument and it's not as simple as everyone makes it out to be...

I can't explain how many times I tried to help executives understand that the path between the media to the human eye or ear was vulnerable to so many attacks it clearly was a fruitless goal to protect media in that way. They hear some bright young person tell them they can protect their media like it was in the good 'ole days and they have the need to believe because without that belief they are out of a job..

And artists are on their own to figure out how to make money on their work...

7
13throwaway 1 day ago 3 replies      
If all major browsers support eme, every website will use it. Say goodbye to youtube-dl. Maybe next year eme will be updated to "protect" html. Soon the entire web may be a closed system. It doesn't matter what the FCC decides on Friday, today is the day the web dies, at mozilla's hands.

There is always a choice mozilla, please make the right one.

8
rectangletangle 1 day ago 0 replies      
Reading between the lines here, I have a strong suspicion this and the smear campaign directed against Eich are related events. Only a few short months ago Mozilla was staunchly opposed to this. Then Eich gets forcefully removed, and Mozilla's stance turns a 180. In retrospect the "outrage" against Eich felt very artificial. It could have been a deliberate attempt at character assassination in order to further someone's goals of destroying our internet freedoms. The motive of course being heavy financial incentives. This likely had nothing to do with LGBT* rights (which I'm a proponent of, FYI). Instead it was all about someone lining their greedy pockets, at the personal expense of Eich and us to a much lesser extent. Keep in mind the same people who have a vested interest in strengthening DRM, are the same people who own the media outlets which propagated this story.
9
27182818284 1 day ago 1 reply      
I must be missing something. I read the article and clicked the attached link to Mozilla's blog, and nothing seems radically to change for users other than a move sideways. Though I'm a little disappointed that there isn't a move forward, it certainly doesn't feel like a step backwards. Even Mozilla writes, "At Mozilla we think this new implementation contains the same deep flaws as the old system. " (emphasis mine)

Right now if you want to lock something down, like watching Netflix on your browser, you install Silverlight. In the future, Silverlight is replaced and Netflix uses XYZ technology but maybe with DRM-in-HTML or whatever. And as a user, it doesn't matter because most people I know today use a tablet with the native app, a streaming device such as the Roku player, or a SmartTV.

10
aestetix 1 day ago 3 replies      
I am curious whether this would have happened if Eich were still the CEO.
11
DigitalSea 1 day ago 2 replies      
This is most certainly the wrong choice, but people need to understand they essentially had no choice. Their options were rather limited:

Option 1) Stick to your guns and refuse to implement DRM. Other browser vendors implement DRM, and certain parts of the web become inaccessible via Firefox as DRM is implemented in more and more web services (think YouTube, Vimeo, Netflix, Hulu). Firefox's lack of DRM means its users are being disadvantaged.

Option 2) Implement DRM. Accept temporary defeat, don't lose browser share to Chrome and continue fighting from within against DRM.

Which option do you think sounds more appealing to Mozilla? Die on your sword and keep the trust of your dwindling user base, or implement DRM and retain most of your user base (minus the people who will leave because of this decision)? I think someone needs to create a fork and build a DRM-less browser; that's the beautiful thing about open source: don't like something? Change it.

12
a3_nm 1 day ago 1 reply      
I would hope that rebranded versions of Firefox such as Iceweasel will strip the DRM support, so I guess it is not like it will be forced on people who don't want it.

Of course, this is still bad news, because it means there is no more pressure on content owners against this DRM, which can eventually become painful for people who want to avoid the DRM.

13
acak 1 day ago 1 reply      
I liked Ubuntu's approach where they asked you on startup if you want to install/enable proprietary packages like Flash or some graphics drivers.

Is it tough for Mozilla to prompt the user to do something similar with the DRM stuff on first run? I.e., telling them these are features not supported by Mozilla on principle, but a) implemented to be in compliance with standards and b) required if you choose to use services like Netflix.

That way I would still have the option to run a DRM free browser (and voluntarily not use websites that require DRM).

14
bsder 22 hours ago 0 replies      
Excellent.

When the plugin fails (and it will fail ... anything which is default deny will have lots of mysterious fails), more and more people who paid for the content will switch to pirated versions.

Personally, I can't wait. Piracy was roughly at a balance point recently with all the mobile consumption. Now, the lazy-ass social media generation is going to discover the need for it.

Welcome to Popcorn Time.

15
dredmorbius 20 hours ago 0 replies      
Debian GNU/Linux ships with Iceweasel, not Firefox. Iceweasel is based on Firefox, but differs in some particulars (with which I'm unfamiliar).

The question: will Iceweasel implement the DRM which Mozilla is implementing into Firefox?

16
snird 1 day ago 2 replies      
Technology should never force its users (the site creators) to use one technology or another. Not implementing DRM is idiotic for a couple of reasons:

1. The decision of whether or not to use DRM should belong to the site creators. That's as "free" as it gets. Forcing them otherwise by not giving them the option is bad.

2. Even without implementing it, DRM is available through Flash or Silverlight or any other third-party plugin. The only thing not implementing DRM achieves is hurting the HTML5 video component.

17
EduardoBautista 1 day ago 3 replies      
How about just ignoring the websites that implement DRM and letting the free market decide if those websites will survive? People treat DRM as if it were going to turn us into North Korea when it comes to internet access.
18
chimeracoder 1 day ago 3 replies      
From a comment by a Mozilla employee on another thread, it seems that the UI for this has not yet been determined[0]. It's possible that this may be presented to the user in a way similar to a plugin installation, except that the plugin happens to be provided by Mozilla (not a third-party).

This isn't great, but to the end user, it looks the same as Flash and Silverlight.

Especially if Mozilla were to add click-to-play for all such plugins, along with an explanation of what they are (think of the warnings that are currently shown for self-signed certs), they may still have an opportunity to do good with this yet.

I'd really love for Mozilla to remain as true to its mission as possible. On the other hand, Mozilla's power to do good in the world is intrinsically linked to its marketshare[1]. If Mozilla ends up being the lone holdout, it's possible that they will just lose marketshare as DRM content becomes more widespread - that would be quite a Pyrrhic victory[2].

I share in the EFF's disappointment at the situation, though (sadly) this has been inevitable for some time.

[0] https://news.ycombinator.com/item?id=7744954

[1] Perhaps not 100%, but it's a major component of it.

[2] https://en.wikipedia.org/wiki/Pyrrhic_victory

19
Spooky23 1 day ago 1 reply      
The EFF stance is a little shrill. Remember how FairPlay and Zune were going to create proprietary music forever? Didn't quite work out that way.

End of the day, content is a commodity, and like any other commodity, prices are falling as the supply expands. Unless it's a Pixar movie, most films on DVD/Blu-ray are in the $10 range within weeks. Digital rights for new releases are as little as $3 when you buy them with physical media. Access to Netflix's catalog costs less than basic cable.

So I don't care about this; I do care about the trolls under the bridge (ISPs) who want to extract a toll for transit.

20
prewett 22 hours ago 0 replies      
I dislike DRM as much as the next person, but what do you suggest as an alternative way that content creators can protect their work? Movies cost tens of millions of dollars to make; I expect that you would not spend $10 million of your money to make something that everyone could just freely download. Would you invest a year writing a great novel if everyone could read it for free without paying you? Some people will, of course, say "yes" but I think most of us would not, and we would end up with less art.

I think the tech community needs to come up with a better alternative, rather than just complaining.

21
CHY872 1 day ago 1 reply      
So I'm guessing that this will backfire. Mozilla say that Adobe will create the software, which will presumably be bundled with the browser. This means that if Adobe's DRM gets cracked, suddenly every site using that DRM is vulnerable. At the moment, one just updates the client to obfuscate a bit better, which is then downloaded on next launch of the software. If it's all client side, surely then the DRM will have to be updated every few days - which would be a nightmare for sysadmins etc.
22
wolfgke 1 day ago 0 replies      
Just an idea that came to my mind: what about refusing support to any Firefox user who has installed a CDM component, for the same reason the Linux kernel developers don't give any support to users of "tainted" Linux kernels (i.e. kernels that have loaded non-open-source modules)?
23
skylan_q 1 day ago 1 reply      
First gender identity/equality is more important than free speech, and now DRM is more important than freedom from submitting to vendor demands.

What's the point of Mozilla?

24
jevinskie 1 day ago 0 replies      
Has anyone found where you can get the Adobe CDM binary blob?
25
tn13 1 day ago 1 reply      
The article is written as if Mozilla is making a mistake. I think Mozilla's stand is perfectly reasonable and pragmatic. Mozilla must adhere to all the web standards.

I think the war for an open web was lost when EME became a W3C standard. We should have fought it at that time.

26
chris_wot 1 day ago 3 replies      
So time to fork Firefox?
27
hughw 1 day ago 0 replies      
DRM just won't be that useful in a web browser. I can share a link with you, but it won't work for you. If Mozilla omitted this feature, users would just fire up the Netflix app. I honestly doubt it would cause users to switch away.
28
swetland 21 hours ago 0 replies      
I don't get the hand-wringing over this. Don't install the plugin and only view non-DRM-encumbered video content. Seems simple enough. Just like you don't install flash because you don't want to support flash-based DRM, right?
29
Mandatum 1 day ago 0 replies      
I'm not sure why they even bother trying, it'll be broken within a week. They're pointlessly adding to a spec which has enough bloat and hacked-together features.
30
dangayle 1 day ago 0 replies      
I'm just sad that DRM is in the HTML5 spec at all. That's the real loss.
31
whyenot 1 day ago 1 reply      
I wonder how Mozilla would have handled this if they still were dependent on donations from users for funding instead of Google.
32
hmsimha 1 day ago 1 reply      
What does this mean for the TOR project? Will they have to bundle an old version of Firefox without the proprietary DRM component?
33
mkhalil 1 day ago 1 reply      
They should keep their browser open. Most people that use Mozilla know why they're using it. They would also understand how to install an official Firefox plugin to get crappy EME websites to work.
34
pjc50 1 day ago 0 replies      
Remember, the closed-source component is there because there has to be a place for the deliberately inserted NSA vulnerabilities (note the proposed use of Adobe). It's likely that this bizarre and unpopular decision is the result of some behind-the-scenes arm-twisting.
35
chris_mahan 1 day ago 0 replies      
The web is finished.

On to the next thing!

(I'm sooo glad my web writings are in plain text files rather than in other people's mysql databases)

36
spoiler 1 day ago 0 replies      
Maybe I'm missing something, but what's so bad about EME? I think it's a great idea that copyrighted material will be protected. I understand that it becomes "closed web" in a way, but it's not a big deal for me. Frankly, I can think of a few places where it could be useful, even.
37
enterx 18 hours ago 1 reply      
/me thinks it's about time for these digital rights fucks to be taxed for exploiting our common good
38
knodi 1 day ago 0 replies      
Corporate attacks on all fronts on users. Revolt we must.
39
dalek2point3 1 day ago 0 replies      
Can someone explain how DRM on HTML5 would work?
40
hidden-markov 1 day ago 2 replies      
Maybe it can be disassembled? Like what was done with some proprietary blobs in the Linux kernel.
41
higherpurpose 1 day ago 0 replies      
> eliminates your fair-use rights

This is true. I'd like to see a case in court where the law about fair use is in conflict with the law that allows something that has been DRMed to be completely protected, legally. I assume fair use would win, but I'd still like to see that case, because then it could become a lot easier to break DRM in order to "exercise your fair use right", and circumventing DRM might become legal again, in effect killing DRM for good. Then even companies like Mozilla could implement DRM unlockers in their browsers and so on, since it would be perfectly legal to do so.

42
camus2 1 day ago 0 replies      
Fascinating how people accept handcuffs so easily. Because it's better than Flash... not: you'll still need a plugin for the DRM, and you're going to have to download a few different plugins, because of course different vendors will have different DRM schemes. So back to 1999: RealPlayer, the Windows Media plugin, etc...
43
jasonlingx 1 day ago 0 replies      
Fork it.
44
dviola 1 day ago 2 replies      
Time to fork Firefox maybe?
45
paulrademacher 1 day ago 1 reply      
TLDR: Market share > Principles

> We have come to the point where Mozilla not implementing the W3C EME specification means that Firefox users have to switch to other browsers to watch content restricted by DRM.

46
felipebueno 1 day ago 1 reply      
I'm done with Mozilla but I think the problem is not just the browser we choose anymore. The whole thing is compromised. We need a new internet, "new" ways to share and consume information. There are many people who think so as well.

e.g.: http://electronician.hubpages.com/hub/Why-We-Need-a-New-Inte...

47
discardorama 1 day ago 0 replies      
Mozilla gets most of their money from Google. When Google says "jump", Baker says "how high?". They've become so addicted to Google's funding that they can't live without it.
16
Realistic terrain in 130 lines of JavaScript playfuljs.com
314 points by hunterloftis  3 days ago   53 comments top 27
1
gavanwoolery 3 days ago 6 replies      
Just a small note, not to sound snooty, just to educate people on what realistic terrain looks like...

This is what midpoint displacement looks like as a heightmap: http://imgur.com/ksETpO0,7gykFEV#0

This is what realistic terrain looks like (this is based on real-world heightmap data): http://imgur.com/ksETpO0,7gykFEV#1

That said, midpoint displacement, perlin/simplex noise, etc are good for modeling terrain at a less macroscopic scale and are plenty sufficient for the use of most games.
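
For anyone curious what the algorithm actually does, here's a rough sketch of the 1D version in Python (the names are my own; the article's demo is the 2D diamond-square variant in JavaScript):

  import random

  def midpoint_displacement(levels, roughness=0.7):
      # Each pass inserts a jittered midpoint between every adjacent pair
      # of points, shrinking the jitter so fine detail stays small.
      heights = [0.0, 0.0]
      scale = 1.0
      for _ in range(levels):
          out = []
          for left, right in zip(heights, heights[1:]):
              out += [left, (left + right) / 2 + random.uniform(-scale, scale)]
          out.append(heights[-1])
          heights = out
          scale *= roughness
      return heights

  print(len(midpoint_displacement(8)))  # 257 points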

2
colonelxc 3 days ago 1 reply      
3
zhemao 3 days ago 0 replies      
There was a Clojure example of this algorithm posted a few months back. Funnily enough, it's been in my "read later" bookmarks for a while now and I just got around to reading it this morning before I saw this post.

http://blog.mediocregopher.com/clojure-diamond-square.html

4
pheelicks 2 days ago 1 reply      
Nice demo. I made a terrain rendering engine/demo in WebGL a few months back, that used Perlin noise: http://felixpalmer.github.io/lod-terrain/

If anyone wants to play around with Hunter's algorithm in WebGL, it should be pretty straightforward to swap out the Perlin noise implementation for his. Note the shaders do a fractal sampling of the height map, so you may want to disable this.

5
huskyr 2 days ago 0 replies      
What I like most about this demo is that the code is actually very readable, and the blog article explains it very well. Most of the time, the code for these kinds of demos looks like line noise :)
6
elwell 3 days ago 1 reply      
7
blahpro 2 days ago 0 replies      
It'd be interesting to see an animation of the diamond/square iteration progressing in 3D, starting with a flat surface and ending with the finished terrain :)
8
twistedpair 2 days ago 1 reply      
Reminds me of the results that were easy to achieve with Bryce3D back in the mid-'90s. They had a pretty great terrain engine. I don't think they're making Bryce any more. It would be great if they could release some of that code.
9
callumprentice 2 days ago 0 replies      
I made a quick first pass at an interactive WebGL version this evening. http://callum.com/sandbox/webglex/webgl_terrain/ - ground realism needs a bit of work :) but it was a lot of fun. Thanks for sharing your code Hunter.
10
the_french 3 days ago 1 reply      
Can this algorithm be run lazily? I.e., can you continue to generate continuous terrain using this technique, or do you need to generate the whole map ahead of time?
11
namuol 2 days ago 1 reply      
Brings me back to a ray casting experiment I did a while ago [1]. I always wanted to revisit it to include a terrain generation step (it uses a pregenerated height map). Now I have an excuse! ;)

[1] http://namuol.github.io/earf-html5

12
happywolf 2 days ago 0 replies      
For those who only want to look at the result(s)

http://www.playfuljs.com/demos/terrain/

Refreshing the page will generate a new terrain

13
rgrieselhuber 3 days ago 0 replies      
Reminds me of T'Rain.
14
galapago 2 days ago 0 replies      
15
fogleman 3 days ago 1 reply      
Perlin noise is another good algorithm for terrain generation.

http://en.wikipedia.org/wiki/Perlin_noise

16
nitrogen 2 days ago 0 replies      
This midpoint displacement algorithm is also how a lot of the "plasma" effects from 1990s-era PC demos were created.
17
zimpenfish 2 days ago 0 replies      
I remember implementing this on a Sam Coupe from the description in either BYTE or Dr. Dobb's (I forget) back in ~1987. Somewhat slower and lower resolution, of course...
18
good-citizen 2 days ago 1 reply      
After thinking about this one for a while, it occurred to me that this really helps illustrate the point of 'Life May Be A Computer Simulation'. Take this world creation a step further, and rather than teaching a computer how to create rocks, each one slightly different, imagine creating humans, each one slightly different. If you think about 'God' as just some alien programmer dude, it helps make so much sense of the world. How can a caring God let so many terrible things happen to us humans? Well, how much empathy do you feel for each rock in this program? When you click refresh and create a whole new world, do you stop and think about all the existing rocks you are 'killing'? If we are living in a computer simulation, perhaps our creator doesn't even realize we are sentient?
19
hixup 2 days ago 0 replies      
I was playing with something similar a while ago. It's a procedurally generated overlay for Google Maps: http://dbbert.github.io/secondworld/
20
SteveDeFacto 3 days ago 0 replies      
Some of you might find this algorithm I created a few years ago interesting: http://ovgl.org/view_topic.php?topic=91JL96IHFS
21
sebnukem2 2 days ago 0 replies      
I think implementing parallel computing using webworkers would be a good item for the "What's Next" list of suggestions.
22
nijiko 3 days ago 0 replies      
You can simplify this even further by using frameworks like lodash / underscore or ES6 native methods.
23
brickmort 2 days ago 0 replies      
This is awesome!! nice job!
24
good-citizen 2 days ago 0 replies      
stuff like this makes me remember why I love programming
25
snodgrass23 3 days ago 0 replies      
Great tutorial on a fun topic!
26
CmonDev 2 days ago 0 replies      
Had to put "JavaScript" into the title - typical HN... It was about algorithm rather than a language.
27
TheyCalledHimBo 3 days ago 0 replies      
I may just be a prick, but promoting this as a variant of the midpoint displacement algorithm for terrain generation would seem far less gimmicky than "X done in Y lines of Z". Whoop-dee freakin' do.

Still, cool algorithm.

17
Computers are fast jvns.ca
295 points by bdcravens  2 days ago   153 comments top 23
1
nkurz 2 days ago 5 replies      
1/4 second to plow through 1 GB of memory is certainly fast compared to some things (like a human reader), but it seems oddly slow relative to what a modern computer should be capable of. Sure, it's a lot faster than a human, but that's only 4 GB/s! A number of comments here have mentioned adding some prefetch statements, but for linear access like this that's usually not going to help much. The real issue (if I may be so bold) is all the TLB misses. Let's measure.

Here's the starting point on my test system, an Intel Sandy Bridge E5-1620 with 1600 MHz quad-channel RAM:

  $ perf stat bytesum 1gb_file
  Size: 1073741824
  The answer is: 4

  Performance counter stats for 'bytesum 1gb_file':

          262,315 page-faults               #    1.127 M/sec
      835,999,671 cycles                    #    3.593 GHz
      475,721,488 stalled-cycles-frontend   #   56.90% frontend cycles idle
      328,373,783 stalled-cycles-backend    #   39.28% backend  cycles idle
    1,035,850,414 instructions              #    1.24  insns per cycle

      0.232998484 seconds time elapsed
Hmm, those 260,000 page-faults don't look good. And we've got 40% idle cycles on the backend. Let's try switching to 1 GB hugepages to see how much of a difference it makes:

  $ perf stat hugepage 1gb_file
  Size: 1073741824
  The answer is: 4

  Performance counter stats for 'hugepage 1gb_file':

              132 page-faults               #    0.001 M/sec
      387,061,957 cycles                    #    3.593 GHz
      185,238,423 stalled-cycles-frontend   #   47.86% frontend cycles idle
       87,548,536 stalled-cycles-backend    #   22.62% backend  cycles idle
      805,869,978 instructions              #    2.08  insns per cycle

      0.108025218 seconds time elapsed
It's entirely possible that I've done something stupid, but the checksum comes out right, and the 10 GB/s read speed is getting closer to what I'd expect for this machine. Using these 1 GB pages for the contents of a file is a bit tricky, since they need to be allocated off the hugetlbfs filesystem, which does not allow writes and requires that the pages be allocated at boot time. My solution was to run one program that creates a shared map, copy the file in, pause that program, and then have the bytesum program read the copy that uses the 1 GB pages.

Now that we've got the page faults out of the way, the prefetch suggestion becomes more useful:

  $ perf stat hugepage_prefetch 1gb_file
  Size: 1073741824
  The answer is: 4

  Performance counter stats for 'hugepage_prefetch 1gb_file':

              132 page-faults               #    0.002 M/sec
      265,037,039 cycles                    #    3.592 GHz
      116,666,382 stalled-cycles-frontend   #   44.02% frontend cycles idle
       34,206,914 stalled-cycles-backend    #   12.91% backend  cycles idle
      579,326,557 instructions              #    2.19  insns per cycle

      0.074032221 seconds time elapsed
That gets us up to 14.5 GB/s, which is more reasonable for a a single stream read on a single core. Based on prior knowledge of this machine, I'm issuing one prefetch 512B ahead per 128B double-cacheline. Why one per 128B? Because the hardware "buddy prefetcher" is grabbing two lines at a time. Why do prefetches help? Because the hardware "stream prefetcher" doesn't know that it's dealing with 1 GB pages, and otherwise won't prefetch across 4K boundaries.

What would it take to speed it up further? I'm not sure. Suggestions (and independent confirmations or refutations) welcome. The most I've been able to reach in other circumstances is about 18 GB/s by doing multiple streams with interleaved reads, which allows the processor to take better advantage of open RAM banks. The next limiting factor (I think) is the number of line fill buffers (10 per core) combined with the cache latency in accordance with Little's Law.

2
exDM69 2 days ago 2 replies      
I posted the following as a comment to the blog, I'll duplicate it here in case someone wants to discuss:

This program is so easy on the CPU that it should be entirely limited by memory bandwidth, and the CPU should be pretty much idle. The theoretical upper limit ("speed of light") should be around 50 gigabytes per second for a modern CPU and memory.

In order to get closer to the SOL figure, try adding hints for prefetching the data closer to the CPU. Use mmap and give the operating system hints to load the data from disk to memory using madvise and/or posix_fadvise. This should probably be done once per big chunk (several megabytes) because the system calls are so expensive.

Then try to make sure that the data is as close to the CPU as possible, preferably in the first level of the cache hierarchy. This is done with prefetching instructions (the "streaming" part of SSE that everyone always forgets). For GCC/Clang, you could use __builtin_prefetch. This should be done several cache lines ahead, because the time to actually process the data should be next to nothing compared to fetching stuff from the caches.

Because this is limited by memory bandwidth, it should be possible to do some more computation for the same price. So while you're at it, you can compute the sum, the product, a CRC sum, and a hash value (perhaps with several hash functions) at the same cost (if you count only time and exclude the power consumption of the CPU).
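
To make the OS-hint part concrete, a rough Python sketch (madvise on mmap objects needs Python 3.8+ on a POSIX system; this covers only the kernel read-ahead hint, not the CPU-level prefetch instructions, and the filename is borrowed from another comment here):

  import mmap

  def bytesum(path):
      with open(path, 'rb') as f:
          with mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as m:
              # Tell the kernel we'll read straight through once, so it
              # can read ahead aggressively and drop pages behind us.
              m.madvise(mmap.MADV_SEQUENTIAL)
              return sum(memoryview(m)) % 256

  print("The answer is:", bytesum('1_gb_file'))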

3
personalcompute 2 days ago 4 replies      
I particularly enjoyed the writing style in this article, largely because of the extent to which the author provided unverified and loose figures - CPU-time distributions and so on. My experience is that people are usually extremely hesitant to publish uninformed, fast, and incomplete conclusions despite them being, in my opinion, still extremely valuable. They may not be perfectly correct, but a small conclusion like that is still often much better than the practically non-existent data on the situation I start off with, and it lets me read the article far faster, since I don't have to slow down to make these minor fuzzy conclusions myself. There is this misconception that when writing you can do only two things - state a fact or say a false statement. In reality it is a gray gradient space, and when the reader starts off knowing nothing, that gray is many times superior. Anyways, awesome job; I really want to see more of this writing style in publications like personal blogs.

[In case it isn't clear, I'm referring to statements like "So I think that means that it spends 32% of its time accessing RAM, and the other 68% of its time doing calculations." and "So we've learned that cache misses can make your code 40 times slower." (a comment made in the context of a single non-comprehensive datapoint)]

4
krick 2 days ago 6 replies      
Pretty naïve, I'm surprised to see it here. Not that this is a pointless study, but it's pretty easy to guess these numbers if you know how long it takes to use a register, L1, L2, RAM, or the hard drive (and you should). And exactly how long it would take is a task-specific question, because it depends more on what optimization techniques can be used for the task and what cannot; so unless you are interested specifically in summation mod 256, this information isn't of much use, as "processing" is much broader than "adding modulo 256".

But it's nice that somewhere somebody else understood that computers are fast. Seriously, no irony here. Because it's about time for people to realize what a disastrous world modern computing is. I mean, your home PC processes gigabytes of data in a matter of seconds; the amount of computation (relative to its cost) it is capable of would have driven a scientist 60 years ago crazy, and it gets wasted. It's the year 2014 and you have to wait for your computer. It's so much faster than you, but you are waiting for it! What an irony! You don't even want to add up a gigabyte of numbers, you want to close a tab in your browser or whatever, and there are quite a few processes running in the background that actually have to be running right now to do something useful; unfortunately the OS doesn't know about that. Unneeded data is cached in RAM and you wait while the OS fetches a memory page from the HDD. But, well, after 20 layers of abstraction it's pretty hard to do only useful computations, so you make your user wait while you finish some computationally simple stuff.

About every time I write code I feel guilty.

5
chroma 2 days ago 0 replies      
For an in-depth presentation on how we got to this point (cache misses dominating performance), there's an informative and interesting talk by Cliff Click called A Crash Course in Modern Hardware: http://www.infoq.com/presentations/click-crash-course-modern...

The talk starts just after 4 minutes in.

6
dbaupp 2 days ago 1 reply      
Interesting investigation!

I experimented with getting the Rust compiler to vectorise things itself, and it seems LLVM does a pretty good job automatically. E.g. on my computer (x86-64), running `rustc -O bytesum.rs` optimises the core of the addition:

  fn inner(x: &[u8]) -> u8 {
      let mut s = 0;
      for b in x.iter() {
          s += *b;
      }
      s
  }
to

  .LBB0_6:
      movdqa  %xmm1, %xmm2
      movdqa  %xmm0, %xmm3
      movdqu  -16(%rsi), %xmm0
      movdqu  (%rsi), %xmm1
      paddb   %xmm3, %xmm0
      paddb   %xmm2, %xmm1
      addq    $32, %rsi
      addq    $-32, %rdi
      jne     .LBB0_6
I can convince clang to automatically vectorize the inner loop in [1] to equivalent code (by passing -O3), but I can't seem to get GCC to do anything but a byte-by-byte traversal.

[1]: https://github.com/jvns/howcomputer/blob/master/bytesum.c

7
userbinator 2 days ago 2 replies      
I wrote a new version of bytesum_mmap.c [...] and it took about 20 seconds. So we've learned that cache misses can make your code 40 times slower

What's being benchmarked here is not (the CPU's) cache misses, but a lot of other things, including the kernel's filesystem cache code, the page fault handler, and the prefetcher (both software and hardware). The prefetcher is what's making this so much faster than it would otherwise be if each one of those accesses were a full cache miss. If cache misses were only 40 times slower, performance profiles would be very different than they are today!

Here are some interesting numbers on cache latencies in (not so) recent Intel CPUs:

https://software.intel.com/en-us/forums/topic/287236

I'm also kind of amazed by how fast C is.

For me, one of the points that this article seems to imply is that modern hardware can be extremely fast, but in our efforts to save "programmer time", we've sacrificed an order of magnitude or more of that.

8
ChuckMcM 2 days ago 0 replies      
Nice. I remember the first time I really internalized how fast computers were, even when people claimed they were slow. At the time I had a "slow" 133Mhz machine, but we kept finding things it was doing that it didn't need to, and by the time we had worked through those, there it was, idling a lot while doing our task.

The interesting observation is that computers got so fast so quickly, that software is wasteful and inefficient. Why optimize when you can just throw CPU cycles or memory at the problem? What made that observation interesting for me was that it suggested the next 'era' of computers after Moore's law stopped was going to be about who could erase that sort of inefficiency the fastest.

I expect there won't be as much time in the second phase, and at the end you'll have approached some sort of limit of compute efficiency.

And hats off for perf, that is a really cool tool.

9
mrb 2 days ago 2 replies      
The author's SSE code is a terribly overcomplicated way of summing up every byte. The code is using PMADDW (a multiply and add?!), and is strangely trying to interleave hardcoded 0s and 1s into registers with PUNPCKHBW/PUNPCKLBW, huh?

All the author needs is PADDB (add packed bytes).

10
bane 2 days ago 3 replies      
It's pretty clear that we're wasting unbelievably huge amounts of computing power with the huge stacks of abstraction we're towering on.

So let's make this interesting: assuming a ground-up rewrite of an entire highly optimized web application stack, from the metal on up, how many normal boxes full of server hardware could really be handled by one? Two? A dozen?

I'd be willing to bet that a modern machine with well-written, on-the-metal software could outperform a regular rack full of the same machines running all the nonsense we run on today.

Magnified over the entire industry, how much power and space are being wasted? What's the dollar amount on that?

What's the difference in developer effort to accomplish this? 30% more time?

What costs more: all the costs of potentially millions of wasted machines, power, and cooling, or millions of man-hours writing better code?

11
cessor 2 days ago 3 replies      
I like the "free" style of the article. Here is another conclusion: In my professional life I have heard many, many excuses in the name of performance. "We don't need the third normal form, after all, normalized databases are less performant, because of the joins". Optimizing for performance should not mean to make it just as fast as it could possibly run, but to make it just fast enough.

Julia's article shows a good example of this. Of course, the goal appears to be to get a feeling for what tends to make a program fast or slow, and for how slow it will be or how fast it can get; yet I'd like to point out that this...

https://github.com/jvns/howcomputer/blob/master/bytesum_intr...

... might be 0.1 seconds faster than the original code when started as "already loaded into RAM", which she claims runs at 0.6 seconds. Yet this last piece of code is way more complicated and hard to read. Code like this

Line 11: __m128i vk0 = _mm_set1_epi8(0);

might be idiomatic, fast, and give you a great sense of mastery, but you can't even pronounce it, and its purpose does not become clear in any way.

Writing the code this way may make it faster, but it also makes it 1000x harder to maintain. I'd rather sacrifice 0.1 seconds of running time and improve the development time by 3 days instead.

12
chpatrick 2 days ago 2 replies      
It's 1.08s on my computer for one line of Python, which is respectable:

  python2 -m timeit -v -n 1 -s "import numpy" "numpy.memmap('1_gb_file', mode='r').sum()"

  raw times: 1.08 1.09 1.08

13
infogulch 2 days ago 1 reply      
Nice writeup! I like how even simplistic approaches to performance can easily show clear differences! However! I noticed you use many (many!) exclamation points! It gave me the impression that you used one too many caffeine patches! [1]

[1]: https://www.youtube.com/watch?v=UR4DzHo5hz8

14
sanxiyn 2 days ago 3 replies      
I wonder why GCC does not autovectorize the loop in bytesum.c even with -Ofast. With autovectorizer, GCC should make the plain loop as fast as SIMD intrinsics. Autovectorizer can't handle complex cases, but this is as simple as it can get.

Anyone have ideas?

15
zokier 2 days ago 1 reply      
> So I think that means that it spends 32% of its time accessing RAM, and the other 68% of its time doing calculations

Not sure if you can draw such a conclusion actually, because of pipelining etc. I'd assume that the CPU is doing memory transfers simultaneously while doing the calculations.

I also think that only the first movdqa instruction is accessing RAM, the others are shuffling data from one register to another inside the CPU. I'd venture a guess that the last movdqa is shown taking so much time because of a pipeline stall. That would probably be the first place I'd look for further optimization.

On the other hand, I don't have a clue about assembly programming or low-level optimization, so take my comments with a chunk of salt.

16
userbinator 2 days ago 1 reply      
One of the things I've always wanted is autovectorisation by the CPU - imagine if there was a REP ADDSB/W/D/Q instruction (and naturally, repeated variants of the other ALU operations.) It could make use of the full memory bandwidth of any processor by reading and summing entire cache lines the fastest way the current microarchitecture can, and it'd also be future-proof in that future models may make this faster if they e.g. introduce a wider memory bus. Before the various versions of SSE there was MMX, and now AVX, so the fastest way to do something like sum bytes in memory changes with each processor model; but with autovectorisation in hardware, programs wouldn't need to be recompiled to take advantage of things like wider buses.

Of course, the reason why "string ALU instructions" haven't been present may just be because most programs wouldn't need them and only some would receive a huge performance boost, but then again, the same could be said for the AES extensions and various other special-purpose instructions like CRC32...

17
cgag 2 days ago 0 replies      
The rest of her blog is great as well, I really like her stuff about os-dev with rust.
18
enjoy-your-stay 2 days ago 0 replies      
The first time I realised how fast computers could be was when I first booted up BeOS on my old single-core AMD machine, probably less than 1GHz.

The thing booted in less than 10 seconds and performed everything so quickly and smoothly - compiling code, loading files, playing media and browsing the web (dial up modem then).

It performed so unbelievably well compared to Windows and even Linux of the day that it made me wonder what the other OSes were doing differently.

Now my 4 core SSD MacBook pro has the same feeling of raw performance, but it took a lot of hardware to get there.

19
thegeomaster 2 days ago 0 replies      
Anyone notice how the author is all excited? Got me in a good mood, reading this.
20
tejbirwason 2 days ago 0 replies      
Great post. If you want to dig even deeper, you can learn certain nuances of the underlying assembly language: loop unrolling, reducing the number of memory accesses, reducing the number of branch instructions per loop iteration by rewriting the loop, or rearranging instructions and register usage to reduce the dependencies between instructions.

I took a CPSC course last year and for one of the labs we improved the performance of fread and fwrite C library calls by playing with the underlying assembly. We maintained a leader board with the fastest times achieved and it was a lot of fun to gain insight into the low level mechanics of system calls.

I dug up the link to the lab description - http://www.ugrad.cs.ubc.ca/~cs261/2013w2/labs/lab4.html

21
hyp0 2 days ago 1 reply      

  I timed it, and it took 0.5 seconds!!!  So our program now runs twice as fast,
Minor typo above: the time is later stated as 0.25. Super neat!

22
okso 2 days ago 1 reply      
Naïve Python 3 is not as fast as NumPy, but pretty elegant:

  def main(filename):
      d = open(filename, 'rb').read()
      result = sum(d) % 256
      print("The answer is: ", result)

23
sjtrny 2 days ago 0 replies      
But not fast enough
18
Glenn Greenwald: The NSA tampers with US-made routers theguardian.com
283 points by not_dirnsa  3 days ago   136 comments top 18
1
perlpimp 3 days ago 6 replies      
So RMS was right after all: open source gives you visible security, where proprietary products are encumbered with all sorts of unwanted and even dangerous "features".

my 2c

2
slacka 3 days ago 2 replies      
I am not surprised by the hypocrisy of the US government here, but where is the proof? He doesn't directly link to the June 2010 report to back his claims. While I trust him, the critical thinker in me despises not being able to check sources.

> Yet what the NSA's documents show is that Americans have been engaged in precisely the activity that the US accused the Chinese of doing.

Only points to the generic page http://www.theguardian.com/world/the-nsa-files

Couldn't he be more specific?

3
middleclick 3 days ago 5 replies      
Is anything safe? I mean, at this point, would it be too much to assume, given that the NSA has so much brain power (mathematicians) working for them, that they have not already cracked most encryption schemes we trust? I am not being a conspiracy theorist; I am genuinely curious.
4
suprgeek 3 days ago 2 replies      
"The NSA has been covertly implanting interception tools in US servers heading overseas..."

Which is somewhat OK, given the NSA's charter.

The more interesting question: is this limited to "US servers heading overseas"? I mean, we already know that the NSA intercepts laptops, keyboards and such routinely for special "people of interest" within the US. Does it do the same, i.e. routinely and indiscriminately bug routers, even within the US?

5
resu 3 days ago 8 replies      
So stay away from routers that are Made in China and Made in USA - what's left?

Is there a country small enough without a world domination agenda, yet large enough to not be swayed by bullying from U.S, China etc.? It's time to start a router manufacturing business there...

6
xacaxulu 3 days ago 0 replies      
The NSA continues to undermine US businesses, further isolating us from the rest of the world.
7
SeanDav 3 days ago 4 replies      
Perhaps software and virtual routers are the way to go, especially if any are open source. It would be great if someone with knowledge in this domain could comment on this.
8
backwardm 3 days ago 3 replies      
I'm curious to know if using a different firmware would be a valid way to secure a (potentially compromised) router, or is this kind of tampering done at the hardware level, in some hidden part of a microprocessor?
9
brianbarker 3 days ago 0 replies      
So essentially the NSA warned us about China tampering with hardware because they knew how it could be done. They just forgot to mention they'd been doing it already.
10
Htsthbjig 3 days ago 1 reply      
Remove "Patriot Act" or the fascist law obligation of any American to collaborate with 3 letters agencies by force.

It converts any American worker in a spy of the Government.

11
mschuster91 3 days ago 2 replies      
Well, the NSA tampering here at least doesn't happen in the factories...
12
Sami_Lehtinen 2 days ago 0 replies      
When you register a WatchGuard firewall, it asks all kinds of questions which are absolutely strategic: what kind of data it is used to protect, whether you are in the tech or military business, etc. And you won't even be able to use it without registration. And they call it a security appliance. Lol. How about honestly calling it a spy appliance.
13
cheetahtech 3 days ago 1 reply      
Just read something else he pushed.

He used some pretty strong words against the politicians.

He calls Hillary a neocon and corrupt, but guesses she will win the next election. Page 5. http://www.gq.com/news-politics/newsmakers/201406/glenn-gree...

14
strgrd 2 days ago 0 replies      
I can't help thinking Intel has something to do with this mission.

I mean think about how many hundreds of thousands of consumer computers come with Intel AMT vPro by default.

15
angersock 3 days ago 1 reply      
I'm watching to see if CSCO takes a hit from this--so far, doesn't seem to be a big issue.

It's not like this is surprising, as such; it's just really bad that these chucklefucks got caught doing it.

(Yes, it's arguably morally wrong and so on, but just from a purely economic perspective, bad show.)

16
zby 3 days ago 0 replies      
"surveillance competition"!
17
Zigurd 3 days ago 0 replies      
If you wanted to build an Internet product that could be trusted internationally, where and how would you build it?

Unfortunately it looks like one part of the answer that's known is "not in the US."

We have only begun to feel the effects of this massive violation of trust. Unless trust can be restored, the US will become techno-provincial and only trustable with unimportant technologies like entertainment products.

18
jrockway 3 days ago 2 replies      
Greenwald is back at the Guardian? I thought he left to do his own thing.
19
Passwords for JetBlue accounts cannot contain a Q or a Z jetblue.com
276 points by alexdmiller  2 days ago   208 comments top 33
1
dredmorbius 1 day ago 5 replies      
As several people have noted, the Q/Z restriction likely arises from inputting passwords from a telephone keypad.

What I haven't seen is a statement as to why this would have been a problem. The reason is that Q and Z were mapped inconsistently across various phone keypads. The present convention of PQRS on 7 and WXYZ on 9 wasn't settled on until fairly late in the game, and as noted, the airline reservation system, SABRE, is one of the oldest widely-used public-facing computer systems still in existence, dating to the 1950s.

https://en.wikipedia.org/wiki/Sabre_(computer_system)

The 7/9 standard, by the way comes from the international standard ITU E 1.161, also known as ANSI T1.703-1995/1999 and ISO/IEC 9995-8:1994).

http://www.dialabc.com/words/history.html

Other keypads may not assign Q or Z at all, or assign them to various other numbers: 1 for Australian Classic, 0 for UK Classic and Mobile 1.

http://www.dialabc.com/motion/keypads.html

Similarly, special characters can be entered via numerous mechanisms on phone keyboards.

My suspicion is that there's a contractual requirement somewhere to retain compatibility with an existing infrastructure.

2
lvs 2 days ago 5 replies      
Looks like it has to do with the venerable Sabre system (scroll to bottom):

http://kottke.org/12/06/the-worlds-worst-password-requiremen...

3
eli 2 days ago 4 replies      
I'd caution against making assumptions about the competence of the developers based only what you can see from the outside. More likely than not there are good reasons to maintain interoperability with legacy systems. This may well be the most elegant way to solve a complex problem.

I've certainly written my share of code that would look weird to an outsider who didn't know the backstory and the constraints and the evolution.

4
seanmccann 2 days ago 2 replies      
They use Sabre (like others), and it's an archaic holdover from when phones didn't have Qs or Zs.
5
skizm 1 day ago 6 replies      
Actually this kind of gives me an idea: what if modern systems decided to just tell people they can't use "p" so that people stop using the word "password" or variants as their password.

Hell, for that matter, tell users they can't use vowels so they can't make words. They might do leet speak or whatever, which is pretty easy to crack given time, but it stops things like password re-use attacks (people are less likely to have the same password as in their other apps) and simple guessing attacks (trying the top 3 most popular passwords on all known emails/accounts).

For such a simple rule set (no vowels) it forces a decent level of password complexity.

6
phlo 1 day ago 1 reply      
As many sources have pointed out, this is very likely related to Sabre. Interestingly, there is another reason why such a restriction might be useful:

There are three popular key arrangements. English/US QWERTY, French AZERTY, and German QWERTZ. Apart from switching around A, W, Y, Z, and most special characters, they are mostly identical.

If your goal is to ensure successful password entry even if a user is unexpectedly using an unfamiliar keyboard scheme, all you need to do is replace all instances of A or Q by one value; and all instances of W, Y, Z by another. Or you could, of course, disallow these characters.

I hear Facebook had a similar approach to coping with input problems in the early days of mobile access: for each passWord1, three hashes were stored: "PassWord1" (uppercase first letter), "PASSwORD1" (caps lock) and "passWord1" (unchanged). As far as I remember, they didn't deal with i18n issues -- or publish the results of their approach.

Edit: This would, of course, weaken password security significantly. If my very rough back-of-the-envelope calculation is correct, by a bit less than 50%.
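
The variant check is cheap to sketch in Python (sha256 is a stand-in here; a real system would use a slow, salted hash, and the exact set of variants is as reported above, not something I've verified):

  import hashlib

  def h(pw):
      return hashlib.sha256(pw.encode()).hexdigest()

  def variants(pw):
      # As typed, caps-lock inverted, and first letter upcased.
      return {pw, pw.swapcase(), pw[:1].upper() + pw[1:]}

  stored = {h(v) for v in variants("passWord1")}

  def check(entered):
      return any(h(v) in stored for v in variants(entered))

  print(check("PASSwORD1"))  # True: caps lock was on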

7
theboss 2 days ago 4 replies      
That's nothing... A friend of mine forwarded some emails she's gotten from JetBlue.

First, this screenshot: http://i.imgur.com/oKKpFM1.png

Followed by the money screenshot: http://i.imgur.com/DlAlQPt.png

She redacted some of the information before she sent it (obviously). This is from Jan 21 of this year. It's just so sad... It's incredible people still have plaintext passwords server-side...

8
jfoster 2 days ago 2 replies      
If they were OK with applying more duct tape, why not map Q and Z to characters (eg. A and B) that can be part of passwords? (eg. a password of "quiz" would become "auib")

It would make their password system slightly weaker perhaps, since freq(a) then becomes more like freq(a)+freq(q) and freq(b) more like freq(b)+freq(z). I'm not sure that's much weaker than just excluding Q and Z, though. The user experience is improved. The major downside would be in technical debt.

9
slaundy 2 days ago 1 reply      
I just changed my JetBlue password to contain both a Q and a Z. Seems the support documentation is out of date.
10
Iterated 1 day ago 2 replies      
Question to all those saying this is because of Sabre:

How? Does the TrueBlue password somehow go through Sabre's systems? The truly old business unit of Sabre that everyone is referencing is Travel Network. I'm not sure why an airline's loyalty program would intersect with Travel Network other than through the back end of a booking tool.

11
stephengillie 2 days ago 1 reply      
When I saw the Sabre password requirements, I couldn't help but imagine that passwords are stored entirely numerically - "badpass" would be entered (hashed?) as "2237277", as in dialing a phone. So the password "abesass" would collide with "badpass" and grant access.

Has Sabre at least upgraded their storage mechanism, or do (did?) they reduce entropy on passwords?
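
The collision is easy to check with the classic keypad mapping, the one with no Q or Z (a quick Python sketch, my own names):

  KEYPAD = {c: d for d, letters in {
      '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
      '6': 'mno', '7': 'prs', '8': 'tuv', '9': 'wxy'}.items()
      for c in letters}

  def to_digits(pw):
      return ''.join(KEYPAD[c] for c in pw.lower())

  print(to_digits('badpass'))                          # 2237277
  print(to_digits('abesass') == to_digits('badpass'))  # True: collision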

12
amichal 2 days ago 1 reply      
guessing... Touch tone phone keypads dont always show q and z. I suspect that some older JetBlue system allows you to use your password via a touch tone system (with a vastly reduced keyspace)
13
dragonwriter 2 days ago 0 replies      
They also can't contain symbols (so apparently just digits and letters except Q and Z). The combination suggests to me the horrible possibility that they actually reduce the password to just digits for storage, and to support entry on devices that look like old touchtone phones [1] (I say "old" because newer ones usually have "PQRS" instead of "PRS" and "WXYZ" instead of "WXY"):

[1] Like: http://www.cs.utexas.edu/users/scottm/cs307/utx/assignment5....

14
jedberg 1 day ago 0 replies      
One of my bank accounts has the same restriction, so that you can enter your password through the phone system. It's stupid, but at least it has a reason.
15
eigenrick 1 day ago 0 replies      
Everyone in the conversation seems to be pointing out the fact that this is due to integration with legacy software. That's not an acceptable reason.

In the broader sense, there is a great irony in making password "strength" restrictions, like "must include" and "must not include" because they often end up making passwords easier to brute force.

If you start with the restriction that all passwords must have > 8 characters, you have basically an infinite number of possibilities; smart users will use a passPHRASE that is easy to remember, while dumb users will try to hit the bare minimum characters. When you add a restriction of at most 20 chars, it reduces the chance that a person's favorite passphrase fits and guarantees that the set of all passwords is 8-20 characters, which means that the set of all passwords is smaller still.

They disallow special chars, which probably includes space, which further reduces the likelihood that someone will pick a passphrase.

Disallow repeating characters and you've further reduced the entropy.

Disallow Q and Z and it's reduced it further still.

I can't be arsed to do the math, so I'll reference XKCD http://xkcd.com/936/

But Sabre would do well to correct this; the optimal case is simply making a single requirement: passwords must be longer than 8 characters. The "don't use your last N passwords" requirement isn't bad, but people usually find hacky ways around it.
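
Doing the skipped math anyway, a quick back-of-the-envelope in Python (assuming characters chosen uniformly at random, which real users don't do):

  import math

  def bits(symbols, length):
      return length * math.log2(symbols)

  print(round(bits(10 + 24 * 2, 8), 1))  # 8 chars, digits + letters minus Q/Z: ~46.9 bits
  print(round(bits(10 + 26 * 2, 8), 1))  # same, with Q and Z allowed:          ~47.6 bits
  print(round(bits(2048, 4), 1))         # xkcd-style 4-word passphrase:         44.0 bits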

16
kirab 1 day ago 0 replies      
For everyone who designs password rules: please do not require the password to contain uppercase letters, lowercase letters, numbers and so on, because this actually makes passwords statistically easier to guess. The only thing you should require is a minimum length; I recommend at least 10, better 12 characters. Even 12 digits are more secure than, say, "Apple1".
17
sp332 1 day ago 0 replies      
Have you tried it? This person says it works just fine. https://twitter.com/__apf__/status/466327291027804160 And it doesn't make sense that it's a holdover from phones, because then it wouldn't be case-sensitive.
18
rjacoby5 1 day ago 0 replies      
I think everyone is completely missing the reason behind the omission of Q and Z.

Due to the database storage engine they chose, it was necessary to put a limitation on the number of Scrabble points that a password would award.

Q and Z are both 10-pointers, so passwords with them frequently blew past the limit. You can use J and X, but that's really pushing it.

And the "cannot contain three repeating character" rule is due to that being the trigger for the stored procedure that implements 'triple word score'.

19
tn13 1 day ago 0 replies      
There might be some very good reasons why such a policy exists. For example, this system may involve telling the password to someone over the phone, or entering it with a TV remote or some keypad other than QWERTY.
20
manojit 1 day ago 1 reply      
Why are people still restricting password complexity? As long as passwords are carefully and cryptographically processed (read: hashed with an individual salt), there is no need. I recently designed a system where the only password policy is the length (8 char minimum), and passwords are stored hashed, with the salt being a specially encoded user ID (thus unique for each user).

I also like to contradict myself: password complexity rules and all the policy are needed to make social engineering infeasible. I mean, a strong and secure system where people still use 'password1234' is a very bad practice.
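
A minimal sketch of that kind of length-only policy with salted hashing in Python (PBKDF2 and a random salt are my own choices for illustration; the comment above derives the salt from the user ID instead, which also works as long as it's unique per user):

  import hashlib, hmac, os

  def hash_password(password):
      salt = os.urandom(16)  # unique per user
      digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)
      return salt, digest

  def verify(password, salt, stored):
      digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)
      return hmac.compare_digest(digest, stored)

  pw = "correct horse battery staple"
  if len(pw) >= 8:  # the only policy: length
      salt, stored = hash_password(pw)
      print(verify(pw, salt, stored))  # True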

21
jamieomatthews 2 days ago 5 replies      
Can anyone explain why this is? I've never heard a security reason for this.
22
GrinningFool 2 days ago 3 replies      
That's ok, here's a better one.

etrade - yeah, THAT etrade? Yeah. They make your passwords case-insensitive.

23
Sami_Lehtinen 1 day ago 1 reply      
My bank allows only passwords which are six digits, like 123456. Nothing longer, and no other characters or symbols.
24
gt21 2 days ago 1 reply      
Here's a pic of when phone keypads didn't have Q and Z: http://www.dialabc.com/words/history.html
25
bgia 1 day ago 1 reply      
Why didn't phones have Q and Z? Everyone is mentioning that they didn't have them, but I can't find a reason for that.
26
brianlweiner 1 day ago 0 replies      
For Bank of America customers: you might notice the mobile app requires you to use a password of < 21 characters. There is no such restriction for desktop browsers.

Attempting to log in to my mobile app requires me to DELETE characters from my password until the overall length is less than 21. I'm then able to log in.

What does this tell us about BoA's password storage?

27
codexon 1 day ago 0 replies      
Why not hash the password and encode it in base34? (36-2)
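
Which might look something like this in Python (sha256 as a stand-in hash; the alphabet is every digit and letter except Q and Z, so the stored form can never contain them):

  import hashlib

  ALPHABET = '0123456789ABCDEFGHIJKLMNOPRSTUVWXY'  # 34 symbols, no Q or Z

  def hash_to_base34(password):
      n = int.from_bytes(hashlib.sha256(password.encode()).digest(), 'big')
      out = ''
      while n:
          n, r = divmod(n, 34)
          out = ALPHABET[r] + out
      return out

  print(hash_to_base34('hunter2'))  # never contains Q or Z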
28
DonHopkins 1 day ago 0 replies      
How can people that stupid be allowed to operate airplanes?
29
maxmem 2 days ago 0 replies      
Also no special characters.
31
jrockway 2 days ago 0 replies      
Shouldn't this mean that the OUTPUT FROM THE HASH FUNCTION can't contain Q or Z!? Certainly no system other than the web frontend would be looking at the password itself...
32
guelo 2 days ago 0 replies      
My guess: some kind of harebrained master password scheme for support.
33
codezero 2 days ago 1 reply      
My guess is that this is just a rule to force people to read the rules.
20
Source code of ASP.NET github.com
265 points by wfjackson  2 days ago   99 comments top 13
1
skrebbel 2 days ago 4 replies      
Folks, not much of this is new. Both Entity Framework and ASP.NET MVC were already open source for quite some time [0][1]. All the other repos are nearly empty.

The only real news here is that, indeed, ASP.NET vNext is going to be developed in the open, or at least to some extent. But right now, not a lot of code seems to be released that wasn't already out there (although I did not go through all the repos).

I don't think you should expect to find many current/legacy parts of ASP.NET that aren't open yet: this seems to be mostly for new stuff.

Finally, don't forget that "ASP.NET" doesn't seem to mean a lot (anymore): it's basically Microsoft actively shipping the org chart. Anything that's web related and from MS appears to get tacked "ASP.NET" on front of it. Cause really, what does ASP.NET MVC, basically a pretty uninspired Rails port to C# (and just an open source library like any other), have to do with "active server pages"?

[0] https://aspnet.codeplex.com/wikipage?title=MVC

[1] https://entityframework.codeplex.com/

2
moskie 2 days ago 1 reply      
The URL of that link is a pretty surprising thing, in and of itself.
3
daigoba66 2 days ago 1 reply      
One should note that this is the "new" ASP.NET. The old version, the one explicitly tied to IIS, is not and will probably never be open source software.

They're building a new stack from the ground up. Which is the only way, really, to make it "cross platform".

4
d0ugie 2 days ago 0 replies      
For those curious, Microsoft went with the Apache 2 license: http://www.asp.net/open-source
5
WoodenChair 1 day ago 6 replies      
Is this snippet of code bad? I was just randomly browsing https://github.com/aspnet/KRuntime/blob/dev/src/Microsoft.Fr...

  private bool IsDifferent(ConfigurationsMessage local, ConfigurationsMessage remote)
  {
      return true;
  }

  private bool IsDifferent(ReferencesMessage local, ReferencesMessage remote)
  {
      return true;
  }

  private bool IsDifferent(DiagnosticsMessage local, DiagnosticsMessage remote)
  {
      return true;
  }

  private bool IsDifferent(SourcesMessage local, SourcesMessage remote)
  {
      return true;
  }

6
ellisike 2 days ago 1 reply      
Scott Guthrie is amazing. He's behind all the open source projects. Some of them are even taking pull requests. Entity Framework, MVC, ASP.NET, etc are all popular and open source.
7
turingbook 1 day ago 0 replies      
Some clarification: this seems to be only for demos and samples, not really the home of source code to cooperate on.

The Home repository is the starting point for people to learn about ASP.NET vNext, it contains samples and documentation to help folks get started and learn more about what we are doing. [0]

The GitHub issue list is for bugs, not discussions. If you have a question or want to start a discussion you have several options:

- Post a question on StackOverflow
- Start a discussion in our ASP.NET vNext forum or JabbR chat room [1]

ASP.NET vNext includes updated versions of MVC, Web API, Web Pages, SignalR and EF... Can run on Mono, on Mac and Linux. [2]

MVC, Web API, and Web Pages will be merged into one framework, called MVC 6. MVC 6 has no dependency on System.Web. [3]

[0] https://github.com/aspnet/Home

[1] https://github.com/aspnet/Home/blob/master/CONTRIBUTING.md

[2] http://blogs.msdn.com/b/dotnet/archive/2014/05/12/the-next-g...

[3] http://blogs.msdn.com/b/webdev/archive/2014/05/13/asp-net-vn...

8
dev360 2 days ago 2 replies      
Is this an admission that CodePlex is dead?
9
githulhu 2 days ago 2 replies      
Not all of ASP.NET though...notably absent: Web Forms.
10
V-2 1 day ago 0 replies      
ICanHasViewContext :) (Mvc / src / Microsoft.AspNet.Mvc.Core / Rendering / ICanHasViewContext.cs)
11
MrRed 1 day ago 1 reply      
But why are they checking whether their code runs on Mono [1]?

[1] https://github.com/aspnet/FileSystem/blob/dev/src/Microsoft....

12
badman_ting 2 days ago 0 replies      
Oh, whoever did this is gonna be in big trouble. Heads are gonna roll.

Hmm? What do you mean "they meant to do that"?

13
lucidquiet 2 days ago 10 replies      
Too little, too late (imo). I'll think about it again once they have Visual Studio and all the good things running on *nix.

It's too much of a pain to get anything to work with a .net project, and then deploy on anything other than IIS.

21
Big Cable says investment is flourishing, but their data says it's falling vox.com
264 points by luu  3 days ago   53 comments top 9
1
meric 2 days ago 5 replies      
"The industry is acting like a low-competition industry, scaling back investment and plowing its profits into dividends and share buybacks and merger efforts."

Most US industries are in a similar state (plowing profits into dividends, share buybacks, and mergers). What's happening is that companies see more benefit to their shareholders in borrowing money against their existing capital and paying it out as dividends than in risking that borrowed capital on new investments. This is happening because the Federal Reserve has pushed the interest rate to near zero; at the same time, people are over-leveraged (since money has been so cheap for so long) and won't have the money to increase their spending in the future, which reduces the chance that new investments will pay off.

EDIT: This website tends to be very pessimistic, but I found the following article informative, and it illustrates my point well: http://www.zerohedge.com/news/2014-05-12/writing-wall-and-we...

2
Strilanc 2 days ago 1 reply      
Also, on top of being cumulative and using different periods, the chart just directly visually lies.

The pixel height difference between the 78.2 and 148.8 bars is ~110px for $70.6B. But between 148.8 and 210 it's 198px for $61.2B.

So the pixel difference increases despite the money difference decreasing. I have no idea how this can be justified. It makes the right side of the chart look steeper than the rest instead of less steep (except for the left-most part).
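
Spelling the arithmetic out (a quick sketch; the pixel heights are my on-screen estimates, so treat the exact numbers as assumptions):

    package main

    import "fmt"

    func main() {
        // Pixel heights and dollar deltas as estimated above (assumptions).
        leftPx, leftBillions := 110.0, 70.6   // the 78.2 -> 148.8 step
        rightPx, rightBillions := 198.0, 61.2 // the 148.8 -> 210 step

        // An honest bar chart keeps px-per-dollar constant; this one doesn't.
        fmt.Printf("left step:  %.2f px/$B\n", leftPx/leftBillions)   // ~1.56
        fmt.Printf("right step: %.2f px/$B\n", rightPx/rightBillions) // ~3.24
    }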

3
dba7dba 2 days ago 3 replies      
I am just amazed at the lies and stupidity these people are pushing all so that a few people at the top can buy a few mansions/private-jets. Little do they realize what kind of damage they are doing to the competitiveness of America's economy.

The US flourished as it did partly because of its open/affordable road system. Physical goods, people, and ideas were able to move about freely, and hence the economy grew.

Now it's all about internet access. The goods people buy are often sent over an internet connection, and people/ideas flow best when the internet is working.

And here we are, with the few cable companies that we have doing their best to hamper the flow of ideas over the internet, the lifeblood of our economy.

4
coldcode 2 days ago 2 replies      
Oldest tricks in the chart book. Why do people lie in such an obvious manner and think no one will notice?
5
EricDeb 2 days ago 2 replies      
I love the grand total of one option I have for broadband internet at my apartment.
6
sirdogealot 2 days ago 1 reply      
> in the years that broadband service has been subjected to relatively little regulation, investment and deployment have flourished

Perhaps they are referring to the majority of the years/graph, between 1997 and 2008? If they were, that would make the statement true.

Even saying that investment has increased overall between 1997 and 2013 would be true, imho.

7
jsz0 2 days ago 0 replies      
This is cable industry trade data, not really something intended for the general public. Dollar amounts aren't going to provide the context required to understand this data. For example, over the last 5 years most cable MSOs have gone mostly/all digital, which has reclaimed hundreds of megahertz of spectrum. As a result, spending on plant/infrastructure upgrades has slowed. The costs of the digital migrations wouldn't be classified as broadband investments even though they're directly related. Also, in this time span most cable providers completed their transition to DOCSIS 3: a big upfront cost, but less expensive to scale out over time. Soon they will have another big upfront cost for the DOCSIS 3.1 transition.
8
nessup 2 days ago 1 reply      
Why is this not getting upvotes? Awareness of telecom/broadband bullshit needs to be going up these days, if anything.
9
727374 2 days ago 0 replies      
Least controversial HN post... EVER.
22
Introducing ASP.NET vNext hanselman.com
234 points by ragesh  3 days ago   204 comments top 17
1
slg 3 days ago 7 replies      
As a .Net developer, I find all of the recent announcements from Microsoft really exciting. I just wonder if these types of things are enough to sway people's opinions regarding the platform. There is just so much baggage in the developer community when you say .Net or Microsoft (edit: as one of the three comments at the time of this posting proves). Are these moves just going to stave off a potential exodus of .Net developers, or will they actually lead to new developers picking up the language?
2
Goosey 3 days ago 3 replies      
This is extremely exciting. The lack of a 'No-compile developer experience' has been one of the biggest annoyances for me and my team. It has actually led to influencing our coding patterns: since we can "refresh and see new code" for anything that is in the view templates (Razor *.cshtml in our case), we have become increasingly in favor of putting code there (or in JavaScript frontend 'thick client' code) to take advantage of not needing to recompile. It's not like recompiling is slow (maybe 5sec in our case), but it still breaks your flow and, more importantly, requires stopping the debugger if it is in use. In some ways the code has improved, in some ways it hasn't, but in either case it feels like the tail wagging the dog when you are changing how you structure code based on your tool's inadequacies.

I'm equally excited for the intentional mono support and "Side by side - deploy the runtime and framework with your application". ASP.NET MVC and Web API are really pleasant and mature frameworks, but configuring IIS has always been really unpleasant and clunky.

3
Xdes 3 days ago 4 replies      
"ASP.NET vNext (and Rosyln) runs on Mono, on both Mac and Linux today. While Mono isn't a project from Microsoft, we'll collaborate with the Mono team, plus Mono will be added to our test matrix. It's our aspiration that it 'just work.'"

I wonder whether we will be seeing a .NET web server for Mac and Linux. Hosting a C# MVC app on Linux would be sweet.

4
konstruktor 3 days ago 0 replies      
I can hardly imagine a more effective developer advocate than Scott Hanselman. He seems to be doing more good for Microsoft's reputation among developers than anybody else. Of course he out-HNed the official msdn article. For those not familiar with his name, here is some of his other stuff:

http://www.hanselman.com/blog/MakingABetterSomewhatPrettierB...

http://www.hanselman.com/blog/ScottHanselmans2014UltimateDev...
5
troygoode 3 days ago 6 replies      
Finally switching away from the horrible XML-based CSPROJ files to a more sane JSON format (that hopefully doesn't require you to list every. single. file. individually) is the feature I'd be most excited about if I was still using .NET.

I recall CSPROJ files being the primary source of pain for me as I started to transition out of the Microsoft world and into the open source world, as it prevents you from using editors like vim & emacs if you're working in a team environment.

6
kr4 3 days ago 0 replies      
> ... your choice of operating system,

> we'll collaborate with the Mono team, plus Mono will be added to our test matrix. It's our aspiration that it "just work."

This. is. superb! I love developing on VS with ASP.NET, and I love *nix tooling (ssh is pure fun); I was secretly hoping for this to happen.

7
daviding 3 days ago 3 replies      
What is a 'cloud-optimized library'? Does it mean 'small' or have I underestimated it?
8
malaporte 3 days ago 1 reply      
Seems pretty interesting. And official MS support for running the whole thing on Mono, right now, isn't that pretty big?
9
bananas 3 days ago 5 replies      
I've been through EVERY ASP.net update on every version of .net and every MVC update from CTP2 onwards, dealt with WWF being canned and rewritten, moved APIs between old SOAP stuff (asmx), WCF and WebAPI, rewritten swathes of ASP VB and C++ COM code, ported EF stuff to later versions and worked around piles of framework bugs including the MS11-100 fiasco. That, and been left royally in the shit with Silverlight.

Not one of the above has actually improved the product we produce and are all reactionary "we might get left in the shit again" changes.

I'm really tired of it now.

10
robertlf 3 days ago 0 replies      
So glad I'm no longer a .NET developer. Every year it's a new razor blade.
11
cuong 3 days ago 1 reply      
How realistic is it to use a self-hosted OWIN server running ASP.NET vNext on Mono? What can we expect in terms of performance? I was always under the impression it was pretty far away from being a viable option, Microsoft help or not.
12
TheRealDunkirk 3 days ago 9 replies      
Yet another piece of the mature web-development puzzle that Microsoft is trying to emulate. That's great, and good luck to them, but my recent efforts at using Entity Framework suggest that this may not be a viable solution for a long time to come.

I'm typing this to delay the effort of ripping EF out of my project and doing ADO.NET LINQ to SQL. (I guess. Maybe it'll just be raw SQL at this point.) Unless someone here can answer this question? It's worth a shot... http://stackoverflow.com/questions/23528335/how-can-i-implem...

I miss Rails.

13
adrianlmm 3 days ago 1 reply      
I'd really like the next ASP.NET MVC to come with full OWIN support.
14
slipstream- 3 days ago 4 replies      
Does anyone else spot the irony of an MS guy using Chrome?
15
chris_wot 3 days ago 0 replies      
When will they be releasing ASP.NET vNext Compact Enterprise MVC Edition?
16
mountaineer 3 days ago 0 replies      
Tomorrow is my last day as a professional .NET developer, nothing here to make me think twice about saying goodbye.
17
li2 3 days ago 3 replies      
If you are serious about your career path as a software engineer, stay away from Windows technologies.
23
Xeer wikipedia.org
227 points by mazsa  2 days ago   116 comments top 16
1
tokenadult 2 days ago 2 replies      
"The life of the law has not been logic; it has been experience." -- Oliver Wendell Holmes, Jr., The Common Law (1881) page 1. In other words, the Anglo-American system of common law is a system that has developed by generalizing from particular cases as they come up, and not by thinking from the top down about what kind of rules would be ideal.

It's rich with deeper meaning that there are a number of comments here about the development of rules and laws as we comment on an article posted on Wikipedia. I am one of thousands of volunteer editors on Wikipedia (since May 2010), years after having been (1) an editor of a glossy monthly bilingual publication about a foreign country as an expatriate resident of that country, (2) an editor of a series of English-language trade magazines about manufactured products from that same country, and (3) a student-editor (usually the only kind of editor such a publication has) of a law review. I started editing Wikipedia as late as I did, years after Wikipedia was founded, because when I first heard about Wikipedia I thought its editorial policies were madness--and, sure enough, the articles that resulted from the original policy included a lot of cruft. As Wikipedia has continued in existence, it has not been able to continue as an Ayn Rand anarchy of bullies but has gradually had to develop rules and procedures and (a little bit of) hierarchy and organization. Most of the articles on the topics I do most of my professional research in are still LOUSY, and I have been interviewed twice by the Wikipedia Signpost in the last several months about what needs to be done to improve articles on Wikipedia for various WikiProjects. The article kindly submitted here illustrates the problem, with its incoherent presentation of facts and speculation from a mixture of good and poor sources.

I live among the largest Somali expatriate community in the world outside Somalia (Minneapolis and its suburbs--we have been able to listen to Somali-language local radio here since the 1990s) and have a new client for my supplementary mathematics classes whose family is from Somalia. That country's internal conditions during my adult life have been HARSH, and I don't envy any Somali patriot's task in trying to build up a country with peace, stability, and justice for all Somali citizens. I do wish all Somalis well in adapting customary legal systems to the modern world.

2
cup 2 days ago 7 replies      
I must admit I was confused to see Xeer being posted to HN. It's interesting to contemplate the unique history of Somalia and the Somali people and how it fits into the greater African jigsaw puzzle.

I think the article is slightly misinformed, however: the Sharia legal and judicial instrument, which was adopted by the Somali people after the growth of the Muslim faith in the region, was another system of justice and social order that arrived well before attempted European colonisation.

On a tangent, interesting things are happening with the Somali federal government now with respect to the telecommunications industry. Not only does Somalia now have its own top-level domain (.so), but fiber optic lines are slowly being rolled out in the capital.

I find it ironic to think that in Australia the government is singing the praises of copper network lines (after repealing the NBN), yet war-torn, anarchic Somalia is pushing in the other direction. Somalia's and Africa's future really does look interesting.

3
gbog 2 days ago 3 replies      
There is this angle that says that natural laws are good, better than "artificial" laws. It seems trendy nowadays and is emphasised in the article.

But another angle, which seems to describe more closely the long-term evolution and progress of human societies, is that laws and ethics have been slowly built by human societies against the law of nature. The starkest way to express this is that in a natural environment, the weak and the disabled are left aside and die quickly, which we humans have decided to try hard to avoid.

So maybe a softer, more informal, "stateless" society like this Xeer could be valuable. But if it was, it would be because it would better protect us from the law of nature.

4
antirez 2 days ago 1 reply      
That may seem strange, but in Europe there are places where a similar "juridical" system was used too, namely in Sicily. It was common for Mafia bosses and other respected older people to act as a third party in order to judge disputes between people.
5
616c 2 days ago 3 replies      
I also find the name somewhat ironic.

Xeer clearly comes from خير, or khayr, which is Arabic for "good". It is good, but in the higher moralistic and religious sense in addition to the normal sense. So I wonder if it goes back to the original interaction with the Arabs, in the 7th century as noted or prior. The general idea, consensus-based law as I see it, seems similar in its basic principle to Ijma'[0] in Islam, or consensus-based formation of jurisprudence. There are varying views, but the idea is that Islamic law (despite outside views of it) is not controlled by one person but must be agreed upon by popular approval of jurisprudence scholars (of course this is loosely defined, but what can you do).

Xeer is definitely from the Arabic, as are many loanwords in Somali (as an Arabic speaker who sat in linguistics courses where Somali speakers presented, I could be wrong). So I am not sure where the "no foreign loanwords" comment in the Wikipedia article came from.

Then again, maybe I am just reading too much into this name/book cover.

[0] https://en.wikipedia.org/wiki/Ijma

6
johnzim 2 days ago 1 reply      
From a jurisprudential point of view it's interesting to see how it evolved - the law in England moved out of the church and Xeer appears to have been born out of the reigning power in Somalia (elders) and remained therein.

I'll take the English common law and equity any day of the week - flexible where it needs to be so it's capable of applying concepts of natural justice constrained by well established principle, while still providing vital certainty as to the law. This passage in the wikipedia article makes the legal scholar in me shiver:

"The lack of a central governing authority means that there is a slight variation in the interpretation of Xeer amongst different communities"

Dealing with conflict of laws without prejudicing parties in an international setting is hard enough: imagine having to pursue justice according to discrepancies between individual communities! Better have some cast-iron choice-of-law clauses in those trade agreements!

7
fiatjaf 2 days ago 1 reply      
For people interested in common law and the problems of the State law system, I recommend the articles on the topic by John Hasnas:

THE MYTH OF THE RULE OF LAW: http://faculty.msb.edu/hasnasj/GTWebSite/MythWeb.htm

HAYEK, THE COMMON LAW, AND FLUID DRIVE: http://faculty.msb.edu/hasnasj/GTWebSite/NYUFinal.pdf

8
neotrinity 2 days ago 1 reply      
How is it different from http://en.wikipedia.org/wiki/Local_self-government_in_India#... ??

which has been practised since way before the 7th century?

[The tone of the question is curiosity and not flame-bait, please]

9
disputin 2 days ago 0 replies      
"Court procedure..... In a murder case, the offender flees to a safe place including outside the country to avoid prosecution or execution "Qisaas." "
10
vacri 2 days ago 3 replies      
> Several scholars have noted that even though Xeer may be centuries old, it has the potential to serve as the legal system of a modern, well-functioning economy.

This makes no sense given the remainder of the article, as a modern, well-functioning economy (which Somalia certainly does not have) requires diversity. Xeer relies heavily on ingrained cultural norms, and is discriminatory against minorities and women. Impartiality is also in question, given that you are assigned a judge at birth.

It might work well in Somalia, but I can't see what is described as being translatable elsewhere. There are some elements that aren't Xeer-specific (like reducing focus on punitive measures), but as a whole, I can't see it working somewhere else that doesn't have the same social structure.

11
blueskin_ 2 days ago 2 replies      
>stateless society

Sounds to me like a nicer way of saying failed state, which is what Somalia is.

12
noiv 2 days ago 2 replies      
Very interesting. I wasn't aware of alternatives to the Western legal system that are hundreds of years old and actually widely accepted.
13
mcguire 2 days ago 1 reply      
"People who have migrated to locations far removed from their homes can also find themselves without adequate representation at Xeer proceedings."

That kinda sounds like a problem.

14
nighthawk24 2 days ago 0 replies      
Gram Panchayat in Indian villages often meet under trees too https://en.wikipedia.org/wiki/Gram_panchayat
15
anubiann00b 2 days ago 0 replies      
This won't work for large societies (unfortunately).
16
dr_faustus 2 days ago 0 replies      
And everybody knows: Somalia is paradise! You can even become a pirate! Arrrrrr!
24
Europe's top court: people have right to be forgotten on Internet reuters.com
226 points by kevcampb  2 days ago   205 comments top 21
1
buro9 2 days ago 3 replies      
I could and would argue that there are times in which a person should have the right to not be found.

An example scenario: Alice is a victim of a crime, reports the crime and Bob is arrested and goes on trial. Bob pleads not guilty and Alice participates in the trial as a witness. Bob is sentenced, the court record is made. The Daily News (fictional paper) reports on the court records of the day and has a reporter who attends the more interesting cases, and mentions Bob's sentence and gives some of Alice's statements as quotes.

In that scenario, the court record should always be a matter of public record, a statement of fact. The newspaper certainly has the right to access public record and to make a news story of the set of facts that are in the public record.

But here is where the problems start... Alice applies for a job and the employer Googles her name and comes across the news article. There are many types of crimes where the public has great difficulty accepting that a victim is a victim. For example, rape. It isn't too much of a stretch to say that the culture of victim blaming means that a matter of public record has just had the effect of defaming Alice.

Alice as a victim is never given the opportunity to move on with her life when every person that ever searches for her will find the story very quickly. She has been sentenced too by participating in the justice system, which is an open book.

The newspaper, just as in this case, will argue this is public record and cannot be silenced. Sure, I agree... but that doesn't mean that it's in the victim's interest that the information be extraordinarily easy to find.

And Google are a better place in which to attempt to stop the information being found, given that they (and only 1 or 2 other search engines) cover the vast majority of searches made about someone.

Alice certainly does have the right to make information that she didn't explicitly choose to make public and that can cause her harm not be found so easily, even when that information is a matter of fact and public record.

She has the right to not be found (by that method - Google).

PS: I know a girl experiencing almost exactly that scenario, who cannot get a news story off of the front page results for her name. This isn't even a stretch scenario. The local newspaper just hasn't bothered responding to requests.

3
babarock 2 days ago 4 replies      
A couple of questions pop to mind:

- Will that affect the work of archive.org and the wayback machine?

- Is it okay for a politician to "erase" something he/she said 10 years ago?

4
dasil003 2 days ago 4 replies      
I sincerely think it's a good thing for the courts to look out for individuals' rights, but they are overestimating the power of the law. A thing can't be removed from the Internet once published, and forcing Google to remove it from their index is at best a middling measure that may slightly limit the exposure of said material.

I wish the court would grant me the right to fly as well, but it's beyond their power. I guess they just need a few more decades for the judges to die off and for the new old men to have a better intuitive understanding of the way the digital world works.

5
jerguismi 2 days ago 2 replies      
One quite important fact is forgotten there: that publishing information is a basically irreversible action. Even if Google removed the information from their search engines, other search engines probably won't. And of course decentralized search engines are coming too, from which information can't be removed even in theory (for example, yacy.de).
6
hartator 2 days ago 1 reply      
Weirdly, I think it's more about politicians forgetting their past mistakes and their past actions than about the average citizen.

Taking France as an example, a lot of content (a good example would be an old racist video of our current prime minister, past corruption by the mayor of one of our major cities, stupid tweets...) is going to be censored and removed from the internet. And this is going to happen. Don't think for one minute that the first thousand "forgotten" requests will be for citizens and not for politicians.

I think that's one of the stupidest, most backward laws ever. Thanks for fucking up the internet.

7
Karunamon 2 days ago 0 replies      
I am not looking forward to how this will impact discussion forums like the one we're on. Someone wants to be forgotten; therefore we must remove all the posts they made and destroy the context for everyone who may come along afterwards?

Just ick. Ick ick ick. More ill-thought-out "feel good" legislation like the cookie law.

8
buro9 2 days ago 1 reply      
So how does one go about asking Google to remove a front page search result about yourself that you do not wish to exist?

Google are famed for having virtually no way of contacting them; does it require the individual to jump through hoops to do so?

And no, not thinking of myself... but wondering just whether there are mechanisms available already to those who will now seek to exercise their right.

9
stuki 2 days ago 0 replies      
I guess the takeaway is: Don't operate Big Data companies out of Europe..... Pack up your bags, apply for YC and move to SV instead...

All that harassing publicly famous entities will achieve is to make obtaining available information more difficult for regular people, while those with deeper pockets and better connections will simply pay niche providers for deeper searches and indexing.

From a privacy POV, you would WANT this kind of white-hat demonstration of where your privacy weak points are. That way, you are aware of them and can make accommodations, while third-party services can spring up to address the most widespread concerns. Rather than show up for a job interview and have the interviewer "know" something about you that you have no idea is available to them at all.

10
fixermark 2 days ago 0 replies      
"Dearest Max, my last request: Everything I leave behind me ... in the way of diaries, manuscripts, letters (my own and others'), sketches, and so on, [is] to be burned unread." ~Franz Kafka

... and I wonder how much of the work of a genius would have been lost forever if his wishes had been honored.

11
brador 2 days ago 3 replies      
Why didn't he ask the newspaper to remove his information?

Is Google to remove the search results (the link) or just their cache?

12
pekko 2 days ago 3 replies      
The decision rules that it would be Google's responsibility to filter search results, instead of the responsibility of the actual page to remove the private data. So you can still find the data if you know where to look; just don't use Google?
13
nissehulth 2 days ago 0 replies      
Not just a can of worms, more like a full barrel. Shouldn't the publisher of the data be the one you turn to in the first place? I hope there is more to this story than is being told by Reuters.
14
aquadrop 2 days ago 1 reply      
So where does this sensitive information start? If I write on my blog something like "Today I went to the zoo and saw John Doe talking to giraffes", will John Doe have the right to force me to delete this text?
15
justinpaulson 2 days ago 1 reply      
I am not sure how most countries in the EU handle the press, but without digging into this too much, it seems like this ruling greatly limits the freedom of the press. What if a scandal is uncovered regarding a political leader or someone closely related to them? Does that person have the "right" to kill the right that the free press has to go public with the information? I really don't think something like this would stand up in the US at all, but I'm unfamiliar with press laws in most of Europe.
16
aerophilic 2 days ago 1 reply      
Question: assuming for a moment that there is a right to be "forgotten", should that right be permanent? I would argue that while it is relevant during a person's lifetime, it would actually hurt the public good if we made it permanent. My thought process goes to, say, 100 years from now, when there may be researchers/family members who want to know more. Should they still be restricted well after I am dead? Thoughts?
17
krisgenre 2 days ago 0 replies      
The reason why most applications don't have an undo operation is that it is something that needs to be designed from the ground up. It's really too late for the Internet to have an undo.
18
ozim 2 days ago 1 reply      
"The company says forcing it to remove such data amounts to censorship."

Don't they see that personal censorship is something good, as opposed to government censorship?

19
cyberneticcook 2 days ago 2 replies      
The biggest issue is that we don't own our data. It's stored on Google, Facebook, Twitter, LinkedIn, etc. servers. It should work the other way around: every individual should keep his own data and grant permissions to external services and other people to access it. Is there any project looking in this direction? How do we reverse this situation?
20
D9u 2 days ago 0 replies      
The NSA, etc, never forgets...
21
beejiu 2 days ago 5 replies      
And they wonder why so many Brits want to leave the EU.
25
Pervasive Monitoring Is an Attack tbray.org
225 points by kallus  2 days ago   27 comments top 6
1
discardorama 2 days ago 4 replies      
TFA says "PM is an attack ... and this is a consensus of the IETF". On the other hand, the IETF continues to give NSA employees leadership roles (like Kevin Igoe, co-chair of the Crypto Forum Research Group under the IRTF[1]).

So: is it a consensus or not? Does Mr. Igoe consider PM an "attack", even though his own employer does it?

I'm having trouble reconciling the two.

[1] http://article.gmane.org/gmane.ietf.irtf.cfrg/2337

2
CapitalistCartr 2 days ago 1 reply      
" . . . the IETF is putting this stake in the ground in May of 2014."This isn't much of a stake in the ground, but its a start.

So far, the disclosures have involved the NSA and GCHQ: intercepting hardware and modifying it; strong-arming companies into "cooperating"; pushing weaknesses known only to them into standards; and spending tens of billions to copy most of the Internet and have server farms sort it.

None of that seems amenable to this RFC.

3
ryanobjc 2 days ago 2 replies      
This is a good formal step.

The time it took from 'common knowledge' to a formal proposal makes me a little worried. If the IETF isn't really a "council of wise folks" then in the long term, doesn't their effectiveness get eroded?

4
leeoniya 2 days ago 0 replies      
adapting a quote from Office Space, every RFC shall ask, "Is this good for the internet?"
5
nknighthb 2 days ago 2 replies      
> if your application doesn't support privacy, that's probably a bug in your application.

Amateur radio is explicitly not for traffic that needs to remain private. It exists for limited purposes not including routine communication that can be served by other means (e.g. a phone or ordinary internet connection). It is chiefly for education and research/experimentation in radio. It is not for general personal communications or commercial use.

The applicable rule in the US[1] says:

"(a) No amateur station shall transmit: [...] messages encoded for the purpose of obscuring their meaning"

This serves to ensure the amateur radio service is not used in violation of its rules and purpose.

The rule has exceptions elsewhere in the rules. For example, remote control of satellites and model aircraft. And FCC rules as a whole pretty much go out the window when transmissions are for the purpose of protecting the immediate safety of life or property.

The rules are also susceptible of a particular interpretation: You can use encryption, provided the algorithm is documented, and you keep a record of the keys used. This has been used to block non-amateur access to WiFi access points operating within the ordinary WiFi band, but under Part 97 rules (e.g. non-FCC-approved equipment, or higher power than allowed for unlicensed users).

The rule also does not in any way prevent use of authentication and message integrity mechanisms, e.g. HMAC, because they are not intended to obscure the meaning of the message, merely authenticate it.
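
To make that concrete, here is a minimal sketch in Go using the standard library's crypto/hmac (the callsign traffic and the key are made up for illustration): the message goes out perfectly readable, and the tag only lets a keyed receiver verify who sent it and that it wasn't altered.

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    func main() {
        key := []byte("pre-shared-key")                     // hypothetical
        msg := []byte("N0CALL DE W1AW: telemetry frame 42") // sent in the clear

        // The message itself is not obscured; the tag only authenticates it.
        mac := hmac.New(sha256.New, key)
        mac.Write(msg)
        tag := mac.Sum(nil)
        fmt.Println("message:", string(msg))
        fmt.Println("tag:    ", hex.EncodeToString(tag))

        // A receiver with the same key recomputes and compares the tag;
        // everyone else can still read the message.
        verify := hmac.New(sha256.New, key)
        verify.Write(msg)
        fmt.Println("authentic:", hmac.Equal(tag, verify.Sum(nil)))
    }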

If you need private communication, there are other avenues available than the amateur radio service. And if you want greater freedom for unlicensed use of the airwaves than now exists, you'll have my support in principle (there are real problems with a free-for-all, but there are myriad ways FCC rules and spectrum allocation practices could be greatly improved in this regard). But this rule is not a bug, it is a deliberate feature of the amateur radio service.

[1] http://www.gpo.gov/fdsys/pkg/CFR-2013-title47-vol5/xml/CFR-2...

6
skion 1 day ago 0 replies      
Curious if Analytics and Real User Monitoring (RUM) companies feel addressed by this memo, and whether the IETF intended that or not.
26
Articles Every Programmer Should Read javarevisited.blogspot.com
222 points by javinpaul  1 day ago   66 comments top 14
1
patrickmay 1 day ago 6 replies      
The list is heavily weighted to implementation details. I'd include a few essays like "The Rise of Worse is Better" (http://dreamsongs.com/RiseOfWorseIsBetter.html) to encourage programmers to take a step back and think about design and architecture more often.
2
facorreia 1 day ago 2 replies      
Good list. I would add "Falsehoods Programmers Believe About Names"[1].

[1] http://www.kalzumeus.com/2010/06/17/falsehoods-programmers-b...

3
ufo 1 day ago 1 reply      
GOTO Considered Harmful

http://www.u.arizona.edu/~rubinson/copyright_violations/Go_T...

If you actually read the letter you can see that it also applies to modern programming and not just to "goto". It's truly a timeless article that everyone should read (and it's really short!)

4
lmedinas 1 day ago 1 reply      
This article "How to write shared libraries"[1] also from Ulrich Drepper should be added to the list. At least for C/C++ Programmers.

1 - http://www.akkadia.org/drepper/dsohowto.pdf

5
bitlord_219 1 day ago 3 replies      
"What every programmer should know about SEO"

Yeah, no.

6
mwnz 1 day ago 1 reply      
10 Articles every web programmer should read. Personally, SEO has zero bearing on my work. Java is only relevant to a subset of developers.
7
mabbo 1 day ago 2 replies      
"Numbers every programmer should know"- Probably the most interesting part of that article is the slider. As you move it up and down, you can see how all the different things get faster over time.

... Except for the final one, "Packet Roundtrip". Networks have reached a physical limit of the universe, the speed of light.

http://www.eecs.berkeley.edu/~rcs/research/interactive_laten...
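
That floor is easy to sanity-check with back-of-the-envelope numbers (the distance and the speed of light in fiber below are rough assumptions, not measurements):

    package main

    import "fmt"

    func main() {
        // Light in optical fiber travels at roughly 2/3 of c.
        const metersPerSec = 2.0e8 // m/s, approximate
        const oneWayMeters = 5.6e6 // ~NYC to London, great-circle distance

        rttMs := 2 * oneWayMeters / metersPerSec * 1000
        fmt.Printf("theoretical minimum RTT: %.0f ms\n", rttMs) // ~56 ms
        // Real round trips are worse (routing, queueing, repeaters), and
        // no hardware improvement can push below this physical floor.
    }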

8
curiousDog 1 day ago 0 replies      
There should also be an article about how every programmer should write without too many grammatical errors. Some of the emails I used to receive were borderline incomprehensible and I'd have to go battle it out in person.
9
ggchappell 1 day ago 1 reply      
Perhaps we can back up a bit. The first reading I used to assign to my students back when I was teaching lower-level classes (I'm a C.S. prof.) is "On Following Rules" by Kirit Saelensminde.[1]

It's a quick, easy read. It makes a point that is important and not hard to understand, but that is often missed. And it provides a framework for dealing with the concepts you get from all those other articles you're supposed to read.

[1] http://www.kirit.com/On%20following%20rules

10
laxatives 1 day ago 0 replies      
Can anyone provide some good reads on the replaying leap second concept? Has anyone ever taken advantage of this? How do projects that rely on subsecond accuracy and synchronization resolve the issue?

edit: for anyone interested, there has never been a negative leap second (it's always been something like 23:59:59, 23:59:60, 00:00:00). See http://en.wikipedia.org/wiki/Network_Time_Protocol#Security_...

edit2: however, there are negative leap seconds in UNIX time. I wonder if there's a vulnerability here? See http://en.wikipedia.org/wiki/Unix_time#Leap_seconds
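
To make the bookkeeping concrete: Unix time simply doesn't count leap seconds, so every UTC day is exactly 86400 "Unix seconds" and the inserted 23:59:60 gets no timestamp of its own. A small sketch (illustrative only; it shows the arithmetic, not any particular kernel's behavior):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // A leap second was inserted at the end of 2012-06-30:
        // 23:59:59, 23:59:60, 00:00:00.
        before := time.Date(2012, 6, 30, 23, 59, 59, 0, time.UTC)
        after := time.Date(2012, 7, 1, 0, 0, 0, 0, time.UTC)

        // Two real (SI) seconds elapsed between these instants, but Unix
        // time only advances by one -- a second effectively repeats.
        fmt.Println(after.Unix() - before.Unix()) // prints 1
    }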

11
krazydad 1 day ago 1 reply      
Ken Thompson's Turing Award Lecture: "Reflections on Trusting Trust"

http://cm.bell-labs.com/who/ken/trust.html

12
sliverstorm 1 day ago 0 replies      
To really make something of the knowledge of memory & latency, an 11th article to get you thinking about how your program interacts with the cache:

http://research.scee.net/files/presentations/gcapaustralia09...

13
kawliga 1 day ago 0 replies      
SEO ???

hahahahaaha

14
platz 1 day ago 0 replies      
For every article you tell me I must read, I'll be happy to demand you read 10 articles of my choosing in return.
27
An Introduction to Programming in Go golang-book.com
220 points by dwevlo  1 day ago   70 comments top 16
1
stcredzero 1 day ago 2 replies      
C was wildly successful because the world needed a language that had some of the mechanisms of high level languages, which allowed low level control and compiled to fast machine code. C hit an in-between niche at the right time and place.

As far as I can see, Go is doing the same. We need a language that has some mechanisms of higher level languages, like CSP style concurrency and GC, but which still allows for low level control of things like memory layout.

I ported a partly done multiplayer game server written in Clojure to Go, and I find that I'm more productive in Go. The tooling and server monitoring are more developed for the JVM platform, but coding in Go is more fun because everything is immediately responsive, and nothing "falls over" like it can with nREPL/bREPL with Clojure/ClojureScript. Always paying that several-second environment startup delay, or the additional management required to avoid it, does wear on you in the long run.
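
For anyone who hasn't seen the CSP style mentioned above: the heart of it in Go is goroutines communicating over channels. A toy sketch (made up for illustration, not code from the server):

    package main

    import "fmt"

    func main() {
        updates := make(chan string)

        // A worker goroutine sends values into the channel...
        go func() {
            for _, player := range []string{"alice", "bob"} {
                updates <- "state synced for " + player
            }
            close(updates)
        }()

        // ...and the receiver blocks until each one arrives. The channel
        // itself is the synchronization; no locks are touched directly.
        for u := range updates {
            fmt.Println(u)
        }
    }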

2
jroes 1 day ago 1 reply      
Another resource I like is gobyexample: https://gobyexample.com/
3
krat0sprakhar 1 day ago 0 replies      
I've found the learning go[0] book to be an awesome concise introduction to Golang.

[0] - http://archive.miek.nl/projects/learninggo/index.html

4
BuckRogers 1 day ago 0 replies      
Just finished this book this past weekend. I'm not impressed by any of the Go books on the market, but this one is suitable for a brief introduction to get you going.

After reading it and writing some trivial code, I find the advantages it has don't really help me with any problems I face being a one-man shop. I'd just be losing a lot of libraries, so I'll probably stick to using Python for 99% of the stuff I do.

5
tragic 1 day ago 1 reply      
Getting loads of text encoding oddities everywhere - is that just me?
6
cronopios 1 day ago 1 reply      
I just read this book last week!

It's an easy read, and it does a good job showing what Go is like. It piqued my interest, and yesterday I started reading 'Programming in Go: Creating Applications for the 21st Century' by Mark Summerfield. Does anybody know how it compares to 'The Go Programming Language Phrasebook' by David Chisnall?

7
optymizer 13 hours ago 0 replies      
"An Introduction to Programming using Go" better reflects the purpose and the tone of this book.
8
nkozyra 1 day ago 2 replies      
This site isn't new, is it? I just finished writing a book on Go and feel like I encountered this site a few times while researching - though not as a complete anthology.

If so, I've found this site helpful, but "introduction" is the proper term; it doesn't go particularly deep into anything.

10
VLM 1 day ago 1 reply      
Would I buy the author a beer at a con? Yes, so I dropped $3 at Amazon for the Kindle version. That's about what it's worth.

Problems:

1) Is it for noobs or for the guy learning his 10th language? It starts with "what is a file, what is a folder, what is a text editor", then leaps into a very matter-of-fact "and this is how you do recursion". So a noob is going to be totally lost after the first chapters, and an old-timer is going to be pretty bored with the "what is a folder" level stuff.

2) Strange text encoding errors on the site. ... Bullet point "Strings are" a-hat (as in a with top hat, not ahat) lowercase-epsilon c-concat-e "indexed" a-hat epsilon "starting at 0 not 1" ... current firefox if that matters (which it absolutely shouldn't)

Other than that, it's a pretty good intro-level book; like I wrote, "buy the author a beer" level of goodness.

11
BorisMelnik 1 day ago 2 replies      
Question - anyone have a resource of examples of sites that have been made with Go?
12
Numberwang 1 day ago 2 replies      
Would one be able to pick up Go as a first programming language with this?
13
samirahmed 1 day ago 0 replies      
It is awesome that the book is free online, but I find it really tough to read with no syntax highlighting...
14
crncosta 1 day ago 0 replies      
Still useful, but outdated.
15
worklogin 1 day ago 3 replies      
What sort of tools can I create in Go? Say I'm someone who programmed six years ago and since then has only a cursory exposure to Python, some Scala and Perl. What "real" applications, desktop, server, web or otherwise, should I attempt to build?

I've heard good things about Golang, but then I hear things like its lack of generics makes it useless for a lot of cases.

16
sanxiyn 1 day ago 0 replies      
I was briefly excited because I thought I would discover the book containing wisdom of how to program a competitive Go AI engine. (Go is an ancient board game.) Oh well.

By the way, is there such a book?

28
What 4chan thinks of HN rbt.asia
215 points by Floens  1 day ago   80 comments top 42
1
peterkelly 1 day ago 4 replies      
> How 2048 will make you a better programmer

They may make fun of this, but implementing another 2048 clone has helped me become a bootstrapped digital nomad with seed funding, as well as helping me to learn Ruby, node.js, mongodb, AngularJS (including Providers and Factories), while simultaneously embracing JSON to double my sales by 2x within just a couple of days at a recent hackathon.

2
disbelief 1 day ago 0 replies      
Hilarious:

> How I got my girlfriend pregnant using JavaScript.

> How we became ramen profitable by pivoting our cat consulting business to dogs.

> Mildly interesting topic (wikipedia.org)

3
joshbaptiste 1 day ago 1 reply      
ABC in # lines of JavaScript (400 comments, 1000 points)

Actually interesting topic (3 comments, 4 points)

Hilarious..

4
nostrademons 1 day ago 2 replies      
4chan: smart people pretending to be dumb.
5
wkdown 1 day ago 0 replies      
George RR Martin uses DOS

DOS still used by George RR Martin

George RR Martin talks about authoring in DOS

DOS to be killed off in the next episode of Game of Thrones

6
cgh 1 day ago 0 replies      
"Show HN: HackerNews reimplementation in one line of x86 ASM"

This one cracked me up.

I've noticed a welcome relief from JS posts lately, not to mention politics. The new moderation system is working well from my perspective.

7
ColinCochrane 1 day ago 0 replies      
I got a good laugh out of some of those.

Things I have learned from coding for a month

Ten ways to become a better programmer (by the guy who's been coding for a month)

8
cliveowen 1 day ago 1 reply      
It's amazing how accurate it is.
9
paulannesley 1 day ago 2 replies      
> Show HN: My full-stack web framework written entirely in CSS3
10
TorKlingberg 1 day ago 1 reply      
I think HN, Reddit and 4chan have mostly the same audience. People just behave differently on each site.
11
deadfall 1 day ago 2 replies      
"What HN thinks about what 4chan thinks about HN"
12
awjr 1 day ago 0 replies      
> How to touch yourself at night without JavaScript knowing it.

:O

13
dsjoerg 1 day ago 0 replies      
How I ported the control software of a nuclear reactor to reactive Javascript
14
ChrisNorstrom 1 day ago 1 reply      
Brutal. Hilarious. Some good points. Some dumb ones (they don't seem to understand how important failure is and why it's good to share your failures). Their mockery is actually quite diverse.

===SV Celebrity Worship===

"Why Elon Musk is the most perfect human being alive today"

===Political===

"A Heart Divided: How a gay JavaScript programmer feels about Brendan Eich"

"Please don't mention Condoleezza Rice. Our autism is above politics."

"10 ways we unknowingly oppress female programmers and enforce patriarchy"

===Juvenile Cliq===

"Reasons why this <obscure new non-stable language that can't even compile yet> will replace C as a system language and why you SHOULD use it if you don't want to be left in the dust."

"Why we switched to [obscure framework] and you should too."

===HN flaws===

"<userX>, you seem to be hellbanned for no reason."

===Feel Good Superior Heroism===

"Check out this TED talk about teaching node.js to kids in Africa."

===Love of Javascript===

"Have you heard about our Lord and Saviour JavaScript?"

"How to touch yourself at night without JavaScript knowing it."

"Breaking news! POP3 and IMAP written in Javascript!"

"How I ported the control software of a nuclear reactor to reactive Javascript"

"Linux kernel ported to JavaScript running in the browser. See how we did it."

"How I made a filesystem in javascript."

"How I got my girlfriend pregnant using JavaScript."

"How to avoid getting HIV using JavaScript."

"The Linux kernel doesn't have enough javascript."

"I recommended my boss to rewrite the local powergrid infrastructure to javascript and how I lost my job."

15
VeejayRampay 1 day ago 1 reply      
Uncanny how "Lapis - A Lua, Moonscript Framework built on OpenResty" fits so well in that list.
16
kachnuv_ocasek 1 day ago 1 reply      
<completely irrelevant rant that will get upvoted to the top>

OK, seriously, it should be /g/ instead of 4chan.

17
Orangeair 1 day ago 0 replies      
Have you heard about our Lord and Saviour JavaScript?
18
nateabele 1 day ago 1 reply      
Heh. Mostly, can't argue, barring two things:

> How I got my girlfriend pregnant using JavaScript

I'm pretty sure this is directly opposed to reality.

> How 2048 will make you a better programmer

Incidentally, this actually has been experience (to be clear, I mean playing it, not coding it).

19
justuseapen 1 day ago 1 reply      
Introducing js.js: a JIT compiler from JavaScript to JavaScript

Trying so hard not to lol at work...

20
dang 1 day ago 0 replies      
21
adamsrog 1 day ago 3 replies      
<passive aggressive argument>
22
anon4 1 day ago 0 replies      
Don't forget:

What 4chan thinks of what HN thinks of what 4chan thinks of HN: https://boards.4chan.org/g/thread/41920845#p41922057 (thread currently in progress)

23
falcolas 1 day ago 0 replies      
190 points, 74 comments, flagged by a minority of users into oblivion.
24
COil 1 day ago 0 replies      
> How I ported the control software of a nuclear reactor to reactive Javascript

> How To Make Your Flat UI Flatter

> I decided to re-implement Javascript in Javascript. It failed. Here is my story.

Made me laugh!

25
DrinkWater 18 hours ago 0 replies      
It is pretty accurate, however HN is still the best source for all the topics i am interested in.

Just skip the crap posts, and you're good.

26
vezzy-fnord 1 day ago 1 reply      
Previously on "What 4chan knows-uh, thinks of HN": https://news.ycombinator.com/item?id=6747373
27
taternuts 1 day ago 0 replies      
"fuck this shit we have this every month and some retard posts it on HN farming 300 points"
28
rafaelvasco 1 day ago 0 replies      
Omg Lol :

Why flat design just doesn't stack up

29
cheetahtech 1 day ago 0 replies      
I would add two more things.

I'm highly moderated to the point of killing users' voices.

Politically selfish when it comes to talking about YCombinator. (Politics is alright, as long as YCombinator is doing it)

30
snake_plissken 1 day ago 0 replies      
I chortled at very many of these. So good.
31
pikachu_is_cool 1 day ago 0 replies      
This post and the comments here completely violate the HN guidelines. This shouldn't have been upvoted, let alone be the top post. Thank you mods, for deleting this.
32
api 1 day ago 0 replies      
So accurate. Such parody. Wow.
33
peterwwillis 1 day ago 0 replies      
> How I built an automatic Raspberry Pi garage door on top of JavaScript

I thought this was hilarious. Until I realized it's a real thing. http://itsbrent.net/2013/03/hacking-my-garage-with-a-raspber...

I'm done with the internet.

34
justizin 1 day ago 0 replies      
this just made my day.
35
birdsoffish 1 day ago 0 replies      
> How to touch yourself at night without JavaScript knowing it.

Hehe
36
LazerBear 1 day ago 0 replies      
This is hilarious and spot on. Still love HN though.
37
estrabd 1 day ago 0 replies      
It is funny because it is true.
38
pearjuice 1 day ago 1 reply      
Sadly, most of the things on there are true. We can try to deny it all we want, but they are. The power of anonymity is that you get to voice honest opinions without tying them to your identity and/or feeling responsible for them. Of course, this can be argued otherwise too, by citing some (bad) comments from that thread as examples, but for the most part, what you see there are honest comments.
39
4channer 1 day ago 0 replies      
tfw I came here from that post on /g/ to comment about the thread I came from. Meta af
40
bitJericho 1 day ago 0 replies      
Here's a well reasoned statement that will be downvoted because it's unpopular.
41
opendais 1 day ago 0 replies      
LOL. :) That is adorable.

Although, tbh, catching the right trend floats a lot of startups, which is why that kind of content matters...

I mean, that is really how GitHub 'won' the whole git hosting thing, isn't it?

42
stcredzero 1 day ago 2 replies      
On Javascript:

No, it's awful. But it's the only client side scripting language out there, and Web guys love to pretend they're just as skilled as systems guys. They take concepts that systems programmers discovered 30 years ago, put a fancy name on it and do it worse.

Funny, but that's exactly what people were saying about Java programmers in the 90's. Now, the Java ecosystem is full of useful libraries and well optimized systems. It's become the default language for implementing big business systems. I wonder if Javascript won't simply follow the same course?

EDIT: I seem to have struck a nerve. Please read this at face value and not as some kind of snark. What I say is factually true. There are plenty of horrible things written in Java, but eventually, that becomes true of any language, and there are tons of great things written in it. It's no accident that HFT is written in Java nowadays. Also no accident that Clojure and Scala are written on top of it.

29
MIT's Scratch Team releases Scratch 2.0 editor and player as open source github.com
215 points by speakvisually  2 days ago   61 comments top 23
1
gw 2 days ago 1 reply      
I've taught a few kids using Scratch and it works quite well. I'm happy to hear they're working on an HTML5 version, and hopefully it will be possible to install it offline like the current Adobe AIR version. I teach with a few netbooks running Ubuntu and can't always rely on having internet, so I had to install the last version of AIR that supported Linux to get it working.

There is also an unaffiliated app for Android called Pocket Code that is clearly inspired by Scratch. It works nicely on Nexus 7 tablets, and touch screens are clearly more natural for kids, but it is buggy and more limited than Scratch so I had to stop using it. Hopefully it will improve, or the Scratch team will provide their own mobile port (not a simple task, of course).

2
raimondious 2 days ago 1 reply      
We are hiring! http://scratch.mit.edu/jobs/

Also, Scratch Day is May 17th: http://day.scratch.mit.edu/

3
barbs 2 days ago 0 replies      
Scratch is a multimedia authoring tool that can be used by students, scholars, teachers, and parents for a range of educational and entertainment constructivist purposes from math and science projects, including simulations and visualizations of experiments, recording lectures with animated presentations, to social sciences animated stories, and interactive art and music. Simple games may be made with it, as well.

http://en.wikipedia.org/wiki/Scratch_(programming_language)

4
JoshTriplett 2 days ago 1 reply      
This (along with the existence of the HTML5 version linked elsewhere in the comments, https://github.com/LLK/scratch-html5 ) is great news! I've always hesitated to point anyone at Scratch due to its non-open license. It's great to see that problem solved.
5
droob 2 days ago 0 replies      
Neat! It would be nice if README.md gave a little introduction to the project.
6
davidw 2 days ago 0 replies      
Scratch is really cool - my 6 year old daughter enjoys it a lot even though she doesn't grasp all of it yet. But I think it's a good start in that it introduces some of the concepts behind making a computer do things, and making things with computers. She has fun with it too; it's pleasant, which should help develop a positive attitude.
7
davexunit 2 days ago 4 replies      
It's unfortunate that Scratch is built upon the Flash platform. Though Scratch has been released under the GPL, it requires nonfree software in order to run. Scratch looks interesting, but I cannot recommend its use until this problem is fixed.
8
300bps 2 days ago 2 replies      
I played with Scratch for some time with my 9 year old son. They've done an excellent job with it. We ended up using Construct 2, only because they've done an even better job: it exports to HTML5 and is playable on all kinds of devices, including iPad.
9
jimmar 2 days ago 2 replies      
Interesting project! Just a note--I got to step 3 of 13 on the Scratch tutorial and the language switched to Dutch or something ("3 Dansen maarVoeg een nieuw NEEM STAPPEN-blok toe. Klik in het blok en typ een min-teken.")
10
tijs 2 days ago 1 reply      
Hardly 'open' but if you're looking for something similar that works nicely on an iPad Hopscotch is also worth a look https://www.gethopscotch.com

Bit more limited than Scratch but the new editing interface they just released works really well.

11
nighthawk24 2 days ago 2 replies      
Catroid is another cool related project https://github.com/Catrobat/Catroid
12
danielweber 2 days ago 2 replies      
ISTR Scratch being available as a standalone executable, and then changing into a web app. The standalone tool made it easy to let my kid play around with it while not having the whole internet there to distract him. Is the standalone app still available?
13
codewiz 2 days ago 0 replies      
Turtle Art (AKA Turtle Blocks) is another LOGO-derived educational language with aims similar to Scratch.

Turtle Art is free software and its block language seems more powerful and orthogonal. It can be extended with inline Python expressions or by loading Python scripts. I think it's a great way to introduce young children to the basic concepts of programming with a smooth transition to a mainstream language.

While Turtle Art is bundled with the OLPC-derived Sugar learning environment, it also works on a regular Linux desktop.

Get it here: http://wiki.sugarlabs.org/go/Activities/Turtle_Art

14
Patrick_Devine 2 days ago 1 reply      
I absolutely love Scratch. I get my six year old daughter to make all of the artwork and sound effects, and I end up doing all the code. I've been really looking forward to Scratch Jr, just to see if she can do more of the coding.

One thing I'd love to see is an atan2 function in the math routines. Right now, if you want to do anything a little bit tricky, you end up having to implement your own.
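
For anyone else rolling their own: atan2 is just plain atan plus explicit quadrant handling. Here is the logic sketched in Go (Go's math package works in radians; Scratch's atan block uses degrees, so the pi terms become 180):

    package main

    import (
        "fmt"
        "math"
    )

    // atan2 built from plain atan plus quadrant fix-ups.
    func atan2(y, x float64) float64 {
        switch {
        case x > 0:
            return math.Atan(y / x)
        case x < 0 && y >= 0:
            return math.Atan(y/x) + math.Pi
        case x < 0 && y < 0:
            return math.Atan(y/x) - math.Pi
        case y > 0: // x == 0, straight up
            return math.Pi / 2
        case y < 0: // x == 0, straight down
            return -math.Pi / 2
        }
        return 0 // x == 0 && y == 0: undefined; 0 by convention
    }

    func main() {
        fmt.Println(atan2(1, -1))      // ~2.356 (3*pi/4)
        fmt.Println(math.Atan2(1, -1)) // matches the stdlib version
    }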

15
philippeback 2 days ago 0 replies      
There is also the Phratch version (Pharo based): http://www.phratch.com/

This is more for grown ups where one can integrate all kinds of cools things.

A video with Lego Mindstorms EV3 controls.

http://vimeo.com/82540943

And engineers have fun too: http://vimeo.com/89912838

16
peterb 2 days ago 0 replies      
I tried to get my son into Scratch, but he didn't like it. He loves hacking on Lua code in the Roblox game.
17
skierscott 2 days ago 0 replies      
Ah, Scratch. This is how I got started with programming (pretty late, my senior year in high school). I made a game[0] where you try and knock another player or a dumb AI off a disk.

[0]:http://scratch.mit.edu/projects/1108096/

18
RobotCaleb 2 days ago 2 replies      
Is there a linux distro that is centered around Scratch that I can PXE boot on my media center PC for my son to play with?

I'm interested in getting him into computing of sorts. I tried DoudouLinux, but it was mostly not good.

19
millettjon 2 days ago 2 replies      
Scratch is great for making animated comics. It appeals to girls as well as boys. I taught kids from 6 to 12 years last year. Kids older than that got bored with it. Highly recommended.
20
juliendorra 2 days ago 0 replies      
We love and use Scratch a lot at Coding Goûter events. Recently we also started to use the HTML-based SNAP! [1], which also has the advantage of letting older kids create their own custom blocks (functions). It's open source, and seems to be evolving fast. (It started as a clone of Scratch, but might diverge in the future, if I understand the conversations right.)

[1]: http://snap.berkeley.edu/
21
craigching 2 days ago 2 replies      
One of the older versions of Scratch had support for Lego WeDo. Does Scratch 2.0 have support for Lego WeDo? Would love to introduce it to my kids!
22
sceadu 19 hours ago 0 replies      
23
higherpurpose 2 days ago 1 reply      
I saw somewhere that App Inventor (now MIT's) accounts for 13 percent of language use for educational purposes, while Python is at 14 percent. They must be doing something right.

http://appinventor.mit.edu/

30
Introducing Moto E and Moto G with 4G LTE: Smart phones priced for all motorola-blog.blogspot.com
205 points by neduma  1 day ago   170 comments top 29
1
nicpottier 1 day ago 8 replies      
The big surprise for me was that they nixed the rear LED flash. Not that anybody would take pictures with that camera, but for a phone made for the developing world, not having a "flashlight" is a huge knock against it.

Seriously, flashlights are a big selling point even on simple dumb phones here in Rwanda, and I just can't imagine having a phone without one. When the power goes out a few times a week, having a source of light on you is a big boon. (And no, the screen doesn't count!)

Seems like a misstep to me.

2
rtpg 1 day ago 4 replies      
The Moto G is still pretty much the best value proposition out there, great phone at a great price.

In this crowd there's a lot of hate when a phone doesn't have an SD card slot (and I kinda wish it did), but the fact of the matter is you cannot get a better phone for the price (and even at double the price it might be hard).

3
nolok 1 day ago 3 replies      
The Moto G is still the best in its price range, and at $129 the Moto E may manage to pull off the same thing in the sub-$150 market.

Whatever Google changed in Motorola's approach, it's working, because these are the first two phones they've ever made that I want to buy and recommend (including a G for myself).

They say it sells very well, so I hope these things are sufficiently profitable for them; we will all benefit from great phones at a low price point.

(The higher-priced market may be harder to break into, though, since the Nexus brand is already there playing the same game.)

4
pjmlp 1 day ago 5 replies      
Got to love the way upgrades are "sold" to the customers:

"And with a guaranteed upgrade" == "The device will receive at least one software update to the current KitKat 4.4.2 operating system"

5
andyl 1 day ago 5 replies      
I hope all phones will adopt this feature: "Built-in FM radio"
6
ColinDabritz 1 day ago 1 reply      
Regarding the guaranteed upgrade: "2 The device will receive at least one software update to the current KitKat 4.4.2 operating system."

The wording there is confusing. I could read it as "it currently doesn't support 4.4.2, but it will", or "we'll upgrade to at least 4.4.3", or "we'll upgrade to at least 4.5".

I suspect they intend it to mean 4.5, but it is written vaguely, and that sort of promise sounds very much like marketing weasel words to avoid shipping anything more than minor system upgrades (which are INTENDED to not break functionality on devices).

They should be more explicit: why not just name a particular version number or later? Fear of the team changing the numbering scheme?

7
free 1 day ago 1 reply      
This is the first time I've seen a phone priced cheaper in India. It is available exclusively online in India at http://www.flipkart.com/motorola/motoe
8
enscr 1 day ago 0 replies      
The Moto E and Moto G are both very well designed, hot-selling items in developing countries. They offer so much more value at that price point than ANY of the competitors. Granted, you'll find pain points, but the fact is they are providing a quality user experience to the masses at a fraction of the cost.

I think a first-time budget user deserves a quality UI with a great touch experience to start with. (I'm looking at the sluggish Samsung et al. models at this price point with their crappy touch experience.)

9
serf 1 day ago 0 replies      
I know it's becoming less of an issue for others, but I wish that the current Motorola lineup had replaceable batteries.
10
fidotron 1 day ago 2 replies      
At least in Canada it remains incredibly difficult to get the G or X without a plan or contract. I can't help wondering what exactly makes Motorola so resistant to direct selling in the frozen north.

The single greatest thing about the Nexus 5 is how easy it is to buy the handset with absolutely no consideration of the carrier at all, and it's sad that Motorola hasn't learned this from greater Google before going off to Lenovo.

Doubly sad, as the products deserve a lot more attention. I do wonder if they're concerned about them being good enough to cannibalise vast swathes of the market.

11
whizzkid 1 day ago 1 reply      
I see that Motorola is trying to get to the position where Nokia is/was. I don't think it will be easy, but at least they've got the idea right.

"Moto E: Made to Last. Priced for All."

12
apricot13 1 day ago 0 replies      
They've been very clever with the timing on this! Bringing it out at a really affordable price just before all the new Nexus / Galaxy S5 mini / metal / iPhone 600 devices come out!

So people like me, who are (desperately) due an upgrade can buy this and sit on the fence until all the shiny new phones are available and then start a new contract. Genius!

It's a shame about the lack of flash and the non-removable battery, but I think this type of phone is made to be short-term. It's meant to be either an interim solution, as I mentioned above, or a taster to get people into the smartphone ecosystem before they upgrade to something better in a year or so when it starts slowing down.

13
ctb_mg 1 day ago 1 reply      
How does the Moto E compare in "horsepower" with phones we currently know? It lists a 1.4 GHz dual-core Snapdragon processor; isn't that what's in the US domestic Galaxy S3?
14
grymoire1 11 hours ago 0 replies      
"Goodbye Flip Phone" Not likely - as long as the carriers REQUIRE a data plan to buy a smart phone. Some people use a WiFi tablet and a flip phone to keep monthly costs to a minimum.
15
neves 1 day ago 0 replies      
I'm a happy Moto G user and I use its LED as a flashlight, but as someone with a big music collection, the external 32GB SD card is really a must.
16
rahimnathwani 1 day ago 2 replies      
Does anyone know why the maximum microSD capacity is listed as 32GB? 64GB cards work in my relatively old Huawei Android handset.
17
pling 1 day ago 5 replies      
I own a Moto G and am not totally happy with it. I replaced a Lumia 820 with it. The big problem for me is the camera, which is awful, I mean really bad. It's so bad I've started dragging my DSLR with me. The WiFi is also terribly unreliable: I'll be sitting opposite the router and it'll start "avoiding poor connections". This isn't environmental, as it happens everywhere. Exchange integration is ugly and painful too. I'm genuinely regretting the purchase.

I'm sure someone can produce a better handset for the price.

18
dan_bk 1 day ago 2 replies      
Who cares about yet another new phone.

What we need is 100% open-source software, hardware and firmware.

19
nellyspageli 1 day ago 0 replies      
This is great news for Android developers! Maybe one day Google will be able to deprecate Gingerbread! It was released more than three years ago, yet every Android app must still support it: manufacturers continued selling phones with it into 2014, so it still represents about 20% of users. Gingerbread is a real pain to support, and it bloats and slows all Android applications by forcing them to include a huge support library (~5MB).
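
For readers who haven't dealt with this, here's a hedged sketch (in Java; the class and method names are invented for the example, but android.os.Build is the real Android API) of the kind of runtime guard that keeping Gingerbread support forces throughout a codebase, on top of bundling that support library:

    import android.os.Build;

    public final class VersionGuards {
        private VersionGuards() {}

        // Gingerbread is API level 9/10; HONEYCOMB == 11. Any feature newer
        // than Android 2.3 needs a check like this plus a fallback code path,
        // and backported fallbacks are what the support library supplies.
        static boolean runningAtLeastHoneycomb() {
            return Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB;
        }
    }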
20
DonGateley 1 day ago 1 reply      
These Moto phones would be great vehicles for getting people to try Ubuntu Touch. Why are images only provided for the most expensive devices?
21
shapeshed 1 day ago 3 replies      
It seems you can't buy the phone outright in the UK and have to buy it through network providers. It would have been good to see an unlocked version purchasable directly from Motorola.
22
higherpurpose 1 day ago 0 replies      
The Moto E is a nice phone, and although I haven't tried the Moto G in real life, the E actually looks like it has slightly better build quality for some reason, and looks more compact, too (for the size). But for only a $50 difference, the Moto G seems the better choice. Maybe my expectations were unrealistic, but I actually thought they could manage to put this one at $99, especially once I saw a rumor that it would sell for $117 in India. The Moto G also launched in India about 17 percent higher than in the US ($210 there), so I logically deduced the Moto E would be $99 in the US.

http://indianexpress.com/article/technology/mobile-tabs/moto...

The Moto G had a GREAT price at $179 ($210-$240 elsewhere), which is why it became so popular in the first place. It doesn't quite look like the Moto E has the same kind of great price for what it offers. $99 would've been the great price that made everyone recommend it as the default choice for its price range.

If the rumor about the Indian price was right, and it won't actually be more like $150 there now, then perhaps they are trying to have a more "global" price that is more or less the same everywhere, even in the US; that way they'd make little profit on the global versions, but more profit in the US.

As for the specs, I'm a little disappointed it comes with a Cortex-A5 instead of an A7, but since it's clocked 200 MHz higher than the Cortex-A7 would be, maybe it's not too big of a problem, especially since they claim the general performance of the device is faster than a Galaxy S4 in many situations (like opening apps, which I think has more to do with their use of the F2FS file system, which ironically was made by Samsung, though they aren't using it themselves).

I also asked my little brother if he'd want one of these, and he asked me if it has a flash, and was disappointed to hear it does not. I think Motorola underestimated the importance of the flash for this type of phone. The screen, size, and internal storage+SD I'm fine with. I'm just hoping that whenever Google launches Android 5.0 (hopefully this year), it will be upgraded to it.

23
krisgenre 1 day ago 0 replies      
It's better than the Galaxy S2 that I bought for more than $550 just three years ago :(
24
jestinjoy1 1 day ago 3 replies      
Neither the Moto G nor the Moto E has a replaceable battery. I wonder whether this has any effect on their pricing!
25
jokoon 1 day ago 1 reply      
I'd still buy a smartphone without a camera. I wonder how much such a smartphone would cost.
26
JohnDoe365 1 day ago 0 replies      
Currently the Moto G has a US GSM option only.
27
peterwwillis 1 day ago 0 replies      
My Android 4.1 smartphone is $70 at Best Buy. It's a dual-core 1.4 GHz with 1GB of RAM. Half the cost of the Moto E.

Does the Moto E have slightly higher specs than my phone? Yes. But is it "shaking up" the smartphone world? Hell no. Feature-phone buyers will buy phones like mine, which are half the cost of the Moto E and are fully capable modern Android phones.

28
ommunist 1 day ago 2 replies      
Moto - 1 day on battery with moderate use. iPhone - 5 days on battery with moderate use.

Moto does not cost 5 times less than iPhone.

29
chrisbolt 1 day ago 3 replies      
I went to motorola.com, clicked "Learn More" about the Moto E, then clicked Buy Now, and the first "option" I was presented with was this:

http://cl.ly/image/3E3y0m083j2I

After that, I was shown options "US GSM" and "Global GSM", both priced identically. Why even ask me? All they did was add clicks between me and giving them money.
