hacker news with inline top comments    .. more ..    28 Nov 2012 News
How to set up a safe and secure Web server arstechnica.com
199 points by chinmoy  7 hours ago   85 comments top 20
JoeCortopassi 7 hours ago 1 reply      
For anyone that wants more resources like this, I've found articles in Linode's library to be very helpful: http://library.linode.com/lamp-guides
rlpb 2 hours ago 1 reply      
If you want a safe and secure Web server, use what your distribution gives you. Don't add third-party sources if you can avoid it, i.e. don't depend on features less than a year old. Something that new is hardly safe and secure anyway; it hasn't been around long enough for people to find problems with it.

Instead, go with what your distribution gives you. The people who put your favourite distribution together work on making the system safe and secure as a whole. People who don't think it is safe and secure file bugs and they get fixed. And you have one place to get all your updates in case fixes are needed.

If you start adding third-party sources, you're on your own in managing any implications of the way you've put it together. Just because each individual component is safe and secure doesn't mean the whole is. For example, Ubuntu adds hardening (AppArmor) for various server daemons, which you won't get if you just download Apache from the project website.

If you need a guide to put a system together yourself, then you aren't someone who can manage these implications yourself, and you're trusting the guide's author not to have made any mistakes. Are you really in a position to judge his competence?

Just use your distribution's standard web server and you'll get your safe and secure Web server in one command.
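On a Debian-family distribution, for instance, that one command is just the packaged web server (package names here assume Debian/Ubuntu naming; other distributions differ):

```shell
# the distribution's packaged web server, with security updates
# and AppArmor profiles maintained for you
sudo apt-get install apache2    # or: sudo apt-get install nginx
```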

dschiptsov 2 hours ago 7 replies      
Virtualization is not for production. Why have this useless layer, which pollutes your CPU caches even more, interferes with your I/O and complicates the memory model? What for?

Virtualization was built so server providers could make easy money, not to give server owners any performance advantage.

Virtualization is not for production. Production servers need less code, not more.

It is the same kind of mistake as the JVM - we need less code, integrated with the OS, not more "isolated" crapware which needs networking, AIO and really quick access to code in shared libraries.

And, of course, a setup without middle-ware (python-wsgi, etc) and several storage back-ends (redis, postgres) is meaningless.


Well, production is not about having a big server that is almost always 100% idle and can be partitioned (with KVM, not a third-party product) into a few semi-independent virtual servers that are 99% idle. That is a virtual, imaginary advantage.

On the other hand, your network card and your storage system cannot be partitioned efficiently, despite what the commercials say. And that VM migration is also nonsense. You are running, say, a MySQL instance. Can you migrate it without shutting it down and taking a snapshot of the FS? No. So what migration are you talking about? It is all about your data, not about having a copy of a disk image.

It is OK to partition development machines, or 100% idle machines - like almost all those Linode instances that get a couple of page requests a day. That is what it was made for, same as plain old Apache virtual hosting. But as soon as you need performance and low latency, all the middle-men must go away.

barefoot 5 hours ago 4 replies      
"...being locked to IIS as a Web server (or dependent on crippled Windows ports of better Web servers) means you'll be playing in the bush leagues. IIS is found running many huge and powerful websites in the world, but it's rarely selected in a vacuum..."

I sense a little bit of bias.

As a multiplatform developer I can think of a number of reasons why someone might opt to go the Windows Server route. ASP.NET MVC 4 is a first class framework and many prefer it over other popular alternatives on other platforms such as Django, Rails, and Cake. In addition, Visual Studio is arguably the best IDE available and publishing to an IIS server is dead simple.

As for cost, full versions of Visual Studio and Windows Server can both be obtained for free through the DreamSpark program for college students and through the similar BizSpark program for startups and small businesses.

jiggy2011 7 hours ago 1 reply      
It's also a good idea to install (and configure) at least some basic IDS like Tripwire. You should probably have it run checks from a cron job, as well as running chkrootkit.
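A sketch of what that cron setup might look like (the schedule, binary paths and mail address are illustrative; adjust for your distribution):

```shell
# /etc/cron.d/ids-checks -- nightly integrity and rootkit checks
# paths and addresses below are assumptions, not defaults
0 3 * * *  root /usr/sbin/tripwire --check 2>&1 | mail -s "tripwire report" admin@example.com
30 3 * * * root /usr/sbin/chkrootkit -q 2>&1   | mail -s "chkrootkit report" admin@example.com
```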

Also a good idea to have your log files backed up somewhere else, where your server does not have sufficient access to delete (or modify) them.

Also, if you have multiple web apps running, chroot them if at all possible, so that if something does break out it can't (so easily) wreak havoc over your entire filesystem.

If you are using PHP, also bear in mind that a common default is for all sessions to be written to /tmp, which is world-readable and writable. So if others have access to your server they can steal or destroy sessions easily.
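One way to avoid the shared /tmp problem is a private session directory per site (the paths, site name and php.ini location here are assumptions; adapt to your setup):

```shell
# give each site its own session directory instead of world-writable /tmp
sudo mkdir -p /var/lib/php-sessions/mysite
sudo chown www-data:www-data /var/lib/php-sessions/mysite
sudo chmod 700 /var/lib/php-sessions/mysite
# then in that site's php.ini (or vhost/pool config):
#   session.save_path = "/var/lib/php-sessions/mysite"
```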

I also didn't see mention of an update strategy for security updates. You can use apticron to email you with which updates are available and which are important for security.

You can set updates to install automatically (I recommend security-only updates), but if you are more cautious you might want to test on a VM first. Either way, keep an eye on them! This is very important, especially if you are managing WordPress etc. through apt.
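On Debian/Ubuntu, that apticron-plus-automatic-security-updates setup is roughly (the email address is a placeholder):

```shell
# mail me about pending updates; auto-apply security fixes only
sudo apt-get install apticron unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
# /etc/apticron/apticron.conf (excerpt):
#   EMAIL="admin@example.com"
```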

And so many other things that I have probably forgotten.

Having some form of audit (that tripwire can provide) is vital in those "oh fuck" moments where something doesn't seem quite right and you start wondering if you have been pwned but have no real way of actually knowing.

edtechdev 1 hour ago 0 replies      
This guide doesn't cover important things like the firewall and blocking attackers (shorewall, fail2ban) and properly configuring mysql, php, etc.

If you have a small server, I'd really recommend checking out these scripts that assist with configuring and setting up a server very quickly:

I personally used a fork of lowendscript last year to set up some servers, but if I had to set up a new server today, I'd check out some of the other options at that link, like Minstall: https://github.com/maxexcloo/Minstall
But this Xeoncross lowendscript fork is still very active: https://github.com/Xeoncross/lowendscript

babarock 28 minutes ago 1 reply      
Am I the only one who thinks SSDs are useless here, since most of the time processing will be bottlenecked by network overhead?
buster 4 hours ago 1 reply      
Ah well, it's more of a "how I set up a small web server for fiddling around with stuff" than a professional article about security. Sorry, but the first page reads like "mhh, yeah, geeks hate MS, let's use the other choices" under the hood.
Why? Because it doesn't really make a technical case against MS.
Don't get me wrong, I would never ever use Windows Server, but if I wrote such an article I'd have to find at least a few technical pros and cons for the choices I present.
"Uhhh, the internet is more of a unixy thing" doesn't cut it.

This goes on with the choice of Ubuntu Server. Why? Is it an article about a "safe and secure web server" or about "how does my grandma set up a server"?
There are many more choices with better reliability and a proven track record, like FreeBSD, OpenBSD, Debian, RHEL/CentOS. The choice was made because it's easier to set up, and apparently the author was too lazy to _really_ do his homework.

In the end, I'd say if the article's title were "beginner's guide to setting up a server" I wouldn't complain.

zdw 5 hours ago 2 replies      
Better hardware would be an HP Microserver (which should win the contest for "worst URL ever"):


It has ECC RAM support, takes four 3.5" hard disks, and runs very quiet and cool.

quandrum 5 hours ago 2 replies      
I think it's time articles like these start suggesting an infrastructure as code product, like chef or puppet, to do the heavy lifting.

I feel like doing this stuff by hand should be considered insecure and outdated.
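As a taste of what infrastructure-as-code looks like, a minimal (toy, hypothetical) Puppet manifest applied locally - assuming Puppet is installed and this is nowhere near a production setup:

```shell
# write a two-resource manifest and apply it on this machine
cat > web.pp <<'EOF'
package { 'nginx': ensure => installed }
service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}
EOF
sudo puppet apply web.pp
```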

patrickg 1 hour ago 2 replies      
What kind of 'top' program is that, shown on page 3? (direct link to the image: http://cdn.arstechnica.net/wp-content/uploads/2012/11/webser...)
hardik988 5 hours ago 4 replies      
Pardon my stupidity, but how would one go about getting an IP address for a server installed at home? Is it a static IP address provided by my ISP, or something else ?
jimfuller 30 minutes ago 0 replies      
Nice introductory article (mostly on how to set up nginx); the title should reflect that instead of focusing on 'safe and secure' ... if you were going to store something valuable (let's say CC info) you would want to go far beyond what this article covers. #justsayin
jkuria 6 hours ago 4 replies      
Frankly, articles like these are a deterrent to all but the most techie of people. Why go through all this and shell out $270 when you can get an Amazon EC2 instance free for a year?
Charlesmigli 2 hours ago 0 replies      
The article covers the main parts of a webserver setup and gathers very interesting information otherwise scattered all over the Internet.
The nginx setup and config material is REALLY useful, all the more so given the poor quantity/quality of resources out there. Really useful to me. I wish I'd had a guide like this when I set up my own webserver.
I made a tl;dr version, but the nginx tricks remain the most interesting part for me: http://tldr.io/tldrs/50b5ccb711c0ea5051000f29
hakaaak 4 hours ago 1 reply      
leadholder 6 hours ago 0 replies      
I recently worked on a site run on such a server. I've set up my own servers before, and I think it can be fun, but this time it was the other guy's. I have to say it was pretty annoying because the little things that were not set up properly added up to a website that wouldn't deliver email, a shell environment with awful defaults...yuck. There was a lot of maintenance that was ignored because the guy just didn't have the time. Well, that's what commercial web hosts are for. It's amusing to think that some overburdened IT guys believe they're doing their clients a huge favor by running a vanilla web server in their network closet.
tzaman 2 hours ago 0 replies      
This article would make a really nice screencast that would be much more useful to newbie sysadmins.
Jailout2000 4 hours ago 0 replies      
What a bias. Using Debian and not even mentioning forks like Red Hat. Downvoted, never recommend.
Scrub values in JavaScript live github.com
56 points by nornagon  3 hours ago   19 comments top 7
spicyj 2 hours ago 1 reply      
Very cool. Just wanted to mention that we've had the same capability in the Khan Academy CS environment for a few months now:


Read John Resig's blog post for more details: http://news.ycombinator.com/item?id=4382076

pbiggar 2 hours ago 1 reply      
This is very impressive and cool. However, I really don't think that parsing source code and manipulating it is a very maintainable way to do this (and I say this as someone who writes Clojure for a living).

You could do exactly the same using very simple Knockout. It wouldn't be as funky, but it would be something you'd be happy to use in production.

stuaxo 1 hour ago 0 replies      
Does anyone know if there is a similar editor component for Gtk, QT or Wx ?

EDIT: Ideally for python.

Skalman 2 hours ago 1 reply      
It doesn't work in Firefox 19.
shakeel_mohamed 2 hours ago 1 reply      
Nice! Just a thought: values can go beyond the valid range. For example: dividing by zero, negative values for RGB, and negative heights/widths.
mcrider 2 hours ago 1 reply      
What would be the use cases for this, other than education?
ashcairo 2 hours ago 0 replies      
Very clever.
Jeff Bezos' Original Job Ad: It's 1994, You're a Unix Programmer. readwrite.com
28 points by capdiz  2 hours ago   11 comments top 6
ern 29 minutes ago 1 reply      
If you are looking for a list of significant USENET posts, including this one by Bezos, go to: http://www.google.com/googlegroups/archive_announce_20.html The list was created in 2001, when Google Groups reconstructed a huge archive of USENET postings.
macavity23 47 minutes ago 1 reply      
'extremely talented c/c++/unix developers'... 'able to build complex systems in about one third the time most competent people think possible'

This language conveys so much more competence than the standard 'seeking unix ninja rockstar' stuff that seems to be de rigueur these days.

Pwnguinz 42 minutes ago 1 reply      
Look at how similar this copy is to that of any other SV startup these days (minus some buzzwords like "Cloud", "Social", "Disruptive", etc). It goes to show that hiring (well) is hard. It's even harder to gauge whether or not a company is worth applying to from the job description alone.

Phrases like "You must have experience designing and building large and complex (yet maintainable) systems" are so vague and ambiguous that if I had honestly seen this post from some guy named "Bezos" in '94, I would have written it off as a joke.

At least in '94 they hadn't started using the word "disruptive" as if it's something you can do to a whole industry overnight. Thank goodness.

equity 9 minutes ago 0 replies      
Why does a link to this job opening get posted so often? I see it every 1-2 months here. It's not interesting enough to warrant reposting that frequently.
klon 1 hour ago 1 reply      
But who got the job?
agumonkey 1 hour ago 0 replies      
Alan Kay quoted for truth.
GNU grep is 10x faster than Mac grep jlebar.com
58 points by jlebar  5 hours ago   30 comments top 10
pooriaazimi 1 hour ago 4 replies      
I'm not trying to start a theological war about grep/ack here, I'm just mentioning it in case someone hasn't heard of 'ack' before and, like me, might find it extremely useful: http://betterthangrep.com

It's grep, just better. It highlights matches, shows which file and line each match was found on (in vivid colors, so you can tell them apart easily), ignores .git and .hg directories (among others that shouldn't be searched) by default, and lets you restrict a search to, for example, only `--cpp` or `--objc` or `--ruby` or `--text` files (with a flag, not a filename pattern) - plus many other neat features that I'm sure grep has too, but that you'd have to remember and memorize. ack has sensible defaults.

Why ack? http://betterthangrep.com/why-ack/

manpage: http://betterthangrep.com/documentation/

Oh, and ack is written in perl and doesn't require admin privileges to install.
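To illustrate the defaults: this approximates what `ack --cpp TODO` gives you using plain grep (ack itself may not be installed; the demo tree below is made up, and ack additionally colorizes output):

```shell
# set up a toy tree: one C++ file plus a .git dir that should be skipped
mkdir -p /tmp/ack_demo/.git
printf '// TODO: fix\n' > /tmp/ack_demo/a.cpp
printf 'TODO in git\n'  > /tmp/ack_demo/.git/x

# restrict to C++ sources and skip VCS dirs, as ack does by default
grep -rn --include='*.cpp' --exclude-dir=.git TODO /tmp/ack_demo
# → /tmp/ack_demo/a.cpp:1:// TODO: fix
```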

martinp 2 hours ago 3 replies      
'why GNU grep is fast' from the FreeBSD mailing list: http://lists.freebsd.org/pipermail/freebsd-current/2010-Augu...
X-Istence 2 hours ago 1 reply      
This may also be because the default grep, i.e. BSD grep, actually pays attention to your LANG environment variable. The default on OS X is en_US.UTF-8.

If the author were to set LANG to C, he would find that BSD grep suddenly speeds up tremendously.
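The locale override is easy to try per-invocation (the scratch file below is just for demonstration; how much it actually speeds things up depends on your grep build):

```shell
printf 'needle\nhay\nneedle\n' > /tmp/locale_demo.txt

# whatever multibyte locale your environment defaults to
grep -c needle /tmp/locale_demo.txt            # → 2

# byte-oriented C locale for this one command; same answer, often much faster
LC_ALL=C grep -c needle /tmp/locale_demo.txt   # → 2
```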

pixelbeat 30 minutes ago 2 replies      
I notice these Mac tools becoming a bit stale.
sort is derived from GNU sort, but from some ancient version.
I guess this might be due in part to these tools now being GPLv3?
mattparlane 2 hours ago 1 reply      
For those using homebrew:

    brew install https://raw.github.com/Homebrew/homebrew-dupes/master/grep.rb

eik3_de 33 minutes ago 0 replies      
You should tack "LC_CTYPE=C" in front of grep to get comparable results. A multibyte CTYPE can slow grep down by up to a factor of 30.
buster 2 hours ago 0 replies      
Obviously this means Linux is 10x faster than Mac, ha!

Seriously though, it's really amazing what performance they squeezed out of that tool. Always amazing to grep through gigabytes of files in a few seconds.

pooriaazimi 1 hour ago 0 replies      
I once tried a sed script on a couple million text files (60 GB in total) - they were web pages downloaded in some format (WARC? I don't remember what it was called) and I needed to change the formatting slightly (to feed them to Nutch) - Mac's default sed was literally 50 times slower than gsed (on the same machine). If I remember correctly, gsed finished the task in under two hours.
tehwalrus 2 hours ago 0 replies      
Just tried on Snow Leopard: not quite 10x, but nearly 2x faster, certainly. (Admittedly, my firefox checkout is mercurial, and hg locate seems to pass something invalid to xargs half way through, but I guess the first chunk of files is the same.)

Someone commented on the article that this might be caused by omitting the -F flag; I tried this, and -F makes both versions slightly faster again.

xtrahotsauce 1 hour ago 2 replies      
Does "git grep" use a system grep or does it implement grep on its own?
Show HN: Telescope, an open-source social news app built with Meteor telesc.pe
190 points by sgdesign  11 hours ago   72 comments top 23
DigitalSea 9 hours ago 2 replies      
Amazing. I've only dabbled with Meteor lightly, but this takes it to a whole new level. Telescope is so fast. You've done an exceptional job building this; my hat goes off to you, and thank you for open-sourcing it so others like myself can learn how to build a Meteor app.

My only concern is what about search engine visibility? If I were to build an app like this with Meteor would Google see the page content?

camwest 8 hours ago 1 reply      
Sorry, but is this meant to be marketing for Meteor or a legit project? Am I missing something, or is there no test suite? This doesn't bode well if this is the example of how to write a Meteor app.

Please correct me if I've overlooked something.

mrchess 8 hours ago 1 reply      
I ~want~ to use Meteor on a large project, but it's still only a preview release, and the FAQ states there will most likely be major API changes with each release, with no 1.0 release date in sight.
Jonovono 8 hours ago 3 replies      
Very cool. I am working on a music site right now in Meteor and really enjoy it! I will likely open source it after. Check it out thus far: http://tunes.meteor.com

I think there are lots of improvements that could be made to sites like Reddit/HN. But thanks a lot for open-sourcing this - it will be very helpful to me in learning Meteor!

edit: Just noticed you are planning on writing a book. Can't wait!

graue 9 hours ago 2 replies      
Very cool. I spotted a little layout bug: Resize your browser to be skinny, you'll see that the "Sign Up/Sign In" text starts to overlap the header text and become unreadable.
helloburin 7 hours ago 1 reply      
Thanks for putting this together! I think more designers should be exploring the possibilities of these real-time frameworks :) Today, it just seems like something a lot of devs get excited about, but I think it's when designers get a hold of something that the REALLY cool stuff starts showing up.

P.S. I'm a subscriber to your newsletter -- great stuff! Keep it up.

goldfeld 8 hours ago 1 reply      
I've been working with Telescope's code for just over a week, and Meteor since the beginning of the year, and I wanted to vouch for what a pleasure it has been. The Meteor team is putting together something truly grand, and these guys in turn are doing a terrific job putting forth a great open app written in great JavaScript, which is really a wonderful service to all those wanting to get into Meteor. I'll be the first in line to buy their book.

Congrats, I knew you'd make the front page, Sacha! And so good to know it's holding up, that changes my plans considerably, I'll plod on full steam on top of Telescope and Meteor.

For context, I forked this app for an MVP[1] showing Meteor's own roadmap, up for vote, in HN-clone format, which went live only a week ago[2].

[1] http://www.leakmap.com/
[2] http://news.ycombinator.com/item?id=4815271

tryggvib 30 minutes ago 0 replies      
This is awesome, and it's really great to be able to read the source (like others have said). But why is it called an "open source" app? It doesn't seem to have any software license in the GitHub repo.

That means it's just a "source available" app and normal copyright applies. I think this can be a bit misleading, but nonetheless congratulations!

pm 11 hours ago 0 replies      
I'm a fan of the telescopic mouseover effect, and the abundance of blue/purple. I was expecting it to be an aesthetic clone, but you guys have done a nice job.
codewright 9 hours ago 2 replies      
Really big fan, very snappy and fast.

I've been skeptical of Meteor, but this speaks well for it.

Still concerned about the security/data leakage/authentication methodology though.

debergalis 8 hours ago 1 reply      
[meteor dev] I had a chance to meet Sacha and Tom a few weeks back. They're two of the nicest fellows I've worked with -- absolutely delighted to have them as leaders in the Meteor community.
angryasian 9 hours ago 1 reply      
Maybe it's just me, but I'm failing to see what's so impressive about this. Couldn't it have been accomplished in any language? And after viewing the source code, I feel like it could have been done with a lot less code. While it's fast for a tech demo, I imagine a lot of other framework/language combinations could be made this fast as well.
sgrove 11 hours ago 1 reply      
From the example sites, it looks like Telescope is extremely configurable. I haven't looked at the code yet, but is it considered production quality? I wouldn't mind a small bookmarking tool like Sidebar for Clojure/Clojurescript projects.

Thanks for posting this, awesome to see examples of polished Meteor projects!

Update with link to demo, looks very nice: http://demo.telesc.pe/

dreamdu5t 7 hours ago 3 replies      
This is proof Meteor is not the future. Bloated, JS-heavy web pages that aren't RESTful or modular are not moving the web forward.

Aside from that, I don't see the utility of being able to see the stories change order, or comments come in. Especially once you have large numbers of comments and the stories reorder constantly.

I just don't see what is so interesting about Meteor - and I love node.js and JavaScript.

harpb 7 hours ago 1 reply      
sgdesign 9 hours ago 1 reply      
Oh, and by the way, the Telescope demo is hosted on a free Heroku instance. So far it seems to be holding up pretty well, with about 80 concurrent users.
frozenport 7 hours ago 0 replies      
Slow loading time.
knwang 8 hours ago 3 replies      
Really good job guys.. I have a couple questions:

1. How does the Meteor community look at Coffeescript?
2. Can the backend database be anything other than Mongo?

petercooper 9 hours ago 0 replies      
As far as I recall, "Show HN" posts have a different ranking algorithm (or, at least, a handicap) than regular link posts.
dlsym 4 hours ago 0 replies      
The demo-button-effect is awesome!
darylantony 9 hours ago 1 reply      
The source code is really great read through. Exemplary work.
tjholowaychuk 8 hours ago 0 replies      
Hmm, the big kinda-blank screen when you click "Load more", plus the initial load time, makes it seem a little unappealing; the design is nice though!
indspenceable 6 hours ago 0 replies      
If you click on the "Try demo" button it doesn't do anything.
The greatest Google Mail feature you may not be using jgc.org
368 points by jgrahamc  14 hours ago   154 comments top 59
kahawe 19 minutes ago 1 reply      
While we are talking about greatest features and GMail... it would be awfully nice if they would finally implement some sort of sub-string search, given it's almost 2013 and they are synonymous with searching and finding things on the internet.

It is incredibly frustrating that in order to be able to find an email I received years ago I have to figure out exactly how someone might have written a certain term in that mail. And I cannot see any excuse for not offering that feature; limit me to a few substring searches a day if resources are an issue and I don't expect fully-indexed lightning-fast results, a simple "grep" so to speak is just fine...but please let me search my mails properly!

dlss 14 hours ago 11 replies      
I do use this feature -- I often select random blocks of text while reading. This feature means I often (5-10% of the time) have to click discard and then reply again to get the behavior I want.

In related news nytimes.com used to have a similar feature where the definition of words would pop up when you selected them. It basically caused me to stop reading their site.

cousin_it 13 hours ago 2 replies      
The greatest gmail feature you're not using is probably "Undo Send", if you're not already using it. I have it set to the longest possible timeout of 30 secs, and would like an even higher value.
Matt_Cutts 12 hours ago 2 replies      
The greatest feature for me is Send and Archive: http://gmailblog.blogspot.com/2009/01/new-in-labs-send-archi...
mibbitier 13 hours ago 3 replies      
This is a horrible misfeature.

Often, I select bits of an email, to copy and paste elsewhere to check things. Then I hit reply, and wonder why only the currently selected text is there.

There should be a way to turn this "feature" off.

paulirish 3 hours ago 1 reply      
The greatest Google Mail feature you're not using is definitely Forward All.
Forward an entire thread of emails, in chronological order, somewhere.
philwelch 13 hours ago 0 replies      
This feature is also present in Apple Mail, which is fortunate because ever since Apple Mail caught up to Gmail's last remaining interface enhancement (having an "archive" button), the greatest Gmail feature I use is IMAP access.
acangiano 12 hours ago 4 replies      
In a related note, I hate how Google promotes top posting even when I select a specific quote.
AceJohnny2 14 hours ago 3 replies      
That's interesting. Thunderbird has been doing this for a while, and I love it.
davidw 13 hours ago 1 reply      
That's neat. I used to cut down emails with ctrl-K in my browser, which, being mapped to emacs keys, means "delete this line". However, in their brand-new email compose thing, Google has seen fit to override this, making that key combination mean "make a hyperlink" or some such BS, causing me much, much frustration. Yes, I know, you can still use the old way of doing things... but for how long, until they decide that it simply has to go and it's time for you to upgrade?

I guess RMS has a point.

rogerchucker 13 hours ago 1 reply      
Best Gmail feature is "Undo Send" - period. It has saved me embarrassment countless number of times.
upinsmoke 12 minutes ago 0 replies      
The Mail app on OS X has this for some time now.
lhnn 13 hours ago 4 replies      
So many people in this thread are saying they highlight sections of text as they read. I don't do that, and no-one I know does that.

What is the benefit? Is it intentional, or is it a habit with no real use?

languagehacker 9 hours ago 0 replies      
That's a feature in Mail.app, too. Does that mean I should write a blog post about it? Will it get to the front page of Hacker News? Will it get to the top? What if I say it's the best feature you're not using on Mail.app? I'd be wrong, but people would still go to my site, right?

I honestly think HN should be doing more about linkbait like this.

hackmiester 14 hours ago 2 replies      
Wow, great, a condescending title! Fact is, I am using this. It used to be in the settings, I think, which is how I found out about it. And as others have stated, it's in a lot of mail clients. Apple Mail supports this, as does the iOS mail app, too.

"I haven't heard of this feature" != "no one knows about this feature".

justindocanto 13 hours ago 1 reply      
I'm confused why this is on the top of the front page.
smalter 13 hours ago 0 replies      
A great Gmail feature that a lot of people don't use is "Send & Archive". It's very useful for keeping a clean inbox.
shill 9 hours ago 0 replies      
Wow, it's a slow news day on HN today.

Here is the email feature I want. If I paste a URL that looks like a post/article into a new message, I want the slug automagically split, title cased and copied into the subject line.

For example:


Would generate this subject:

    The Greatest Google Mail Feature You

Yes, this was a bad example because the title has been truncated, but I can fill in the rest manually. Most slugs contain the full title.
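That slug-to-subject transformation is simple enough to sketch in shell (the function name, the truncated example URL, and the extension-stripping step are my own invention, not an existing tool):

```shell
# turn a post URL's final path segment into a Title-Cased subject line
slug_to_subject() {
  basename "$1" \
    | sed 's/\.[^.]*$//' \
    | tr '-' ' ' \
    | awk '{ for (i = 1; i <= NF; i++) $i = toupper(substr($i,1,1)) substr($i,2); print }'
}

slug_to_subject "http://blog.example.com/2012/11/the-greatest-google-mail-feature-you"
# → The Greatest Google Mail Feature You
```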

jedbrown 6 hours ago 0 replies      
I've been using this feature for ages, but it got much worse about six months ago when the last message in a thread became automatically "focused". Now when I scroll up in a thread and select some text, the message I'm selecting text from doesn't automatically get focus. I have to click one extra time to get focus on the message I'm selecting. Even after half a year, I still forget the extra click in about 20 emails per day, making the reply go to the last message in the thread, with the entire thing quoted. To recover, I have to discard my new message, scroll back to the thing I wanted to reply to, and repeat the process. I filed a bug report in the first week, but no response and the bug remains.
kirpekar 14 hours ago 0 replies      
That's the one feature I hate.

I usually read through emails highlighting (selecting) the important parts with the mouse. So when I hit "R", Gmail quotes only my last selection. Discard, unselect, hit R again.

cloudkj 12 hours ago 0 replies      
I just realized that this feature also exists in Outlook, which explains why many of my replies have strange quotes. I often highlight a user id or URL to copy for further investigation, then hit reply only to be confused by the condensed quote in the reply window. Now it makes sense :)
Brajeshwar 7 hours ago 0 replies      
So does Apple Mail. So, when I used that feature in Gmail, I wasn't really surprised and actually expected it to be there.

Similarly, BufferApp posts with the selected text instead of the title.

This is indeed a good UX feature and people should use it where it makes sense - select text and put it in context with the next action.

carbocation 4 hours ago 1 reply      
jgrahamc - Offtopic, but usethesource.com is down.[1][2] Since it's still being linked from your blog, I assume this is unintentional and possibly went unnoticed.

[1] http://www.usethesource.com

[2] http://www.downforeveryoneorjustme.com/www.usethesource.com

tete 14 hours ago 1 reply      
Most mail clients seem to do this (maybe not so many web-based ones). Gmail has the problem of creating Tofu:


julesie 58 minutes ago 0 replies      
This feature is also available in the Sparrow desktop client.
MehdiEG 13 hours ago 0 replies      
Like many here, this is probably one of the first "features" I bumped into with Gmail, and it still annoys the hell out of me years later.
nachteilig 13 hours ago 0 replies      
Mail.app does this too, except that I keep using it accidentally.
hardik988 13 hours ago 0 replies      
I love this feature, and have been using it for a while.

Off topic, but a similar feature exists in Pinboard (https://pinboard.in). You can select some text on the page before clicking bookmark, and that gets set as the description of the page in the bookmark. It's a pretty handy feature if the page title is not enough to describe what the page is about.

Tomis02 10 hours ago 1 reply      
That's all nice and dandy, but check this out: for a few months now I've noticed emails from my inbox being moved to spam after a few days, without any kind of warning. This happens once every two weeks, more or less.

We're not talking about false positives, these are emails that stay in my inbox for days before being moved to the spam folder. Which basically means I need to check my spam folder every day. Trust forever lost.

scott_meade 12 hours ago 0 replies      
To turn this off in Mail.app: Preferences | Composing | Responding | "When quoting text in replies"... "Include all the original message text"
hakaaak 5 hours ago 0 replies      
I've accidentally used this feature a number of times, and it drives me nuts.

The greatest feature of Gmail that not enough people use is 2-factor auth (though it is not limited to Gmail - other web-based mail services provide it too); it is a pain in the arse, but after you get hacked once or twice, you'll be happy you did it. Popular SaaS apps are prime targets for being hacked. It may mean they are safer, but they are also riskier to use. If you're not using 2-factor auth, you should probably not use Gmail, unless a hacker taking control of your account wouldn't bother you or your contacts.

dewiz 5 hours ago 0 replies      
I often "hit" this feature by mistake because I "mark" the text I am reading. Personally I find it annoying and if I could I would disable it... any help appreciated.
ralph 13 hours ago 0 replies      
I'd like Gmail to do better as an email client conforming to standards. If someone sends me an email with the Resent-{From,To,...} headers then I want them shown to me. How can I forward an email as a message/rfc822 MIME content-type rather than a poor rough text approximation in the main body?
tammer 9 hours ago 1 reply      
Great! Except lately I've come to despise ui elements that can only be discovered by accident. An easily usable and effective feature that I only find via a blog post is a feature that could use some visual feedback.
biturd 9 hours ago 0 replies      
I'm pretty sure this feature is not on by default. You must turn it on in labs and then it is enabled. It's odd if that's the case because everyone posting here knows of it as if it were just part of the experience.

But I'm looking at the on/off radio button in labs in another tab right now.

atldev 11 hours ago 0 replies      
This makes a good combo with my other favorite Gmail trick: followup.cc
aidenn0 14 hours ago 0 replies      
And I was about to say "Of course I'm not using this since I don't use gmail" but I use claws, which does this. Personally I would chuck any mouse-using client that doesn't do this.
koopajah 14 hours ago 0 replies      
I thought most of the email clients did that already and I just checked a few which in fact don't! I'm pretty sure thunderbird does this if you want the same feature on an external application.
anigbrowl 12 hours ago 0 replies      
I fail to see what's so difficult about simply deleting the irrelevant text in a long email.
username3 6 hours ago 0 replies      
What if I want to highlight and quote separate paragraphs?
dutchbrit 11 hours ago 0 replies      
I hate this feature - I always select something when reading, hit reply and then find out it quoted something I randomly selected..
conradfr 12 hours ago 2 replies      
Is there a way in Gmail (and Reader) to set an unread item as read? It drives me crazy...
jkaljundi 13 hours ago 0 replies      
Same in Thunderbird
hippich 6 hours ago 0 replies      
If you select an area of a message and then hit Reply only the selected text will be quoted in the response.
mdonahoe 7 hours ago 0 replies      
My favorite feature is Mute.
zeedotme 13 hours ago 1 reply      
the issue i have with this is that it deletes all previous messages from the email thread. So if someone wants to read back through previous messages or forward the email onto someone who does - no es posible.
hayksaakian 13 hours ago 0 replies      
i don't think i'll be using it in the future

most of my email flows like a conversation, i don't need to bring back older parts if my recipient already has them in front of him.

nvr219 13 hours ago 1 reply      
I tried using this feature in Apple Mail for a few weeks before turning it off because it was super annoying (I found it annoying for the same reasons mibbitier did).
shocks 10 hours ago 0 replies      
Another useful feature you may not be using is the Priority Inbox.

No, wait. I'm lying. It's crap.

rjv 13 hours ago 0 replies      
Does anyone know of a way to select multiple passages to be quoted automatically? A control+click/drag if you will?
tiglionabbit 10 hours ago 0 replies      
Not all that useful if I want to respond to more than one quote, is it?
alxndr 8 hours ago 0 replies      
Don't lots of other email clients do this too?
jgervin 9 hours ago 0 replies      
Use this all the time. Apple mail has the same feature.
3825 12 hours ago 0 replies      
I have been using it since...
jdjiaikej 10 hours ago 0 replies      
AOL had this feature in the mid-90s
skiplecariboo 14 hours ago 0 replies      
any mail client does that imo.. Mail.app too
alexlimoges 11 hours ago 0 replies      
I personally don't like that feature, for I often select a portion of the email (eg a name or an address to google it) and then when I hit reply I notice only the highlighted text remains!
shellehs 9 hours ago 0 replies      
yes, the greatest feature that email clients have ever had.
donniezazen 13 hours ago 0 replies      
Wow!! This is fantastic.
Amazon Maps API amazon.com
92 points by taylorbuley  7 hours ago   17 comments top 8
cek 4 hours ago 0 replies      
And so the fragmentation of mobile platforms accelerates along the services axis. [1]

Amazon attempts to ease the pain by offering "interface parity" with the Google Maps API, but there are significant functional differences.

We are going to see more and more examples of this where mobile platform vendors are going to try to get developers to use their firm's web services when running on their platform. Bummer for devs who are already struggling with trying to target multiple platforms.

[1] http://www.lockergnome.com/mobile/2012/10/22/the-fragmentati...

tnuc 6 hours ago 2 replies      
Why are Apple and Amazon both only allowing their maps to be usable on mobile devices?

I am sure there are people who want to get away from Google/Bing, but without web support I can't see them bothering.

cynwoody 3 hours ago 0 replies      
If memory serves, Amazon was the first to come out with street view, via their a9.com site, circa 2005. However, they canned it before coverage got beyond a few major metropolitan areas.

Then came Google Maps and a much cooler street view implementation.

Codhisattva 6 hours ago 1 reply      
Amazon has licensed Nokia map tech and data. Not that interesting unless you are desperately trying to get away from Google.
sadfaceunread 6 hours ago 2 replies      
So this is ONLY on Kindle devices? Do kindle targeted apps suffer from an inability to use google/bing?
bstar77 6 hours ago 2 replies      
Can someone with developer access say how good the maps are? I don't see any way to use the maps without an invitation.
rjzzleep 6 hours ago 0 replies      
well it's an android maps library. what's the particular interest?


demo looks like a demo. basically it's because they're not part of the android group. do they have the google maps api libraries on the device? just curious i honestly don't know. I'd imagine there are licensing issues?

thejosh 6 hours ago 0 replies      
Page Not Found

We're sorry, but we couldn't find the page you requested.
You may want to go to the homepage or read our FAQs.

DeadDrops - anonymous, offline, peer to peer file-sharing deaddrops.com
25 points by hornbaker  3 hours ago   14 comments top 11
noonespecial 3 hours ago 1 reply      
It would be interesting to use a $20 TP-Link 703N to create a solar wifi dead-drop.
Drakim 2 hours ago 1 reply      
Wouldn't this be a tad easy to "troll"? All you would need to do is hit that USB port with a rock and it would be completely unusable, all effort into cementing it into the wall wasted.
alx 1 hour ago 0 replies      
Here is Moustreet keys, located in Toulouse, France: http://blog.lamoustacherie.fr/?page_id=3981

The advantage they have is a female plug; you can't physically break it like a standard USB key (most of those keys are broken after a year).

But they're disappearing too, people tend to paint walls :)

batiudrami 2 hours ago 0 replies      
This seems like a really simple way to get implicated into some nasty shit.
livebeef 28 minutes ago 0 replies      
This can't end well, I won't ever plug anything into my laptop if it is hidden in a wall.
See: http://www.fiftythree.org/etherkiller/
unfamiliar 1 hour ago 0 replies      
If they think I'm going to hold my shiny new laptop awkwardly against a wall and get it all scratched up only to get it riddled with child porn and malware, they are sadly mistaken.
morphyn 1 hour ago 0 replies      
What could possibly go wrong ?
Beltiras 34 minutes ago 0 replies      
Clever. Inordinately stupid, but clever.
Sami_Lehtinen 43 minutes ago 0 replies      
Same link has been posted here several times. At least two times earlier.
areallybadidea 1 hour ago 0 replies      
Not this crap again it's so old and only 5 people in the whole world do it. Someone else spammed this on HN last month. I guess it's not as bad as a "how our company learnt from being stupid" splog posts we see daily here.
enr 2 hours ago 0 replies      
Yeah that seems perfectly safe.
NASA director: Curiosity has found organic molecules lagazzettadelmezzogiorno.it
4 points by pitiburi  17 minutes ago   1 comment top
pitiburi 13 minutes ago 0 replies      
jStat: a JavaScript statistical library jstat.org
40 points by Hirvesh  5 hours ago   6 comments top 3
eel 2 hours ago 0 replies      
It seems promising, but it also seems like an inactive project, according to the commit dates on the GitHub page linked in the other comment.

Also, there is a sizable amount of dependencies for this library, all due to the use of flot. Since most of the code doesn't use any of the dependencies, I wonder if they would consider releasing any future versions as two parts, jstat.js (containing the number crunching methods) and jstat-flot.js (containing the plotting wrapper methods).

btipling 3 hours ago 1 reply      
It's weird that one can't easily find any links to their github page from the site. Here it is:


I almost thought they weren't on GitHub.

Hirvesh 5 hours ago 0 replies      
Via Functionn - Open Source Resources For Web Developers & Designers: http://functionn.blogspot.com/2012/11/jstat-javascript-stati...

P.S. Functionn contains a whole lot more awesome resources like jStat. There's only a fraction of them I can post here at a time. Take a look if you're interested, and subscribe:


Show HN: UpShot, Open Source Screenshot Sharing via Dropbox on OS X fredericiana.com
32 points by fwenzel  5 hours ago   19 comments top 12
piranha 54 minutes ago 0 replies      
I use http://monosnap.com/ because it allows one to edit screenshots in a way Skitch did (does).
latchkey 1 hour ago 0 replies      
This is cool also from the standpoint of a nice example of using Python to do a Mac menubar app. I've got an idea for something I'd like to do and I've been looking for an easy to follow example like this for a while now. Thanks!
Inufu 1 hour ago 0 replies      
the script I use on linux:

userid="your id here"
myfile=$(date +%Y%m%d%S).png
scrot -s "/home/mononofu/Dropbox/public/$myfile"
echo "http://dl.dropbox.com/u/$userid/$myfile" | xclip -selection c
notify-send "Done"

iamdave 4 hours ago 2 replies      
"when you do, it moves that file to the Dropbox public directory and copies the URL to the file into your clipboard automatically"

Sold. Given the amount of prototyping I do on my workstation requiring constant review of previous revisions for collaboration, this is something that lets me get a link sent and stays as far out of the way of my workflow as possible-and it supports an application my team already uses? Yep. Sold.

I might fork this so I can add the option to define which directory in /Public, since I'm OCD like that (having stray files kills me). Thanks for this!

trafnar 2 hours ago 0 replies      
I don't like having an app constantly running for this, or for it to intercept all my screenshots.

That is why I use a hacked version of Gyazo (http://gyazo.com/). I modified the script so that moves the file to dropbox instead of their monstrosity of a share page.

Gyazo is the best because you launch the app, are presented with a standard screenshot UI, and then the app closes and your URL is copied and opened in a browser.

Here is my modified gyazo script if you want to try it: https://gist.github.com/3390267

kayoone 3 hours ago 1 reply      
Isn't the Dropbox Public folder going away, or already removed for new users? I think they stated that in the announcement of their new "Share a link" feature.

If you are a longtime dropbox user you will keep it though.

k33l0r 2 hours ago 0 replies      
I use an instance of S3itch (https://github.com/roidrage/s3itch) together with the old Skitch client.
tsheeeep 1 hour ago 0 replies      
I use Jing to do just this. It works on Windows and Mac, and besides screenshots it also allows capturing up to 5 minutes of screencast at a time. Then I have the save location set to the public Dropbox folder and the link is automatically put into my clipboard.
Rayne 4 hours ago 1 reply      
I actually just bought Captured on the appstore for this specific purpose. Wish I hadn't now, since imgur uploading (part of why I bought it) appears to be broken. This works wonderfully. I miss the old Skitch, but this helps me heal.
lukeholder 4 hours ago 1 reply      
The ability to just capture a portion of the screen is the only feature missing i would need. Awesome work.
faceoff 3 hours ago 0 replies      
"I found TinyGrab, which works with OS X's screenshot function. I can even upload the files to my own server, but only using unencrypted FTP, which is scarily insecure."

!true. tinygrab supports sftp. doesn't it?

faceoff 3 hours ago 1 reply      
btw..did you consider https://droplr.com
Everything You Need To Know About Meta Descriptions Tags searchenabler.com
12 points by gizmofreak  3 hours ago   2 comments top
boyter 1 hour ago 1 reply      
I have always wondered if a reasonable search engine could be made that just indexed meta descriptions, with very aggressive spam removal. With all the work people put into useful ones without keyword stuffing these days it might be slightly viable.
Ninja IDE: written in Python for Pythonists ninja-ide.org
210 points by mmariani  17 hours ago   117 comments top 36
kghose 16 hours ago 6 replies      
It is FOSS (GPLv3). The license information was a wee bit hard to find (wayyy down on the about page http://ninja-ide.org/about/) and I first thought it was some frankenstein freemium product where you had to apply for a free license if you were an OSS devel (like PyCharm) etc. etc.

I gave it a whirl:

1. Snappy, which is nice, since PyCharm can be sluggish on my Mac
2. No VCS integration
3. By default very strict code checking is turned on, which turns my (functional) code into a sea of underlines, which is not so pretty

It looks to be an interesting start, but it will need VCS integration before it looks suitable as a PyCharm replacement.

I didn't look in detail at code completion/code assist, which PyCharm does very well.

sho_hn 15 hours ago 4 replies      
Can someone explain to me why this is at the top of the front page despite a website devoid of useful detail, while this completely fails to catch on: http://scummos.blogspot.de/2012/11/kdev-python-14-stable-rel...

(Seriously, check it out - KDevelop's Python plugin and Microsoft's PTVS are currently the two projects doing serious work on static analysis of Python for live editing purposes. Here's a nice subthread comparing the two: http://news.ycombinator.com/item?id=4725634)

ketralnis 15 hours ago 0 replies      
I realise these are at first blush, but:

* Scrolling is way too slow. This isn't nitpicking, this is really very important to me

* I like PEP8 warnings and use them in other editors, but I don't like not being able to pick which style stuff I care about

* I don't like the PEP8 tooltips. They cover up my code and that's the worst possible place to put them. Even if I do plan to "fix" the issue, coming up over the code that I'm typing right now is never okay.

* It's really quite a lot of work through some confusing terminology to get a test run of the IDE going on an existing project. I don't want to move my code into your workspace. I don't want to import my existing project (that sounds scary)

* Some glaring bugs seem to indicate that this is more young than is indicated on the very flashy project site. For instance, if I try to import a project but cancel the "select a directory" popup, I inconsistently get it either removing my previous selection or crashing the whole IDE

kstenerud 16 hours ago 1 reply      
Pretty cool all around, but it needs a lot more stability work. It crashed a few times just scrolling around in some of my python projects, and there are quirks such as complaining "This font can not be used in this editor" if I open the font selector and then click "Cancel".

Also, changing the margin line doesn't seem to take effect unless you quit and restart the IDE.

unohoo 16 hours ago 2 replies      
What would really help is a small demo video just to get a whiff of what the IDE feels like. The description and screenshots are somehow not enough for me to download and install an entire IDE and take a test drive. If there is a demo video somewhere, my apologies - I was not able to find it.
jra101 16 hours ago 1 reply      
Would be nice to be able to selectively disable some PEP 8 rules in the style checker. I don't care about lines longer than 80 characters and I don't like separating functions by two empty lines.
spindritf 15 hours ago 0 replies      
"For Ubuntu Users: You can add the NINJA-IDE PPA and install it from there (you will get automatic updates!)"


Thank you.

hoka 16 hours ago 1 reply      
I'll definitely give it a shot.

From a usability perspective, your download button could be better. It doesn't download right away (which is fine), but redirects to downloads/win for me. Might be nice to have it auto-scroll to the win downloads since it took me a while to figure out what was going on.

Here's a screenshot from Win7 32-bit: http://i.imgur.com/2RT6u.png

That random pink line makes it unusable for me.

gatox 15 hours ago 0 replies      
Hello, I'm part of the NINJA-IDE team, and first of all, I would like to thank everyone for the feedback (the good as much as the bad).
Currently we are working to make NINJA-IDE compatible with Python 3 (among other features) and taking care of several issues to ensure better stability (and guide the development process with tests).

I hope we can find the time to take care of some of the stuff mentioned here as videos, screenshots, user guide, etc.

It's a lot of work, but we are proud of what we can achieve with a free software project.

Thx everyone!

yuvadam 16 hours ago 0 replies      
Don't know about the IDE but that font is horrendous.
recuter 16 hours ago 0 replies      
Something something second system syndrome, just use vim/emacs/sublime. 'etc.
mikle 13 hours ago 0 replies      
I hate to be that guy, but after almost a decade doing Python one thing I learned is that we prefer Pythonista, not Pythonist.
buster 16 hours ago 0 replies      
Wow.. how did this not make it to HN before? Already version 2.1.1 and never heard of it?
zlapper 16 hours ago 1 reply      
As others have already mentioned, PEP8 validation is enabled by default, which is a little excessive in my opinion (especially the line < 80 chars rule). It would be great to be able to disable individual rules, a la Eclipse/Netbeans.

All in all it looks very nice, thanks for sharing.

veeti 12 hours ago 1 reply      
Although vim has almost completely sucked me in already, does this thing have support for 1) separate indentation settings for different file formats and 2) separate indentation settings for different "projects"?

I've been looking forever for a text editor that does this and surprisingly few do.

stevoski 16 hours ago 1 reply      
How does this compare to PyCharm?
shill 15 hours ago 0 replies      
I am already extremely satisfied with PyCharm. I'll keep an eye on this though. Being able to write plugins in Python is promising.
endtime 16 hours ago 2 replies      
Having very recently switched to Sublime Text 2 (from Komodo Edit), I'm curious if this offers anything that can't be done with Sublime + mature existing plugins...?
jlujan 16 hours ago 0 replies      
On Mountain Lion, it requires X11. Not sure why, as my PyQt apps do not.
dmd 10 hours ago 1 reply      
Crashes on launch for me.
masukomi 15 hours ago 0 replies      
am i the only one who's really wishing there were some real screenshots to check out before downloading the thing?
nirvanatikku 16 hours ago 0 replies      
Crashed while scrolling =( Was curious, but can't see myself moving away from PyCharm/Sublime.
btipling 15 hours ago 0 replies      
It can't seem to create or open JavaScript files. How does one use it with Django?
rxc178 16 hours ago 2 replies      
This is nice, but one quick question, why's the windows installer in spanish?
gruuby 9 hours ago 0 replies      
I cannot use an IDE that doesn't feature a vi mode for the editor. I'd be very, very lost. I'm yet to find an IDE that doesn't get in my way, vi mode or not.
azinman2 11 hours ago 1 reply      
Tried it out on existing code. It was complaining that spacing wasn't a multiple of 4, even though I set it to 2 spaces in the prefs. I even reloaded it and verified the setting.

Back to Sublime!

indiecore 16 hours ago 0 replies      
Nice, it would be good to have some screenshots and stuff though, I'll definitely check it out.
pablosanta 13 hours ago 0 replies      
It keeps crashing on me. I'm on Lion. :(

Looks good though. I thought it was going to be YET ANOTHER ECLIPSE distribution, but apparently it's not. It seems to be pretty fast. Hope they fix the crashing issue on Lion soon.

neil_s 12 hours ago 0 replies      
The name of the IDE emphasizes that it's not just yet another IDE, and yet I don't see anything new here, or any difference from existing IDEs, other than heavy Python support.
DodgyEggplant 15 hours ago 1 reply      
Wing IDE is great
ninetax 16 hours ago 1 reply      
It would be great to see some screen shots.
jotaass 10 hours ago 0 replies      
Just tried it. Looks nice, but a bit lacking in code completion, I think. Maybe I need to give it another chance.

Also, I think would be nice if there was a way to interact with the console after running a script. I realize this may be sort of an odd request, but it is very convenient when you're not quite sure on how you want to solve a problem, and you need to try out some solutions interactively. I greatly enjoy this in spyder, my current python ide of choice.

silasb 16 hours ago 0 replies      
Is this based on QT Creator?
datashaman 5 hours ago 0 replies      
QT toolkit. urgh...
zdanozdan 16 hours ago 4 replies      
whats wrong with emacs ?
gfosco 16 hours ago 1 reply      
As soon as I see the words "cross-platform" on an IDE, I'm no longer interested. Looks really nice though, they did a good job with branding.
Django 1.5 beta release notes djangoproject.com
111 points by kgrin  12 hours ago   38 comments top 6
quaunaut 11 hours ago 3 replies      
The two big headline features, the configurable User model and Python 3 support, are both huge. The first, because as it is right now, simply extending the User model feels janky and possibly broken; the second, because Django is probably one of the biggest reasons people have not moved onto Python 3.
elithrar 11 hours ago 1 reply      
> Configurable User model

This pleases me greatly. No longer do I need to rely on plug-ins (and therefore, them being updated) to minimally extend the existing User model for small projects.

scommab 11 hours ago 0 replies      
Major Highlights:

- Python3 support! (supports python2.6.5+ and python3.2+ in the same code base)

- {% verbatim %} (no collision problems with mustache)

- Built-in Partial model saving

Alex3917 7 hours ago 1 reply      
If I want to create my own database and write my own SQL queries instead of using Django's object models and ORM, does it still make sense to use Django's user model in my project?
jfb 10 hours ago 4 replies      
Can someone point me to a compare/contrast sort of doc between modern Django and Rails? I don't do a lot of web app programming, but I'm curious as to whether there's much of a difference between the two.
hayksaakian 8 hours ago 1 reply      
> python3 support

Go on

Gmail and Drive - a new way to send files gmailblog.blogspot.com
202 points by neya  17 hours ago   82 comments top 23
guelo 15 hours ago 10 replies      
Totally off topic, but Blogspot is just awful. Why does everything have to be a complicated, buggy JavaScript app? There's nothing wrong with serving up good ol' HTML pages, especially for simple text-and-image content like a blog.
munin 16 hours ago 1 reply      
> Have you ever tried to attach a file to an email only to find out it's too large to send?

Yeah! Some jerk who runs my MTA set the size of acceptable attachments really low! I wonder who did that...

$ host -t mx mydomain.com

mydomain.com mail is handled by 0 aspmx.l.google.com.

Oh... I see.

simonsarris 16 hours ago 0 replies      
This is lovely. Very welcome.

Sending and sharing files are two of those things that are just now sluggishly rolling over to discover that it's a new millennium.

Dropbox and Drive are making great strides lately and I'm really thankful for it. Using Dropbox to have the same "folder" across three computers is the first time synced sharing ever felt intuitive enough for my (71 year old) father to regularly use, and now he can use this to reliably send larger files to people without any worry of fouling up permissions (that would otherwise be difficult for him to understand).

WayneDB 15 hours ago 2 replies      
I never liked the idea of hosting my own files on someone else's server (Dropbox) or sending them through a middle-man.

That's why i just run my own "cloud" on my own premises. If I want to give someone access to a file, I just throw it on my Synology DiskStation and the receiver can get at it via FTP or HTTP client.

revelation 13 hours ago 1 reply      
So can we use that to send binaries to people? Because Gmail will absolutely not allow you to do that. They will go as far as inspecting archives to look for binaries and ban you from sending them.
paulirish 3 hours ago 0 replies      
I've been dogfooding the "Gmail will double-check that your recipients all have access to any files you're sending" feature for a month now and it's FANTASTIC. If you use Google Docs a lot, this saves so much permission pingpong.
jdbevan 44 minutes ago 0 replies      
Is no-one else worried about these TOS applying to their email attachments?

you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content. The rights you grant in this license are for the limited purpose of operating, promoting, and improving our Services, and to develop new ones. This license continues even if you stop using our Services (for example, for a business listing you have added to Google Maps).

EDIT: I guess it's a moot point if you're already using Gmail.

stephenhuey 16 hours ago 0 replies      
This is long overdue. I've been inserting links to Google Docs (the old name for Drive files) into emails forever, but plenty of people I know don't realize how easily they can do that and give up if a large file cannot be attached to an email. I'm also surprised by how many Gmail-using friends of mine don't even know there's some hefty free file storage a click away even though the link to it has been at the top of their Gmail for years.
tedmiston 15 hours ago 2 replies      
A welcome feature, but we can't ignore the paradigm shift's tiny repercussion: once the sender deletes the file, the receiver will no longer be able to access it (assuming they've lost, deleted, or not yet downloaded their own copy). Lately I've used shared Dropbox folder links for larger attachments, but the same problem seems to persist with any hosted solution. A solution that pleases both the sender having control over their files and the receiver having long-term access is tough to imagine.
danbarker 15 hours ago 3 replies      
I've been paying for Google Drive for several months because I really, really want it to work, but it's actually kinda useless as it causes constant instability and 120%+ CPU load on my 2012 Macbook Pro. This means that I frequently close the application down, so it's not actually covering me and if I lost my computer, the most recent files probably wouldn't be covered. There's been an open issue about this in the support forums for months and there's no news on when they're going to fix it...
benaiah 13 hours ago 1 reply      
So, in other words, Gmail just added a feature that Hotmail/Outlook.com have had for years.

golf clapping

kissickas 14 hours ago 1 reply      
> Now with Drive, you can insert files up to 10GB

Hmm, how much space do I have in there now?

0% of 5 GB used... Now it makes sense.

csmatt 15 hours ago 1 reply      
It's about time!

I use Google's cloud-based services for as much as I can, but it's still not seamless and is annoying when I have to open a new window to access a service run by the same company providing the one in the page I'm on.

Next step: please allow me to easily save PDFs and other documents directly to Drive from a URL. I shouldn't have to download a file to my device and then upload it to Drive.

yason 15 hours ago 2 replies      
This is how email could work too. The sender would host it (himself or in the cloud) and the recipients go fetch it when they want to read it. Updates and comment threads all collect in the same place. No spam either, since nobody would be pushing tens of megabytes of messages to your inbox.
goronbjorn 13 hours ago 0 replies      
There is a really good third-party Chrome extension that effectively does this already and also works with Box and Dropbox: https://attachments.me/
kamakazizuru 15 hours ago 0 replies      
this is awesome! it might also just tip the scales from Dropbox over to Drive. I can't believe something so obviously powerful took so long! I do hope that it will allow me to share files with non-gmail users as well!
fudged71 14 hours ago 1 reply      
Question: so with this, I can send an attachment and change the file before the recipient opens it? Will they see if it has been modified? Will I see when they have accessed it?
kexek 4 hours ago 0 replies      
Would be perfect if they add this Google Drive attachments functionality to Sparrow. Someday.
ivanb 14 hours ago 0 replies      
Is this minuscule feature worth the front page?
mitko 15 hours ago 0 replies      
plug: my friend built a chrome extension that does a superset of that - it is called Cloudy and integrates with filepicker.io which lets you choose files from multiple cloud storages:

disclaimer: I work for a Google competitor

agumonkey 16 hours ago 0 replies      
I wonder if this will cause storage optimisations on their data centers.
facorreia 17 hours ago 0 replies      
Seems very useful. I bet I'll be using that a lot.
stephengillie 16 hours ago 1 reply      
Sorry for being pessimistic, but any speculation on the vulnerabilities this connection opens?
Redis crashes - a small rant about software reliability antirez.com
303 points by hnbascht  21 hours ago   95 comments top 17
jgrahamc 21 hours ago 2 replies      
His point about logging registers and stack is interesting. Many years ago I worked on some software that ran on Windows NT 4.0 and we had a weird crash from a customer who sent in a screen shot of a GPF like this: http://pisoft.ru/verstak/insider/cwfgpf1.gif

From it I was able to figure out what was wrong with the C++ program. Notice that the GPF lists the instructions at CS:EIP (the instruction pointer of the running program) and so it was possible by generating assembler output from the C++ program to identify the function/method being executed. From the registers it was possible to identify that one of the parameters was a null pointer (something like ECX being 00000000) and from that information work back up the code to figure out under what conditions that pointer could be null.

Just from that screenshot the bug was identified and fixed.

dap 18 hours ago 2 replies      
Great post, showing admirable dedication to software reliability and a solid understanding of memory issues.

One of the suggestions was that the kernel could do more. Solaris-based systems (illumos, SmartOS, OmniOS, etc.) do detect both correctable and uncorrectable memory issues. Errors may still cause a process to crash, but they also raise faults to notify system administrators what's happened. You don't have to guess whether you experienced a DIMM failure. After such errors, the OS then removes faulty pages from service. Of course, none of this has any performance impact until an error occurs, and then the impact is pretty minimal.

There's a fuller explanation here:

CrLf 15 hours ago 2 replies      
I find this idea of a lack of ECC memory on servers disturbing... ECC is the default on almost all rack-mountable servers from the likes of HP or IBM. Of course, people use all kinds of sub-standard hardware for "servers" on the cheap, and they get what they pay for.

I haven't seen a server without ECC memory for years. I don't even consider running anything in production without ECC memory, let alone VM hypervisors. I find it pretty hard to believe that EC2 instances run on non-ECC memory hosts, risking serious data loss for their clients.

Memory errors can be catastrophic. Just imagine a single bit flip in some in-memory filesystem data structure: the OS just happily goes on corrupting your files, assuming everything's OK, until you notice it and half your data is already lost.

Been there (on a development box, but nevertheless).

shin_lao 21 hours ago 3 replies      
This is an interesting post, especially the part about memory testing.

We have a simple policy: ECC memory is required to run our software in production. Failure to do so voids the warranty.

js2 16 hours ago 0 replies      
It's crazy that an application should have to test memory. It should simply be handled by the hardware and OS. For example, some details about how Sun/Solaris deals with memory errors:


Note the section on DRAM scrubbing, which I was reminded of from the original article's suggestion on having the kernel scan for memory errors. (I remember when Sun implemented scrubbing, I believe in response to a manufacturing issue that compromised the reliability of some DIMMs.)

apaprocki 20 hours ago 0 replies      
Can't agree with this more. And he is just talking about logging crashes. One of the best debugging tools you have at your disposal in a large system (a lot of programmers contributing code -- bugs can be anywhere) is logging the same stack information under normal operation in strange circumstances, quickly enough that you don't slow down the production software. The slowest part of printing that information out is resolving the stack addresses to symbol names in the binary. That part of the debugging output can be done "offline" in a helper viewer binary and does not need to be done in the critical path. We frequently output stack traces as strings of hex addresses, detectable by a regex, appended to a log message. The log viewer transforms this back into an actual symbolic stack trace at viewing time to avoid the hit of resolving all the symbols in the hot path.
nicpottier 17 hours ago 0 replies      
This kind of attention to detail is all too rare these days. I love Redis, because I have never, not once, ever had to wonder whether it was doing its job. It is like a constant, always running, always doing a good job and getting out of the way.

It only does a few things, but it does them exceedingly well. Just like nginx, I know it will be fast and reliable, and it is this kind of crazed attention to detail that gets it there.

erichocean 18 hours ago 1 reply      
Although we use ECC in our servers already, I've recently been experimenting with hashing object contents in memory using a CityHash variant. The hash is checked when the object moves on chip (into cache), and re-computed before the object is stored back into RAM when it's been updated.

Although our production code is written in C, I'm not particularly worried about detecting wild writes, because we use pointer checking algorithms to detect/prevent them in the compiler. (Of course, that could be buggy too...)

What I'm trying to catch are wild writes from other devices that have access to RAM. Anyway, this is far from production code so far, but hashing has already been very successful at keeping data structures on disk consistent (a la ZFS, git), so applying the same approach to memory seems like the next step.

The speed hit is surprisingly low, 10-20%, and when you put it that way, it's like running your software on a 6-month-old computer. So much of the safety stuff we refuse to do "for performance" would be like running on top-of-the-line hardware three years ago, but safely. That seems like a worthwhile trade to me...

P.S. Are people really not burning in their server hardware with memtest86? We run it for 7 days on all new hardware, and I figured that was pretty standard...

codeflo 21 hours ago 0 replies      
In theory, there's nothing stopping the OS from remapping the pages of your address space to different physical RAM locations at any point during your test. So even if you have a reproducible bit error that caused the crash, there's a chance that the defect memory region is not actually touched during the memory test.

Now, this may not be such a huge problem in practice because the OS is unlikely to move pages around unless it's forced to swap. But that depends on details of the OS paging algorithm and your server load.

jimwhitson 19 hours ago 1 reply      
At IBM, we were very keen on what we called 'FFDC' - 'first- failure data capture'. This meant having enough layers of error-detection, ideally all the way down to the metal, so that failures could be detected cleanly and logged before (possibly) going down, allowing our devs to reproduce and fix customer bugs. Naturally it wasn't perfect, and it depending on lots of very tedious planning meetings, but on the stuff I worked with (storage devices mainly) it was remarkably effective.

In my experience in more 'agile' firms - startups, web dev shops and so on - it would be very hard to make a scheme like this work well, because of all the grinding bureaucracy, fiddly spec-matching and endless manual testing required, as well as the importance of controlling - and deeply understanding - the whole stack. Nonetheless, for infrastructure projects like Redis, I can see value in having engineering effort put explicitly into making 'prettier crashes'.

tylerneylon 4 hours ago 0 replies      
The memory check algorithm is a nice solution to the challenges he presents - easy to understand and effective.

Here is a variation which, unless I'm missing something, would be a little simpler still and require fewer full-memory loops:

1. Count #1's in memory (possibly mod N to avoid overflow).
2. Invert memory.
3. Count #0's in memory.
4. Invert memory.

I think this would catch the same errors (stuck-as-0 or stuck-as-1 bits).

One difficulty is that multiple errors could cancel each other out, at which point you can do things like add checkpoints in the aggregation, or track more signals such as number of 01's vs number of 10's. In the end, this is like an inversion-friendly CRC.

ComputerGuru 21 hours ago 4 replies      
Page is down. Here is a formatted copy: https://gist.github.com/4154289
grundprinzip 20 hours ago 0 replies      
I really like this post, because main-memory-based software systems will become the future for all kinds of applications. Thus, handling errors on this side will become more important as well.

Here are my additional two cents: at least on x86 systems, checking small memory regions without disturbing the CPU cache can be implemented using non-temporal writes, which force the CPU to write directly back to memory. The instruction required for this is called movntdq and is generated by the SSE2 intrinsic _mm_stream_si128().

BoredAstronaut 11 hours ago 2 replies      
This post reminded me of my time as a consulting systems support specialist. Lots of weird problems turned out to be bad hardware. Usually memory or disk, sometimes bad logic boards. For end users, this would often lead to complete freezing of the computer, so it was less likely to be blamed on broken software, but there were still many times it was hard to be sure. Desktop OS software can flake out in strange ways due to memory problems. I used to run a lot of memory tests as a matter of course.

I think the title of the article could be more accurate, considering how much is devoted not to issues about software reliability per se, but to distinguishing between unreliable software and unreliable hardware. I think an implicit assumption in most discussions about software reliability is that the hardware has been verified.

I personally do not think that it is the responsibility of a database to perform diagnostics on its host system, although I can sympathize with the pragmatic requirement.

When I am determining the cause of a software failure or crash, the very first thing I always want to know is: is the problem reproducible? If not, the bug report is automatically classified as suspect. It's usually not feasible to investigate a failure that only happened once and cannot be reproduced. Ideally, the problem can be reproduced on two different machines.

What we're always looking for when investigating a bug are ways to increase our confidence that we know the situation (or class of situation) in which the bug arises. And one way to do this is to eliminate as many variables as possible. As a support specialist trying to fix a faulty computer or program, I followed the same course: isolate the cause by a process of elimination. When everything else has been eliminated, whatever you are left with is the cause.

I'm still all jonesed up for a good discussion about software reliability. antirez raised interesting questions about how to define software that is working properly or not. While I'm all for testing, there are ways to design and architect software that makes it more or less amenable to testing. Or more specifically, to make it easier or harder to provide full coverage.

I've always been intrigued by the idea that the most reliable software programs are usually compilers. I believe that is because computer languages are amongst the most carefully specified kind of program input. Whereas so many computer programs accept very poorly specified kinds of input, like user interface actions mixed with text and network traffic, which is at higher risk of having ambiguous elements. (For all their complexity, compilers have it easier in some regards: they have a very specific job to do, and they only run briefly in batch operations, producing a single output from a single input. Any data mutations originate from within the compiler itself, not from the inputs they are processing.)

In any case, I believe that the key to reliable programs depends upon a complete and unambiguous definition of any and all data types used by those programs, as well as complete and unambiguous definitions of the legitimate mutations that can be made to those data types. If we can guarantee that only valid data is provided to an operation, and guarantee that each such operation produces only legitimate data, then we reduce the chances of corrupting our data. (Transactional memory is such an awesome thing. I only wish it were available in C family languages.)

One of my crazy ideas is that all programs should have a "pure" kernel with a single interface, either a text or binary language interface, and this kernel is the only part that can access user data. Any other tool has to be built on top of this. So this would include any application built with a database back-end.

I suppose that a lot of Hacker News readers, being web developers, already work on products featuring such partitioning. But for desktop software developers who work with their own in-memory data structures and their own disk file formats, it's not so common or self-evident. Then again, even programs that do rely on a dedicated external data store also keep a lot of other kinds of data around, which may not be true user data, but can still be corrupted and cause either crashes or program misbehaviour.

In any case, I suspect that this is going to be an inevitable side-effect of various security initiatives for desktop software, like Apple's XPC. The same techniques used to partition different parts of a program to restrict their access to different resources often lead to also partitioning operations on different kinds of data, including transient representations in the user interface.

Can a program like Redis be further decomposed into layers to handle tasks focussed on different kinds of data to achieve even better operational isolation, and thereby make it easier to find and fix bugs?

chewxy 12 hours ago 0 replies      
And people wonder why I recommend Redis. Having run Redis for over 1.5 years on production systems as a heavy cache, a named queue, and a memoization tool (on the same machine), it has never once failed me. Antirez's blog post makes his attention to detail clear.

This post is fantastic.

lucian1900 21 hours ago 8 replies      
Perhaps using safer languages (and languages with better error reporting) would be a solution to these kinds of problems.
pnathan 16 hours ago 0 replies      
There is an approach to hard real-time software where antirez's idea for a memory checker is done.
Show HN: 6 Months in, first iPhone app into app store today baylisslabs.com
24 points by christl11  5 hours ago   9 comments top 3
christl11 1 hour ago 0 replies      
P.P.S. The app didn't take 6 months to develop, but it's been that long since I left a full-time contract to start out on my own. I've also been doing some consulting work and all the other things that come along with running your own business.
jvrossb 2 hours ago 1 reply      
Would adding metronome functionality make sense for this kind of app?
MaxGabriel 4 hours ago 4 replies      
Cool. So, I don't know anything about music, but maybe you should consider stating explicitly what instruments you support?
Samsung Printer firmware contains a backdoor administrator account cert.org
40 points by Garbage  8 hours ago   11 comments top 4
digitalengineer 3 hours ago 0 replies      
Reminds me of the Cold War when the CIA planted camera's inside XEROX copiers and was stealing everyone's secrets for decades. http://www.editinternational.com/read.php?id=47ddf19823b89
neilwillgettoit 6 hours ago 0 replies      
wildranter 2 hours ago 1 reply      
I don't know why people buy Samsung stuff. As pg once said, they make everything look like a microwave.
Yver 4 hours ago 2 replies      
When I read about that kind of backdoors it makes me wonder how nobody ever ends up in prison for it.
Linode Simplifies Plans, Reveals CPU Priority linode.com
91 points by jterenzio  12 hours ago   72 comments top 18
graue 9 hours ago 1 reply      
It wasn't immediately clear to me what changed. Here's the old homepage:


It appears Linode removed the 768 and 1536 plans, renamed the 1024/2048/4096 plans to 1GB/2GB/4GB, and added an 8GB plan. They also added a row in the table showing CPU priority. The 512 plan is unchanged, as are specs and prices for the other three remaining plans.

MichaelGG 9 hours ago 3 replies      
I am consistently surprised by how many VM hosts refuse to tell you what your CPU guarantees are. EC2 at least gives you a general equivalence to specific hardware. Rackspace refuses to go into detail. Others I've spoken to will only commit to saying "core", without specifying what the reference hardware is. And even then, actually making sure you've got a commit of that CPU is a whole other issue. I think EC2's compute units are a commit, though.

Linode's "priority" seems like ex-Slicehost's way of saying "hey bigger machines get a higher proportion"... nothing really useful for figuring out exactly what you're buying.

And you can't ever really figure out what you have: things could be severely over-committed, and you'll never know until you get starved. So you can't just benchmark your way out of it.

dotBen 10 hours ago 1 reply      
Linode keeps instances homogeneous, so only the same type/size of instance exists on a given bare-metal machine.

Unless that's changed, that would mean that all instances running on a given machine share the same CPU Priority, there will just be fewer instances demanding service from the CPU(s) the larger the plan you have.

...so wondering if that's what CPU Priority means, or if Linode is about to mix instance sizes on same hardware?

kyrra 11 hours ago 1 reply      
Linode forum discussion on this topic (that I could find): http://forum.linode.com/viewtopic.php?f=17&t=9544

And maybe I'm just ignorant on the topic, but what exactly does CPU priority do here? I understand basic linux process priority (like the 'nice' command), but how exactly does CPU priority behave on linode. Searching through their docs, I couldn't find anything.

EDIT: to maybe answer my own question, maybe this is the Xen credit schedule? http://wiki.xen.org/wiki/Credit_Scheduler

swalberg 9 hours ago 2 replies      
I'm actually disappointed. I liked the 768 package: it was big enough that you could run a fair amount of stuff [0], and it cost only $30/month. I was planning on buying a new one over my Christmas holidays and moving my stuff over so I could get onto a newer CentOS. CPU has never been a problem, so this new priority is meaningless to me.

For my needs, $30/mo was about as much as I'd spend on a server to host mine and a few friend's blogs, some photos, and some remote services. $40 is too much for me and the lower plan just doesn't have enough RAM to be interesting.

So now my options are 1) find somewhere else, or 2) back up my data and rebuild the box in place.

0 - I manage a few Linode 768s including my own. 768 was a great size for a few small blogs and a low traffic Rails site, or a larger traffic blog.

mashmac2 11 hours ago 1 reply      
What do they mean by CPU Priority?

I'm assuming that meant access to part of a processor, but how does that work with 4 CPU and 16x priority? (I'm working on the assumption that 1x priority ~= 1 core.) Of course, my assumption is probably wrong - just curious how this affects the load on a given server and how the VPS interacts with other VPS's on that node.

Tichy 1 hour ago 1 reply      
I've heard good things about Linode, but ultimately, why not get a dedicated server for just a little more money? I pay €30/month for mine with Hetzner; yesterday there was another host here with prices starting from €10.

So what is the appeal of Linode? That you can upgrade to a faster server quickly?

songgao 7 hours ago 4 replies      
Am I the only one here who thinks that for personal use, owning a server in house is a better choice than using a hosted VPS or server?

It's quite easy to get a decent HP micro server (even with SSD storage) for under $1,000; an equivalent plan on Linode would cost $150-$300 a month. Suppose you upgrade your server every two years: the monthly cost of the server is then less than $50. You get dedicated CPU time and I/O, and permission to manage everything.

Internet bandwidth might be a problem. But let's put ourselves two or three years into the future. What if you already have gigabit Internet like Google Fiber for $70/mo?

And you get other benefits for owning a server in your house. Since it's connected to your home LAN, it can be used to help build a smart home, control smart sensors/cameras, or serve as a media server.

Am I missing something here?

larrys 10 hours ago 8 replies      
Anyone care to share their experience with linode.com vs. http://prgmr.com/xen/ ?

(We setup a few vps's with rackspace and have been happy so far.)

contingencies 6 hours ago 1 reply      
I recently evaluated some cloud providers. There were differences of 10x latency for a bunch of basic (unix filesystem plus some bash script) level operations between EC2 and Rackspace. The Rackspace people failed to take complaints seriously, so we took our business elsewhere.

EC2 is good but their spin-up time is crap.

Though same-kernel is obviously a security reduction, the speed is far better: I for one can't wait to see more LXC and other lightweight virt stuff being made available with real cgroup-level guarantees.

twodayslate 7 hours ago 1 reply      
Why would someone get a linode when they can get a dedicated server for $15? http://news.ycombinator.com/item?id=4838729
andmarios 7 hours ago 0 replies      
The change (removal of some plans) is a couple weeks old.

About CPU priority, Linode never kept it a secret.
For the small VPS (512MB RAM), you get a guaranteed 1/20 of a 4 core XEON processor and it scales linearly with each plan's RAM.

As explained on their FAQ, their machines have 8 cores each and house 40 512MB VPS.

FreeKill 11 hours ago 1 reply      
I wonder how these new plans will affect existing users. My plan falls directly between two of these simplified plans. The prices seem the same still, so it would cost me $15 more a month to increase to closest new package.
mp99e99 9 hours ago 1 reply      
Since there are all VPS users here, what do you think is the best way to market a VPS product.. or rather, how did you end up becoming a Linode customer?
cllns 11 hours ago 0 replies      
Weird they haven't updated their blog with a post about this.
wtf242 8 hours ago 0 replies      
This is disappointing. My Rails app gets just enough traffic that it uses 1.2-1.4 gigs of ram on average. The 1.5 gig plan was perfect for me and I've used it without issues for years now.
taligent 9 hours ago 2 replies      
Have they updated their security and disclosure policies? If not, they can remain on my "dodgy vendor who you can't trust" list.

For those that don't remember hackers managed to get root access to several VPS via some Linode vulnerability. Didn't bother to let customers know. Didn't bother to update their status/website. Didn't bother to tell anyone what they've done to fix it. Compare that with CloudFlare:

Linode continues to be a recurring example of how not to behave as a vendor.

Quant Hedge Funds 101 mergersandinquisitions.com
22 points by redDragon  5 hours ago   14 comments top 2
Patient0 25 minutes ago 1 reply      
"So you think about conditions like these, determine the significance and correlations between all of them, and then come up with an overall model that tells you whether an asset such as a stock will increase or decrease in value"

Knowing whether the stock is likely to go up or down is dangerous without having some model of how much it will go up or down. It could have a 90% chance of going up but with a negative expected return (because if it goes up it will only go up slightly but if it goes down it will crash).

I wonder if this guy understands this? His "up/down" language suggests that maybe he doesn't, and his group is only making money by "picking up pennies in front of a steamroller".

Monty Hall and the Birthday problem? Are they still asking these old hackneyed interview questions!?

As a whole everything this guy said in the interview sounds a bit naive/old-hat. Are people really still blindly trading the correlation between MSFT and Oil?

The whole interview reads like what a junior statistics graduate thinks quant trading funds do rather than how they actually make money nowadays.

It's like he hasn't even read Fooled by Randomness....

001sky 3 hours ago 3 replies      
Their strategy is typically a variant of the following: “I know that somebody else will buy X, so let's buy X first and sell it to them at a higher price.”

-- zero value added, in other words.[1]


[1] Front running is not the only use of statistics and quants in finance, however.

Turbulenz engine on WebGL Podcast thewebglpodcast.com
3 points by kinlan  46 minutes ago   discuss
Leaping Brain's "Virtually Uncrackable" DRM is just an XOR with "RANDOM_STRING" plus.google.com
695 points by asherlangton  1 day ago   246 comments top 36
Eliezer 1 day ago 7 replies      
Maybe there's a scheme here to prevent good DRM by flooding the market with highly inflated impressive-sounding claims attached to laughable security. The Old Media crowd won't be able to solve the Design Paradox (http://www.paulgraham.com/gh.html) well enough to tell who's lying, good designs won't be able to charge more than laughable competition, and the DRM field will slowly die.
mturmon 1 day ago 2 replies      
From http://leapingbrain.com/:

"Video content is protected with our BrainTrust™ DRM, and is unplayable except by a legitimate owner. All aspects of the platform feature a near-ridiculous level of security."

Near-ridiculous security seems about right.

toyg 1 day ago 9 replies      
I am awed by the chutzpah of whoever is behind Leaping Brain, selling snake oil to clueless media people.

This is why I'll never be rich: I am utterly unable to sell crappy non-solutions to people with more money than knowledge.

radarsat1 16 hours ago 1 reply      
I would like to propose that DRM is not intended to be uncrackable. It's easy to convince yourself that DRM is flawed, because fundamentally it is a flawed tool. Companies know this, they're not stupid. However, DRM is actually not a technical tool to prevent piracy. Rather, DRM is a legal tool to provide stronger legal arguments that theft has occurred.

I'm not saying this is right, necessarily, but I think companies know full well that their DRM scheme will be broken, so it's not really worth investing in an "uncrackable" and costly solution. Instead, the role that DRM play is purely legal -- when the company does decide to go after someone for piracy, the DRM scheme, no matter how simple, provides them with the ability to say that the accused person "broke a lock," rather than simply walking in through an unlocked door. "Entering" vs. "breaking and entering." It's nothing but legal leverage, and effective at that role even if it's not a very strong lock.

Of course, to have this argument hold, a company would never be able to admit that they purposefully implemented weak security -- this would be akin to admitting that their door was unlocked after all, and would weaken their legal argument. Therefore, there remains a niche in the market for solutions that look secure even if they fundamentally aren't. It's all about lip service.

pilif 1 day ago 2 replies      
This could very well be a simple bug where it's supposed to XOR with some truly random string generated on the server, but some replacement of a template string isn't happening, which is why it XORs with RANDOM_STRING.

Of course this is only marginally better and should really have been caught, but there's a huge difference between saying that XORing 12 bytes with RANDOM_STRING is kick-ass DRM and actually having a kick-ass DRM infrastructure that then doesn't work right because of a bug.

If this was any really random looking string, I would be more inclined to assume that this was intentional. By the string being this token, I would guess it's a bug somewhere.

Remember: if RANDOM_STRING were truly random, unique per file and account, and only transmitted from the server before playing, then this would be as good an encryption as any.

hosay123 1 day ago 6 replies      
You cannot simultaneously crow "hurr, DRM is broken!" and act all smug about this discovery. Perhaps the original developer, like you, understood this, and did the absolute bare minimum necessary to fulfil commercial obligations, all the while making it easier for people like himself (i.e. you) to get what they want, and making a few bucks from the old and dying media industry all at the same time.

Given the evidence (complex integration with a non-standard set of open source libs, complex industry area in general), I'd say it's almost certainly an insult to imagine the developer could not have made your life harder if he'd chosen to.

Please, if anything commend the dear fellow, and shame on whoever considered a momentary glimpse of Google Plus limelight worth making this guy's Tuesday morning and ongoing professional reputation much harder earned than it otherwise might have been.

"No good deed goes unpunished"

mahmoudimus 1 day ago 1 reply      
I did a lot of reverse engineering back in the day - you'd be surprised how many "virtually uncrackable" DRM protections used by companies like Adobe (at the time, Macromedia) were just stupid XORs of magic strings.

Ah, the good old days of SoftICE and W32Dasm.

Oh man, the worst was the md5 of some salt + whatever you put in.

If you ever want to see some gems of misuse of cryptography for DRM management, let me know - email's in my profile.

Some examples: using 1024-bit RSA keys with an exponent of 3...

marshray 1 day ago 3 replies      
This is apparently why the DMCA anti-circumvention provisions only apply to bypassing "effective copy protection" systems.

Of course, if a copy protection system was "effective" it wouldn't need a law prohibiting its circumvention. Conversely, if a copy protection system is circumventable, it's not effective.

yk 1 day ago 0 replies      
This is roughly the level of programming I expect from DRM software. After all, the content needs to be in unencrypted form at some point to view it.[1] Therefore there are two kinds of programmers working on DRM: idiots and liars. One kind does not understand the futility of their efforts; the other wagers that their superiors do not.

[1] Assuming a general computation device, not a dedicated hardware player.

ataggart 1 day ago 2 replies      
Judging by the headline, it sounded like they tried to implement a one-time pad, but had only heard of them by rough description.
danso 1 day ago 1 reply      
Ha, so the key really was "RANDOM_STRING", in the literal sense...was that just the programmer giving up, or was that pseudocode that was missed during shipping?
joezydeco 1 day ago 1 reply      
How do we know this wasn't a non-English-speaking subcontractor who took the spec too literally?
asdfaoeu 1 day ago 3 replies      
Someone want to explain why this is less secure than other DRM methods?
jcromartie 20 hours ago 0 replies      
You know what's absolutely terrifying? This guy could conceivably go to jail for this. Looks like he has kids, presumably a wife... hoping it goes well for him.
pav3l 1 day ago 9 replies      
Can someone explain how he got a hold of the decrypted .mov files that he compared the encrypted ones with? It's not very clear to me from the post, and I'm not familiar with Leaping Brain.

Either way.. wow... XOR encryption with just such a short repeating string! I bet it wouldn't be too hard to decrypt it even without the original file, since the file signature alone would probably be longer than the string. DISCLAIMER: I'm just speculating, I don't know the .mov specs.

anonymous 1 day ago 1 reply      
facepalm Come on, people!

First rule of weak DRM, you do not talk when you find weak DRM.

Second rule of weak DRM, you DO NOT talk when you find weak DRM.

Third rule of weak DRM, upload to pastebin, then walk away.

photorized 1 day ago 0 replies      
The business goal behind most of these "protection" methods is to make unauthorized (unpaid) copying/sharing inconvenient. That's it. There are no commercially feasible methods to protect video or audio content against "a determined hacker", but that's not what these barriers are for. You can make fun of these laughable encryption methods all you want, but they serve their purpose by providing the desired purchase to piracy ratio.

The problem is marketing folks getting carried away when describing these "technology solutions" to the content owner, because that's what they (as well as VCs) want to hear.

Disclaimer: cofounded a video CDN+DRM provider more than a decade ago, developed many content protection methods over the years.

shocks 1 day ago 0 replies      
"All aspects of the platform feature a near-ridiculous level of security."

Well... They weren't lying...

sigkill 22 hours ago 1 reply      
To be fair, when I read the title I thought that if the string is truly random then it's actually a very good technique. This is the core operating principle behind the one-time pad which is provably secure.

Now that I read the article twice, I literally got a panic attack when I realized that it wasn't a random string that they were xor'ing their data with, but a string called "RANDOM_STRING". Although it sounds bad, one must realize that this is not security by obscurity since the key has been leaked, and nobody guarantees encryption against a leaked key.

iandanforth 1 day ago 5 replies      
Could someone (OP?) provide more of the steps that went into discovering it was an XOR operation and the original string? Seems like an impressive intuitive leap to me!
tlrobinson 1 day ago 1 reply      
"It turned out the actual player, launched from their compiled app, was a Python wrapper around some VLC libraries"

Isn't VLC licensed under the GPL? Or at least was until very recently? http://www.jbkempf.com/blog/post/2012/How-to-properly-relice...

Is/was Leaping Brain violating the license?

EDIT: the wrapper script is apparently released under the GPL too: http://news.ycombinator.com/item?id=4834834

etsimm 6 hours ago 0 replies      
I find it curious that (after 242) there are no comments here ranting about ir/responsible disclosure. Is this simply indicative of the readership's unanimous hatred of all things DRM - or is there perhaps a threshold of ineptitude beyond which we feel ethically free to fully disclose vulnerabilities?
javajosh 1 day ago 0 replies      
This should be lauded just as much for being a solid little piece of citizen, even activist, journalism. The specific issues about DRM are important, but I think the greater willingness to really look into things and publish the results should be encouraged.
damian2000 1 day ago 0 replies      
There's two software engineers and a product architect listed on the about page - http://leapingbrain.com/about/

It might be a good idea to remove their names, to protect their reputation. ;-)

jiggy2011 8 hours ago 0 replies      
Question: Can anybody name a DRM scheme that hasn't been cracked?
nnq 1 day ago 0 replies      
...I find it extremely funny when people use the word "virtually" to mean "practically" or "nearly" or "almost" and they turn out to be wrong but are excused by the fact that they added the magic word "virtually" :) ...and conversely, if someone uses the word when talking to me, I label everything the person says afterwards as 99% weasel words...
cafard 23 hours ago 0 replies      
Back in the 1990s, the revolutionary organization Sendero Luminoso was naive enough to believe in WordPerfect's encryption. This was a grave mistake, for that encryption (for 4.2 and 5.1 at least) was a simple XOR of the password against the text--and in 5.1 you had 10 or so bytes of known text to compare against in the header. The decryption of the files was not the only thing that worked against Sendero Luminoso, but it must have hurt them.
stcredzero 1 day ago 1 reply      
Repeated XOR with a string is a variant of the Vigenère cipher or the Vernam cipher, depending on how you think of it. Either way, breaking it is a freshman cryptography exercise.
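For anyone who hasn't done that exercise: the standard first step is to guess the key length by comparing Hamming distances between key-length-sized blocks of ciphertext, since blocks encrypted under the same key bytes differ less than misaligned ones. A rough sketch of that step (a textbook approach, illustrative only; the toy plaintext and key below are assumptions for the demo):

```python
# Sketch of the classic key-length-guessing step for repeating-key XOR
# (Vigenere-style analysis).
from itertools import combinations

def hamming(a: bytes, b: bytes) -> int:
    # Number of differing bits between two equal-length byte strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def likely_key_length(ct: bytes, max_len: int = 40, n_blocks: int = 6) -> int:
    # Score each candidate length by the average pairwise Hamming distance
    # between the first n_blocks blocks, normalized per byte. The true key
    # length (or a multiple of it) tends to score lowest.
    best_len, best_score = 2, float("inf")
    for klen in range(2, max_len + 1):
        blocks = [ct[i * klen:(i + 1) * klen] for i in range(n_blocks)]
        if len(blocks[-1]) < klen:
            break  # ran out of ciphertext for this candidate length
        pairs = list(combinations(blocks, 2))
        score = sum(hamming(a, b) for a, b in pairs) / (len(pairs) * klen)
        if score < best_score:
            best_len, best_score = klen, score
    return best_len

# Toy demo: a highly redundant plaintext XORed with the article's 13-byte key.
pt = b"0123456789abc" * 20
key = b"RANDOM_STRING"
ct = bytes(b ^ key[i % len(key)] for i, b in enumerate(pt))
print(likely_key_length(ct))  # 13
```

Once the length is known, every 13th ciphertext byte was XORed with the same key byte, so each position reduces to a single-byte XOR that falls to letter-frequency scoring. Real plaintexts need more ciphertext than this toy demo, but the principle is the same.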
i0exception 1 day ago 6 replies      
Anyone who has taken Computer Security 101 would know that security through obscurity is not the smartest thing to do. Calling it a "near-ridiculous level of security" is downright blasphemy.
asherlangton 8 hours ago 0 replies      
The CEO of Leaping Brain (or someone pretending to be him) has now joined the Google Plus thread, implying that the "DRM" was intended as satire...
samuellevy 1 day ago 0 replies      
Tomorrow on HN: "Legislation passed to embed DRM chips into people's heads, which automatically shut down visual input if un-authorized content is detected playing in their vicinity. Three strikes policy before permanent blindness."
loup-vaillant 13 hours ago 0 replies      
The obligatory xkcd: http://xkcd.com/221/
Syssiphus 1 day ago 0 replies      
Hm, anybody remember Dmitry Sklyarov? http://en.wikipedia.org/wiki/Dmitry_Sklyarov

As far as I recall the Adobe PDF encryption was also just some XOR with a simple passphrase. Got him into serious trouble.

And WTH is 'virtually uncrackable'?

px43 1 day ago 0 replies      
This is what they call a 1024-bit Vernam cipher in the movie "Swordfish".
seanhandley 23 hours ago 1 reply      
XOR isn't insecure per se. What I'd like to know is how this "random string" is created in the first place.
ballfrog 20 hours ago 0 replies      
From their website:

Fort Knox-level security.

Video content is protected with our BrainTrust™ DRM, and is unplayable except by a legitimate owner. All aspects of the platform feature a near-ridiculous level of security.

California Finds Economic Gloom Starting to Lift nytimes.com
24 points by 001sky  6 hours ago   27 comments top 3
aresant 5 hours ago 4 replies      
The housing & job reports feel right and are welcome news.

But did the NYTimes just point to California's current budget as a sign of our State's "economic stability"?

They quote that the "California Legislative Analyst's Office projecting . . . . [that] California might post a $1 billion surplus in 2014"

That is such an irresponsible thing to publish - a "surplus" makes it sound like the government is totally on top of the situation here and a model for success.

The reality is that CA would be going off the @!@W$! rails right now in 2012 if voters hadn't just passed Prop 30.

Prop 30 was an EMERGENCY, retroactive (from Jan 1 2012), and "short-term" (7 year) tax on high income earners + a moderate increase in sales tax which is going to raise $6B a YEAR to pull CA's ass out of the fire.

So while the gloom may be lifting, I remain pessimistic that our state gov't has any long-term solutions to their continued budgetary missteps.

daniel-cussen 5 hours ago 2 replies      
I found Romney's comparison of California to Greece mostly fair. Saying that California is like a balmy Mediterranean country with a rich culture and history is not pejorative. That it has giant public sector workforces and fiscal problems is perhaps a pejorative comparison, but one that is borne out by the color of the ink on California's balance sheet.
w1ntermute 6 hours ago 4 replies      
Watch as the Democrats (now dominating the State Legislature) take all the credit for this.
Ceasefires Don't End Cyberwars cloudflare.com
37 points by spacesuit  7 hours ago   11 comments top 3
ams6110 5 hours ago 1 reply      
Ceasefires usually don't end real wars either.
codexon 3 hours ago 1 reply      
CloudFlare's goal is to power a better Internet. While that will inherently mean we will increasingly find ourselves in difficult situations like this one, we will continue to be guided by the principle that it is not our role to decide whether one idea or another is correct, but instead to ensure that all ideas can find equal footing online.

I wonder what jgrahamc has to say about CloudFlare shielding sites used to DDoS people.


isalmon 5 hours ago 2 replies      
It would be nice if CloudFlare could provide a couple of examples of what sites exactly are being attacked.
Signal-blocking wallpaper stops Wi-Fi stealing whatsnext.blogs.cnn.com
3 points by xpressyoo  1 hour ago   2 comments top 2
EiZei 7 minutes ago 0 replies      
"The metapaper also advertises itself as a healthy alternative, since it claims to reduce a person's exposure to electromagnetic waves. Scientists behind the product point to studies that say the overuse of wireless technology could cause harmful health effects."


dexter313 12 minutes ago 0 replies      
Does it block cell phones?

3G and 4G have similar frequencies to wi-fi.

5 essential tips for customer care people dealing with technical queries troyhunt.com
3 points by mopoke  1 hour ago   1 comment top
sayhitofrank 50 minutes ago 0 replies      
It always feels like sitting on a "timebomb"...
       cached 28 November 2012 11:02:01 GMT