hacker news with inline top comments    24 Apr 2011  Best
Amazon's $23,698,655.93 book about flies michaeleisen.org
582 points by rflrob  1 day ago   120 comments top 25
siegler 1 day ago  replies      
A similar thing happened to me as a seller. I saw that one of my old textbooks was selling for a nice price, so I listed it along with two other used copies. I priced it $1 cheaper than the lowest price offered, but within an hour both sellers had changed their prices to $.01 and $.02 cheaper than mine. I reduced it two times more by $1, and each time they beat my price by a cent or two. So what I did was reduce my price by a few dollars every hour for one day until everybody was priced under $5. Then I bought their books and changed my price back.
lsc 1 day ago 3 replies      
oh man. reminds me of the days when I sold books on amazon and half.com. I wrote a script that took the 'nickel less than the other guy' approach.

These things are /wonderful/ when it comes to making the market more efficient. Really, though, the shipping costs eat up most of the efficiency. Amazon needs an easy way to say "I want these 10 books, used. Find me the lowest price (including shipping)" - the idea is that the more books you could buy from one seller, the less shipping friction would be involved, but amazon isn't really set up that way, which makes it much less efficient for the low end used books.
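A "nickel less than the other guy" script like the one described above is simple enough that it's easy to see how two of them interact. Here's a toy simulation (all numbers hypothetical, no real marketplace API) of the resulting race to the bottom, using a penny undercut and integer cents to keep the arithmetic exact:

```python
def undercut(competitor_price_cents, floor_cents=100):
    """Price one cent below the competitor, but never below a floor."""
    return max(competitor_price_cents - 1, floor_cents)

a, b = 2500, 2600            # starting prices in cents: $25.00 and $26.00
first = None
while a != b:                # the bots only agree once both hit the floor
    a = undercut(b)
    b = undercut(a)
    if first is None:
        first = (a, b)

print(first)    # (2599, 2598): each bot one cent under the other
print((a, b))   # (100, 100): the race ends at the floor
```

With a multiplier above 1 instead of a penny undercut, the same loop runs in the other direction, which is the article's $23M book.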

hellasbook 2 hours ago 0 replies      
I have been an online book dealer for the past 10 years. Originally the rules about having actual possession of the goods offered for sale were quite strict.

Then came the megalisters - agencies with software that listed all books in print and a contract with publisher's warehouses/library supply services/factors to arrange delivery of in print books to customers.

Then came the "phantom listers" - agencies with software that spiders through the listings of legitimate online book dealers looking for titles with few (or no) copies listed on Amazon. They then list them at inflated prices. Some of these have more than one alias on Amazon.
I think of them as the piranhas of the Amazon, in the way that they consume the smaller fish.

If and when they receive an order, they then try to purchase a copy from one of the "real" owners of the book. [Some are cheeky enough to request a "trade discount" on the dealer's price as well.] Small businesses listing on Amazon and abebooks.com must keep their "fulfillment" rating high, so they can ill afford to refuse to supply such parasites. In fact, when I once requested that I NOT be obliged to support what I think of as an unethical selling and pricing model, I was told

1] that the buyer was a valued long time customer
and that
2] I could approach the agency in question and - at THEIR discretion - request that they stop ordering books from me.

Since AZ and ABE take out some % of the list price for the books sold on their site (as well as monthly listing fees, closing fees and sometimes a portion of the shipping charge), they can and do make a much greater profit from the high-priced phantom listers than from the legitimate, reasonably priced offerings of small book dealers; consequently they are not very interested in aggressively policing the situation.

Now there is "Monsoon" and other software to automatically adjust prices online. In practice it most often works to REDUCE the price below the lowest price already listed - a rush to the bottom where books get listed for mere cents.

One tactic I recommend for searching for books and seeing the widest range of options is to use "addall.com". It comes in 2 flavours, "New" and "Used", and searches around 30 book listing services. You can compare prices (ascending or descending) and also see the kind of dealers who sell the books.

Many listings have boilerplate descriptions ("we ship fast", "books may have..." etc.) which indicate that no human may have examined the object being listed. Other listings have descriptions of content and condition that clearly demonstrate that it is a "real" book from a "real" dealer.
When in doubt, look for THAT dealer's OWN website to ask questions and get personal service.

Qz 1 day ago 2 replies      
A quick visit to amazon:


Multiple books priced upwards of $600m. One has a Kindle edition for $9.99.

anigbrowl 1 day ago 1 reply      
This explains a lot. I wrote a small book back in the 90s that had few sales and was unlamented when it fell out of print. A couple of years ago I saw it as the subject of an amazon.com sidebar ad and was astonished to find that it was listed at $100 or thereabouts, and couldn't imagine how or why it might have become collectible. The idea that this was the result of competing pricing algorithms makes a great deal more sense to me.

Now, if you'll excuse me I'm off to weep over my broken dreams of belated celebrity.

jonnathanson 14 hours ago 0 replies      
Very funny and enlightening analysis. I've seen the fringes of algorithmic pricing a few times in other categories -- especially out-of-print DVDs or VHS tapes.

Example: don't ask me why I enjoy the movie My Dinner With Andre, but for whatever reason, I do. A few years back, I wanted to buy a copy of the DVD and checked Amazon. It turned out that the DVD was long out of print, and new copies were going for $400 apiece. I figured this price was high, but nevertheless, it was nothing that couldn't be explained by actual rarity and supply/demand metrics. Rare DVDs have been known to climb into the hundreds of dollars, especially if new and unopened. But I came back a few days later, and the price was $1932.78 (or something unusual to that effect). The next day, $3500 and change. Which struck me as odd, to say the least. Was some nefarious Goldman trader attempting to fix the market for Wallace Shawn's back catalog?

Needless to say, I didn't love the movie quite that much. So I passed. These days, Criterion has released a new DVD version of the film, and accordingly, everything's dropped back down to about $30 per copy -- including the price of the original, OOP version. I feel sorry for anyone who actually might have taken the plunge at $400, which is not out of the realm of possibility. That's a lot of money for a film about two guys having dinner.

benvanderbeek 1 day ago 1 reply      
I work at a mid-sized Amazon 3rd-party seller. We reprice automatically. With thousands of SKUs there's no other way to be competitive. There are many layers to consider though; you definitely can't let your whole catalog auto-reprice. Usually sellers just focus on their top X% of SKUs and let the rest auto-calculate.

I wonder if we have a price ceiling setup...?

Edit: Yes we have a floor and ceiling. I have no idea what I was thinking.
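The floor-and-ceiling guard mentioned above is essentially a clamp around the repricing rule. A minimal sketch, with hypothetical thresholds and prices in integer cents:

```python
def reprice(competitor_cents, floor_cents, ceiling_cents, undercut_cents=1):
    """Undercut the competitor by a penny, clamped to [floor, ceiling]."""
    return min(max(competitor_cents - undercut_cents, floor_cents),
               ceiling_cents)

print(reprice(1999, 800, 6000))        # 1998: normal penny undercut
print(reprice(1, 800, 6000))           # 800: a $0.01 rival can't drag us below cost
print(reprice(2369865593, 800, 6000))  # 6000: a phantom lister can't drag us to the moon
```

Without the outer `min`/`max`, the function degenerates into exactly the runaway behavior the article documents.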

jaysonelliot 1 day ago 1 reply      
I can't begin to describe how much this pleases me. It's straight out of a William Gibson novel, but happening right now.

God, I love living in The Future.

geekfactor 1 day ago 1 reply      
Hmmm. I wonder if this would work in reverse? Suppose I want to buy a book that has several new copies for sale on the Amazon Marketplace for say $60. If I post a new copy for sale for $10, perhaps one of these algorithms would kick in to reduce the price?
kbrower 1 day ago 1 reply      
I have a book with 1 sale and 6 sellers including amazon. They definitely do not have the book.
Jach 1 day ago 2 replies      
I'm interested in where the 1.270589 number comes from. sqrt(golden ratio) is close: 1.27201965
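A quick check against the two multipliers reported in the linked post (profnath repricing at 0.9983 of the competitor, bordeebook at 1.270589): the square root of the golden ratio is close to but not equal to the second constant, and together the two compound fast. The $35 starting price below is a guess purely for illustration:

```python
import math

phi = (1 + math.sqrt(5)) / 2        # golden ratio, ~1.6180339887
print(math.sqrt(phi))               # ~1.2720196..., close to but not 1.270589

# The two multipliers reported in the article:
profnath, bordeebook = 0.9983, 1.270589
cycle = profnath * bordeebook       # combined growth per repricing cycle
print(cycle)                        # ~1.26843, i.e. ~27% growth per cycle

# From a guessed $35 list price, how many cycles to reach $23,698,655.93?
cycles = math.log(23_698_655.93 / 35) / math.log(cycle)
print(round(cycles))                # 56 cycles, roughly two months of daily repricing
```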
derrida 21 hours ago 0 replies      
It's all innocent when it is Amazon. But I am sure there must be similar phenomena taking place in the financial markets, where the algorithms that determine pricing are more complicated than multiplying by a simple factor.
originalgeek 22 hours ago 2 replies      
> My preferred explanation for bordeebook's pricing is that they do not actually possess the book.

There's only one problem with this theory. The Amazon TOS requires sellers to have inventory before they can sell inventory. And they will ban you for life for violating that.

cookingrobot 21 hours ago 3 replies      
Automatic pricing is super common these days, even on more expensive things than books. I'm actually working on a startup called shopobot.com that wants to help people use the volatility to their advantage. We see rapid $50-100 swings on things like SLRs, so it's actually pretty significant.

Ironically our site is down right now because we're based on Amazon's web services. Karma? :)

russellallen 1 day ago 1 reply      
Sanity checks, people. Sanity checks.
keeran 1 day ago 1 reply      
I'm surprised the author (and no-one here) hasn't worked out the starting date of the automated price war based on the original RRP of the title :)
JonnieCache 1 day ago 1 reply      
It's gonna be quite a giggle when this happens with the stock market.
atakan_gurkan 17 hours ago 0 replies      
Pricing something high can actually increase sales. There is an example in Cialdini's book "Influence", near the beginning of chapter 1. A jewelry store owner was trying to get rid of some items, instead of marking the price down, she marked them up (by a factor of 2), and they were rapidly gone.

It is a nice book to read, and will probably make you feel uneasy when you realize the abundance of manipulation tactics around us.

pitdesi 1 day ago 3 replies      
Someone should try selling a book that there are limited copies of (ie one that can't be procured easily) to see what happens if someone buys from a seller who doesn't have it.
guynamedloren 1 day ago 2 replies      
[Disclaimer: I've never looked into this before and I have absolutely no idea how the Amazon reseller market works, so this might be impossible or prohibited.]

What's stopping somebody from relisting every product that already exists in the reseller market, but raising the price a few bucks? Even if people are more likely to buy the lower-priced product, the seller has nothing to lose since they don't actually have any products on hand or skin in the game. Better yet, they could relist the products for less than the competitors and jack up the shipping prices to skim a few bucks of profit off the top of each sale (this would work if somebody sorted by item price only, instead of price + shipping).

Figs 21 hours ago 1 reply      
I've seen lots of books on Amazon priced at $0.01 or $0.02 before. Maybe a result of similar processes? (I always assumed they were some kind of scam, since why the hell would someone sell a book so cheap?)
maxxxxx 8 hours ago 0 replies      
Sometimes you can use this for trade-ins to Amazon directly. Last week I traded in a book that had several copies listed for $100.
Amazon took it for $27. On other sites you can get it for $24.
jeffdavis 22 hours ago 0 replies      
"they have a huge volume of positive feedback"

Is that a pun or something?

njharman 1 day ago 0 replies      
I see this all the time. On Ebay too.

Hmmmmm, maybe I'm spending too much time trolling for things to buy?

Amazon Web Services are down amazon.com
549 points by yuvadam  2 days ago   332 comments top 45
timf 2 days ago 5 replies      
Some quotes regarding how Netflix handled this without interruptions:

"Netflix showed some increased latency, internal alarms went off but hasn't had a service outage." [1]

"Netflix is deployed in three zones, sized to lose one and keep going. Cheaper than cost of being down." [2]

[1] https://twitter.com/adrianco/status/61075904847282177

[2] https://twitter.com/adrianco/status/61076362680745984

yuvadam 2 days ago  replies      
Current status: bad things are happening in the Northern Virginia datacenter.

EC2, EBS and RDS are all down on US-east-1.

Edit: Heroku, Foursquare, Quora and Reddit are all experiencing subsequent issues.

asymptotic 2 days ago  replies      
Amazon's EC2 SLA is extremely clear - a given region has an availability of 99.95%. If you're running a website and you haven't deployed across more than one region then, by definition, your website will have 99.95% availability. If you want a higher level of availability, use more than one region.

Amazon's EBS SLA is less clear, but they state that they expect an annual failure rate of 0.1-0.5%, compared to commodity hard-drive failure rates of 4%. Hence, if you wanted a higher level of data availability you'd use more than one EBS volume in different regions.

These outages are affecting North America, and not Europe and Asia Pacific. That's it. Why is this even news? Were you expecting 100% availability?
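The arithmetic behind "use more than one region" is worth spelling out. Assuming region failures are independent (a real caveat: this outage suggests they sometimes aren't), availability compounds like this:

```python
def combined_availability(per_region, regions):
    """Probability that at least one of `regions` independent regions is up."""
    return 1 - (1 - per_region) ** regions

single = 0.9995                            # stated per-region availability
print(combined_availability(single, 1))    # 0.9995
print(combined_availability(single, 2))    # ~0.99999975

hours_per_year = 365 * 24
print((1 - single) * hours_per_year)       # ~4.38 hours of expected downtime/year
```

Two independent regions take expected downtime from about four and a half hours a year to seconds; the catch is that "independent" is doing a lot of work in that sentence.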

ig1 2 days ago 1 reply      
A couple of hours into the failure, and no sign of coverage on Techcrunch (they're posting "business" stories though). It shows how detached Techcrunch has become from the startup world.

Edit: I tweeted their European editor about it and he's posted a story up now.

kylec 2 days ago  replies      
I guess this is one Reddit outage that can't be blamed on poor scaling
mcritz 2 days ago 0 replies      
This feels the same way as hearing that the whole Internet just got shut down.
mtodd 2 days ago 2 replies      
Why is ELB not mentioned at all on the Service Health Dashboard?

We're experiencing problems with two of our ELBs, one indicating instance health as out of service, reporting "a transient error occurred". Another, new LB (what we hoped would replace the first problematic LB), reports: "instance registration is still in progress".

A support issue with Amazon indicated that it was related to the ongoing issues and to monitor the Service Health Dashboard. But, as I mentioned before, ELB isn't mentioned at all.

dsl 2 days ago 3 replies      
4/21/2011 is "Judgement Day" when Skynet becomes self aware and tries to kill us all. http://terminator.wikia.com/wiki/2011/04/21

I am just a little freaked out right now.

jws 2 days ago 1 reply      
Silver lining: Hopefully I can test my "aws is failing" fallback code. (my GAE based site keeps a state log on S3 for the day when GAE falls in a hole.)
helium 2 days ago 2 replies      
I just launched a site on Heroku yesterday and cranked the dynos up in anticipation of some "launch" traffic. Now I can't log in to switch them off. Thanks EC2, you owe me $$$s
powdahound 2 days ago 0 replies      
I'm seeing 1 EBS server out of 9 having issues (5 in one availability zone, 4 in another). CPU wait time on the instance is stuck at 100% on all cores since the disk isn't responding. Sounds like others are having much more trouble.
jedberg 2 days ago  replies      
Yes, they are. :(
espeed 2 days ago 0 replies      
Quora is down, and evidently "They're not pointing fingers at EC2" --
http://news.ycombinator.com/item?id=2470119 -- I was going to post a screen shot, but evidently my Dropbox is down too.
paraschopra 2 days ago 2 replies      
http://venuetastic.com/ - feel bad for these guys. They launched yesterday and down today because of AWS. Murphy's law in practice.
smackfu 2 days ago 1 reply      
So when big sites use Amazon Web Services for major traffic, do they get a serious customer relationship? Or is it just generic email/web support and a status page?
potomak 2 days ago 2 replies      
Quora says: "We'd point fingers, but we wouldn't be where we are today without EC2."
alexpopescu 2 days ago 0 replies      
Instead of enumerating who's down, I'd be more interested to hear about those that survived the AWS failure. We could learn something from them.
mathrawka 2 days ago 3 replies      
I think this is a good example of how the "cloud" is not a silver bullet to making your site always up. AWS provides a way to keep it up, but it is up to each developer to ensure that they are using AWS in a way to make sure their site can handle problems in one availability zone.

I think we will see more of a focus from big users of AWS about focusing on how to create a redundant service using AWS. Or at least I hope we will!

olegp 2 days ago 2 replies      
Assuming the problem is indeed with EBS, I would say this should be a warning sign to anyone considering going with a PaaS provider, which Amazon is quickly becoming, instead of an IaaS provider like Slicehost or Linode.

The increased complexity of their offering makes it more likely that things will break, leaving you locked in.

I did a 15 minute talk on the subject, which you can check out here: http://iforum.com.ua/video-2011-tech-podsechin

EDIT: here are the slides if you can't be bothered watching the video http://bit.ly/eqDNei

dmuth 2 days ago 0 replies      
Holy crap. An Amazon rep actually just posted that SkyNet had nothing to do with the outage:


smtm 2 days ago 2 replies      
AWS/S3 has become the new Windows - great SPOF to go for if you want to attack. This space needs more competition.
oomkiller 2 days ago 1 reply      
They better start writing their explanation now. Multiple AZ's affected?
frekw 2 days ago 0 replies      
It's a bit ironic that Amazon WS has become a SPoF for half the internet.
ig1 2 days ago 0 replies      
Given that Heroku's parent company (Salesforce) owns a cloud platform, it seems kinda inevitable now that Heroku will sooner or later switch back-ends (or at least use both)
antonioe 2 days ago 0 replies      
Had our blog go down. Didn't realize it was AWS-wide... did a reboot. Now I am in reboot limbo. Put an urgent ticket into Amazon. They just said they are working urgently to fix the issues. Let's see how long this goes.
tybris 2 days ago 0 replies      
All hosting services go down occasionally. If you want to stay up you need to build a fault-tolerant distributed system that spans multiple regions and potentially multiple providers.

Also, Amazon should fix EBS.

vnorby 2 days ago 0 replies      
From EngineYard: "It looks like EBS IO in the us-east-1 region is not working ideally at this point. That means all /data and /db Volumes which use EBS have bad IO performance, which can cause your sites to go down."
jjm 2 days ago 1 reply      
Everyone talks about SLAs, but I believe they don't consider the fact that the EBS vols are still up (not on fire, and available) but are phantom-writing, or that the network is queued up the wazoo so writes don't even happen in the timely manner you'd expect.
ck2 2 days ago 0 replies      
So what percentage of the top 1000 sites are now crippled by this?
swedegeek 2 days ago 0 replies      
In case anyone is late to the party and missed the non-green lights on the AWS status dashboard, here is the page as of about 9:30 EDT...


piramida 2 days ago 0 replies      
Today, April 21st 2011, according to the "Terminator", Skynet was launched... No wonder AWS is down
antonioe 2 days ago 0 replies      
1:23EST and Reddit is back up. Quora/4SQ still down. My site still down.
mathrawka 2 days ago 3 replies      
So do we get some credit on our AWS accounts? I haven't really read their SLA for EC2.
dmuth 2 days ago 1 reply      
Being unable to get much done here, my co-workers have found other things to do in the office: http://www.youtube.com/watch?v=u1-oGxDHQbI :-P
xlevus 2 days ago 1 reply      
Ruh Roh. The service I'm using to acquire accommodation seems to be dependent on AWS. Guess I'm going to be homeless tomorrow if it doesn't get fixed. :X

Note to self. Don't ever build a service reliant on AWS.

singlow 2 days ago 0 replies      
It's definitely a limited outage. My three instances seem to have operated all night with no problem. Two of them are EBS instances.
pextris 2 days ago 0 replies      
reddit.com is down, but luckily http://radioreddit.com is not.
greaterscope 2 days ago 1 reply      
Wish we were able to download our EBS snapshots, which are supposedly hosted on S3. What does everyone else do?
hcentelles 2 days ago 0 replies      
It seems like availability zone us-east-1c is working; I can launch an EBS-backed instance right now.
kennethologist 2 days ago 0 replies      
Thankfully, my major clients are using the Asia EC2 Instances!
hendi_ 2 days ago 4 replies      
Yay for relying on the cloud \o/
marjanpanic 2 days ago 0 replies      
Amazon down - just one more reason to try out BO.LT and their amazing CDN and page sharing services...

Just launched:

jwr 2 days ago 2 replies      
It's really mostly EBS failures, so the title is overly dramatic. And EBS has been known to have issues.
Jun8 2 days ago 0 replies      
This is like witnessing your parents having sex when you're a kid: you sort of knew this was a possibility, but it is a devastating blow to your belief system nevertheless.

The number of services I use that depend on Amazon is amazing. They have really become the utility company of the Web.

Hashify.me - store entire website content in the URL hashify.me
543 points by kevinburke  4 days ago   122 comments top 40
dpcan 4 days ago 3 replies      
I see this as a remarkable answer to the problem of needing to view a cached version of the website.

For example, what if a URL were posted to Hacker News, but after the URL was a ?hashifyme=THEHASH, where THEHASH was the hash of the website linked to.

This way, if the URL could not be loaded because the server load was too high, you could just forward the URL to Hashify.me and the cached plain text of the website would still be readable.

Boom, instant cache of the website content stored right in the URL!!!!

kwantam 4 days ago 0 replies      
Last week on a whim I whipped up a URL shortener that expires the forwarded URL after one week[1]. Using that plus hashify, you can essentially make expiring web pages.

[1] pygm.us

danielsoneg 4 days ago 0 replies      
Oh, Bit.ly's gonna _Love_ you guys.

Seriously, though - awesome hack.

nrkn 3 days ago 2 replies      
Oh dear. This is like some kind of sick, twisted Rube Goldberg machine...

Take entire text of Bram Stoker's Dracula

Chunk into 123 parts

Data URI encode each part

Generate a TinyURL link for each data uri (thanks for having an API guys)

Embed the TinyURL links in the Hashify Markdown editor using object elements

Curses! Even just the objects take it over the limit. It has to be done in two parts. Create two Hashify pages.

Part 1:


Part 2:


Works in Firefox and Safari. Chrome, Opera and IE9 don't like it.

Groxx 4 days ago 2 replies      
A fantastic abuse of technology. That's one heck of a URL.
gojomo 4 days ago 1 reply      
With a name like 'Hashify', I'm surprised they don't also offer the option of putting the content into the '#fragment' portion of the URL. Then, not even the hashify.net site would need to receive and decode the full URL; they'd just send down a small bit of constant Javascript that rebuilds the page from the #fragment.
antimatter15 4 days ago 1 reply      
I hacked together an encrypted (aes 256) read/write "database" once with the bitty API as the persistence backend.

However, this site disappoints me; it doesn't seem to do anything other than what a data URL can do, except it's vulnerable to downtime because of a centralized website.

Edit: for those of you unfamiliar with what a data URL is: you can store an HTML or image document using a URL like data:text/html;base64,hashifystuffhere

powera 4 days ago 2 replies      
This is pointless. It's impossible to create two pages that link to each other, for one. Also, as noted, most browsers won't allow URLs greater than 2k in size.
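Worth quantifying: base64 inflates content by a third, so the ~2k limit mentioned above is tighter than it looks. A rough budget (the 2,000-character figure and the prefix length are assumptions for illustration):

```python
import base64

URL_LIMIT = 2000                    # commonly cited IE URL-length limit
PREFIX = len("http://hashify.me/")  # 18 characters spent before the payload

budget = URL_LIMIT - PREFIX         # characters left for base64 data
max_payload = (budget // 4) * 3     # every 4 base64 chars carry 3 payload bytes
print(max_payload)                  # 1485 bytes of actual document

doc = b"x" * max_payload
encoded = base64.urlsafe_b64encode(doc).decode("ascii")
print(len(encoded) + PREFIX <= URL_LIMIT)  # True: fits under the limit
```

So under a 2k limit you get roughly 1.5 KB of document, before any Markdown-to-HTML expansion.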
shazow 4 days ago 1 reply      
Here's a Python shortcut:

Instead of...

    from base64 import b64decode

You can do...

    foo.decode('base64')

Encoding works too. As well as zip (foo.encode('zip')).

iamwil 4 days ago 0 replies      
I wonder if the equivalent of a quine for this is possible.
paolomaffei 4 days ago 2 replies      
"A hash function is any well-defined procedure or mathematical function that converts a large, possibly variable-sized amount of data into a small datum"

Hashify is not really a hash, is it?

jules 4 days ago 0 replies      
So in effect, you're using bit.ly as a webhost. Url shorteners might not be completely useless after all.
pedalpete 4 days ago 1 reply      
Clearly this is awesome. I'm curious as to what led you to build it? Understanding that you weren't solving a 'problem', you've created something really compelling here.

Care to give a peek into how you came up with it?

riobard 4 days ago 1 reply      
What about gzipping the content first?
sbierwagen 3 days ago 1 reply      
So it's a reimplementation of data URIs, except it depends on two different sites being up and responding to replies, so it lacks even the tiny amount of usefulness data URIs have?

I can't think of a single use case where you would go, "Ah ha! Hashify would work perfectly for this!"

bct 4 days ago 0 replies      
What are the differences between this and a data: URI? Just the shortener and that it can use out-of-band Javascript for the editor?
aj700 4 days ago 0 replies      
Would be a good text "host" but needs clones, so that when it disappears in a few years I can still easily convert my URLs back into the document therein. That's the one problem these text host sites have: they never last. This gets around it by hosting nothing, merely converting, but still.

And using bit.libya - I don't trust it.

Isn't this also somewhat censorship-resistant? Since the hashify URL, without its bitly, can be put anywhere on the web that is writable, multiple copies can be made available in a covert way.

kqueue 4 days ago 1 reply      
This is going to break in cases where the request line grows above 8k-16k. Many browsers/proxies implement limits on headers/request lines, for good reasons.

It's a very cool idea though.

mumrah 3 days ago 0 replies      
Alternatively, you can use the Data URI Scheme like so:

  >>> "<h1>Hello, World!</h1>".encode('base64').strip('\n')

Paste this into your location bar: data:text/html;base64,PGgxPkhlbGxvLCBXb3JsZCE8L2gxPg==

Works in Chrome 10.0
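The snippet above is Python 2; `str.encode('base64')` was removed in Python 3. An equivalent using the `base64` module, for anyone following along today:

```python
import base64

# Build the same data: URI as the Python 2 one-liner above.
html = b"<h1>Hello, World!</h1>"
encoded = base64.b64encode(html).decode("ascii")
print("data:text/html;base64," + encoded)
# data:text/html;base64,PGgxPkhlbGxvLCBXb3JsZCE8L2gxPg==
```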

Steve0 3 days ago 0 replies      
This reminds me of the old tiny-url file system: http://tech.slashdot.org/story/05/10/25/0350222/TinyDisk-A-F...
juskotze 3 days ago 0 replies      
Embedded images work. What would this be the solution for? http://hashify.me/aW1hZ2Ugb3ZlciBoZXJlIDxpbWcgc3JjPSdkYXRhOm...
EGreg 3 days ago 0 replies      
So basically, the URLs are files, and copying/pasting them is like copying/pasting encoded data. It is the same as data: urls actually, except maybe for browser security, which is pretty irrelevant for this anyway.

Actually, now I am wondering if an iframe src could be a data: url in browsers. If so, that could be interesting! Showing content without hitting the server. Probably not though, because of cross-domain security again. Any ideas?

aneth 3 days ago 0 replies      
How long before the bit.ly namespace is exhausted?
hElvis 4 days ago 1 reply      
I am coding the same thing right now (I began a week ago). I also had the idea to use bit.ly as the shortener (because of its API) and to make use of multiple shortened links to store the data. Right before looking at HN I was doing some research for a good JS compression algorithm.

On the one hand I am a bit disappointed (that I am too late), but on the other hand hashify.me is made far better than I could make it. Great realisation.

rudenoise 3 days ago 1 reply      
Very cool idea, it's something I can think of a bunch of cool uses for. Will definitely be looking into it.

One downside came when I tried bookmarking with Delicious (hit the url length limit, truncation would break it). But great for shorter content.

mlinksva 4 days ago 3 replies      
Nice hack, though odd name given that no hashing occurs.
Enideo 3 days ago 0 replies      
A similar technique is already used on the website http://www.wondersay.com
Here the URL path is the text to animate and the fragment hash stores the settings. Bitly is also used to hide the contents of the URL (and hence the messages).

This is clever, in that the entire content of the website is not stored in a database, but in external links. Obviously the biggest problem with this technique is having bots crawl your site, so Google's #! convention is used.

blantonl 4 days ago 0 replies      
I could see this as a very useful implementation for HTML5/Mobile Web sites.

Consider the user experience for the target site on a mobile platform. You have already loaded the site on your mobile device before even taking action, so when you click the link the response is much faster than requesting the site at the click.

micheljansen 4 days ago 0 replies      
At my workplace, this completely freaks out our corporate proxy, so no go :(
yalogin 3 days ago 0 replies      
What is the purpose of this? The URL is already pointing to the store - the actual site which hosts the page. Instead now we have a shortened URL which stores the document. So they just took away the distributed nature of the URL and put it all in one store (bitly).
theoa 4 days ago 1 reply      
Hashify.me seems to be overloaded for the moment. Nevertheless a brilliant and delightful concept!
DanWaterworth 3 days ago 0 replies      
This is sweet. It would actually be possible to create a database using bitly entirely in javascript. It would be read-only for clients and read/write for webservers. You could even make it ACID compliant. I might have a go.
brndnhy 4 days ago 0 replies      

Apache responds with an HTTP 414 (Request-URI Too Large) once the URI reaches around 8K in length.

Default limits exist in several load balancers as well.

prassarkar 3 days ago 0 replies      
Hashify + pen.io and you've got a great service.

Hashify gets a pretty UI. pen.io removes the need for a DB.

measure2xcut1x 3 days ago 1 reply      
The client side shorten request to bitly.com exposes the bitly credentials.
strayer 3 days ago 0 replies      
Isn't it a "page" or "document", and not a "website"?
petegrif 4 days ago 0 replies      
extremely cool
codejoust 4 days ago 0 replies      
Finally, an easy and quick way to decode base64 hashes.
IPhones and 3G iPads log your location in an unencrypted file on the device oreilly.com
408 points by petewarden  3 days ago   188 comments top 37
allwein 3 days ago  replies      
So after doing a quick analysis of the data on my iPhone, I've come to the conclusion that this isn't a huge issue at all.

First, I'll start with the WiFi data (WifiLocation table):
Among the information captured is MAC, Timestamp, and Lat/Long. I have a total of 118,640 records in my table. I did a "SELECT DISTINCT MAC FROM WifiLocation", and got... 118,640 records. This tells me that it's not "tracking my every move" via Wifi location since there's a single entry for each MAC. The question might be, is it updating the Timestamp when I'm near a specific Wifi Network? My guess is no. I did the backup and analysis this morning, April 20th. Yet the last entries in my database are from April 16th. This tells me that it's not an always on tracker and that it's not updating timestamps.

Next, I looked at the CellLocation table:
The same thing held true with this table. The last entry on my phone was from April 16th. Also, I have 6300 entries in my CellLocation table. I decided to start restricting the precision of the Lat/Long to see if there were duplicates that would indicate "tracking". At 5 decimal points, there were no duplicates. At 4 decimals, there were a handful that had 2 dups. At 3 decimals, there were more dups, with the most being 6. At this point I still had 5672 uniques. At 2 decimals, the most had 89 and I had 2468 uniques. At 1 it really went down, obviously, and I was down to 253 uniques. The other thing I noticed was that there was no regular timing of entries, and that when there were entries, a large number of them had the same timestamp.

So based on my analysis, this isn't a feature that enables detailed tracking of a user. It will allow you to see if a user has been in a certain location the first time, but that's the extent of it. For instance, I could see that I made a trip to Washington DC in late October of last year. But you can't really tell my movements around my home town with any amount of precision. My assumption, like others, is that Apple is using this to enable easier use of Location based services. I assume (which I'm going to test), that whenever a user enables a Location Based app (Google Maps, FourSquare), iOS updates this database with all local cell towers/wifi locations and the Latitude/Longitude. The more comprehensive the local database is, the quicker/easier it is for Location Based Services to help pinpoint a users location. Instead of waiting for GPS to spin up and get a satellite lock, it will be able to get a more accurate lock off of cell tower/wifi triangulation.
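allwein's duplicate-precision analysis can be reproduced with a few lines of SQL against the `consolidated.db` backup file. This sketch runs the same queries against an in-memory stand-in with the table and column names used above (the exact schema of the real file is an assumption); a real analysis would open the backup with `sqlite3.connect("consolidated.db")` instead:

```python
import sqlite3

# In-memory stand-in for the iPhone's consolidated.db, with hypothetical rows.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE WifiLocation (MAC TEXT, Timestamp REAL, "
           "Latitude REAL, Longitude REAL)")
db.executemany("INSERT INTO WifiLocation VALUES (?, ?, ?, ?)", [
    ("aa:bb:cc:01", 324000000.0, 38.8951, -77.0364),
    ("aa:bb:cc:02", 324000050.0, 38.8952, -77.0365),
    ("aa:bb:cc:03", 324086400.0, 40.7128, -74.0060),
])

# One row per MAC means each access point was logged once, not tracked over time.
total = db.execute("SELECT COUNT(*) FROM WifiLocation").fetchone()[0]
unique_macs = db.execute(
    "SELECT COUNT(DISTINCT MAC) FROM WifiLocation").fetchone()[0]
print(total == unique_macs)  # True

# Duplicates at reduced precision, mirroring the rounding experiment above.
rows = db.execute(
    "SELECT ROUND(Latitude, 1) AS lat, ROUND(Longitude, 1) AS lon, "
    "COUNT(*) AS n FROM WifiLocation GROUP BY lat, lon ORDER BY n DESC"
).fetchall()
print(rows[0])  # (38.9, -77.0, 2): two sightings collapse at one-decimal precision
```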

petewarden 3 days ago 4 replies      
I'll be checking in here for technical questions. The github direct link is http://petewarden.github.com/iPhoneTracker/
runjake 3 days ago 5 replies      
I didn't know this was news. I and other security researchers & law enforcement have known about it for a while. I assisted in one court case where the data was used as evidence.

I suspect the slick-looking iPhoneTracker app finally made it interesting to the media.

Edit: There was a similar deal on iOS 3 but it seemed more like a bug, not a feature. Data would be purged at some unpredictable interval. I can't recall the file path and don't have an iOS 3 device handy.

ceejayoz 3 days ago 1 reply      
> We're not sure why Apple is gathering this data, but it's clearly intentional, as the database is being restored across backups, and even device migrations.

My understanding is that all data and files are persisted in that manner. I'm not sure why they're implying this file has been singled out.

desigooner 3 days ago 1 reply      
It might not be directly related, but there was a news story on CNET [1] yesterday about cops in Michigan using a device from Cellebrite to download information from the phones of people they stopped for violations: contacts, phone logs, messages, photographs, and location history.

Does Apple's decision of having such information stored on the phone unencrypted make it easy for such devices? The device claims to subvert phone passwords though.


awakeasleep 3 days ago 2 replies      
I wish this wasn't presented as sinister.

The fact is that phone companies store all that data for EVERY cell phone, and it's always available to government agencies and divorce attorneys after a subpoena.

All this does is raise the common man's awareness, and possibly provide an afternoon of fun looking at your travel history. If you want your iPhone data kept secret, it prompts you to encrypt your backups when you first plug the phone in.

tomkinstinch 3 days ago 1 reply      
For those with jailbroken iPhones and SSH, the data can be accessed or copied directly. The information is stored in this file:

The file can be viewed with any ol' SQLite browser, and the location information is stored in the "CellLocation" table.

After using an iPhone 4 since release day, I have ~1400 entries.

pieter 2 days ago 0 replies      
Of course, Apple would know your location most of the time anyway, whether or not this file exists. You send the IDs of cell towers and wifi points to Apple, which returns the location of those points. Apple could always have been storing your location based on that interaction alone.

In fact, keeping a database like this could actually give Apple LESS information about your location, as you don't have to request a new location if you already have the info for all the nearby IDs in your database. I'm not sure if this actually happens, though.

The same, of course, can be said for any Android device and Google's A-GPS database; you have no guarantees that Google isn't logging your location whenever you're using location services.

justsee 3 days ago 0 replies      
The same community that would generally react very negatively to reports of a company storing passwords unencrypted in a database seems to effortlessly explain away Apple's approach to storing a significant amount of personal tracking data unencrypted, not on one pretty inaccessible server but on multiple easily-accessible devices. Fascinating.
chadp 3 days ago 1 reply      
Someone should make an app for jailbroken phones to disable this location logging (or delete it regularly).. many would likely pay for it!
gpambrozio 3 days ago 0 replies      
Apple has been known to collect this information for a while now [1] but storing all this information in a database should not be required for this.

If you think about how much information you have on your phone, if somebody has access to it or to your backups, I think your location history is the least of your problems. But I do agree that it should not store this information, encrypted or not...

[1] http://news.cnet.com/8301-31021_3-20010948-260.html

cube13 3 days ago 1 reply      
Could this be related to the MobileMe "Find my iPhone" feature that Apple added in 4.0?

If so, this is probably a non-story. I'd be interested if it still logs if Location Services are off, too.

ck2 3 days ago 2 replies      
BTW all cellular devices are recorded as they move through tower locations while they are on and police don't feel they need a warrant for such data, so your location is pretty much available without that file.
yardie 3 days ago  replies      
I can sort of understand the outrage, but I don't see the utility of it. Apps written for the App Store don't have access to this data without the permission of the user. And the only way an app would be allowed access to a file outside the sandbox is if the phone is jailbroken.

I'm not familiar with the ins and outs of iOS LocationManager, but it generally gives you the immediate coordinates at the time you request them and nothing more. As for why the database of locations? It's entirely possible they are using it for QoS.

As for access to device backups. If someone has unauthorized control of your desktop computer you have bigger problems.

mirkules 3 days ago 0 replies      
Funny, I had to go to a location without internet access, but where I periodically have to "mark" where I am so I can reference it later. I was about to write my own app for this purpose when I saw this post. To boot, I had my iPhone on me the last few days anyway, so this will definitely come in handy.

Despite the utility I got out of this, I wish we would be told about it...

tlear 3 days ago 1 reply      
This is perfect timing for promoting the PlayBook and BB security. I am sure RIM will miss the opportunity, though.
zenocon 3 days ago 1 reply      
About 6 months ago, I left an iPad on a plane. Unsurprisingly, all my attempts to recover it led to dead ends. I didn't have the MobileMe / Find My iPhone app installed on it. I understand privacy concerns, but I'd actually like it if Apple did have a copy of this db, and they allowed me to proxy through them / law enforcement so that I could locate this lost device. I know someone has it b/c I can see they were using my Netflix account.
aj700 3 days ago 0 replies      
Okay, but do the devices do this if 'Location Services' are turned off?

And I assume Cydia will now get an app that forces them off if the os ignores the setting.

serialx 3 days ago 0 replies      
Created a GPX file generator. Use it to convert the database into a GPX file format. Open it up with Google Earth.
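
A minimal converter along these lines fits in a page. The tags below follow the GPX 1.1 track format; the sample rows are invented stand-ins for data pulled from the phone's database:

```python
from xml.sax.saxutils import escape

def to_gpx(points):
    """Render (latitude, longitude, iso_time) tuples as a GPX 1.1 track."""
    trkpts = "\n".join(
        '      <trkpt lat="%.6f" lon="%.6f"><time>%s</time></trkpt>'
        % (lat, lon, escape(t))
        for lat, lon, t in points
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<gpx version="1.1" creator="location-db-export">\n'
        "  <trk>\n    <trkseg>\n" + trkpts + "\n    </trkseg>\n  </trk>\n</gpx>"
    )

# Two invented points; real rows would come from the CellLocation table.
print(to_gpx([
    (38.8897, -77.0352, "2011-04-16T12:00:00Z"),
    (38.9001, -77.0489, "2011-04-16T12:05:00Z"),
]))
```

The resulting file opens directly in Google Earth or any GPX-aware mapping tool.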


edw 3 days ago 5 replies      
Does no one else agree with me that this is awesome? I love being able to visualize my comings and goings. It's the story of the last year or so of my life, in colored dots.

I hope Apple doesn't respond to the "outrage" by no longer collecting this data. To a first order approximation, I am with Scott McNealy over in the "Privacy?! Get over it" camp:


As an aside, can real outrage even exist anymore in this age of the easy forum post or re-tweet or tumblr entry or Facebook post? And if it does, how do you identify it? And if you can identify it, what does it mean?

pgio 3 days ago 0 replies      
This was noted last September by C. Vance here:


Good detail on how and why it is generated.

plainOldText 3 days ago 0 replies      
I can imagine a jealous spouse saying to the other: "I love you so much, honey, and from now on I will do your iPhone backups. Just to make sure everything is safe for you." Then the jealous spouse downloads the iPhone tracker visualization tool: "So honey, where were you last night? Really? Don't you dare lie to me" :)
ljdk 3 days ago 0 replies      
In addition to cell tower and Wi-Fi hotspot locations, iTunes keeps a backup of all text messages and recent calls. A while ago I even made a small web app to chart it - http://datalysed.com/?p=130
nicklovescode 3 days ago 0 replies      
Apple is simply building a mandatory foursquare competitor, it's not a big deal guys
xsmasher 3 days ago 1 reply      
I assume Apple collects this data to pass back to skyhook so they can update their database of wifi-to-geolocation data. Must be nice to have millions of sensors roaming around collecting data for you.
jstn 3 days ago 1 reply      
Whether or not this is true, Apple should add something like File Vault to iOS. Encrypting your backups is redundant if you're already encrypting your whole home directory, but none of that matters if they have access to your unencrypted phone. Check out the police downloader devices the ACLU is investigating: http://www.aclumich.org/issues/privacy-and-technology/2011-0...
templaedhel 3 days ago 0 replies      
From what I understand, at least with Google, this data (the data sent anonymously) is used, among other things, for the Maps traffic feature. If a fair number of phones are traveling below the speed limit on a road, it can be assumed that traffic is bad on that road. Not sure if the Apple data is used for that, or if they get the traffic data from Google, but it is one legitimate use.
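
A toy version of that inference: given anonymized speed samples for a road segment, flag congestion when the median falls well below the limit. The 70% threshold is an invented parameter, not anything Google or Apple documents:

```python
def congested(speeds_kmh, speed_limit_kmh, threshold=0.7):
    """Flag a road segment as congested when the median sampled speed
    drops below a fraction of the limit. The 0.7 threshold is invented."""
    ordered = sorted(speeds_kmh)
    median = ordered[len(ordered) // 2]
    return median < threshold * speed_limit_kmh

print(congested([15, 20, 25, 18, 22], 60))  # crawling traffic -> True
print(congested([55, 58, 62, 60, 57], 60))  # free flow -> False
```

A real system would also have to filter out pedestrians, parked phones, and stop lights, but the aggregate idea is the same.
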
acrum 3 days ago 3 replies      
The simple solution is to select "encrypt backups" in your iTunes options. If my computer or phone got stolen, I'd have more important things to worry about than whether the thief can find a list of locations I've been. It's fun/interesting to see it mapped out, though.
kovar 3 days ago 0 replies      
Apple license agreement covering the collection of location data - http://pastebin.com/EdFJr6iU
polar 2 days ago 0 replies      
Not news at all to someone in the digital forensic community: https://alexlevinson.wordpress.com/2011/04/21/3-major-issues...
Limes102 3 days ago 0 replies      
When I read this I simply had to try it out for myself and quickly plot the data. It's a nice reminder of the places I have been over the past year.

I don't mind that Apple have saved the information on the device, what I mind is that they haven't given us an option to clear the logs or to actually visualise the data directly from the phone.

sambeau 3 days ago 1 reply      
If you have a 3G device the cell towers already know this and the data is already tracked. So what is new here?
dgulino 3 days ago 0 replies      
ramynassar 3 days ago 0 replies      
This has been happening for a long time, has it not?
jawngee 3 days ago 1 reply      
Jailbreak + cron + rm
BigZaphod 3 days ago 2 replies      
If the man really wants your location, he can just ask the phone company.
uptown 3 days ago 0 replies      
All of this from a device which prevents you from ever removing its battery.
"... so now I will jiggle things randomly until they unbreak" is not acceptable gmane.org
348 points by signa11  2 days ago   125 comments top 16
j_baker 2 days ago 5 replies      
I can't help being reminded of this hacker koan:

A novice was trying to fix a broken Lisp machine by turning the power off and on.

Knight, seeing what the student was doing, spoke sternly: "You cannot fix a machine by just power-cycling it with no understanding of what is going wrong."

Knight turned the machine off and on.
The machine worked.

coderdude 2 days ago  replies      
I always get a kick out of how Linus talks to people. It's so direct, and he never sugarcoats his arguments to make them easier to swallow. You could learn a lot about not bullshitting from that guy.
akent 2 days ago 0 replies      
It gets better later in the thread:

Yinghai, we have had this discussion before, and dammit, you need to understand the difference between "understanding the problem" and "put in random values until it works on one machine".

There was absolutely _zero_ analysis done. You do not actually understand WHY the numbers matter. You just look at two random numbers, and one works, the other does not. That's not "analyzing". That's just "random number games".

gregschlom 2 days ago 6 replies      
Am I mistaken in thinking that in this case, it might also be a cultural/communication problem?

When Yinghai answers:

  We did do the analyzing, and only difference seems to be:
good one is using 0x80000000
and bad one is using 0xa0000000.

he clearly didn't understand what Linus meant by "think and analyze".

I don't know Chinese culture well enough (assuming Yinghai is from China), but I am under the impression that it places more emphasis on results (fix the problem) than on process (understand why the fix works, in order to be sure we are not breaking something else).

Am I wrong?

vog 2 days ago 3 replies      
Once again, a great Linus Torvalds statement! I especially like the last paragraph, which has so much truth in it, and can be applied to small as well as large software projects:

Don't just make random changes. There really are only two acceptable models of development: "think and analyze" or "years and years of testing on thousands of machines". Those two really do work.

smcl 2 days ago 2 replies      
I think Linus forgets that at one point he too was inexperienced and liable to make these hit-and-hope fixes. I agree with his point in general, but the dickish manner in which it's delivered isn't particularly helpful ("Why don't we write code that just works?")
kemiller 2 days ago 0 replies      
I was all set to defend the "jiggle randomly" school of development with something along the lines of "sounds like someone who has never had an external deadline to worry about" but then I got really sad, and didn't.
benwerd 2 days ago 0 replies      
Well, there goes my development methodology.
rams 2 days ago 1 reply      
Programming by coincidence, as the PragProg says (someone has already posted the link). It's extremely common here in most Indian companies, especially with freshers.
hobbes 2 days ago 3 replies      
Well, that approach worked fine for the evolution of complex life-forms.
lindvall 2 days ago 0 replies      
Another aspect of this thread that I find very refreshing is that the fact that the previous implementation may have used magic numbers doesn't reduce anyone's desire to actually understand what is going on going forward.

The realization that a magic number was already being used could have caused one of two outcomes:

a) justification for replacing one magic number with another

b) realization that more research needed to be done to understand exactly what was going on in the first place

I appreciate seeing (b) as the option chosen. We should all strive to be this diligent.

mv1 2 days ago 0 replies      
I call this "poke it with a stick" debugging. Let's poke the code like this and see if it works now. This approach is so wrong yet so common it's infuriating.
ciupicri 2 days ago 0 replies      
That patch reminds me of a nouveau bug[1] I had a couple of weeks ago. According to one of the developers behind nouveau, it was caused by a new memory mapping/allocation scheme that broke things on systems with more than 4 GB of RAM. Some device memory (registers etc.) was mapped above 4 GB, and some devices don't like this. So he built a new kernel which reverted the change, and the problem was "miraculously" fixed.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=689825

enjoy-your-stay 2 days ago 0 replies      
Linus seems to be (correctly) railing against what is classic cargo cult behaviour.

Making changes until something seems to "work".

aangjie 2 days ago 0 replies      
Reminds me of some of my last work experiences...
AWS is down, but here's why the sky is falling justinsb.posterous.com
339 points by justinsb  2 days ago   80 comments top 15
mdasen 2 days ago 6 replies      
Amazon has probably correctly designed core infrastructure so that these things shouldn't happen if you're in multiple Availability Zones. I'm guessing that means different power sources, backup generators, network hookups, etc. for the different Availability Zones. However, there's also the issue of Amazon's management software. In this case, it seems that some network issues triggered a huge reorganization of their EBS storage which would involve lots of transfer over the network of all that stored data, a lot more EBS hosts coming online and a stampede problem.

I've argued vigorously (in previous comments) for using cloud servers like EC2 over dedicated hosting like SoftLayer. I'm less sure about that now. The issue is that EC2 is still beholden to the traditional points of failure (power, cooling, network issues). However, EC2 has the additional problem of Amazon's management software. I don't want to sound too down on Amazon's ability to make good software. However, Amazon's status site shows that EBS and EC2 also had issues on March 17th for about 2.5 hours each (at different times). Reddit has also just been experiencing trouble on EC2/EBS. I don't want this to sound like "Amazon is unreliable", but it does seem more hiccup-y.

The question I'm left with is what one is gaining from the management software Amazon is introducing. Well, one can launch a new box in minutes rather than a couple hours; one can dynamically expand a storage volume rather than dealing with the size of physical discs; one can template a server so that you don't have to set it up from scratch when you want a new one. But if you're a site with 5 boxes, would that give you much help? SoftLayer's pricing is competitive against EC2's 1-year reserved instances and SoftLayer throws in several TB of bandwidth and persistent storage. Even if you have to over-buy on storage because you can't just dynamically expand volumes, it's still competitively priced. If you're only running 5 boxes, the server templates aren't of that much help - and virtually none given that you're maybe running 3 app servers, and a replicated database over two boxes.

I'm still a huge fan of S3. Building a replicated storage system is a pain until you need to store huge volumes of assets. Likewise, if you need 50 boxes for 24 hours at a time, EC2 is awesome. I'm less smitten with it for general purpose web app hosting where the fancy footwork done to make it possible to launch 100 boxes for a short time doesn't really help you if you're looking to just have 5 instances keep running all the time.

Maybe it's just bad timing that I suggested we look at Amazon's new live streaming and a day later EC2 is suffering a half-day outage.

akashs 2 days ago 2 replies      
Amazon makes it pretty clear that Availability Zones within the same region can fail simultaneously. In fact, a Region being down is defined as multiple AZs within that Region being down, according to the SLA. And since that 99.95% promise applies to Regions and not AZs, multiple AZs within the same region being down will be fairly common.

Edit: One more point. In the SLA, you'll find the following: “Region Unavailable” and “Region Unavailability” means that more than one Availability Zone in which you are running an instance, within the same Region, is “Unavailable” to you. What it implies is that if you do not spread across multiple Availability Zones, you will then have less than 99.95% uptime. So spreading across AZs should still reduce your downtime, just not beyond that 99.95%


justinsb 2 days ago 1 reply      
A quick tldr: Availability Zones within a Region are supposed to fail independently (until the entire Region fails catastrophically). Any sites that designed to that 'contract' were broken by this morning's incident, because multiple AZs failed simultaneously.

I've seen a lot of misinformation about this, with people suggesting that the sites (reddit/foursquare/heroku/quora) are to blame. I believe that the sites were designed to AWS's contract/specs, and AWS broke that contract.
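
The value of the advertised contract is easy to quantify. If AZ failures really are independent, running in two zones multiplies the failure probabilities; a correlated failure like this morning's removes that benefit entirely. A sketch with illustrative numbers (the 99.95% figure is the SLA's; the rest is arithmetic):

```python
# Assume each AZ is independently down 0.05% of the time
# (the SLA's 99.95% figure, used here purely for illustration).
p_fail = 0.0005
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Independent failures: both zones down at once is vanishingly rare.
p_both_independent = p_fail ** 2
print(p_both_independent * SECONDS_PER_YEAR)  # ~8 seconds of joint downtime/year

# Fully correlated failures: a shared fault takes out both zones together,
# so the second zone adds no protection at all.
p_both_correlated = p_fail
print(p_both_correlated / p_both_independent)  # 2000x more joint downtime
```

That gap between seconds and hours of joint downtime is exactly what multi-AZ architectures were paying for.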

risotto 2 days ago 1 reply      
These outages are very rough. Clearly a lot of the Internet is building out on AWS, and not using multiple zones correctly in the first place. But AWS can have multi-zone problems too as we see here. Nobody is perfect.

But what people forget is: AWS has a world class team of engineers first fixing the problem, and second making sure it will never happen again. Same with Heroku, EngineYard, etc.

Host stuff on dedicated boxes racked up somewhere and you will not go down with everyone else. But my dedicated boxes on ServerBeach go down for the same reasons: hard drive failure, power outages, hurricanes, etc. And I don't have anyone to help me bring them back up, nor the interest or capacity to build out redundant services myself.

My Heroku apps are down, but I can rest easy knowing that they will bring them back up without any action on my part.

The cloud might not be perfect but the baseline is already very good and should only get better. All without you changing your business applications. Economy of scale is what the cloud is about.

jpdoctor 2 days ago 5 replies      
Every time someone bitched at me for not having a "cloud-based strategy", I kept asking how many 9s of reliability they thought the cloud would deliver.

We're down to 3 nines so far. A few more hours to 2 nines.

The cloud is not for all businesses.
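
"Nines" convert directly into allowed downtime per year, which is why a half-day outage hurts so much. A quick sketch of the arithmetic behind the comment above:

```python
import math

HOURS_PER_YEAR = 365.25 * 24

def allowed_downtime_hours(nines):
    """Yearly downtime permitted at a given number of nines."""
    return 10 ** (-nines) * HOURS_PER_YEAR

def nines_from_downtime(hours):
    """Number of nines implied by a given yearly downtime."""
    return -math.log10(hours / HOURS_PER_YEAR)

print(allowed_downtime_hours(3))          # ~8.77 hours/year at three nines
print(round(nines_from_downtime(12), 2))  # a 12-hour outage leaves ~2.86 nines
```

A single half-day outage consumes more than the full three-nines budget for the year.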

weswinham 2 days ago 0 replies      
I'd say your choice between Quora's engineers being incompetent or AWS being dishonest/incompetent is a completely false dichotomy. Anyone who has been around AWS (or basically any technology) will agree that the things that can really hurt you are not always the things you considered in your design. I just can't believe that many of the people who grok the cloud were running production sites under the assumption that there was no cross-AZ risk. They use the same API endpoints, auth, etc so it's obvious they're integrated at some level.

Perhaps for Quora and the like, engineering for the amount of availability needed to withstand this kind of event was simply not cost effective, but I seriously doubt the possibility didn't occur to them. It's not even obvious to me that there are many people who did follow the contract you reference who had serious downtime. All of the cases I've read about so far have been architectures that were not robust to a single AZ failure.

As for multi-AZ RDS, it's synchronous MySQL replication on what smells like standard EC2 instances, probably backed by EBS. Our multi-AZ failover actually worked fine this morning, but I am curious how normal that was.

endergen 2 days ago 1 reply      
Read how @learnboost who uses AWS was not affected by the AWS outages because of their architecture design:
EGreg 2 days ago 1 reply      
This is again the problem with centralized vs distributed services, not just Amazon's infrastructure.

http://myownstream.com/blog#2010-05-21 :)

grandalf 2 days ago 1 reply      
It's pretty wild that this stuff happens. Similar to today's nasty outage, Google has had some massive problems with its app engine datastore...

I'm curious if anyone has any predictions about what the landscape will be like in a few years? Will these be solved problems? Will cloud services lose favor? Will everything just be designed more conservatively? Will engineers finally learn to read the RTFSLA?

cafebabe 2 days ago 0 replies      
From the viewpoint of a non-cloud user, this is a pretty normal situation. Systems fail. Maybe we should think about the cloud as a service that is managed somewhat differently (to enable easier access to our wallets and budgets) but eventually fails the same way standard services do. That's how I saw it when the first headlines about cloud services appeared in front of me a couple of years ago.
ww520 2 days ago 2 replies      
One data point. I have one of my clients' servers in the east-1d availability zone. East coast region, zone d. So far things are holding up, no crash or no slow down. Fingers crossed.
wslh 2 days ago 2 replies      
I use dreamhost and never had a failure like the Amazon one.

It's ironic.

parfe 2 days ago 0 replies      
Reddit goes down when a butterfly in India flaps her wings.
KeyBoardG 2 days ago 0 replies      
The ending of this article came off as slanderous rather than just a report of why the problem occurred. Keep it.
delvan07 2 days ago 1 reply      
Crazy how that crash brought down other sites like Reddit, Quora, etc.
The Sad, Beautiful Fact That We're All Going To Miss Almost Everything npr.org
337 points by adambyrtek  4 days ago   131 comments top 43
grellas 4 days ago 6 replies      
I truly believe nearly everyone realizes that it is impossible to experience all or nearly all of what is important to cultivate in one lifetime.

Beyond that, perception tends to be affected by one's age. When I was young (e.g., in my 20s), all the possibilities of the world seemed open to me and it was just going to be a question of what I would do first - I put everything else into the category "I'll get to that when I have time." I had done a lot to develop my talents and knowledge base, and in a range of areas to boot. But my reading of the "great works" trailed off following college. Time was too limited to get to most of them. But, some day, yes, I would do so. I had never learned to play an instrument. But, when I had time, I would learn piano. I had limited time to do non-business travel, but some day I would make it up.

Of course, "some day" one day comes and you quickly realize that many unrealized hopes and dreams would never in fact be realized. And that includes becoming cultivated in a range of areas. When this fact first strikes you, it truly is depressing. For me, it was the first time in my life that I started to feel "old" (feeling old is not so much chronological as it is a state of mind). You become overwhelmed with the fact that you will never keep up with all the new trends and you will never have the time to fill all the holes in your knowledge base or to do all the things you dreamed of doing.

In time, though, I came to make peace with this sense of restlessness. Life is too short to do everything but life is more than ample enough to do important things, things that count beyond the mundane routines of daily existence. This life is but a breath or, as my 100-year-old grandmother said shortly before she passed on, everything that she had experienced to that point was "but a blink." When you can get to that stage and say, "no regrets" for a life well-led, you can have peace with your finite capacities and your finite existence in this world. There is much that is beautiful to do in this life. You don't need to do it all. You just need to do it well.

pstack 4 days ago 2 replies      
I'm not much for repeating content. That's why I don't like to buy DVDs or build a collection of things. Music is an exception, but as far as film and books -- I have zero interest in experiencing the exact same content repeatedly. I could not consume all the wonderful content in ten life times, so I'm not going to short myself something that I have yet to enjoy, because I have to read a book the fourth time or see a movie the tenth time.

My first thought in response to this actually deviated from the intention of the topic. You see, I am constantly amazed at humanity. For all the stupidity and evil, we have a capacity for overwhelming kindness, compassion, ambition, and ingenuity.

Few days ever go by that I don't see something that makes me have a moment of extreme pride in this species (and I wonder if other species from other planets out there in the great beyond would share any of the same appreciation).

Anyway, those thoughts are usually followed directly by the realization that life is so painfully short. Too brief. No matter what fantastic accomplishments I witness in my few remaining decades on this blue ball, I will miss out on everything that comes after. I probably won't be alive when we discover other life in the universe. When we accomplish teleportation and long distance space travel. When we have kick ass robots that we can have conversations with. When we do everything that nobody can even conceive of, today.

I wonder, would anyone take up the offer if it was given, to be in some sort of stasis that allowed you to awake for one year every hundred years? You'd miss out on all relationships and so much life, but you'd also experience a year of life every century, well into the 31st century (and probably beyond, if medical science could extend your life another forty years at some point, there).

I'm tempted. I can't say I'd do it with absolute certainty, but I would have to think very long and hard about the chance for such a prolonged journey. Plus, I bet girls in 3011 are total sluts.

Umalu 4 days ago 2 replies      
When I turned 40 I figured my odds of living another 40 years were pretty good. I then figured that if I continued to read one book a week (my average) for the rest of my life, I would read another 2,080 books. That sounds like a lot but really it isn't, especially when one considers how many great books there are out there that one hasn't read. Many more than 2,080! So now when I consider reading another book I ask myself if it looks good enough to be one of my 2,080. Many books do not make that cut. I think it's been a good filter, and I expect as I grow older, and I have fewer and fewer books left to read, I will get even more selective in what I read.
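
The back-of-the-envelope figure in the comment checks out at one book a week:

```python
years_left = 40
weeks_per_year = 52
books_per_week = 1

remaining_books = years_left * weeks_per_year * books_per_week
print(remaining_books)  # 2080, the figure in the comment above
```
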
jonnathanson 4 days ago 2 replies      
Certainly one way of looking at my time on Earth is to ask what I've consumed and what I haven't. What I've read and what I haven't. What I've seen and what I haven't.

In some ways, the more motivating question for me is: what have I produced? If time is an input, what is my output? I would hope that I'm converting time as efficiently as possible into great output, though I know that's often not the case. But framing my life in this way -- as the processing of time into something tangible -- keeps me focused, energized, and productive.

gwern 4 days ago 0 replies      
The overwhelming amount of material has a number of implications; when I realized just how much stuff was out there, it occurred to me that this implied a lot of things about people's true esthetic preferences and the justifications for intellectual property. Ironically, I then wrote a long essay on it: http://www.gwern.net/Culture%20is%20not%20about%20esthetics....

(I include a number of statistics on how much stuff, exactly. Skip down to http://www.gwern.net/Culture%20is%20not%20about%20esthetics.... where the numbers go into the billions.)

martinkallstrom 3 days ago 0 replies      
I thought this would be about the fact that all of us will miss the next thousand and million years and everything that will happen on earth during that time.

I'm not at all as sad over missing most of contemporary culture as I am over missing what will happen beyond my life span. The technological marvels, the scientific advances, the development of new ways of looking at what life really means. The advent of new forms of life, perhaps alien or artificial, which is so unlikely to happen during the flash of time I get to spend here.

In this perspective the article came across as very unimaginative and dull, but I suppose that's only me.

godDLL 3 days ago 1 reply      
I'm only 27 years old, but it is my firm conviction that 99% of stuff out there is utter shit. Not just mediocre, not just unnecessary, but genuine bollocks.

And I think that's fine. It's important we try new things before we completely understand them.

My adrenalin levels are way up though, as I'm typing this. It sure doesn't _feel_ fine.

jswinghammer 4 days ago 0 replies      
What's more sad I suppose is that what a few people have decided to be worth reading is probably just a subset of what's actually great out there to discover. I read a lot of classic literature and philosophy not working through any list but rather trying to be less ignorant than I was yesterday.

I routinely go back to re-read things I've read before though. That's really what defines what books I got something out of reading. I'm probably never going to read Plato again if I have a choice in the matter but I'm currently returning to Cicero and then reading Augustine of Hippo whose major works I read years ago. I've also read the Bible more times than I could count.

Thankfully there aren't too many new computer books worth reading or I'd never get around to reading these old books.

JSig 4 days ago 1 reply      
While reading this, I kept thinking of the excellent Twilight Zone episode "Time Enough at Last."

In it, the book-loving protagonist survives the "end of the world" and, after being all alone, is ready to kill himself. But once he stumbles upon the library, he realizes he has the rest of his life to read whatever he wants. Of course, things don't go as planned.

bambax 3 days ago 3 replies      
> Let's do you another favor: Let's further assume you limit yourself to books from the last, say, 250 years. Nothing before 1761.

That's the wrong way to go! Start with Aristotle instead, and read only books that have stood the test of time. What would be the point of reading every Kindle "space opera" priced at $0.99?

If you read all of Plutarch, Shakespeare, Montaigne, and Cervantes, you're fine, really.

te_chris 3 days ago 0 replies      
Reminds me of Umberto Eco's response when asked why he kept such a vast library and how many of its books he had read: he responded (paraphrasing) that the key wasn't how much he had read, but how much he had yet to read and learn.

As far as I'm concerned abundance is great because there's just so much to learn and be surprised by in the world!

michaelochurch 3 days ago 1 reply      
A friend of mine (raised Jewish, but deist/agnostic) came to the conclusion that the most reasonable cosmology is one in which each person reincarnates as every human ever to live before passing on to the next world or becoming a god. This probably rules out free will, but it gives hope that, maybe, we do get to experience at least the entire human world... we just don't have the luxury of perceiving it all at once. It also solves the karma/ethics problem neatly.

This is a tough realization, in any case. Even a million-year lifespan wouldn't help, because all of the books we are reading now would be likely to fade if we were to live another million years. As humans, we're innately finite in all sorts of important ways (e.g. attention, memory) that have nothing to do with lifespan.

I have to agree with the people who've said that "it's the journey, not the destination". It's all we can control, and it's what we actually experience. As a deist and Buddhist, I've often wondered why God wanted us to evolve in a world where lives are so short and death happens all the time, instead of one in which humans could get a more reasonable 10,000 years (or process information and experience 100 times faster, which would have the same effect). It's infinitely frustrating and reminiscent of Sisyphus (probably, in fact, a Greek metaphor for the reincarnation of the spiritually lazy, noting that ancient Greeks did believe in reincarnation) but it also has a certain beauty to it: getting to go through childhood and to re-learn all of the great things in this world again, and again, and again.

mmcconnell1618 4 days ago 1 reply      
I realized long ago that the rate of change in technology makes it impossible to have significant knowledge of something as varied as computer science. There are so many avenues to explore, from electrical engineering to manufacturing to compiler design, languages, HTML, UX, design, and color theory, and that's just the surface. It is sad that one person cannot possibly have the time to experience all aspects of their craft, but it means that specialists become unique and important players.
heliodor 3 days ago 2 replies      
Good points in the link. The only thing I'd like to point out is that the author starts with the assumption that BOOKS are what we WANT to read. Just because they've been the norm does not make them the best method of knowledge transfer. I'd argue that 90% of each book is fluff. Back in the day, Charles Dickens and Co. were getting paid by the page, which is why all the classics are lengthy. Add to that the fact that school essays come with size requirements (a poor way of making people iterate on their thoughts), and we have a system where size is king. We've gotten used to a book being 100 pages or more, but knowledge can be transferred a lot more efficiently. Hacker News is one example of a better method, which really brings us to the following (incomplete) list of more concise writing formats:

- Cliff Notes

- Pamphlets

- Blogs

Some people see reading as a leisure activity. I see it purely as a knowledge transfer method, hence I prefer it to be concise and to the point. The only argument I can think of in favor of lengthy books is that by spending more time on the material, it sinks into your head better.

AngryParsley 4 days ago 3 replies      
Excellent article, although it seems to assume that the average quality of content has stayed constant over time. I don't think that's the case. Take music, for example. In the past, we were limited in the types of sounds we could make. Nowadays, with the help of computers, musicians can create any sound they can imagine. A similar thing has happened with television and movies. Technology has allowed a wider range of content to be created. It's also made it easier to create high-quality content.
csomar 3 days ago 1 reply      
When you think about it, the human spirit is a configuration of the human brain. That is, you could create a human inside a computer; you'd just need powerful, maybe even crazy, processing power. Now if we figure out a way to read the human brain's configuration and data, and then transfer it to another body (a biologically constructed copy of the person's own body), you could make the same person live again.

I'm dedicating my whole life to an attempt to live again.

narrator 3 days ago 0 replies      
If you want to miss as little as possible, seek out the most advanced technical material that interests you and read all the prerequisite material until you can read and fully understand it. That way, you take the most direct path to at least having the tools to understand everything that interests you.
coop11 3 days ago 1 reply      
For better and for worse, the tools that have come along with the information revolution foster what seems to be a much broader, yet more shallow perspective.

For example, let's consider the fact that it takes 0.19 seconds to find someone's personal distillation of Darwin's Origin of Species. I can now get a summary of one of the great scientific discoveries in less than 1000 characters and be back to reading Facebook updates without blinking an eye.

I didn't read a single word from the original work. I never touched on the years of toil, thought and research that become obvious only after you hear it in Darwin's words. To draw a relevant analogy, it's like we are adding layers of abstraction to information. Wikipedia is just the high-level, interpreted view that hides all the nitty-gritty details we don't need to worry about anymore. So how often will we need to dive into the inner workings in the future?

The question becomes: is this satisfying? Is it "good enough" to just read the Cliff Notes? I hate to say it, but I think yes. We will end up with an increasing number of "instant experts" who know a little about a lot. And the craftsmen, the true specialists, will probably just fade away with the rest of the irrelevant details.

scotty79 3 days ago 0 replies      
I don't miss all the things of today so much as all the things of tomorrow that I will never catch a glimpse of.

It saddens me whenever I think about this for a moment. My only hope is that humanity achieves the singularity before I die, which in my opinion is not likely.

orblivion 3 days ago 0 replies      
Consider the alternative for a moment: You can read everything. So can everybody else.

Sort of boring in the end, isn't it?

bostonpete 3 days ago 0 replies      
Nothing before 1761. This cuts out giant, enormous swaths of literature...

I don't think it cuts out all that much. I would assume that the number of books published before 1761 would be a tiny fraction of a percentage of all books ever published.

yason 3 days ago 0 replies      
I fully trust that I will bump into things, ideas and people relevant to my life without explicitly sorting through nearly everything or filtering out the crap.

I've found that it works well. It also converges: by not filtering out crap I'm not in contact with crap, and I see very little, if any, crap in my life. And whenever I find a new thing, most of the time it truly is something wonderful and comes up pretty much at the right moment, when I'm most receptive to it.

I would feel very anxious if I kept thinking and worrying myself about what I might be missing.

gambler 3 days ago 2 replies      
I think I read a lot. I also keep a list of the books I think everyone should read. That list includes around 20 titles right now. Twenty out of a much, much greater number. Moreover, it seems that the rate at which new books are added to the list is slowing down.

The article is a typical NPR piece in that the unproven, unexplained assumptions that the author makes in the process of writing are by several orders of magnitude more significant than the thesis of the text. The thesis seems to be "there are lots of books, and you won't read them all". The assumption is that all books have equal value.

Frankly, if someone believes that any book has equal value to any other book, I wouldn't much care about that person's take on literature. To me, it's akin to listening to a "mathematician" who says that all proofs are valid.

My point is, the number of books that I would really miss reading is very finite and manageable. My problem isn't in reading them all, but in finding them all amidst the ever increasing amounts of garbage.

davidrupp 4 days ago 0 replies      
This is exactly what I've been thinking recently about computer science / programming. There's just so much to read / learn / practice / improve. Nice to have it put in perspective. Gotta learn to surrender more.
nazgulnarsil 3 days ago 0 replies      
Why would I want to bother trying to exhaust the sphere of possible human experiences before the heat death of the universe when there is a much larger space of possible modes of existence?

Oh I guess this article was written by someone who plans on dying :p

Tycho 3 days ago 1 reply      
I think saying 'think of all the wonderful books I won't have time to read' is a bit like saying 'think of all the great wine in the world I'll never get a chance to drink!' It's something to enjoy in your leisure time; you don't need to worry about consuming it exhaustively. The only sad thing would be if you never had any leisure time at all (you missed all your opportunities to have some).
aforty 4 days ago 5 replies      
I'm going to sound really illiterate, but who reads two books a week? I read a ton, as most programmers do, but I read perhaps one or two [fictional] books PER YEAR.
alecco 4 days ago 0 replies      
There's just too much out there. Try to make things that replicate (subjective|probabilistic) good in the universe. Also this:


(It applies beyond academia, too)

BasDirks 3 days ago 0 replies      
Without exaggeration I can say that one line of Proust is worth more to me than 99.9% of all books out there. I am all for being open to "different experiences" (i.e. reading outside the classical canon), but I have learned to be picky.

Experience is not about quantity; it's about those magical moments when your world expands in a violent flight, and about learning to love the world in new ways.

jroid 4 days ago 0 replies      

The true joy of life is the trip, not the destination

JoeAltmaier 3 days ago 2 replies      
I've often wondered if we have "enough". Enough movies, enough books, enough poetry. More than several lifetimes' worth.

Why not stop? Replay from, say, 1910, in an infinite loop. Nobody would notice, and we could all get on with other things.

gordonc 3 days ago 1 reply      
Oddly enough, the more I read the more I become convinced that this may actually not be the case. Technology and our ability to effectively process information is growing at an exponential rate; of course, so is the amount of information.

But logically, there will be a point in the future where the output of humanity in an hour will be greater than the sum total of all human knowledge prior to 2012. Some fans of the Mayan calendar say this date is December 21st, 2012. Kurzweil says more like 2045. Really hard to say, IMO. But still, if we have the power to create that kind of information then we'll have to be able to take in a lot more, so I'm not too worried about missing everything. I'm just concerned about the information pertinent to my health, career, and loved ones, which is quite readily available thanks to sites like this.
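A quick sanity check of that claim (my own arithmetic, not from the comment): for pure exponential growth, the output produced in any fixed window eventually settles at a fixed fraction of all prior cumulative output, and that fraction exceeds 1 only once the doubling time drops below the window length. So "an hour's output exceeds all prior knowledge" implies a doubling time under one hour.

```python
import math

def window_exceeds_history(doubling_time_hours, window_hours=1.0):
    """For output(t) = exp(k*t), the ratio of output produced in the last
    window to all cumulative output before it tends to exp(k*d) - 1 as
    t grows. Return True if that limit exceeds 1, i.e. the window's
    output eventually outweighs all of history."""
    k = math.log(2) / doubling_time_hours  # growth rate from doubling time
    return math.exp(k * window_hours) - 1 > 1

print(window_exceeds_history(2.0))   # doubling every 2 hours -> False
print(window_exceeds_history(0.5))   # doubling every 30 minutes -> True
```

Under this toy model, the comment's scenario requires information output to double faster than once an hour, which gives a concrete (if crude) way to compare the 2012 and 2045 estimates.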

pmsaue0 3 days ago 0 replies      
"But what we've seen is always going to be a very small cup dipped out of a very big ocean, and turning your back on the ocean to stare into the cup can't change that." Powerful
driekken 3 days ago 0 replies      
Great works (be they books, movies or art) should be treated the same way life is: as one more step on your journey.

Overplanning will make your choices too selective, thus making you knowledgeable in some areas and totally ignorant in others (most of them).

Underplanning will make your journey unfocused, conferring some knowledge of everything, but not much else.

So, I guess we should strive for a balance, but what this balance represents is distinct for everyone.

kingkawn 3 days ago 0 replies      
Imagine all the things that are as of yet unknown that we will miss. This dwarfs the knowable to infinity.
teyc 3 days ago 0 replies      
I was just discussing this point with my daughter a couple of weekends ago. The same, I suspect, applies to all the cool programming languages I wanted to learn.
becomevocal 3 days ago 0 replies      
And to think I'm going to miss out on something I'll never know because I've read something about missing so many things.
robertk 3 days ago 0 replies      
Time marches without apology.
dools 3 days ago 0 replies      
Forget books, I feel this way just about The Simpsons.
ares2012 3 days ago 0 replies      
Very well said. Great article.
paganel 3 days ago 1 reply      
As far as classical literature goes, if you've read Balzac, Stendhal and Tolstoy then you've read them all. Problem solved :)
FiZ 3 days ago 0 replies      
Now I feel much better about missing Mad Men.
brianstorms 3 days ago 0 replies      
And the ironic fact is that I'm going to miss almost everything in that article because it is presented as faint grey text on a white background.


(too bright, didn't read)

Want to attract top tech talent? Offer telecommuting cnn.com
312 points by chanks  4 days ago   171 comments top 43
edw519 4 days ago 4 replies      
In the past 4 hours I have exchanged ones and zeros with people and computers in New Jersey, Pennsylvania, Florida, Arizona, California, Germany, Singapore, and India.

And got 3 "office days" worth of work done.

And I haven't even brushed my teeth yet. :-)

You know what telecommuters say, "Once you've deployed a killer app in your pajamas, you can never go back."

angrycoder 4 days ago 1 reply      
I've spent the bulk of the past 8 years telecommuting, even when I've had full-time positions locally. It requires a strong commitment to communication and the experience to recognize when the communication is failing so you can get off your ass for a face-to-face meeting or phone call. I also end up working a hell of a lot more hours than I would if I just had a desk job. But I wouldn't have it any other way.

I don't give a shit about your corporate culture or your ping pong table or your office politics or your ego battles; I only care about solving problems and producing solid solutions. Telecommuting lets me focus on doing just that while leaving all the other nonsense behind.

jasonkester 4 days ago 1 reply      
I've been working a remote contract these last 6 months, and one thing that surprises me is that I bill a lot fewer hours than I would if I were on site.

If I boot up in the morning, spend 30 minutes trying to get into something, fail and end up back here on HN, I don't bill any time. If I did the same thing sitting in a cube, I'd get paid for it.

The end result is that the client gets a much better deal by having me off site. Works great for me too, since I can justify billing out at a higher rate that reflects the fact that they get essentially all my productive time for the week, and nothing else. Everybody wins.

As an added bonus, if I really can't get started in a morning, I can bail and go bouldering for the day and not feel guilty about it. The bizarro world salaried version of me would spend that same day sitting in a cubicle secretly playing video games on the clock.

mgkimsal 4 days ago 4 replies      
This is sort of a no-brainer. I'm pitched by recruiters about once a week, and they're all for faraway positions - typically the SF area, but other areas too. I can't sell my house any time soon - the market just isn't moving - so I'm somewhat held hostage by geography. This doesn't mean that I can't travel onsite and visit your office regularly, nor does it mean Skype and phone don't work out here in the sticks :)

Telecommuting requires that the entire team, or ideally the entire company, be set up to work like that; just having one guy out in a different state or country on his own doesn't work very well in most cases. I understand that. What I don't get is why more companies aren't structuring themselves to take advantage of remote workers.

If you have strong procedures in place to deal with remote/offsite workers (fulltime or not), you can more easily integrate short term labor when you need it - you'll have the shared workspaces, file transfers, documentation, version control, etc, already set up and ready to let new people in as needed.

This seems like a competitive advantage that more companies should be looking at. Maybe they will be in the coming years, as underwater mortgages and geography-locked workers start to affect the broader tech labor market.

jordanb 4 days ago 4 replies      
I routinely have people begging me to work for them, but probably 4 in 5 clam up when I specify that -- while I'm happy to spend as much time with them as necessary to get the specs and make everyone comfortable -- I will not sit in their office to do the programming.

I can not, for the life of me, understand why that is such a sticking point with so many people. Fortunately I have enough people who are willing to work with me anyway that I can forget about the ones who don't, but it seems mind-numbingly obvious to me that if you're desperate for the skills I have, you should be willing to work with a requirement that doesn't cost you anything.

As near as I can fathom, it's a psychological thing. Remote people don't seem like they're part of your "empire" as you look out across the office and see all the busy beavers hunched over their computers.

wyclif 4 days ago 3 replies      
Startups that demand relocation to NYC or SF are a total show-stopper here. Many of us are held hostage by geography: maybe you own a home, maybe you have kids in a school they're flourishing in, maybe your parents need help. If you're a family man you won't like spending your wife and children's time in Bay Area or Metro NYC traffic. But here's the thing: the technology most startups use means they can transcend geography to an extent.

I do not underestimate the value of having a team in the same location. It does affect morale and company culture. But with the right people and the right technology those issues are quickly going to wind up in the rearview mirror. When I look at a company, I want to see if they are set up to do remote work, or are they landlocked?

bravura 4 days ago 0 replies      
At MetaOptimize, I am competing with Google and Facebook and LinkedIn and a host of other companies to recruit top ML + NLP talent.

But the benefit of my recruiting model is that I hire people remotely and part-time. A lot of strong people are in academia, but they love their academic post and don't want to move. They just want to build cool stuff on the side and make some extra money.

The fact that Google + Facebook + LinkedIn + friends must own you full time, asses in seats, creates a unique hiring opportunity for me.

michaelchisari 4 days ago 0 replies      
I love Chicago, you just can't beat this city for the kind of value you get for the price. The closest competitor is New York in terms of food and culture, but the costs are astronomically higher.

When I'm contacted by a job recruiter for a position that pays more, but requires moving to somewhere like Mountain View, I will rarely entertain the idea. Sure, I'll fly into the office, I'll do it quite regularly actually, but there's no way I'm picking up and moving to a city with considerably less to offer, yet which costs substantially more.

I understand the value of social capital, and working with people face to face. But how often does that really need to happen? As developers, our job is ultimately to write code, and that can happen anywhere.

mgkimsal 4 days ago 1 reply      
Posting another thought here.

I have a suspicion that there are enough people in the SF area (and a couple of the other "big" markets) who will actively fight against telecommuting, specifically because it would lower wages in those areas.

As someone else posted, yes, I understand the value of a team being in the same room. But that value comes at a premium price, and it's one which may not always be worth paying. However, given the boom/bust/buyout cycle in the bay area (as an example), enough of the same workers can move from company to company as the companies are merged, bought out, or closed that the talent supply stays close enough to what is needed, which makes it harder to embrace telecommuting. You just need a few more people to fill in a few gaps, right? They need to be on site, or they're not a 'team player', or just not serious about their career in tech if they don't want to move to the valley! (I've heard this before.)

If the majority of companies out in SF set themselves up to embrace telecommuting, that would mean it would be easier to fill that next position with someone from Idaho, Kansas or Utah. That would mean the company could pay a lower wage. Why would the culture of startup workers want to embrace something that will end up driving down their salaries?

ddlatham 4 days ago 3 replies      
I see several comments saying more companies should be hiring remote workers, building their culture to support it, and why aren't they doing it?

It's interesting to see the numbers in this article.

What's remarkable is that, even after two years of flattish compensation, technology professionals are willing to sacrifice $7,800 on average to work from home

The article speaks as if this is a huge amount, but as a chunk of total salary, it's probably 5-15% for most tech jobs. So the question for employers is, is it worth an extra 5-15% in salary to have your team work together in person?

In many cases it is.

igorgue 4 days ago 2 replies      
My advice would be, talent /= experience.

I'm sick and tired of companies being super picky because I don't know their technology stack, or maybe just one element of it; they don't even offer a technical test these days.

I won't learn Cassandra or Scala just because they are popular on super-webscale sites. I did, though, learn Haskell, Node, MongoDB, Redis, Python, and Ruby, and not because they teach them in college. I think that tells you I can learn other technologies.

But I get a feeling, in many interviews I've had in the last couple of months, that they get disappointed when I tell them I don't know Scala, Cassandra, Hadoop...

Most good programmers I know (they're not genius level, nor are most of the people here) are willing to relocate wherever you are, but they might not know the super-awesome-webscale technologies you use, because they work for actual businesses that charge customers and don't need a billion uniques a month to be profitable.

Instead of bitching about the lack of talent (and there are many programmers like me) be willing to train people on your weird-ass technology stack.

bphogan 4 days ago 8 replies      
I would love to do more remote work, but then I hear horror stories about people having problems not being around people. Feelings of isolation set in after a couple of months, and people end up going to a gig where they are around people. To those doing remote work, is this common?
shimonamit 4 days ago 2 replies      
I liked the ending:

Maybe if we called it 'cloud commuting', CIOs would buy in.

rickmb 4 days ago 0 replies      
In my experience, the number of people that actually have what it takes to telecommute effectively is extremely small, and most of those are self-employed already.

Of those that want to telecommute and have a steady job, very few can actually handle the responsibility and the lack of stimulus from co-workers for longer periods of time. Flexible hours, working from home on a regular basis, sure, no problem, but actual full time telecommuting requires a lot of commitment, discipline and communication skills.

dabent 4 days ago 1 reply      
I have failed to understand why someone in India can do my job, but telecommuting is still frowned upon.
droz 4 days ago 2 replies      
I think I may be in the minority, but I don't think people are thinking of the consequences of this telecommuting push.

I like the fact that there is a place where I do work and where I live. I do not want these two places to be the same. Much for the same reason people don't put a television in the bedroom. The bedroom is for sleeping, the living room is for TV.

I find that with the remote people I need to work with, I'm working around their schedule of when they will be around. It is difficult to communicate over IM, IRC, telephone and Skype (it doesn't feel as fluid as face-to-face interaction).

If everyone starts working remotely, then what's the point of having an office? If enough companies come to the same conclusion, what's the point in owning a building (from the eyes of a real estate guy)? And ultimately, what's the point of having big office parks, and so on? That's a lot of land and a lot of capital at stake if this ever actually took off.

If a company asked me to be remote, I'd tell them I'm going to start looking for other employment options.

earl 4 days ago 0 replies      
I've previously worked for a pair of bosses who telecommuted. Never again. At least for the types of algorithmic / machine learning software I work on, frequent in-person collaboration is really helpful, and none of the remote collaboration software came close to standing in front of the same piece of paper / monitor / whiteboard.
crenelle 4 days ago 2 replies      
I have telecommuted for several gigs and positions. The most difficult situation to deal with is when most of the rest of the company doesn't telecommute, so everyone is not in the habit of cluing you in to what's going on. You may have all the tools required to establish and maintain decent communications, but they often don't bother to adopt any of it for their end. I even remote-developed for a large networking company with enormous communications facilities designed to solve problems like that -- but they had me fly over to headquarters all the time instead.
kayoone 4 days ago 0 replies      
I worked from home for the past couple of years and now work in an office and have to commute every day. But I have to say I like working in a space with other engineers, where I can discuss ideas and concepts and feed off the energy, more than sitting alone at home.
atacrawl 4 days ago 3 replies      
With an unemployment rate of just 4% among tech professionals, and shortages in specific fields, flexibility shouldn't be a last resort.

I would have guessed that tech unemployment would be lower than the national average, but 4% is really low.

megamark16 4 days ago 0 replies      
Thinking back to the recent post about Performable paying a $12,000 referral bonus: I consider myself a perfect candidate for the Performable job, with the exception of location and willingness to pick up my family and move them across the country. For a competitive salary (by midwest standards, which would be a lot less than what you'd pay in Boston) I'd tell them to keep that $12,000 and use it to fly me out to Boston twice a month for a few days at a time, and let me work from home the rest of the time. I work hard when I'm at home. Shoot, I built AppRabbit from the ground up evenings and weekends from the recliner in my home office. Plus, I spend more than an hour a day commuting right now, so you can have that time too :-)
spoiledtechie 4 days ago 2 replies      
<shameless plug for a job>

I know I'm a bit late to the game.

I have been looking for a telecommuting job for the past 6 months with no luck. I am a solid programmer and I get the job done. I hobby-code at home and have an awesome work ethic. I have several years of experience and have done everything from GIS software to coding directly on the GPU.

I have worked on all three types of mobile devices for my current job, and have a pretty intense background of C#.

If anyone has a job that they would love to fill with a telecommuter, please look in my direction.

my blog at spoiledtechie.com
my email spoiledtechie with gmail.

Thanks for the look see!

SoftwarePatent 4 days ago 1 reply      
Want to attract top tech talent? Offer a high salary.
pbj 4 days ago 1 reply      
I was reading a study a while back about how an absurdly high percentage of all jobs could be done via telecommute but aren't. It's crazy to think how many millions upon millions of dollars in fuel costs could be saved, and how much emissions reduced, by having more telecommuters. Not only that, but companies could save so much money through reduced or no office space, electricity, etc. Plus they'd get the added benefit of increased worker satisfaction in most cases.
Harkins 4 days ago 2 replies      
Anecdote: I've been traveling for the last three months, but most of my network is in Chicago. I hear from recruiters and hiring managers every couple of days and, despite the consensus that Chicago has a "developer crisis" (to use the words of an Obtiva blog post), every single job is onsite only.
adnam 4 days ago 1 reply      
I recently quit my job because they wanted me to telecommute.
jherdman 4 days ago 0 replies      
Like hell I would! Nothing beats face-to-face communication, and the wonderful things that can arise from spontaneous interactions.
Garbage 3 days ago 0 replies      
Quite off-topic, but still I wanted to share this. ;)

Why working at home is both awesome and horrible - http://theoatmeal.com/comics/working_home

pdenya 4 days ago 0 replies      
I used to work in NYC but I've been working from home for a couple years now for an agency based in NYC (I live in CT now). I make good money but I've been offered a 20k+ raise and other bonuses to work in the city. Barely considered it.
kalleboo 4 days ago 0 replies      
I've never had a proper job in an office - I started working an online gig in uni, and I've been working for them ever since. It's great not needing to be in one place. I've started taking round-the-world trips - live cheap in hostels, work in coffee shops using their WiFi. Hanging out in Singapore and Tokyo sure motivates me to work a lot more than my apartment at home.
fshaun 4 days ago 0 replies      
As with many things, working from home entails tradeoffs whose [dis]advantages will be weighted differently.

For me, it's great when I need to concentrate and bang out the code. No distractions, and I can poll IMs instead of needing a context switch when someone drops by. If I'm stuck on a problem I'll go for a walk, cook food, or take the laptop out to a coffee shop to work. I enjoy the flexibility.

Downsides for me: I do miss some of the random office chatter -- finding out cool problems coworkers have solved and generally learning by osmosis. And I have yet to find a great replacement for 3-4 people standing at a wall of whiteboards. IMs, skype and meeting highlights solve some problems. Our group is 2/3 remote spanning 8 time zones, so we're used to working a bit harder on communication.

As for social factors, it was hard at first. I'd find myself not leaving the apartment for weeks, which was less than good... I'm making an effort to get out of the house daily, whether hitting the gym, buying groceries, or just strolling around. This is getting easier, especially as spring seems to finally be hitting Boston.

The biggest non-technical advantage for me is not needing a car. I detested commuting. Instead of spending an extra hour or two driving I can take breaks (or even naps) in the day and have the same "door-to-door" time. Financially, it's also a winner. I don't even know how much gas costs here. There are 4 zipcars within a few blocks if I need them.

Work-life separation is trickier. I'd love to have an apartment with an extra room for an office, but that would likely cancel out any vehicle savings. Getting a separate desk to split work and personal computing helped a ton here.

BenSS 3 days ago 0 replies      
I've telecommuted for 8 years now and I'm looking for a new gig. I'm really surprised how difficult it is to find good companies who are really into telecommuting and are looking for my skill set (web, CMS, mobile). While it can be isolating if you don't force yourself to get out once in a while, the flexibility and productivity sure can't be beat.
melipone 4 days ago 0 replies      
I agree. We need to see more of that. I would suggest offering a few trips a year to the "office", though, for morale.
dekayed 4 days ago 1 reply      
I think it is also important for in-office workers to be allowed to telecommute when needed. Having that flexibility offers a lot of freedom, as there are times when you need to be out of the office but can still work. I currently have a family situation where I try to be at my parents' home a week a month, and having the option to work from there has been a huge help. It is definitely a big factor in why I would stick around in my current job, for a while at least.
johnbacon 3 days ago 1 reply      
Timely article, considering 37Signals is relocating their entire team to Chicago. Apparently they are freaked out, not about their employees, but about people in their homes who might break in to their stuff, aka ex-spouses et al. So David H said they will all work from the Chicago office, with iMacs chained to their desks, and wear uniforms. No more working from home or laptops unless it's on open source projects. I'd link to the post on the 37Signals blog, but I'm in bed and on an iPhone.

Just cruise to their corporate blog. They posted the details a few days ago.

mdink 4 days ago 0 replies      
I started an interesting thread a while back about this very topic:


Had some interesting comments...

adyacplus 4 days ago 0 replies      
I have never earned a dollar in IT, but in other fields I've had a lot of success, so I consider IT a hobby. Perhaps I could be top tech talent if enough money and flexible conditions were around; anyway, I think programmers are mere pawns in the game of business, so it seems better to devote time and energy to developing better strategies.
keefe 4 days ago 0 replies      
Sane people like to work for other sane people, and work should be about what you produce, unless you have necessary human interactions.
Andys 4 days ago 0 replies      
lancefisher 4 days ago 0 replies      
I live in Missoula, MT and I'm not willing to move because my family is here, and I love the city. Programmers here are willing to take a pay cut to stay, and I know several that telecommute to out of state jobs that pay better than most local companies. I telecommute too, but my employer is local.
amorphid 3 days ago 1 reply      
I like hiring people that are available to meet locally, and then let them work remotely for most things.
brndnhy 4 days ago 0 replies      
If you're "top tech talent", compromising on salary will likely not be part of your telecommuting scenario.
cheez 4 days ago 0 replies      
Haha, cloud computing!
My National Security Letter Gag Order (2007) washingtonpost.com
302 points by boredguy8  2 days ago   58 comments top 14
nbpoole 2 days ago 4 replies      
Since that editorial was published (back in 2007), the person who wrote it, Nicholas Merrill, has been "partially un-gagged": he is now able to talk publicly about portions of the case.

A followup Washington Post article: http://www.washingtonpost.com/wp-dyn/content/article/2010/08...

He also did an IAmA post on reddit, which has a lot of information: http://www.reddit.com/r/IAmA/comments/fjfby/iama_director_of...

(Since reddit is down right now, here's the cached Google version: http://webcache.googleusercontent.com/search?q=cache%3Ahttp%...)


Edit: Wanted to add a link to a later followup post he made on Reddit, talking about his plans to start a "Non-profit ISP and Teleco": http://www.reddit.com/r/reddit.com/comments/fkndx/update_nat...

(And the Google cached version: http://webcache.googleusercontent.com/search?q=cache%3Ahttp%...)


Edit: And in case people are curious about the actual court case: http://en.wikipedia.org/wiki/Doe_v._Ashcroft

ck2 2 days ago 1 reply      
Fun fact: under Obama the rate of "national security letters" has only increased, as has the number of whistleblowers prosecuted.

Not saying he personally directed the FBI to increase, just saying it has and nothing has stopped it.

But he has personally sought to expand NSL powers.

some background:






And Manning is in serious, serious trouble under Obama, I will be amazed if he gets only life, because they purposely just added an "aiding the enemy" charge which carries a death sentence:



Cushman 2 days ago 1 reply      
Outrageous proposal time: NSL DDoS.

Let's say I own a business. Every week, I get two dozen letters purporting to be from the FBI requesting information on my customers. Some of the requests are clearly ridiculous; others might be genuine. If they are genuine, I'm forbidden from discussing them over the phone; the requests aren't a matter of public record, so I can't look them up; I don't have a secure fax, I run an internet company. I could tell my lawyer about it, but he'd be subject to the same restrictions as me.

My only options are either to submit an individual request for verification for each letter by delivery service, or comply with every request I receive, deluging the FBI with frivolous documents. Either way, thousands of companies attempting to comply with dozens of such requests every week and the secret police system would quickly grind to a halt.

To be followed shortly by a lengthy prison sentence, if they're lucky, for anyone participating in the fabrication of government documents, of course. Still, it's a fascinating prospect.

Zak 2 days ago 1 reply      
I have to wonder whether a person is legally obligated under such an order to actively hide the existence of the NSL request. Does he really have to lie to his clients, friends and family when asked directly about it, or would "I can't answer that" satisfy the letter of the law while giving the asker a strong clue as to the answer to their question?
jdp23 2 days ago 0 replies      
Remember that several clauses of the PATRIOT Act will sunset unless they're renewed by the end of May. Once Congress returns (the week of May 2), expect floor fights in both the House and Senate.

It's a great opportunity to introduce reforms -- including NSLs and gag orders. EFF has more at https://secure.eff.org/site/Advocacy?cmd=display&page=Us...

zacharypinter 2 days ago 0 replies      
Here's a video of a talk Nicholas Merrill gave about the gag order:


__david__ 2 days ago 0 replies      
Having never seen the contents of a national security letter, I wonder what the ramifications would be if you opened it and read it aloud for the first time in front of a large (or small) group of people. Or perhaps had it read aloud to you in front of a large group of people. Certainly you can't be expected to know that it is going to gag you until you've read it once, and by that time it would be too late.

Is it worded such that the whole group of people would be gagged? There's got to be some interesting way to circumvent it.

imrehg 2 days ago 2 replies      
As a non-lawyer, I wonder what the situation would be if this person were asked in court about some action of theirs that is explained by the existence of the gag order. Would "the truth, the whole truth and nothing but the truth" override the gag order, or would they have to somehow withhold that information?
Volscio 2 days ago 0 replies      
Please date old articles in the subject line. i.e. "My National Security Letter Gag Order (2007)"
megamark16 2 days ago 0 replies      
I must be getting old. After reading this article I wrote an email to my representative. I'm pretty sure that's a checkbox on the form I fill out when I get a physical:

Have you ever sent a strongly worded letter to an elected official? []Yes []No

binarymax 2 days ago 0 replies      
A sign similar to this was proposed by librarians:

"The FBI has not served this library a national security letter. Please watch for removal of this sign."

jeffreyg 2 days ago 0 replies      
shareme 2 days ago 0 replies      
A comparison: a person entering the US military and getting the lowest level of clearance faces less punishment if caught disclosing than these NSLs carry.
viggity 2 days ago 2 replies      
this is not hacker news.
Dropbox Lack of Security tirania.org
288 points by zdw  5 days ago   184 comments top 22
patio11 4 days ago  replies      
This is the first time I've heard someone on HN actually ask for more security theatre. Sure, Dropbox could spend seven figures to get an ISOxxxx-whatever consultancy to draw up a 125 page document describing their internal checks, do the obligatory all-hands yearly mandatory training where you have to get 10/10 questions right and question 1 is "A user has uploaded naked pictures of themselves to their account. True or false: it is permissible to download these and take them home with you.", etc etc.

And they'd be exactly where we are today:

1) Yes, we could look at your data any time we want to. This is an inevitable consequence of letting you look at your data any time you want to.

2) We promise not to abuse our power #1.

3) If you don't trust us on #2, you should not do business with us.

Except they'd be out seven figures.

thought_alarm 4 days ago 1 reply      
Do a lot of people think that Dropbox is some sort of super-private service?

I'm no security expert, but I do hope it's obvious to most people that Dropbox wouldn't be able to do things like reset your password if they didn't have access to the contents of your files at some level. A truly secure and private service would look a lot different, and be much more complicated to set up. That's the tradeoff.

gergles 4 days ago 2 replies      
I don't care. I use Dropbox because of the unparalleled feature set and ease of integration. I have my taxes stored on Dropbox, along with a lot of other sensitive information. They're in an encrypted RAR file with a line-noise passphrase, just like they would be if I were storing them anywhere (including locally -- after all, what if Mallory steals your hard drive? Or, to parrot the most common movie plot threat, what if the NSA secretly breaks into your house when you're out at the movies and images all your disks then slips them back in without your knowledge?)

The features DB offers for sharing, web access, etc. are well worth the tradeoff, and I am ashamed to see the security pedants constantly pillorying Dropbox because it's not some imaginary "verified secure" system. They don't advertise themselves as that. A claim of "we encrypt your files with RSA" should be utterly meaningless to you without knowledge of how the key is controlled, and a few seconds' thought and examination of the feature set should inform you that yes, Dropbox has to have the key to decrypt the files. That doesn't make the claim of "your files are encrypted" any less true.

tlrobinson 4 days ago 1 reply      
It always seemed obvious to me that Dropbox has access to your unencrypted files because they make them available to you through the web interface.
arashf 4 days ago  replies      
hi there, arash from dropbox here. all data is (as we state in the referenced help article) encrypted before it's stored on the backend.

all data on dropbox can be made shareable and is web viewable. as a consequence, we do need the ability to decrypt in the cloud.

re. employee access to files - there are controls to prevent this. for example, even drew (founder/CEO), doesn't have physical access to our storage servers anymore.

for very sensitive data, there's always the option to use truecrypt (we even offer this as a recommendation in our security documentation: https://www.dropbox.com/terms#security)

csallen 4 days ago 3 replies      
Dropbox didn't lie. This is simply a misinterpretation (or misunderstanding) of what's meant by the phrase "Dropbox employees aren't able to access user files". It's not the same as saying "It's impossible." The fact is, if you send a company your unencrypted data, it's obviously possible for them to view it at some point. Otherwise they could never encrypt it in the first place. So when they say that employees aren't able to access it, they mean that they, as a company, choose not to access it.

A good analogy is the post office. Anyone who works there and handles your mail could, if they so desired, tear open your package and steal the cookies your mother sent you. We trust them anyway, because we know they take precautions to ensure it doesn't happen. Dropbox is the same, but even tougher (I doubt the average Dropbox employee has access to their decryption mechanisms, but plenty of people at the post office can unseal your envelopes).

That said, to not acknowledge it as even possible for the company you send your data to you be able to access that data seems, to me, a bit naive. That's not the promise they made, and so the claim that they lied is false.

runjake 4 days ago 0 replies      
All this press about Dropbox is getting ridiculous. I'm almost suspecting it's a hit job, but I'm wondering why people like de Icaza are getting involved.

Pay attention to the two following rules. They are, and always have been true. Write them down if need be:

1.) The government can demand files from any US (and many non-US) companies. The company is then legally-obligated to turn them over.

In the past, the government has even successfully demanded data without the proper warrants (read about the VZW/AT&T/Qwest/NSA fiascos).

2.) Your cloud data is always subject to security breaches and provider employee abuse. Encrypt accordingly (I prefer DMG and TrueCrypt).

Why is this news? Did people not understand this?

tzs 4 days ago 8 replies      
It is possible to design a Dropbox-like system with the following properties:

1. Files are stored encrypted.

2. The service provider does not have the ability to arbitrarily decrypt the files. By "arbitrarily decrypt" I mean decrypt at any time they wish. They will be able to decrypt if the owner's client is actively connected.

3. When someone uploads a file that is identical to an existing file, it initially is stored separately, but in most cases can be eventually de-duplicated, without compromising #1 or #2.

I'll leave the details as a fun exercise.

donpark 4 days ago 1 reply      
Three points:

1. Sensationalism aside, Dropbox should review questionable security claims to reduce any false sense of security. With millions of users, careless words born of marketing needs are no longer necessary. What Dropbox users need now is a clearer picture of what they are giving up to gain Dropbox's services.

2. The weakest security link is the user and their computer, not Dropbox, which has enough financial incentives at stake to be diligent security-wise. In the end, no computer open to external data or code is safe. What protects most users today is actually not security technology but the cost/benefit ratio for potential attackers, tempered by goal and scale. 99.9999% of Dropbox user data is useless to attackers, and mining questionable nuggets out of a continually expanding sea of data from 20 million users is not a trivial task.

3. While it's true that user must trust Dropbox in the end, some of its security measures could use strengthening even if it's just intended to raise the level of sophistication necessary to steal Dropbox data.

icedpulleys 4 days ago 0 replies      
Regardless of how you want to parse a company's public statements and written policies, it's the height of naivete to think that a data host (ANY host) wouldn't share your data with law enforcement, or that it has encrypted your data in such a way that no one can access it.

If you have sensitive data, encrypt it yourself. Encrypt it on your local drive, back up encrypted data, encrypt it before uploading it to Dropbox. Doing otherwise is akin to not having a proper backup process: it's either because of laziness or ignorance.

zdw 5 days ago 1 reply      
Couple this with the unencrypted metadata on mobile problem: https://grepular.com/Dropbox_Mobile_Less_Secure_Than_Dropbox...

And how their "encryption" on the server side is basically a lie, as they do dedupe on data: http://paranoia.dubfire.net/2011/04/how-dropbox-sacrifices-u...

I'm stunned that anyone would use them for anything but ephemeral data you wouldn't mind posting in public.

earl 4 days ago 4 replies      
truecrypt ftw

If you're uncomfortable with dropbox, put a truecrypt partition right inside your dropbox folder.

kevinpet 3 days ago 0 replies      
This is the second completely unreasonable press attack on Dropbox. They are so unreasonable that I have trouble believing a reasonable person would think they are valid complaints unless they were trying to sell me a competing product.

Everyone with any security sense knows:
1. If someone gains access to your computer, and they can read your hard drive, and your computer can automatically log in to some service, then they can log in to that service.
2. If you can access the data without decrypting it locally, then your service provider can too. In a fantastically secure system, they would have to decide to do so in advance and then wait for you to log in, but that's pretty unusual.

I predict next week we will get an article pointing out that I can get your files by breaking into your email account and then using the reset password feature.

perlgeek 4 days ago 0 replies      
I don't know if that's how dropbox does it, but I could imagine that they have a master key to which normal employees don't have access, you need the founder and a trusted second person to retrieve it.

Thus their statement "Dropbox employees aren't able to access user files, and when troubleshooting an account" wouldn't be too far off the mark, and they can still make the data available to the government, on request and with higher effort.

jeffreyg 4 days ago 0 replies      
There was a really good thread in /r/netsec a few days ago about encrypting your dropbox:


chrishenn 4 days ago 0 replies      
Relying on others to safeguard/encrypt your personal data just doesn't make sense to me, in the same way that closed-source cryptography doesn't make sense.

If dropbox is claiming a false sense of security then that is an issue, but users who truly care about their data should resort to truecrypt or something similar where they are the only ones who control access. You can sync your files with dropbox and keep them safe with a truecrypt volume. Or if that is too much of a pain, only do so for sensitive files. Have your cake and eat it too!

MetallicCloud 4 days ago 0 replies      
Wouldn't they have to keep the keys on their servers? Otherwise when my computer dies, I wouldn't be able to access my files from a different computer.
joanou 4 days ago 0 replies      
Dropbox is a good service, and I am sure file access is limited to a few employees, but I wouldn't use it for sensitive data or for a business. Any service where you do not control the encryption keys, e.g. Box.net, and myriad others will have the same issue. It's all about tradeoffs. Ultimately they can access your data. The truecrypt option may solve it for some but that means the whole archive has to be shared.

AltDrive unlimited online backup versions your files and allows you to control your encryption key. It runs on *nix, OSX, Windows, and other OSs. http://altdrive.com

grandalf 4 days ago 0 replies      
All US companies will comply with government requests for data, even Google, when a warrant is presented.

If you don't want anyone looking at your data, use your own strong encryption layer and hope that there's not a back door.

kennywinker 4 days ago 2 replies      
Forgive me if I'm naive, but can file hashes be spoofed in any way? I'm thinking: upload a bunch of files claiming hashes they don't actually have, then download the de-duplicated original files.

Could someone more knowledgeable in this area tell me if this is a credible threat?

davidmduarte 4 days ago 1 reply      
I don't use Dropbox because their app on my computer has access to my computer.
The data I could send to Dropbox is only as secure as the data I send to a host or email server.
... or maybe I'm wrong. :)
jbverschoor 4 days ago 0 replies      
If I steal your ssh private key, I can do anything I want
The Node Beginner Book nodebeginner.org
274 points by shawndumas  4 days ago   48 comments top 12
nailer 4 days ago 2 replies      
Good things:

- Tells me what I'll make with the tutorial right up front.

- Lets me know exactly what prerequisite knowledge is (I can tick all those boxes, good).

- Aimed at folks who know traditional backend languages and some JS but aren't JS Gods (a lot of node tutes seem to assume complete JS mastery).

No bad things so far!

Thanks Mr Dumas.

ManuelKiessling 4 days ago 2 replies      
Hi all,

I'm the author of The Node Beginner Book. Thanks for discussing it here.

Your input is a great help. I see the points WA makes regarding the bad things.

It's true that it's yet another Node tutorial chewing on the web server / web app stuff; but I think for the people I'm addressing it's still the most useful scenario, because it allows you to understand how a full-fledged app is put together, and it's a great example for explaining all of the fundamental concepts, new JavaScript ones and conventional ones (because it might make sense to understand what is done differently and what is done in a known fashion).

So nothing really new here - I hope where this tutorial differs is that it (arguably) might be the first "one-stop" tutorial for Node to get beginners started. Not more not less.

Every other resource I could find forced me to google around to fill the gaps - while this is not a bad thing per se, I think sometimes it's nice to have something that really guides you from A to Z.

Like, for example, http://ruby.railstutorial.org/. If I manage to create something that's only 10% as cool, I'm going to be very happy :-)

yardie 4 days ago 1 reply      
This will be an invaluable guide in the future, but I think it needs more work. Hello World has been covered everywhere, so its utility as an intro is pointless if you are already a programmer or are familiar with programming.

The guides I find most useful, in addition to references, are the ones that have you build an application from the ground up, so that you start to understand the pros and cons of the language you are trying to learn. I already know how to do Hello World, and I already know how to create a node server; what I want is a bit more context, like building a simple messaging server and how to create and use simple frameworks. What's even more appreciated is tutorials and samples about the stuff that's already built in.

This is one of the reasons why I like working with Apple and Microsoft. They give you tons and tons of sample code that compiles and works. Want access to process information? Here's how. Want to use the camera? Here's how.

It would be great to have a simple CRUD node app that connects to MySQL. That usually gets me 75% of the way there.

city41 4 days ago 4 replies      
I'm still trying to understand how node actually works. I'm just about to start digging through its source code. It seems most people view node as a magical mystery: they don't understand why it works, just that it does. How is a single-threaded app doing things in parallel? Is it like a game loop where it iterates through all its pending operations and gives each a slice of time to progress forward? Are deeper parts of node multi-threaded? The callbacks being called serially makes perfect sense; it's the parallelism of the actual operations that confuses me.
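
For what it's worth, the "game loop" intuition is roughly right for the callback side: a single thread repeatedly checks for completed operations and runs their callbacks one at a time, while the actual waiting and I/O are delegated to the OS (epoll/kqueue via libuv, plus a small thread pool for file operations). A toy model, with tick counters standing in for the OS making background progress on I/O:

```javascript
// Toy model of a single-threaded event loop. Real Node doesn't count
// ticks -- libuv asks the OS which I/O has completed -- but the shape
// is the same: one thread, one loop, callbacks run strictly serially.
const pending = [];

function startOp(name, ticksNeeded, callback) {
  // "Start" an async operation; ticksNeeded simulates how long the
  // OS takes to finish it in the background.
  pending.push({ name, ticksLeft: ticksNeeded, callback });
}

function tick() {
  // One pass of the loop: see what the "OS" finished, fire callbacks.
  for (const op of pending) op.ticksLeft -= 1;
  const done = pending.filter(op => op.ticksLeft <= 0);
  for (const op of done) pending.splice(pending.indexOf(op), 1);
  for (const op of done) op.callback(op.name); // serial, on this thread
}

const finished = [];
startOp("read file A", 2, name => finished.push(name));
startOp("read file B", 1, name => finished.push(name));
while (pending.length > 0) tick(); // the loop itself

console.log(finished); // [ 'read file B', 'read file A' ]
```

The two reads are "in flight" at the same time, yet every callback runs to completion on the one thread before the next fires.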
d0m 4 days ago 1 reply      
I suggest using syntax highlighting in code examples.
RyanMcGreal 4 days ago 2 replies      
This was a well-written, easy-to-follow introduction, but I'd gladly pay money for an actual Node book that takes the reader from introduction to mastery.
brown9-2 4 days ago 1 reply      
A little confusing that the title refers to "Node" rather than "node.js".
rmason 4 days ago 1 reply      
I second the need for a good CRUD example. Also "clear" instructions on running NODE on Windows would be a big help.
Apocryphon 4 days ago 2 replies      
I've asked this already (http://news.ycombinator.com/item?id=2447840) but what books on JS are good for someone who wants to go into development with Node? Most texts that I know of deal with client-side JS. Right now I'm just reading Eloquent Javascript + JavaScript: the Good Parts, but I would like to see if there's any other books that would be good, especially for someone new to closures.
rick_bc 4 days ago 0 replies      
Kind of off-topic, but I didn't really understand what Node.js was about until this presentation.


ManuelKiessling 3 days ago 0 replies      
For your consideration: I've just added the chapter on how to integrate request handlers into the router:


hutushen222 4 days ago 0 replies      
Though I know only a little JavaScript, I will try it when I have a block of time.
For now, I've just saved it to my personal archive.
Dear Dr. Stallman: An Open Letter alexeymk.com
268 points by AlexeyMK  7 hours ago   95 comments top 25
danieldk 6 hours ago 4 replies      
Years back, I used to be an FSF member. Not that I liked the GPL much (in fact, I mostly use the Apache License), but they raised important issues, and had a track record of investing in fine software (GNU) that I benefitted from a lot.

However, their campaigns were getting so off-target that much of my sympathy dwindled, and I ended my membership. Childish 'anti' advertising, such as 'BadVista' and DDoSing Apple's genius bars (gee, that'll convince anyone who was visiting an Apple Store), only made the whole free software movement look bad, childish, and unsocial. To this day, they seem to put their energy into almost hilarious campaigns (Windows 7 Sins? Seriously?).

This open letter is on the mark, their current course only marginalizes the FSF and part of the FLOSS community. Whatever happened to relying on your own strengths, rather than caricaturizing the competition?

andywood 3 hours ago 3 replies      
I dislike the idea that every single leader in the world must only conduct themselves according to Dale Carnegie, as it were. There is an over-abundance of people doing just that, and judging from this post and many of the comments, a lot of people seem to want others to conform to that sort of uniform "persuasive" behavior. I'm not saying it isn't effective, but surely not everybody needs to do that. Isn't there room in this big world for a few genuine personalities?
emilsedgh 6 hours ago 8 replies      
A few points to remind everyone who criticizes rms:

1) rms is a radical guy. You cannot change that. He fights for what he thinks is right. He is not the kind of person you can ask to censor himself.

If he thinks the U.S. government is to blame for 9/11, no matter how childish it seems to say so in a lecture, he will say it.

If you invite rms for a lecture, he is coming with his radicalism. That is to be expected. You cannot invite rms and expect Steve Jobs.

2) rms is a practical guy. Stop acting like he's a madman who knows nothing. He started GNU and wrote Emacs, glibc, GCC, and probably others. He created the concept of free software and wrote a license as good as the GPL to defend it.

He also managed to gather a community around this very crazy idea of free software.

3) rms doesn't want people only to use free software; he wants people to value their freedom and, as a result of that, use free software.

It doesn't really matter if the whole world uses Android instead of iOS. The point is that, these days, most people involved in the open source community do not even care about free software and the freedom it offers.

Most people are interested in technological advancements or the effects of an open source project on the market. Neither is a concern for rms.

And what I said above is just what I interpreted from his actions; it is not fact.

mbateman 5 hours ago 2 replies      
Just a quick thought from skimming this thread: It seems like there are two issues, radicalism and eccentricity.

Saying that the government may have caused 9/11 is eccentric. It's crazy and "radical" in an uninteresting way, and most intelligent people will ignore it.

But the idea that one shouldn't use Google docs if one values freedom is radical. I think RMS is completely and totally wrong, but the radicalism or apparent impracticality of the idea is not what I object to.

The cheesy campaigns and slogans have elements of both. They are radical ideas presented in an eccentric way. I think the 7 sins stuff, Swindle, etc., are stupid and childish and really have no upside.

But contrary to what the OP seems to suggest, while many people are turned off by radicalism, radicals are influential way out of proportion to what one would be led to expect by making a quick survey of people's negative reactions to them.

It can be hard to separate what's radical and what's merely eccentric. Especially if you're the one trying to figure out how to present radical ideas.

michaelpinto 6 hours ago 1 reply      
If Stallman wasn't a crazy hippy he wouldn't have been into this cause years before even the first dot-com boom. It's unfair to insist that a visionary go corporate all these years later because you feel uncomfortable. If you want to be the next-generation spokesman then become that, but don't waste time trying to make a zebra shed his stripes.
sliverstorm 6 hours ago 1 reply      
On the flipside, if Stallman was less crazy I wouldn't have my second-favorite comic strip of all time:


3dFlatLander 54 minutes ago 1 reply      
When I first started to become computer savvy, Stallman had already moved into the activist stage, and further away from programming. I had always heard that he was a great programmer. But, I've never actually seen any code he's written. The earliest software versions I can find on gnu.org's FTP are from 1994--I'm guessing most of the projects had multiple contributors by this time.

Anyone happen to know where some pure Stallman code can be found?

lell 6 hours ago 1 reply      
Proponents of the FSF desire the hegemony of free software with the same uncompromising fervor as revolutionaries in Russia desired communism in 1917. And the analogy does not end there. In many ways, free software is a communist's dream.

Marx hoped that technology would make human labour redundant through mechanisation of production, allowing humans to spend all their time doing r&d (or r&r) --- he hoped that stuff like food would become free to create. With software, it's already possible to make this a reality, as programs can be replicated without cost. All other things being equal, this should lead to great benefits to society, a prospect that has attracted RMS and others.

That being said, it is as useless to ask RMS to compromise as it would have been to ask those revolutionaries in Russia in 1917.

Furthermore, asking them to take baby steps is condescending, and they will ignore this advice. The reason is that their motivation differs from the majority of the hackernews readership. Sure, if they took baby steps and focused on PR and focused their agenda, free software might become more mainstream, and many entrepreneurs and small companies would benefit. But they don't want entrepreneurs and small companies to benefit, esp. if it means making these compromises.

Essentially, this is why I find articles like this condescending. The point of FSF is to improve society by advocating universal adoption of free software. Entrepreneurs indirectly benefit from these endeavors. Entrepreneurs then complain that the FSF could be more effective if they compromised their platforms. But this is sort of disingenuous because it's essentially the entrepreneurs telling the FSF to redirect effort that would benefit all of society to effort that would benefit the entrepreneurs. Granted, the former efforts are harder than the latter, but it is no one's place to tell the FSF how to direct their charity and advocacy, especially not someone who stands to gain from the reallocation that they themselves suggest.

rjbond3rd 4 hours ago 1 reply      
This man is arguably the greatest hacker of all time. He's hacking the culture, and he's been incredibly successful.

It's shocking and irresponsible that people are commenting on "what rms says" based only on hearsay, speculation and mis-quotes. At least take the time to Google before condemning the man for things he never said.

leoc 5 hours ago 1 reply      
Eben Moglen http://emoglen.law.columbia.edu/ is a more winning spokesman for the FSF these days. I hope he'll forgive me for mentioning that there's plenty of him on YouTube http://www.youtube.com/results?search_query=eben+moglen :)
mark_l_watson 3 hours ago 0 replies      
I don't think that Stallman should tone down his message. Sure, he can be rough, but so what. (I've experienced this in email with him when he asked about re-releasing some of my early Lisp books under the FSF doc license, but that is OK.) The world needs people with strong contrarian opinions and even if I don't always agree I value what they say.

Way off topic, but: I can imagine a future world where there is an underground using free software, private but linked ad-hoc networks, etc. The victories of the super rich over the rest of us in the last decade actually have me looking at fiction like the world in Gibson's 'Johnny Mnemonic' as a real possibility for the future.

jrockway 5 hours ago 2 replies      
I don't think this guy gets it. Clearly, he has drawn the line at "proprietary software is fine, as long as it's useful to me". To RMS, though, that's not where the line is: he simply refuses to use software he can't tweak or audit. That's not like calling Obama Hitler or saying global warming is a scientific fraud. It's just an ideology, like not driving a car or only eating foods that don't come from animals. Nothing wrong with that, so why all the hate?

This article is sillier than calling the Kindle the "Swindle".

elwin 6 hours ago 0 replies      
I hear this opinion a lot, and I think it slightly misses the point. The open-source world already has plenty of socially conventional advocates promoting their products. If the FSF became an ordinary open-source software promoter, it wouldn't have nearly as much influence as, say, the Ubuntu marketing team.

But there aren't many organizations trying to derive software principles from objective logic instead of subjective cost-benefit analysis, who insist that freedom and controlling your own computing is not just another feature but a vital issue. RMS may not convert many Windows users, but he does come up with valuable insights. If no one else is going to be a vocal, uncompromising advocate for software users, I can cringe through Windows 7 Sins and jokes about letting presidents drown.

Typhon 6 hours ago 0 replies      
Can somebody tell me when exactly Stallman said that someone who used proprietary software was a hater of freedom?
In the last interview I read, he seemed able to understand that almost nobody would go as far as him on the side of software freedom.


autarch 2 hours ago 0 replies      
I think Stallman needs to read this book - http://www.amazon.com/gp/product/159056233X

Activism doesn't need to be mysterious; there's lots of psychological research you can look to when you ask "how can I convince people to {go vegan, support software freedom, support gay rights}?"

cosmok 2 hours ago 0 replies      
It is almost impractical for me to be like Stallman and shun a lot of hardware and software. But I do not wish for Stallman to be any less radical than he is: by being radical he gains my attention, some of his thoughts and ideas stick with me, and he has made me think about 'freedom' while buying any piece of hardware or software.

I would never want to work with him on anything - I watched him tear people apart while responding to their concerns - but people like him are essential to the Free Software movement.

pgbovine 5 hours ago 1 reply      
minor nit: i don't think rms is a "Dr.", since he didn't get a Ph.D. (unless he has a secret M.D.)

From wikipedia:
"Stallman then enrolled as a graduate student in physics at MIT, but abandoned his graduate studies while remaining a programmer at the MIT AI Laboratory. Stallman abandoned his pursuit of a doctorate in physics in favor of programming."

perhaps he has an honorary doctorate?

st3fan 4 hours ago 1 reply      
When you talk about the risk of software as a service, you can mention that the US gov't is attempting to collect identifying user data from the Wikileaks Twitter account, or the recent domain name seizures of PokerStars and other online gambling websites.

These are practical consequences of a lack of Free Software

Huh Wut!?

How is free or open software going to prevent any company from receiving a court order to disclose data about its users?

This has nothing to do with technical implementation of a service.

jberryman 2 hours ago 0 replies      
I really appreciated the tone of this piece. Respectful, well written and convincing.
smellyboy 6 hours ago 0 replies      
Whilst I'm against negative campaigning, rms has been and still is the conscience of free software. We would be in a very bad place if not for him. Yeah, sometimes he's a dick, but then we all are.
gsf 1 hour ago 0 replies      
I wonder how many 24-year-old CS students have given this same advice to Stallman in the last 25 years. Not that Alexey shouldn't voice his thoughts, but it's well-trod ground.
angus77 4 hours ago 0 replies      
I pretty much agreed with everything except the ridiculous idea that Stallman should try out Google Docs so he could see how "good" it is.
cgray4 3 hours ago 1 reply      
I really don't think the signs that are used to illustrate this article are comparable. The Kindle/Swindle sign isn't making up a new name for the Kindle. It is saying that this thing in the sign is a swindle. You shouldn't buy it because it makes false promises. If a person made up a sign with a bottle of Coke and put "Tastes Great" beneath it, that person wouldn't be calling Coke "Tastes Great".

Sure, it's negative advertising, but that doesn't put it on the level of Lyndon Larouche advertising. It might be on the level of the people who called Microsoft M$ on Slashdot 15 years ago, but I don't really think it is. I didn't see the talk, so I don't know if he called it a swindle during the talk but if he did, then I would put the remark in the latter category.

I'm even less sure what the objection to the other sign is. Is it the word "sins"? They want you to go to their website to see the things that they don't like about Windows 7. Mainly, I would guess, in the way that it restricts your freedom. What is a short word that is less incendiary that means things-I-don't-like-about-a-thing-that-restricts-my-freedom?

Finally, "baby steps"? In this day and age? I've used almost exclusively free software for over ten years. It's really not that hard. I prefer it. So start using free software or don't. I don't care. But don't pretend it's a big hassle that someone told you that you should.

(To be clear, I'm not a total apologist for RMS. He has said some distasteful things about women and from what I hear his hygiene isn't the greatest either.)

Tichy 5 hours ago 1 reply      
Damn you, Photoshop (presumably the only piece of closed software some people just can't do without).
6ren 3 hours ago 1 reply      
> When we asked, you mentioned that you do not write much code anymore.

This may be partly why he seems out of touch with programmers.

Linus Torvalds on Garbage Collection (2002) gnu.org
259 points by AndrewDucker  1 day ago   198 comments top 26
ekidd 1 day ago  replies      
Shortly before Linus wrote this article in 2002, I wrote an XML-RPC library in C that used reference counting. By the time I was done, I'd written 7,000+ lines of extremely paranoid C code, and probably eliminated all the memory leaks. The project cost my client ~$5K.

The standard Python xmlrpc library was less than 800 lines of code, and it was probably written in a day or two.

Was my library about 50 times faster? Sure, I could parse 1,500+ XML-RPC requests/second. Did anybody actually benefit from this speed? Probably not.

But the real problem is even bigger: Virtually every reference-counting codebase I've ever seen was full of bugs and memory leaks, especially in the error-handling code. I don't think more than 5% of programmers are disciplined enough to get it right.

If I'm paying for the code, I'll prefer GC almost every time. I value correctness and low costs, and only worry about performance when there's a clear business need.
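The error-path discipline ekidd is describing can be sketched even in Python (a toy model with hypothetical names; real refcounted C does the same dance with an explicit decref before every return, which is exactly where the leaks hide):

```python
class RefCounted:
    """Toy stand-in for a manually refcounted C object."""
    def __init__(self):
        self.count = 1        # the creator holds the first reference
        self.released = False

    def incref(self):
        self.count += 1

    def decref(self):
        self.count -= 1
        if self.count == 0:
            self.released = True  # stands in for free()

def parse(obj, fail):
    obj.incref()
    try:
        if fail:
            raise ValueError("bad input")
        return "ok"
    finally:
        obj.decref()  # forgetting this on *any* exit path is the classic leak

o = RefCounted()
try:
    parse(o, fail=True)
except ValueError:
    pass
o.decref()
print(o.released)  # True: the counts balanced even on the error path
```

With a tracing GC none of this bookkeeping exists at all, which is why the 800-line Python library could ignore the problem entirely.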

barrkel 1 day ago 2 replies      
Reference counting is GC; a poor form if it's the only thing you rely on, but it is automatic memory management all the same.

Generational GC will frequently use the (L2/L3) cache size itself as its smallest generation, meaning it shouldn't suffer from the pathologies talked about by Linus here.

What GC really gives you, though, is the freedom to write code in a functional and referentially transparent way. Writing functions that return potentially shared, or potentially newly allocated, blobs of memory is painful in a manual memory management environment, because every function call becomes a resource management problem. You can't even freely chain multiple invocations (y = f(g(h(x)))) because what if there's a problem with g? How do you then free the return value of h? How do you cheaply and easily memoize a function without GC, where the function returns a value that must be allocated on the heap, but might be shared?

Writing code that leans towards expressions rather than statements, functions rather than procedures, immutability rather than mutability, referentially transparent rather than side-effecting and stateful, gives you big advantages. You can compose your code more easily and freely. You can express the intent of the code more directly, letting you optimize at the algorithm level, while the ease of memoization lets you trade space for speed without significantly impacting the rest of your program. Doing this without GC is very awkward.

GC, used wisely, is the key to maintainable programs that run quickly. You can write maintainable yet less efficient programs, or highly efficient yet less maintainable programs, easily enough in its absence; but its presence frees up a third way.
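The memoization barrkel mentions really is a one-liner once a collector owns the heap; a sketch in Python (build_tree is a hypothetical function standing in for anything that returns a heap-allocated, shareable value):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def build_tree(depth):
    # Returns an immutable, heap-allocated structure. Callers may freely
    # share or chain it, because the collector tracks every reference.
    if depth == 0:
        return ("leaf",)
    child = build_tree(depth - 1)
    return ("node", child, child)

a = build_tree(10)
b = build_tree(10)
print(a is b)  # True: the cached value is shared, never defensively copied
```

Doing the same with manual memory management forces every caller to know whether the returned value is owned, borrowed, or cached, which is exactly the resource-management problem described above.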

jfr 1 day ago 7 replies      
> A GC system with explicitly visible reference counts (and immediate freeing) with language support to make it easier to get the refcounts right [...]

To be a little pedantic on the subject, such a system (reference counting and immediate freeing) is a form of automatic memory management, but it is not GC in any way. Garbage collection implies that the system leaves garbage around, which needs to be collected in some way or another. The usual approach to refcounting releases resources as soon as they are no longer required (either by free()ing immediately or by sending them to a pool of unused resources), so it doesn't leave garbage around and doesn't need a collector thread or mechanism.

There are partial-GC implementations of refcounting, either because items are not free()d when they reach zero references, or to automatically detect reference loops which are not handled directly.

I agree with Torvalds on this matter. GC as it is promoted today is a giant step that gives programmers one benefit, solving one problem, while introducing an immeasurable pile of complexity to the system, creating another pile of problems that are still not fixed today. And to fix some of these problems (like speed) you have to introduce more complexity.

This is my problem with GC. I like simplicity. Simplicity tends to perform well, and being simple also means there is little room for problems. Refcounting is simple and elegant; you just have to take care of reference loops, which also have a simple solution: weak references. I can teach a class of CS students everything they need to know to design a refcounting resource management system in one lesson.

GC is the opposite: it is big, complex, and a problem that the more you try to fix it, the more complex it becomes. The original idea is simple, but nobody uses the original idea because it performs so badly. To teach the same class how to design a GC system that performs as well as we expect today, an entire semester may not be enough.
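CPython happens to be a live example of the design jfr is advocating: refcounting with immediate freeing as the primary mechanism, weak references to break loops, and a cycle detector only as a backstop. The one-lesson version, sketched with hypothetical names:

```python
import weakref

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []
        self._parent = None  # held weakly so parent<->child forms no cycle

    @property
    def parent(self):
        return self._parent() if self._parent is not None else None

    def add_child(self, child):
        child._parent = weakref.ref(self)
        self.children.append(child)

root = Node("root")
leaf = Node("leaf")
root.add_child(leaf)
assert leaf.parent.name == "root"
del root  # refcount hits zero: freed immediately, no collector pause
print(leaf.parent)  # None: the weak reference didn't keep root alive
```

The deterministic, immediate freeing after `del root` is exactly the "no garbage left around" property the comment describes (in CPython specifically; a tracing collector gives no such timing guarantee).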

famousactress 1 day ago 2 replies      
We should really encourage each other to put the date in the title when submitting old articles to HN. It's a total brainf*k to read through the entire article and not realize the context it was in, or to just glance at the title and assume the topic is a current one. Just saying.

[Edit] Not that I have a problem with older posts, btw.. I actually really like them most of the time. But the date would give everyone a better opportunity to evaluate whether they want to read the article, and would be reading it with reasonable context.

loup-vaillant 1 day ago 6 replies      
So. Programs that use Garbage Collection tend to be slow.

Cause: hardware doesn't like it.

Solution: fix the hardware?

Seriously, I'm afraid we're stuck in a local optimum here. It is as if machines are optimized for the two dominant C/C++ compilers out there, and we then have to optimize our programs against that, closing the loop. Shouldn't compilers and hardware be designed hand in hand?

jasongullickson 1 day ago 3 replies      
What he's advocating sounds a lot like how things work in the iOS world, in my experience.
wladimir 1 day ago 4 replies      

Though his argument about cache does still hold.

sklivvz1971 1 day ago 3 replies      
It's 2011, FFS.
This kind of mindset is really self defeating in the long term.
Sure, hand optimizing is better. Having a gazillion lines of shit legacy code and technical debt to fix because you hand-optimized for the '90s is not so great.
I'll keep my GC and sip a Mojito on the beach, while Linus keeps on fixing Linux's "optimizations" ten years from now.
iskander 1 day ago 1 reply      
I'm very suspicious of anyone (even Linus) claiming that gcc is slow because of its memory management. The codebase is crufty and convoluted--- it's probably slow for a thousand different reasons. If you refactored into a clean design and rewrote the beast in OCaml (or any other language with a snappy generational collector), you'd probably get a large performance boost.
__david__ 1 day ago 0 replies      
I like the way the D language approached this. It's garbage collected but it also has a "delete" function/operator. That way you can use garbage collection if you'd like, or you can manually free memory when you think it's worth it.

That seems like a reasonable compromise and I'm surprised that more languages don't do it.

albertzeyer 1 day ago 1 reply      
When I read this, I immediately thought about std/boost::shared_ptr. This is a bit ironic since Linus hates C++ so much.

shared_ptr is a really nice thing in C++. (For those who don't know: It is a ref-counting pointer with automatic freeing.) And its behavior is very deterministic. In many cases in complex C++ applications, you want to use that.

joeyespo 1 day ago 0 replies      
I think this is another case of everybody thinks about garbage collection the wrong way: http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047...

From the article: "Garbage collection is simulating a computer with an infinite amount of memory. The rest is mechanism."

Whether or not it's reference counting or generational, the goal is still to simulate infinite memory. That way, you can focus on the high-level problems instead of the technical memory-related details. So it's not necessarily a bad mindset to have.

manveru 1 day ago 1 reply      
Might be worth mentioning Tcl in this context, as it uses reference counting for the GC [1].

It also doesn't allow circular data structures, which are quite hard to implement if all you have are strings anyway.

[1]: http://wiki.tcl.tk/3096

KirinDave 1 day ago 0 replies      
That was 2002. Here is the state of the art in 2008: http://cs.anu.edu.au/techreports/2007/TR-CS-07-04.pdf

Unsurprisingly, things have changed. Many of Linus's complaints were valid, and we've learned how to address them.

joshhart 1 day ago 2 replies      
Here are a couple of reasons why I think it's not so clear cut:

1. If garbage collection was that damaging to the cache, Haskell wouldn't be nearly as fast as C.
2. Copy-on-write data structures are nice because the immutability allows for concurrent access without locking.

Granted, this was from 2002 and Linus may no longer feel so strongly about the topic.

ww520 1 day ago 1 reply      
This is like arguing assembly is better than high level languages because it's faster with explicit control. The thing is 99% of the time it doesn't matter.

In most cases, GC-based programs have good enough performance to get the job done. For the 1% case, sure, use C/C++/assembly to get the explicit control and performance. Doing things in non-GC systems because of a potential caching problem sounds like a case of premature optimization.

Vlasta 1 day ago 2 replies      
I like him mentioning the programmer's mindset associated with GC being a big danger. Some people consider GC a magic bullet and refuse to think about what's happening under the hood. I do not consider that a good habit.
kerkeslager 1 day ago 0 replies      
> In contrast, in a GC system where you do _not_ have access to the explicit refcounting, you tend to always copy the node, just because you don't know if the original node might be shared through another tree or not. Even if sharing ends up not being the most common case. So you do a lot of extra work, and you end up with even more cache pressure.

It's possible that things were different in 2002, but I don't really think this is the case now. In general, I make the node immutable and never copy it (copying an immutable object makes no sense). In a well-designed code base, mutations happen within the function where the data is created (read: on the stack, where cache locality is a given). Immutability also addresses Linus' concerns with thread-safety. And that's not accounting for concerns which Linus DOESN'T mention, such as increased development speed and correct program behavior.

I'm not the only one saying this. Josh Bloch, for example, recommends immutability and cites cache reasons (http://www.ibm.com/developerworks/java/library/j-jtp02183/in...). And many languages (Haskell, Clojure) are designed heavily around avoiding mutation and sharing nodes within data structures.

This talk of copying nodes to avoid your objects changing out from under you sounds a lot like what I call "writing C in Java". Linus is looking at this from the perspective of, "If they took away explicit memory management from C, this is how I would do it." But OF COURSE if you just bolt a feature like GC into a language that didn't have it before, it won't work well. Effective cache usage in a GCed system requires other language constructs (like immutability).

Now, after all that, I won't make the claim that immutability in a GCed language like Java or C# is faster or even as fast as C with explicit memory management: it would take a lot of profiling code and comparing its functionality to make that claim with any kind of certainty. But it doesn't seem like Linus has done that profiling and comparison either.
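The sharing kerkeslager describes is easy to make concrete. With immutable nodes, an update copies only the root-to-leaf path and shares every untouched subtree, so there is no "copy because it might be shared" tax. A small sketch using nested tuples as a stand-in for a real persistent structure:

```python
# Immutable binary tree over the range [lo, hi): leaves are ints,
# internal nodes are (left, right) tuples.
def build(lo, hi):
    if hi - lo == 1:
        return lo
    mid = (lo + hi) // 2
    return (build(lo, mid), build(mid, hi))

def update(node, lo, hi, i, value):
    # Path copying: rebuild only the nodes between the root and leaf i.
    if hi - lo == 1:
        return value
    mid = (lo + hi) // 2
    left, right = node
    if i < mid:
        return (update(left, lo, mid, i, value), right)
    return (left, update(right, mid, hi, i, value))

t1 = build(0, 8)
t2 = update(t1, 0, 8, 0, 99)
print(t2[1] is t1[1])  # True: the untouched half is shared, not copied
```

This is the same structural-sharing idea Haskell's and Clojure's persistent data structures are built on, and it only stays safe because a collector (not the caller) decides when a shared subtree dies.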

mckoss 1 day ago 1 reply      
Didn't Linus forget

    newnode->count = 1;

mv1 1 day ago 0 replies      
I find it sad that, to this day, one has to spend so much time worrying about memory management to get decent performance. I've yet to work on a performance oriented project where I didn't need to write at least a couple custom allocators to reduce memory management overhead.

GC systems are no better in this regard. I was told of an interesting hack in a Java program that implemented a large cache of objects by serializing them into a large memory block so that the GC saw it as one big object and didn't traverse it. This resulted in dramatically reduced GC pause times (10x+). When needed, objects were deserialized from the array. Disgusting, but effective.

teh 1 day ago 1 reply      
Slightly related: he mentions that when the containing structure of a sub-structure goes away, you can free all its resources. The guys behind Samba 4 developed talloc [1], which is built around that idea.

[1] http://talloc.samba.org/talloc/doc/html/index.html
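The ownership idea behind talloc (free a context and everything allocated under it goes away too) fits in a few lines. This is a toy model of the hierarchy, not the real talloc API:

```python
class Context:
    """Toy hierarchical allocator: children die with their parent."""
    def __init__(self, parent=None):
        self.children = []
        self.freed = False
        if parent is not None:
            parent.children.append(self)

    def free(self):
        for child in self.children:  # freeing a context frees its subtree
            child.free()
        self.children.clear()
        self.freed = True

conn = Context()                # e.g. one network connection...
request = Context(parent=conn)  # ...owning a request...
buf = Context(parent=request)   # ...owning a parse buffer
conn.free()
print(buf.freed)  # True: releasing the connection released its whole subtree
```

The appeal is that lifetimes follow the data's natural ownership tree, so there is exactly one free() call per connection rather than one per allocation.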

LarrySDonald 1 day ago 0 replies      
So.. Essentially man the F up and live without GC in the parts that are going too slow instead of saying "Oh it's cool, just wait ten years and hardware will be fast enough to run this anyway". Use GC for stuff that needs to be simple and is fast enough anyway, don't bog down code that's too slow with it.
earino 1 day ago 0 replies      
Guy who writes kernel code cares about performance, film at 11.
mmcconnell1618 1 day ago 1 reply      
I'm quite sure the machine code generated by my compiler isn't nearly as good as it could be if I hand coded it but the efficiency of not writing in machine code far outweighs any potential performance gains.
VladRussian 1 day ago 0 replies      
"All the papers I've seen on it are total jokes."

Couldn't agree more. We were actually laughing in the office when an office mate brought up such a paper many years ago.

"I really think it's the mindset that is the biggest problem."

Linus is a superhero: 20+ years working on the supertask of changing people's mindset.

mfukar 1 day ago 1 reply      
Hacker News, another place where 10-year-old emails are submitted as news.
Doom engine code review fabiensanglard.net
251 points by franze  1 day ago   26 comments top 5
hapless 1 day ago 4 replies      
It's amazing that this would run halfway well on a 33 MHz 486. Doom had a 35 fps cap, and ran at 320x240 (square pixels):

2.7 million pixels per second at 35 fps (the cap).

1.4 million pixels per second at 18 fps (~50% of cap).

At the more realistic target of 18 fps, you have 24 clock cycles per pixel. A 486 averaged about 0.8 instructions per clock, so you're looking at 19 instructions per pixel. With a 33 MHz memory bus and the DRAM of the day, you're looking at about 5 clocks for memory latency. That looks like an upper bound of no more than 4 memory operations per pixel.

A convincing 3d renderer averaging 19 instructions and 4 memory operations per pixel. And we're not even counting blit/video delays here. Good lord is that savage optimization work. Carmack is famous for a reason.

P.S. The really scary thought is that Doom would hypothetically run on any 386 machine -- can you imagine painting e.g. 160x120 on a cacheless 20 MHz 386 laptop?
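hapless's budget, redone as plain arithmetic (every input is a figure from the comment, including the rough 0.8 instructions-per-clock estimate for a 486):

```python
pixels = 320 * 240          # 76,800 pixels per frame
fps_real = 18               # the "realistic" target, ~50% of the 35 fps cap
clock_hz = 33_000_000       # 33 MHz 486
ipc = 0.8                   # rough 486 instructions per clock

px_per_s = pixels * fps_real            # 1,382,400 pixels/second
cycles_per_px = clock_hz / px_per_s     # ~24 clock cycles per pixel
insns_per_px = cycles_per_px * ipc      # ~19 instructions per pixel
print(round(cycles_per_px), round(insns_per_px))  # 24 19
```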

thibaut_barrere 1 day ago 2 replies      
The first (and only one, currently) comment brought me back years ago! A major performance trick back then was to ensure the code and data would remain into the (very small) cache, as well as preferring structures that would be read in order.


"Because walls were rendered as columns, wall textures were stored in memory rotated 90 degrees to the left. This was done to reduce the amount of computation required for texture coordinates"

The real reason is faster memory access when reading linearly on old machines, with fewer CPU cache misses. It's an old trick used in smooth rotozoomer effects in the demo scene years ago.

light3 1 day ago 0 replies      
Luc 1 day ago 1 reply      
Michael Abrash' "Zen of Graphics Programming" has a good overview of many of the tricks used during that era: http://www.amazon.com/Zen-Graphics-Programming-Ultimate-Writ...

(now, of course, mainly to be read for nostalgic reasons).

Tyrant505 1 day ago 0 replies      
Any qed users? I liked it for map editing most.
Git Cheatsheet (Visualization) ndpsoftware.com
249 points by adulau  4 days ago   21 comments top 17
stretchwithme 4 days ago 0 replies      
Just knowing this exists helps a lot. Haven't had time to go to git school.

It's great to visualize everything, but having a GUI that enabled you to access all the commands would be even more powerful for non-brainiacs like myself.

I use Subversion a lot on the Mac and have done many subtle things with the available commands but there's often a learning curve to make sure I don't shoot myself in the foot.

Back on Windows there was TortoiseSVN, which was a breeze to set up and use. And I know there are similar things for the Mac but haven't found one that was a no-brainer like TortoiseSVN. Its merge editor was pretty good too, if memory serves.

bostonvaulter2 4 days ago 0 replies      
Nice visualization. The other day I was looking for an all-in-one visualization like this. It would be for beginners, so it wouldn't include all of the more advanced commands/flags like "add --interactive". I couldn't find a decent one so I created my own.


Google docs doesn't include a decent curved line/connector feature so I need to re-make it in Inkscape or something.

mythz 4 days ago 0 replies      
This is a seriously awesome reference to have around! You should definitely consider wrapping it up in a chrome webstore app or something!
cpeterso 4 days ago 0 replies      
My favorite Git "cheatsheet" is easygit (eg), a Perl "porcelain" wrapper for git. It's a training-wheels wrapper: you use real git commands, so you don't need to unlearn anything when graduating to git. easygit has safer defaults and extra sanity checks (like warning when you forget to stage modified files). The help messages are more verbose and use more consistent terminology than git's man pages. For example, easygit always uses the term "staged" instead of "index/staged/add/hard/soft/mixed/cached/HEAD/etc."


dtwwtd 4 days ago 2 replies      
Nice, I like the interactivity. I realize it would make the chart really cluttered, but I wish there was a button to show all of the workflow arrows at once.
runningdogx 4 days ago 0 replies      
Nice, but can you normalize the height of the command-name strips? They can range from 17-19px (before +4 padding) in height, depending on the local font setup I suspect. Cumulatively that's enough of a difference so that in an unusual case (when they're all 19 tall, as they can be in a gentoo+chromium setup), the Local Repository column of strips extends down and obscures the description.
teyc 4 days ago 0 replies      
This is absolutely the best one. It is easy to understand the relationship between the various repositories. Bookmarked.
mkramlich 3 days ago 0 replies      
This is so good and useful and original that I recommend you try to get it linked to from the main Git website, and from GitHub. Great job! It really helps visualize what all the commands do, what they effect.
sateesh 3 days ago 0 replies      
Very nicely done, good visualization and organization of command reference.

It suggests using a 'git pull' for pulling changes from the remote repo. 'git pull' automatically merges changes into the current branch, without giving the user a chance to review what is about to land. I think the better way to fetch changes from a remote repo is 'git fetch' followed by 'git merge'. Also see: http://longair.net/blog/2009/04/16/git-fetch-and-merge/

chrismanfrank 4 days ago 0 replies      
really well done. i wish i had this when i first learned git.
neilalbrock 4 days ago 0 replies      
I think this could be really good as a way to explain Git to folks who maybe haven't had the exposure to DVCS or Git itself. Version control is surprisingly hard for some people to wrap their heads around. Nice work.
kenjackson 4 days ago 0 replies      
I just see some colored blocks that change color on click.
sbarg 4 days ago 0 replies      
I really like the ndpsoftware.com home page. Very cool...
qusiba 4 days ago 0 replies      
bazaar works well for me.
kevinleversee 4 days ago 0 replies      
thanks for sharing
joakin 4 days ago 1 reply      
Please replace Courier and leave it monospaced; I have my own preferences on fonts, and I'm sure everybody else does too.


Working Best at Coffee Shops theatlantic.com
243 points by GiraffeNecktie  4 days ago   84 comments top 38
edw519 4 days ago 10 replies      
Not my experience. Here's why...

A coffee shop and a laptop are convenient, even fun ways to produce mediocre content. A blog entry, email, maybe even cleaning up a few lines of code. But as most programmers know, sooner or later, you have to enter another mode to get that additional "oomph" to get the important critical-path work done. Some call it "the zone".

In fact we just talked about this the other day:


A coffee shop is probably the best way to think you've entered the zone without actually entering it. That can be very dangerous. You've had a good time surrounded by like-minded people (kinda like being here at HN), but you've never actually transcended anything really important.

I spend time at the library one or more times per week, sometimes just to get out of the office. It's fun to think and get transactions done, but fortunately, I have an internal guide that tells me, "Time to get back to the silence, large screen, and comfortable chair of the office to get the real work done."

I realize that this isn't the same experience as others, but I still often want to ask them, "Did you really get done what you wanted to get done in the coffee shop?"

This subject reminds me of a great line from Joel Spolsky's "Hitting the High Notes":


"Five Antonio Salieris won't produce Mozart's Requiem. Ever. Not if they work for 100 years."

My version:

"Five workers in the coffee shop won't produce the killer work that one in a dedicated space can produce. Ever. Not if they drink 100 lattes."

mechanical_fish 4 days ago 6 replies      
My theory is subtly different from these. I sometimes call it "ambient sociability".

Humans are pack creatures. If you put us in solitary confinement we go insane. This is generally true even for introverted people; only on the far edge of the bell curve do you find people who crave absolute solitude for weeks or months on end (and these people tend to be really odd, and it's often hard to tell if the oddness is cause or effect.)

However, as every reader of programming productivity books knows, being surrounded by a bunch of people that is constantly interrupting you makes it hard to focus. And so civic design has evolved the library, the coffeeshop, and the coworking space: Places where you can be alone yet also surrounded by people.

The secret is to surround yourself with people who don't have the same agenda as you. Then you won't often be interrupted by things that break your focus: The staff might occasionally ask to refill your coffee, and you'll get interrupted if the building catches fire, but otherwise you can work on your own thing.

timr 4 days ago 3 replies      
I realize that I'm fighting the tide, but I'm going to ask anyway: please don't do this very often. And when you do, please have basic respect for other people, purchase from the business frequently (every hour or so, at minimum), limit your total seat time to an hour if the place is crowded, and don't be "that guy" -- the dude spread over two tables, with the laptop stand, the iPad, the backup drive, the portable keyboard, mouse, etc.

There's nothing worse than walking into an otherwise pleasant cafe and being unable to sit because the place is filled with laptop zombies and/or dudes (and it's always dudes) holding "business" meetings. In San Francisco and Seattle, there are dozens of cheap co-working facilities, but you still can't get a seat to eat lunch in a place like Coffee Bar or the Creamery for all of the nerds that crowd in between 9AM and 5PM.

Finally, if you find yourself working at a cafe for eight hours a day, every day, you're abusing the system. Go to a co-working facility and pay the minimal amount of money for a desk. If you can afford to pay for multiple coffees every day, you can afford a co-working space. If you can't afford either, you should work at home, or in a library. The coffee shop is not your personal, low-rent office space.

T_S_ 4 days ago 0 replies      
If you are in the Bay Area check out Hacker Dojo in Mountain View. If you want, you can think of it as a coffee shop with

1) Free coffee. Open 24 hours.

2) Screens you can hook up if needed.

3) Other coffee drinkers who also hack.

4) Very fast internet connection.

5) Happy hour on Friday, when you need to lose the caffeine edge.

Guests are welcome. There is a box for voluntary suggested donation. Membership is $100 per month. Remember the "no free cup of coffee" theorem. :)

kariatx 4 days ago 1 reply      
I am probably the last person on HN to realize this. I've been working at home for ages (like since Hanson was still a popular band), and I'm really finding out how wacky the long term psychological effects are.

Spending too much time at home makes me more negative about everything. In particular, I'm starting to wonder if working from my home office makes me a little too pessimistic about my business and less willing to take risks. I feel pretty crabby about work when I'm working from my home desk, but as this article points out, working at a coffee shop makes my work seem cooler. That shift in attitude is a revelation both for productivity and creativity.

datapimp 4 days ago 0 replies      
My theory is that opening myself up to the possibility of meeting a woman keeps me on my toes and harnesses my Darwinian energy, which I then channel into my work. Working at home in my underwear doesn't create this psychological situation.
gwern 4 days ago 0 replies      
> ...when we are alone in a public place, we have a fear of "having no purpose". If we are in a public place and it looks like that we have no business there, it may not seem socially appropriate. In coffee-shops it is okay to be there to drink coffee but loitering is definitely not allowed by coffee-shop owners, so coffee-shops patrons deploy different methods to look "busy". Being disengaged is our big social fear, especially in public spaces, and people try to cover their "being there" with an acceptable visible activity.

This reminds me of http://lesswrong.com/lw/2qv/antiakrasia_remote_monitoring_ex... - 2 guys using VNC to simulate the patrons of the coffee shop. You can't help but feel that someone might be watching & judging you, and that prods you into doing something more creditable.

Why does it work? My own opinion is that it's a hack on hyperbolic discounting (http://en.wikipedia.org/wiki/Hyperbolic_discounting): we are wired to over-value short-term gratification, so small penalties or shifts in difficulty can put our desires back in whack to where they should rationally be.

Hence, melatonin can help you maintain the right sleep schedule & not stay up on HN all night because it makes you sleepy (http://www.gwern.net/Melatonin.html#self-discipline); but this perspective cuts both ways - if you can make yourself do things with small shifts in penalties/rewards, small shifts in penalties/rewards can stop you from doing things, hence you should 'beware trivial inconveniences' (http://lesswrong.com/lw/f1/beware_trivial_inconveniences/).
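For the curious, the hyperbolic model from that Wikipedia link is simple enough to sketch in a few lines (a toy illustration; the impatience constant k and the reward sizes are arbitrary, not anything from the article):

```python
# Hyperbolic discounting: perceived value V = A / (1 + k * D)
# for a reward of size A delivered after delay D, with impatience constant k.
def hyperbolic_value(amount, delay, k=1.0):
    return amount / (1.0 + k * delay)

# The characteristic preference reversal: up close, the small immediate
# reward beats a larger, slightly delayed one...
assert hyperbolic_value(10, 0) > hyperbolic_value(15, 1)

# ...but push both rewards far into the future and the larger one wins,
# even though nothing changed except the viewing distance.
assert hyperbolic_value(10, 10) < hyperbolic_value(15, 11)
```

This is exactly the lever the "small penalties or shifts in difficulty" argument pulls on: tiny changes in immediate cost swing choices that the long-term payoffs alone wouldn't.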

famousactress 4 days ago 0 replies      
I agree with working in restaurants. The coffee shops near me that have wifi are full of telecommuters gaming for the tables with power outlets. For some reason I find it way more distracting to be surrounded by other remote employees ogling their laptops than I do people eating or drinking coffee.

In my area there are a few restaurant/bars, however, that have great wifi, few remote employees, food that isn't scones, and better music. Also, they're completely dead between meal times, so no one minds if I hang out from noon to 4pm.

ben1040 4 days ago 0 replies      
I like the theory they posit about the social expectation to look busy.

It's why I often take a walk down to the university library to work. There's a long-standing social expectation that when you go to the library, you're there to do serious things and be productive. You're not going there to yak with your friends on IM, read HN, or surf Reddit for funny cat pictures, which is what I will invariably find myself doing if I were at home. So I put my head down and get to work!

Going to the library to work is one way I can find the motivation to still put in 5 hours on a side project even after putting in an 8 hour work day. I sit down at a table there and can immediately get traction on my projects.

It doesn't hurt either that the library is a few blocks from my house, is open until 1 AM, and sells good coffee. I am going to have to find another good nearby place that's open late though because the library is only open 8-5 in the summer.

ezy 4 days ago 1 reply      
I love taking a break from home working and hitting a coffee shop. For me, it's just a way to improve my mood. Similarly, I'll break out the lawn chair and sit outside to work -- that is just as effective emotionally, but not quite as practical (glare, no real table, etc.)

My only gripe with coffee shops is the bathroom break. No way am I leaving a $2000 laptop on a desk unattended, but after a few cups of anything, you have to go. Sometimes I'll ask someone to watch it for me, but if there's no one who's around consistently or who seems trustworthy, this can't always work. So I end up packing up my shit then unpacking it again...

city41 4 days ago 1 reply      
I completely agree with this article. My problem is sacrificing my two glorious 24" monitors for a tiny laptop screen. I feel the sacrificed real estate really bites into the productivity gains of being in a place like a coffee shop.
blhack 4 days ago 0 replies      
There is an interesting social accountability aspect to working in a coffee shop, for me at least.

I work best with something playing in the background. A movie, some Futurama, some It's Always Sunny in Philadelphia, etc. Or I work best with 30-40 minutes of code code code code followed by a few minutes of playing Minecraft, or Reddit, or Facebook, or something, followed by more coding.

When I'm at "work", in my office, I can't really just zone out and play Futurama on one of my monitors. That would be totally inappropriate, and I would probably get called out on it by one of my coworkers.

And if I'm at home, I have the dog wanting to play, or the roommate wanting to go out, or a really awesome stereo begging to be played with, or a garage full of DIY projects...

The coffee shop is right in the butter zone. I would never sit there for 8 hours watching Futurama, but I don't feel bad if I watch a bit of it. The pressure to not look like an idiot is enough to keep me focused, and the freedom to do whatever I want helps keep me relaxed.

It's perfect. This is made better by the fact that my local coffee shop has nice little workspaces for people to use.

(If you're a Phoenician, the coffee shop I'm talking about is Xtreme Bean in Tempe)

joshklein 4 days ago 0 replies      
I think whether you work best in one kind of ambient atmosphere versus another (a place of social interaction versus a place of quiet) depends on whether you are an introvert or an extrovert.

Introverts and extroverts are equally able to function and enjoy both environments, but an introvert has to "turn it on" in a social atmosphere, and therefore needs quiet alone time to recharge his batteries. An extrovert is the opposite; he has to "zone himself in" to make use of quiet alone time, and needs social time to recharge his batteries.

I think extroverts are the people who enjoy the ambience of the coffee shop, drawing energy from the hustle and bustle around them. Introverts find this more taxing; they're just as able to do work, but they'd probably be more productive in a quiet study room.

geebee 4 days ago 0 replies      
Working at coffee shops, for some reason, just feels really great. I learned calculus in coffee shops, and I really enjoy coding in them now. I hadn't thought of restaurants - that was a good idea about showing up in the early afternoon, when they won't need the table for a while.

I do have a personal rule - no being a coffee shop mooch. I make sure I buy something every now and then if I'm going to sit there for hours.

Diners are also a pretty excellent place to work. Even staid, corporate ones actually do the trick for me, but places with a bit of character tend to be more fun.

I actually think a coffee shop culture may be a key component to a creative environment. I attended UC San Diego as an undergrad, and while it has some wonderful qualities, I think it is missing the coffee shop scene you get at an urban campus (Berkeley and Washington are a couple of good west coast examples), where there's a seamless transition from the university to the coffee shops immediately next to it. Don't get me wrong, the coffee shops in the beach towns around San Diego are pretty great, and full of students studying, but when you surround a university with them, you get a kind of magic.

daimyoyo 4 days ago 1 reply      
Perhaps it's because I'm introverted, but I have always had the exact opposite reaction. Before I could afford to have Internet at my house, I had to use coffee shops to get work done online, and it was almost unbearable. The noise was a constant distraction, the smell of roasted coffee quickly became something I couldn't stand, and most of all, I hated being bunched up against everyone else. I suppose each experience will be different, but now that I've had the pleasure of working at home, I wouldn't want to go back if I had a choice.
mhb 4 days ago 0 replies      
Perhaps if more restaurants or bars had wifi I too would work at them.

It is a little weird that despite the assertion that this arrangement is such a productivity multiplier, many people prefer to be concerned about WiFi availability rather than spend the ~$50/month for a cell modem enabling them to make the world their cafe.

Luckily Hemingway brought paper and a pen with him.

Goladus 4 days ago 0 replies      
I am just noticing this. I currently work in a semi-cramped office with two other guys, but it's connected to a modern steel/glass high-rise building designed for academic research. It has a gorgeous, large cafeteria on the second floor. It's pleasant enough and there's enough activity during the morning and afternoon that it's a perfect place to work. Usually there are about 5-10 people using the cafeteria to work at any given time, plus there are lounge chairs situated at various points in the nearby hallways that can work as well.

I am less likely to get distracted with stupid internet crap when I'm in the public space. I find it relaxing, and I don't have the nagging feeling that by sitting at my office desk chair I'm wasting my life. Often those feelings are worth it all by themselves, even if there's no productivity change.

However, there are definitely distractions, and I find that I'm usually unable to get fully focused in a public space. I am often drawn to someone walking by or annoyed by a random smell. For difficult technical work, silence and peace are usually more conducive, as well as my multi-monitor setup and fast, wired ethernet connections.

But for stuff like reading email and correspondence, knocking minor items off your to-do list, and defeating a procrastination block, being in a public space seems to help a lot.

rmason 4 days ago 0 replies      
I need it to be fairly quiet to code with few distractions. Yet many of my friends have to listen to loud music or they can't write code at all. It may be ADD but it seems they need to distract a part of their brain to be able to concentrate on the task.

Though I haven't much experience with them, coworking spaces are much better because you can bounce ideas off other developers. I think there's a camaraderie as well that enhances the experience for me.

scrrr 4 days ago 1 reply      
I'd speculate that most people are more effective when they feel observed, and having strangers around creates that situation.

Now that I come to think about it, I think this is why they had posters of the leader everywhere in "1984".

hugh3 4 days ago 0 replies      
I've tried working outside the office. Usually I can get intensive work done in short bursts, but it's too tempting to go take a walk and set up your temporary office in some other location for a while. "You've been in this cafe for too long, you should go to the library... this library is dark and ugly, why not go sit on a bench outside? This bench isn't really comfortable and the screen is too glary... but hey, it's time for lunch and there's a really great place only twenty minutes' walk away..." Finally I get to the end of the day and figure out I've done about four good twenty-minute periods of work and had a really nice stroll in the Botanical Gardens.
orbitingpluto 4 days ago 1 reply      
If I hit a wall I have to change things up. If I'm not getting $#!7 done in the office, I go to a cafe. Productivity at a cafe is never as high, but I chalk that up to screen real estate. One monitor for Eclipse and one monitor for docs. Otherwise I'm wasting time flipping.
emehrkay 4 days ago 0 replies      
I love being the "typical Apple hipster" at the coffee shop or bookstore or Panera with my MacBook open and TextMate filling my screen.

You know what's boring? Track practice for 9 year olds, but it is a great time to sit and code. I churn out so much code while those little legs move around the track.

I just love not working in the house. It's crazy, since I absolutely love looking at how people have their home offices set up on sites like wherewedowhatwedo.com and Lifehacker's featured workspaces. I forgot about http://www.deskography.org/

kmfrk 4 days ago 2 replies      
I've tried finding something similar in my (European) country, too. Unfortunately, we don't have any Starbucks, and I find the silence in libraries to be too loud for my taste.

I've found two "eh" coffee shops, but one's always crowded and has poor ventilation, and the other one cranks up the music as if to scare people into only buying take-away, while the coffee-grinder or blender makes it impossible to get anything done on the same floor.

I've actually considered doing a start-up coffee shop to address this very thing, but it's too big an undertaking at the moment. And I'd rather do it in a country that seems to respect that culture already, which renders the idea somewhat moot.

kayoone 4 days ago 0 replies      
I've been working from home from late 2006 until early 2011. I was pretty happy with it despite always having the feeling of not getting enough done.

I now work in an office at my own startup and simply love it. Conversations with other engineers, the whole energy of people working on the same thing in the same room, and actually being able to differentiate between work and being at home are things I really, really love right now.

I now have to commute 20min (one way) by car, but I don't really care. Sitting at home all the time and having the highlight of your day be a walk to the grocery store is depressing after several years ;)

I wouldn't want to work in a coffeeshop though; maybe as a writer, but as an engineer I need my large screens, comfy chair and big desk.

nate 4 days ago 0 replies      
I read in Esquire or Wired or somewhere (I'm having trouble finding the source online) that smells change our perception of time. Baby powder slows down time for us. Coffee beans speed it up. Perhaps we frequent coffee shops to get work done, because we want the work or workday to get done faster.
alexknowshtml 3 days ago 0 replies      
I recently went on our company retreat and spent 7 days in the Spanish countryside.

The last 2 days of the trip, we were in Barcelona.

While the entire trip was productive for important reasons (Wildbit is an entirely distributed and international team and spending social time with the team was extremely valuable and enjoyable), I found more inspiration and motivation from the couple of days in the city compared to the peacefulness of the countryside.

I wrote more about the experience here:


Of note:
"Noisy environments provide sort of a filter to cut through the noise in my head. Sort of like panning for gold, if everything goes well, all of the cruft fades away and I'm left with some nugget of gold."

bxr 4 days ago 0 replies      
I think that many of the points hint at it, but don't directly address the fact that the work space is inherently temporary. If I were to get up right now and go use one of our lab bench computers I'd be getting more done, even though it's not 20 feet away from my desk. Nothing else would change; anyone who came to or called my desk would still get to me, and it wouldn't change the hour I leave. I think our designated workspaces can get us into a rut that leads to less productivity.
daydream 4 days ago 0 replies      
At both the office and home distractions abound. Different distractions, and there's generally less at home, but they're the types of distractions I either can or need to engage with.

At a coffeeshop there are distractions, but they generally aren't ones I can or feel compelled to engage with. For me, it's easier to focus on the task at hand.

kadavy 4 days ago 0 replies      
I practically have my coffee shop productivity down to a science. I have certain places for brainstorming, certain places for when I want some quiet & solitude, & certain places for when I want to feel relaxed vs. focused. I've written a good deal of my book at a very well-featured Whole Foods in Chicago.

I also meet up with other entrepreneurs at a coffee shop every Wednesday. It's great for exchanging ideas, or just having someone to watch your laptop while you go to the bathroom: http://jellychicago.com

When I lived in SF, I started compiling information on various coffee shops, based upon how good they were to work at. I kept track of whether they had open outlets, and how the staff acted towards people on laptops. It might be a bit out of date, but here it is: http://moworking.pbworks.com/w/page/10316102/San-Francisco-B...

pacaro 4 days ago 0 replies      
"My headphones, they saved my life"

I work at a larger company, on a team with ~30 devs, my role is less about how much code I produce personally, and more about how I help the entire team produce code.

However, there are still days when what matters is me getting shit done - when those days come I pull on a pair of headphones (decent DJ cans) and put my entire music collection on random (which is less "random" than I'd like, but HN already knows that) - volume set to the lowest audible level.

This works for me: 1) the headphones are a subtle (and therefore more effective) Do-Not-Disturb sign; 2) the ambient conversations are eliminated (which is why DJ cans with high passive noise reduction are better); 3) It helps shrink the world down to the space of me and the problem to be solved.

Changing my environment also works: shutting an office door (if you have one); working in a conference room; cafeteria; coffee shop; park; library; at home at the kitchen counter - but all of those require that I _visibly_ isolate myself from the team, the headphones are more like a psychological invisibility cloak or SEP field.

mpg33 4 days ago 0 replies      
I find this works for me when I'm working on "output"-related tasks, such as assignments and projects (i.e. creating something).

However when I am trying to learn/study and retain information I find I need a mostly quiet area.

rilindo 3 days ago 0 replies      
I'm surprised. 81 comments in and nobody commented on the real reason why you work best in the coffee shop: that's where all the hot people are.

You don't go there to work. You go there to look busy while you people watch and maybe- just maybe, get somebody's attention. As a result, you get a lot of work done, because you can only surf idly for so long alone.

It's like the gym, except more fattening.


RobertKohr 4 days ago 1 reply      
Anyone know a good mp3/ogg of coffeeshop sounds? That might be entertaining to play on headphones to pretend that you are at a coffee house.

Better yet if a coffee shop sets up an audio stream :)
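Lacking a recording, a rough stand-in can even be synthesized with nothing but the Python standard library. This sketch writes low-pass-filtered white noise to a WAV file (a dull murmur, not real café ambience; the filename and filter constants are arbitrary choices):

```python
import random
import struct
import wave

def write_noise_wav(path, seconds=2, rate=22050):
    """Write a mono 16-bit WAV of low-pass-filtered white noise."""
    sample = 0.0
    frames = bytearray()
    for _ in range(seconds * rate):
        # One-pole low-pass filter: smooths harsh white noise into a rumble.
        sample = 0.98 * sample + 0.02 * random.uniform(-1.0, 1.0)
        level = max(-1.0, min(1.0, sample * 8))  # boost the quiet signal, clamp
        frames += struct.pack('<h', int(level * 32767))  # little-endian int16
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_noise_wav('murmur.wav')  # loop it in any audio player
```

Looping that file is no Starbucks, but filtered noise masks background chatter reasonably well.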

spjwebster 4 days ago 0 replies      
My take on this is that there's a little part of my brain that wants (nay, needs) to be distracted so that I can actually get on with working. It can be a TV in the next room, a movie or video game soundtrack pumping in my headphones, or a coffee shop full of people and white noise. If that part of my brain isn't distracted by these things, it interrupts the main thread and I find myself lost in HN, my feeds, the BBC website… anywhere but my work.
coffeenut 4 days ago 0 replies      
FWIW, parts of Windows Server 2003 were written at Starbucks ;)
MatthewB 4 days ago 0 replies      
I never liked working at coffee shops when I was working for myself. For me the best place to work is at home. Working at home can be very difficult but if you have the discipline and a quiet work space it can be very productive.

Definitely don't have a TV in the same room as you work. Also, playing music with headphones is great to get in the "zone."

michaelty 4 days ago 0 replies      
Just needs to be a clean and well-lighted place.
ericmoritz 4 days ago 1 reply      
plus there's an endless supply of coffee; that does wonders for productivity.
IBM's infamous "Black Team" t3.org
228 points by shawndumas  2 days ago   51 comments top 15
sp332 2 days ago 4 replies      
Does anyone have a link to the version where the Black Team member found a bug in rigorously (mathematically) proven code? edit Ah, here it is: http://www.penzba.co.uk/GreybeardStories/TheBlackTeam.html
diiq 2 days ago 2 replies      
Searching for "the black team ibm -mustaches -infamous" on google returns (nearly) nothing. I find it astonishing that there is no record of such a team that doesn't mention mustache twirling.

I suspect that some hacker wanted a version of the Black Watch to look up to, so he invented one. I don't object to the invention of legends, but we should include some traditional legend-flagging phrases: "long ago", "never heard from again", &c.

trickjarrett 2 days ago 0 replies      
This reminds me of the article about the developers and testers who worked on software for the shuttle. In all the years, they only had 6 bugs ever reported on the shuttle, all of which were fairly minor as I recall.

It's so true that bugs are now simply part of life, and it has to do with the speed at which development must happen. I wonder what the Black team of old would think of today's web development wild west sort of approach.

Here it is: http://www.fastcompany.com/node/28121/print - They Write the Right Stuff (2007)

bioh42_2 1 day ago 0 replies      
Reading this story makes me sad.

There's also another story (google fails me) about a legendary IBM programmer around whom IBM built an entire team of testers, documenters, etc, all to keep this one guy's way above average productivity going. That story also makes me sad.

These stories make me sad because I know how huge a difference the environment makes to everyone's job.

The key points about the black team:

1. A few individuals that happen to be a bit above average at finding defects.

2. Bring them together, create a team.

3. Support them, but mostly just get out of their way and don't distract them with management B.S.

Very little change and support results in a huge jump in their productivity!

Same thing with the single legendary programmer: simply relieve him of tedious non-programming tasks, give him enough support staff to keep up with his output, and again a HUGE productivity boost.

What's so sad about this is that it so rarely happens.
I think most people are capable of having this productivity jump, if only they'd get the same support. OK, let me back off a bit from "most" and be more precise and say: you should be at least a bit above average.

But why does this so rarely happen?
Sadly, I think that for most sizable companies even minor process changes are a huge obstacle.

The bright side of this? Startups.
Startups are like these kinds of teams within a behemoth like IBM, except without the behemoth. Or actually, a startup ought to be like that, because that is one of the key advantages a small business should have over the big ones.

gchucky 2 days ago 0 replies      
Does anyone know what became of the Black Team? Presumably it's a defunct group, but when did that happen, and under what context?
wglb 2 days ago 0 replies      
I am thinking that this might just be humbug. Possibly motivational humbug, however.

But I did witness first hand some shenanigans done by the Field Engineers on XDS tape drives in the 1970s. They did use a kind of resonant thing to test the limits of how well a particular tape drive was working. It would do a lot of rewinding, stopping, reversing and the like. These drives had long vacuum (work with me here) chambers, one on each side where a loop of tape would be suspended. Thus, a fast back-and-forth operation could be performed on a short section of the tape without moving the reel. The goal was to try to get the tape moving in such a way that it would pop out of the vacuum chamber and fault the tape drive.

Somewhat like the Black Team's efforts are alleged to do, the net result was that all the tape drives, after adjustment, were able to pass this tough diagnostic.

dauwk 2 days ago 0 replies      
Having worked on both the hardware and the software for most of IBM's tape drives, from the '60s-era 2400s through the '70s 3480s, which covers the time of the Black Team, I find this story difficult to believe. On all these drives, adjusting the start/stop mechanism required the engineer to be inside the enclosure with hands on the very read-head area whilst running many patterns of start/stop/rewind/fast-forward. If I had felt any significant enclosure movement it would have indicated to me a major problem.
aeontech 2 days ago 0 replies      
Lots more comments on the previous discussion: http://news.ycombinator.com/item?id=985965 as well as http://news.ycombinator.com/item?id=994358
sinamdar 2 days ago 1 reply      
Nice article. This "Black Team" is cited as an example in the book 'Peopleware: Productive Projects and Teams'.
seles 2 days ago 0 replies      
It would be nice if there was info about the methods they developed for testing, rather than just how effective it was.
dkersten 2 days ago 4 replies      
Pity the website's text takes up only 20% of the width of my monitor... the text looks cramped and awkward to read while 80% of my monitor is blank white space...
wcchandler 2 days ago 0 replies      
As a hardware tester at IBM, this makes me happy. I never see any adoration. People think of it as a 9-5, not as a chance to "best" somebody.
mikerg87 2 days ago 1 reply      
I remember hearing about the Black Team from a training manual given to new hires who worked at Sperry/Univac in the late '70s to early '80s. There was a passage where the Black Team considered it a "failure" when they couldn't identify a defect during a testing round, and conversely considered it a "success" when they identified a problem. It's almost as if they were doing TDD before anyone knew what to call it.
BasDirks 1 day ago 0 replies      
If anyone doubts whether they should read this article, let me quote:

"Team members began to affect loud maniacal laughter whenever they discovered software defects. Some individuals even grew long mustaches which they would twirl with melodramatic flair as they savaged a programmer's code."

aangjie 1 day ago 0 replies      
All throughout reading about the Black Team, I couldn't help but recall the Stanford prison experiment (http://en.wikipedia.org/wiki/Stanford_prison_experiment) and how the article says a lot about how people behave in groups... Hmm... odd, given the goal of the article.
Have you ever printed a boarding pass? bbryson.com
224 points by kilian  4 days ago   53 comments top 10
mcantelon 4 days ago 0 replies      
tldr: Man prints out his boarding pass in large format. Delight ensues.
kilian 4 days ago 5 replies      
That not a single person made a fuss about a "non-standard" boarding pass is something I wouldn't have guessed at all. Happiness all around :)
evanw 4 days ago 1 reply      
It looks like we've broken his web server. Here's a cached version: https://webcache.googleusercontent.com/search?q=cache:a4-v04...
patrickk 4 days ago 10 replies      
It would also be cool if you printed your boarding pass to PDF, but instead of printing a giant-sized version, loading it onto an Amazon Kindle and presenting that at the gate. Test Amazon's claim of being just like paper to the limit. Can't see why it wouldn't work.
zem 3 days ago 0 replies      
The sad thing is that my first thought was "as a brown guy, there's no way I'd ever dare to do that".
rwmj 3 days ago 0 replies      
Reminds me when I printed a boarding pass in "2-up" mode. Of course the 2D barcode on the pass was half-sized and none of their machines could read it.
perokreco 4 days ago 0 replies      
The article is down, but on topic: I know a bunch of large airlines (EasyJet, for example) that let you present your boarding pass on your smartphone, so you do not have to print it.
blahblahblah 4 days ago 0 replies      
Glad he had fun because that's kind of an expensive boarding pass. When I've printed posters at Kinko's before, it usually comes out to about $40 for a 1 meter^2 poster.
stretchwithme 4 days ago 0 replies      
Good one. I love it when people take something and muck around with our expectations of it. Isn't that one thing art is supposed to do?
davidmurphy 4 days ago 0 replies      
This makes me happy. =)
Joel Spolsky is doing an IAmA on reddit reddit.com
208 points by chrisboesing  3 days ago   52 comments top 4
chrisaycock 3 days ago 5 replies      
Every time you feel like you've made the world better by upvoting a story about injustice, you're just making yourself feel smug. Forget the upvotes... go work on making the world a better place.

He was writing about how stories of social injustice get a ton of upvotes, but nobody actually goes out and does anything to fix the situation. I'm sure there's a lesson here for HN.

euroclydon 3 days ago 12 replies      
I really want to learn C, like he says. I get plenty done without knowing it, and I have few doubts I can continue to find decent work without knowing it, but I haven't been able to gain any traction when I try to learn it.

I've got the books sitting front of me, and I've written some trivial visualizations of sorting algorithms using terminal output, but damn if I can find a way to use C as a web developer. If there were just some use case where C would help me get something done, I'd be all over it.

ceejayoz 3 days ago 2 replies      
A dedicated ama.stackexchange.com could be an interesting experiment.
Apocryphon 3 days ago 4 replies      
He mentions how functional programming is valuable, something that many graduates are lacking in. Does JavaScript count as a functional language?
Steve Yegge v. Rich Hickey re: "Clojure just needs to start saying Yes" google.com
207 points by cemerick  3 days ago   154 comments top 25
cynicalkane 3 days ago  replies      
One of my favorite things about Clojure is the things it's said "yes" to, as first-class language builtins: char, vector, set, and map notation; first class "vars" (actually thread locals), ways to manage and dereference state, Java interop that doesn't set your hair on fire, namespaces, keywords and namespaced keywords, and a whole bunch of features I'm probably forgetting.

Instead, Steve Yegge is asking for things that don't seem terribly important to me. Excluding loops from the language core is obvious; Clojure makes it really easy to use the functional style instead and loops would serve as a newbie trap. For the last two weeks I've been programming Clojure full-time and haven't used a single loop macro or loop-recur. He complains about the lack of circular dependencies; but no circular dependencies across namespaces is A Good Thing just as you shouldn't have circular dependencies across Java packages or C++ libraries. The guy he cites who declared macros evil is obviously not a part of 'mainstream' Clojure culture, and maybe someone can explain Yegge's anger about single-pass compilation, because I don't get it.

And of course, Clojure is a highly extensible language that has implicitly said Yes to a vast ideascape.

SwellJoe 3 days ago 2 replies      
I don't know much about Clojure, but I know an argument from authority when I see it, and Yegge is making an argument from authority. He does a lot of, "You need users and you don't know anything about language marketing and building an ecosystem." With the implication being that Yegge does know all about language marketing. I'm disappointed Yegge would go there...he's a smart guy. But, we all have bad days.

Unfortunately for Yegge's argument, he's never built a sizable language ecosystem from scratch, while Hickey has. So, he's making an argument based on authority that he doesn't have and the person he's arguing with does.

Hickey has done a brilliant job stripping off all the "I know better than you" bits of Yegge's comments, and brings it back down to the discussion of the language and nothing else. Frankly, it was pretty devastating, and I'm surprised Yegge walked so cockily into it. If I had any dogs in this fight, I know whose side I'd be taking.

dkarl 3 days ago 3 replies      
If you can't say no, what do you say to the users who are begging you to say no because they like the language as it is? Every language design decision is a compromise, and a person empowered to make those decisions is going to make people unhappy. Just look at Perl and Python: Perl said no to people who demanded orthogonality, and Python said no to people who wanted a free-for-all TIMTOWTDI language. Java said no to deterministic finalizers. C++ said no to exceptions that could be restarted. All those languages are doing fine.

A better idea would be to never turn your back on any class of users -- ignoring their specific requests, perhaps, but always making sure they can solve their problem using their language. Even that strategy doesn't require language support for everything. CPython does just fine with the scientific computing community by punting to C bindings, for example.

The examples Yegge provides don't make any sense to me. The need to port Java code to Clojure is questionable, since Clojure provides good Java bindings. If you want a more Clojure-y version of a Java library, then a straight line-for-line port is not much of an improvement. As for the LOOP macro, you don't have to add everybody's little helper function into the standard library. I'm a big fan of languages adding helper functions to standard libraries if there's one obvious way to write them and the act of adding them will save everyone else from including their own version in all their projects, but a LOOP macro is absolutely NOT that.

(Unless it's just a straight-up reimplementation of Common Lisp's LOOP macro, which would do exactly one job, which is allowing Common Lisp programmers to be more comfortable in Clojure. Hey, guess what -- Common Lisp programmers already have one of the easiest paths to learning Clojure, since they know a Lisp already. Many non-Lisp programmers are tackling the learning curve and embracing Clojure, so CLers can't complain that it's too hard. Plus, many Common Lisp fans regard LOOP as an abomination and never use it anyway. Writing control structures in a complex DSL that few programmers bother to learn completely is not one of Common Lisp's best features. Saying "no" to a Common Lisp-style LOOP macro would be the right thing to do.)

gcv 3 days ago 0 replies      
Good discussion. I agree with Rich in principle: a language's design should not say "yes" to everything, but I agree with Steve in some of the particulars.

After programming in Clojure for two weeks, I had a list of complaints about the language. It has now been two years, and nearly all of my objections went away as the language evolved and as I adapted to using it. Two years later, I just have two notable things to grumble about. Coincidentally, they seem to be closely related to Steve Yegge's concerns.

1. No condition system. This is a big deal. Clojure piggy-backs on Java's exceptions, and currently lacks a good mechanism for user-defined error handling. Unfortunately, Java exception handling also happens to suck, since it throws away data in all stack frames from the throw to the catch. Implementing better error handling is under discussion on the Clojure development page (http://dev.clojure.org/display/design/Error+Handling). (Steve complains about the lack of non-local transfer of control, and he has a point: it could be used to make an arbitrarily powerful condition system, see http://www.nhplace.com/kent/Papers/Condition-Handling-2001.h...).

2. No built-in debugger. The way CL code doesn't just crash and throw away stack information is amazing. The lack of this feature in other languages is appalling.

In addition, I sort of agree that being forced to forward-declare things is annoying. I got used to it, but I don't really like the restriction. I do understand the reason behind it, though: auto-interning symbols in the reader (as Common Lisp does) can be confusing and occasionally problematic.

frisco 3 days ago 0 replies      
Oh god I so hope that Rich ignores this. A "just say yes" mindset is incredibly dangerous; yeah, ok, a trillion people use C++ but that doesn't make it a good language. Clojure to date has been built with great discipline, and it would be tragic to see it go off those rails in the hopes of satisfying a huge mass-market that a) Clojure is fundamentally unsuited for and b) will never be happy regardless. When I say that Clojure is fundamentally unsuited for a certain "mass-market" I don't mean that it shouldn't or won't catch on and make it into, e.g., the TIOBE top 10 -- just that it can't be everything to everyone, and shouldn't try to be. It's a wonderful language that understands what it tries to be, and I hope Rich never forgets those underlying intuitions on state, identity, and functionalism. Abandoning that discipline won't make it any more powerful; it will just muddy the language in trying to satisfy people who won't be impressed anyway.
gruseom 3 days ago 2 replies      
This is a fascinating case study. I got sucked into reading the entire thread. Steve Yegge is talking about cultural and marketing issues that seem obvious to me. The responses on the list may not be representative of the community, but assuming they are, one can hazard a guess about the long-term trend: there's a clear failure to connect with what Yegge is saying. (Edit: I deleted an unnecessarily personal example here.)

Yegge isn't arguing for the abandonment of taste and rigour in a race to incorporate every kitchen appliance into the language. He's arguing that languages and communities that take a prescriptive (someone said "paternalistic") stance end up marginalizing themselves by their own rigidity, and that the antidote for this -- as well as the passageway toward wider adoption -- is to actively listen to and court new users. I couldn't agree more.

(Side note: this is why I like Common Lisp. Its loosey-goosey flexibility, which always assumes the programmer knows best, leads to an awesome fluidity that finds its way around any obstacle. CL is unpopular, but not because of its pluralism. Qua language it has a deep respect for the user.)

There's another point here. Whether you're a fan of Steve Yegge or not (I didn't use to be, but after nodding with everything he said here I am now), he has a proven ability to mobilize a significant body of programmer opinion. To ignore what this guy says about the marketing of programming languages itself already displays a foolish disregard for the market.

KirinDave 3 days ago  replies      
I'm not sure why Yegge is talking about the Shangri-la of non-local exits, but... He is right that the culture of Clojure has some issues and often rejects ideas even when they're obvious, and it's not clear why.

Examples I've run into:

1. The current clojure lambda shortcut-syntax is atrocious, and we can do better. Why don't we do better?

2. Clojure could really benefit from a Scheme-esque macro-by-example library. A few exist, but they seem largely ignored by the community, despite the well-known benefits of such a system in the normal daily use of macros.

3. A strange hatred of macros. Yes, some people are reasonable and argue that functions should have primacy over macros because of composability (and they're right). But then there are people who will tell you macros are always bad, and if you show up in freenode #clojure to ask for help with them they will actively laugh at you.

I love Clojure and I feel like I know it pretty well, so I'm not trying to say Clojure is considered harmful, etc. But I do think that some of Yegge's criticisms, while poorly delivered and sometimes poorly expressed, have an element of truth to them.

* Full disclosure: I was involved in an effort to write a Clojure book for O'Reilly until I got involved with a new startup and had to terminate my involvement in the effort. I may not be the most unbiased judge of Clojure.

loumf 3 days ago 0 replies      
Youngme Moon, in her book "Different", offers the idea that one should say Yes where others say No, and No where others say Yes to produce meaningful difference. Once you go this way, you iterate and improve on the ways you say Yes to accentuate the difference. There are similar suggestions in other recent books on differentiation (Moore's "Dealing with Darwin", for example)

Look at all the ways Clojure says Yes, where other lisps/languages have said no. Being a lisp, embracing the JVM, immutable by default and everywhere, first-class concurrent primitives -- clojure should probably say No if it compromises these core goals. It should say YES when it furthers these tenets.

I don't use it enough to really understand Yegge's gripes, but the best move for clojure is probably to make these things possible as libraries if it doesn't want to embrace it in the core, and iterate on the core so that it's the only possible choice when you are in the situation that requires it.

gregschlom 3 days ago 2 replies      
I don't understand Rich Hickey's point. How does his thought experiment address Steve Yegge's points?

Would someone care to explain?

Goladus 3 days ago 1 reply      
I'd say be careful about reading too much into this thread. It's a public, ad hoc discussion, and won't be as carefully phrased or researched as a blog entry. Cherry-picking one quote, like "Clojure just needs to start saying yes", from a discussion like that and treating it like a mantra doesn't improve the discussion, nor is it fair to any of the participants.

It's an interesting thread, I'm glad it was linked, but take it for what it really is. Hickey's response is a good counter-argument to "languages should always say yes" but basically ignores any of the other subtleties of the discussion. His comment is a starting point, but he hasn't engaged and no one has responded (yet).

dusklight 3 days ago 2 replies      
One of the things I actually like best about Clojure is how it is currently keeping new users away.

It is probably the best thing for clojure right now, looking at the rapid changes that have been happening from 1.0 to 1.2, 1.2 to 1.3 .. Hickey has a clear vision for what he wants the language to be and it doesn't look like he has finished thinking about it yet. It is good that the community is still small and the tools are immature. When the language design is at a stable state then would be the time to start evangelizing.

I keep thinking about what happened with ruby .. it is such a beautiful language but the community grew too fast and it is now stuck with so many conventions that could have been better thought out.

d0m 3 days ago 0 replies      
Clojure already says YES where other Lisp-based languages said NO. (Instead of wrapping all of Java, let's use it for its strengths. Don't just use lists; use vectors, maps, etc. No tail-call optimization? Alright, let's do it anyway and find a work-around.)

Now, we need to differentiate users and what they ask for. Is it a new Clojure user who is used to C++, tries to write C++ in Clojure, and suggests missing features? Or an experienced Clojure user offering useful patches to the community? In the first case, the right answer should probably be "Oh, but XYZ is already in Clojure; it's a little bit different from what you're used to in C++, but in fact it's even more powerful. Here's how you can do it [...]". In the second case, it's more complicated, but we should lean toward Yes if it adds real value to a day-to-day task. I mean, even though Clojure is great, you can't anticipate everything that will be needed, so you shouldn't be shy about adding missing stuff.

Still, Rich's answer is pretty great IMO.

smithbits 3 days ago 0 replies      
My quest for languages that say "Yes" ended with assembly. I started in BASIC and found the straitjacket it imposed much too restrictive. Then Pascal was the big teaching language, but it had its own ways. It wasn't till I got good at x86 assembly that I felt totally in control and able to do things just the way I wanted.
Having achieved that feeling 20ish years ago, I've been running away from it ever since. C and C++ proved that at some size of code base even the most brilliant programmers can't do memory management correctly[1], so I love that modern languages said "No" to manual memory management. I've worked at large software companies where the programming style guides ran to 80+ pages. That's 80 pages of documentation of how people will format their code so that it's done consistently. I love that Python said "No" to letting people format their code however they wanted. It's likely that these aren't the kinds of things Steve Yegge is talking about, but there are a lot of things worth saying "No" to, and if they bug you (like they used to bug me) there's always assembly language.

[1] http://research.microsoft.com/pubs/70226/tr-2005-139.pdf

rplevy 3 days ago 0 replies      
This whole "I used the language for 2 weeks so now I am qualified to change it radically" attitude is well-established in the Lisp community (yes I know Steve Yegge is a veteran lisper, but this still applies, especially to some of the other commenters). The authority on this is Brucio, the fictional author of "Lisp at Light Speed" http://replay.web.archive.org/20080722232746/http://brucio.b... Bruce's First Law of Lisp is "If it does not do exactly what you expect with zero hours consideration or experience, it is a bug in Lisp that should be fixed."
sharkbot 3 days ago 1 reply      
Yegge points to the Tiobe language index as a metric to language adoption, and implies that "No" languages have low uptake. What about Lua? The Lua implementers don't accept public patches, rather they will rewrite public patches that are submitted, and only if the change makes sense (citation: http://lua-users.org/lists/lua-l/2008-06/msg00407.html). Yet Lua is #12, up from 20.

Saying "no" to language suggestions is not a bad thing, it all depends on the goals of the language and the philosophy of the language authors. Perhaps Yegge should try and understand the reasons why patches aren't accepted, rather than force their acceptance via his soapbox.

eschulte 3 days ago 0 replies      
Clojure is an aesthetically clean, wonderful language for parallel manipulation of lazy sequences. However, after using it as my main language for slightly over two years, I found myself all-too-often butting heads with its paternalistic functionalism (trust me, I'm a grown-up; I can manage a little bit of mutable state without shooting myself in the foot).

While I would definitely use Clojure as a first choice for any project which was primarily defined by a need for massive parallelism, I am now happily using common lisp as my main language, and I'm a noticeably happier and more productive programmer (while most tasks can be transformed into a purely functional lazy sequence manipulation, the process often takes time and results in code which is harder to read and maintain).

fogus 3 days ago 2 replies      
I think more important than always saying Yes is providing the ability to make anything possible.
pnathan 3 days ago 0 replies      
My understanding of what Steve Yegge is saying is that the community needs to be open to ideas and debate, and in his experience it is not. He pulled a few examples from the top of his head.

At least that's how I read it.

sausagefeet 3 days ago 3 replies      
Isn't this how we got C++?
maddalab 3 days ago 0 replies      
For starters, I dislike the title of the post. The thread is about language design and community/ecosystem creation around the language.

I am not aware of how, if at all, Steve is qualified to make a well-judged statement on either language design or the communities that develop around languages, since afaik he has done neither.

I have been following the thread from the beginning since I am on the list. Steve Yegge comes off as delusional with his claims about how he influenced the Python community with one of his blog posts.

On the couple of instances where Steve was asked what he contributed in tangible fashion, he mentions months of effort on JSwat command-line support; everyone interested should check out the project and his contributions on code.google.com.

On the second occasion he refuses to publish anything that the community can see or use or enhance.

I hope he proves me wrong, but at this time he is blowing a lot of hot air, off the wrong end, and should stay away from the blog post he intends to write. His facade of being a real voice for a developer community is crumbling fast.

markokocic 3 days ago 1 reply      
I'll just quote Antoine de Saint-Exupery here:
"You know you've achieved perfection in design, Not when you have nothing more to add, But when you have nothing more to take away."
Estragon 3 days ago 0 replies      
Yegge is full of it. Python said "no" over and over again, and it's gone like gangbusters.
theclay 3 days ago 1 reply      
I see a lot of people bashing Yegge, bashing Tiobe, praising Clojure, and praising Rich Hickey. All of that is strawman bullshit though as far as I can tell.

Virtually no one seems to be addressing what I think is Yegge's core point: neglect.

Is it true that there is a considerable body of extant patches/libraries/fixes out there being neglected, such that a sizable portion of Clojure's user base feels neglected?

If so, then Yegge has a valid point. If not, then argue why he is wrong about neglect.

thinkingeric 3 days ago 1 reply      
Mr. Yegge says, "it only takes a few people to poison a community"

Yes, and it is the kvetchers that are putting a black cloud over a language and community that has been evolving quite nicely otherwise.

perfunctory 2 days ago 1 reply      
Let's see how Steve's statements stand against evidence.

> to get users, languages have to say Yes

What languages in wide use today got popular because they said yes? Is it

C, Java, Python, Lua?

I don't think those qualify as "yes" languages. Oh, maybe it is


Bug 647959 " Add Honest Achmed's root certificate mozilla.org
197 points by there  4 days ago   112 comments top 8
ra 4 days ago 5 replies      
The problem is that we are forced to trust a particular CA because the company we are dealing with chose to buy their certificate from that particular CA.

Whilst PKI provides solutions for this [1], they are not really practical in SSL.

In any case, that's not how it works in the real world.

In the real world, Achmed's uncles do trust Achmed, and might well trust him to validate the identity of a business partner.

In essence, that's the logic behind PGP.

What if SSL could be enhanced to allow PGP verification of counterparties? That way anyone could become the equivalent of a root CA, but your value would only be as good as your reputation / integrity.

Trusted entities, like the governments, could vouch for the keys of their agencies - or other governments.

Friends and Family could vouch for each others keys, businesses for their partners... etc etc.

Unlike PKI, PGP would enable counterparties to establish their identity by having many validating partners (paid and unpaid), as opposed to the one single root CA that is available.

As well as bringing the source of trust closer to the relying party (really, I live in Australia, who the hell is Comodo anyway?), the network of trust that would result could be articulated in the browser in many different ways.

eg: 25 of your friends and 600 businesses agree that this is the identity of Visa.

Twenty years ago this wouldn't have worked. But today, we could use the root CA SSL system to bootstrap a network of trust that becomes independent of the old hierarchy.

I hope all that makes sense.

[1] eg: certificate revocation, or even the user removing the root CA from her own key store.
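ra's threshold-of-vouchers idea can be sketched in a few lines. This is a toy illustration only: the names, the vouch lists, and the threshold of two are all made up, and a real web of trust would also need signatures, revocation, and transitive trust.

```javascript
// Toy web-of-trust check: accept a key if enough already-trusted
// parties vouch for it. All names and the threshold are hypothetical.
const trusted = new Set(["alice", "bob", "acme-corp"]);
const vouches = {
  "visa.example": ["alice", "bob", "acme-corp"],
  "phish.example": ["mallory"],
};

function isTrusted(key, threshold = 2) {
  const backers = (vouches[key] || []).filter((v) => trusted.has(v));
  return backers.length >= threshold;
}

console.log(isTrusted("visa.example"));  // true: three trusted vouchers
console.log(isTrusted("phish.example")); // false: no trusted vouchers
```

The browser UI imagined in the comment ("25 of your friends and 600 businesses agree that this is the identity of Visa") is essentially this count surfaced to the user.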

adulau 4 days ago 1 reply      
For the curious, the procedure for applying to be included as a CA : https://wiki.mozilla.org/CA:How_to_apply and https://wiki.mozilla.org/CA:Recommended_Practices

and the list of included certificates with their audit "certificates": http://www.mozilla.org/projects/security/certs/included/ or the pending list: http://www.mozilla.org/projects/security/certs/pending/

As long as Mr. Honest Achmed is able to provide the appropriate audit certificate...

hartror 4 days ago 0 replies      
All of this talk reminds me of Rainbows End by Vernor Vinge, a near-future novel in which augmented reality is ubiquitous. As part of the climax of the book there is talk of revoking a root certificate, which would cause most of European commerce to grind to a halt.

Vinge is a computer scientist so the whole thing reads very well from a hacker's perspective. Also it won the Hugo & Locus awards in 2007. Am a big fan.

Wikipedia Entry: http://en.wikipedia.org/wiki/Rainbows_End

Buy it: http://www.amazon.com/Rainbows-End-Vernor-Vinge/dp/081253636...

pnathan 4 days ago 1 reply      
Achmed is honest. He even says so.

Give him a root cert, certainly errors won't happen more than once, and anyway it would be an honest slip-up.

tlrobinson 4 days ago 2 replies      
Is this a satirical response to something Mozilla did?
Groxx 4 days ago 1 reply      


codeup 3 days ago 3 replies      
TLDR for this HN thread: The problem posed by the mix of technical issues and ethnic prejudices in this "bug report" is apparently more complex than what the HN community can deal with. This is a low point for the quality of discussions on HN.
bdr 4 days ago  replies      
This is kind of racist.
QuickSilver for Mac Lives lovequicksilver.com
189 points by GeneralMaximus  4 days ago   56 comments top 17
greattypo 4 days ago 0 replies      
Really glad to see this picked up!

I know Quicksilver has had some false starts before..




I'm curious - are these efforts building on each other, or is everyone starting from scratch each time?

justinchen 4 days ago 4 replies      
I've moved on to Alfred. http://www.alfredapp.com/
p0ppe 4 days ago 1 reply      
I changed to LaunchBar (http://www.obdev.at/products/launchbar/) about a year ago. No complaints so far and I also like supporting an independent developer.
ionfish 4 days ago 0 replies      
I just switched back to Quicksilver after trying Alfred for a bit. The basic reason was that Quicksilver's search is better and more easily customised: it finds the things I'm looking for, and given that the whole point of using such things is to improve one's productivity (by cutting out the whole "Open Finder, dig down through the file hierarchy, finally manage to open the file, directory or application being searched for" process), it doesn't really matter to me how well its competitors do in other areas.
mambodog 4 days ago 1 reply      
For me, cmd+space (Spotlight) has been quite sufficient, and it's right there out of the box.
krosaen 4 days ago 3 replies      
Too bad the spinoff project within Google, Quick Search Box (the same guy who started Quicksilver was involved), never really gained traction and has not been actively developed in a while; it had promise:


glenjamin 4 days ago 3 replies      
For the uninitiated, what does this do that spotlight doesn't?

The mac is the only platform where I haven't felt the need for a decent launcher app!

snewe 4 days ago 1 reply      
beck5 4 days ago 3 replies      
I have found Alfred a very polished alternative and a little more robust than quicksilver has been over the past couple of years.
angusgr 4 days ago 1 reply      
If anyone's looking for alternatives on other platforms, I find "GNOME Do" quite good on GNOME/Linux and "Launchy" is decent for Windows.
dedward 4 days ago 1 reply      
That's good news - does anyone else know what to do about the problem with the Shelf plugin and the Shelf popping up unwanted all the time? (whenever windows or apps are closed, perhaps even when they are opened sometimes, sometimes when quicksilver is invoked - very flaky)
amanuel 3 days ago 0 replies      
Glad to see QS getting updates. The new plugins are welcome...well they are new to me (QRCode and Viscosity)
jh3 4 days ago 1 reply      
It's been "living." This is just an update, correct?
fjw 3 days ago 0 replies      
I've stuck with QuickSilver throughout, just because it's what I am used to and most comfortable using. Compared to the other apps that I have tried (Alfred, Spotlight), QuickSilver seems to get the most done with the least amount of effort -- it's simple to learn, harder to master, yet still extremely intuitive.
lovskogen 4 days ago 1 reply      
I'm just using QS for the global shortcuts. Anyone know of a native way of launching apps from keyboard shortcuts?
blaenk 4 days ago 0 replies      
Anyone know if I have to have a plugin installed to use the "Latest Download" and other proxy objects? Nothing seems to come up for me when I type that.
shubhamgoel 4 days ago 0 replies      
great.. but I have switched to alfred and love it
Joel Spolsky: Can your programming language do this? joelonsoftware.com
186 points by krat0sprakhar  20 hours ago   103 comments top 28
onan_barbarian 14 hours ago 8 replies      
I think there's some reasonable stuff buried in here, I really do.

But... having actually spent some time in the trenches dealing with a hard problem on a massively parallel machine - more than once - I find it hard to believe that something like map/reduce - or any given small-scale language feature - is going to be particularly significant in terms of parallelizing any goddamn thing that's actually hard to do. I see a lot of fatuous claims that language feature X is the missing link in parallelism for the everyday working programmer, but I don't see a lot of new solutions for anything hard as a proof of concept.

We've only had OpenMP and all sorts of other kludgy extensions for Fortran and C for what, about 15 years? I'm not saying that they're great or elegant or anything, but so many of the things that are hard about parallel programming are NOT FUCKING SOLVED BY MAP-REDUCE. Oops, sorry, shouty. But anything that can be solved by map-reduce wasn't enormously hard to begin with. Map-reduce was itself initially publicized in terms of 'less useful but easier for mere mortals than generalized parallel prefix' which made sense to me.

What doesn't make sense for me is all this frenzied enthusiasm for dragging parallel programming into the never-ending programmlng language abstraction wars; at least when the topics being discussed only touch on the very shallowest things needed by parallel programming. You want some respect, solve something hard.

Yes, you can do the same thing to each element of an array. Whaddya want, a cookie?

kragen 19 hours ago 8 replies      
> Correction: The last time I used FORTRAN was 27 years ago. Apparently it got functions.

FORTRAN had user-defined functions since FORTRAN II in 1958; see http://archive.computerhistory.org/resources/text/Fortran/10... on page numbers 5, 14, and 15.

Joel unfortunately completely misses the point of why C and Java suck at this stuff: you can use functions as values in them (anonymous inner classes in Java) but they aren't closures. And his comment about automatically parallelizing "map" is a little off-base; if you take some random piece of code and stick it into a transparently parallel "map", you're very likely to discover that it isn't safe to run multiple copies of it concurrently, which is why languages like Erlang have a different name for the "parallel map" function. The "map" in MapReduce is inspired by the function of that name in Lisp and other functional languages; it isn't a drop-in replacement for it.

As usual, while Joel's overall point is reasonably accurate, most of his supporting points are actually false to the point of being ignorant nonsense. I think someone could tell as good a story in as entertaining a way without stuffing it full of lies, although admittedly my own efforts fall pretty far short.
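The closure distinction kragen draws is easy to show in JavaScript, where an inner function captures live variables rather than just code; Java's anonymous inner classes, by contrast, could only read final locals. A minimal sketch:

```javascript
// A closure captures variables from its enclosing scope, not just code:
// each counter below carries its own live, mutable binding of n.
function makeCounter() {
  let n = 0;
  return () => ++n; // the returned function closes over n
}

const a = makeCounter();
const b = makeCounter();
a(); a();
console.log(a()); // 3: a's n has been incremented three times
console.log(b()); // 1: b's n is independent of a's
```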

grav1tas 10 hours ago 1 reply      
I think it might be important to note that while the terms map and reduce do come from Lisp, they're not one-to-one with what these functions do in Lisp. The original MapReduce paper mentions the borrowing but doesn't really go into specifics. There's a good paper by Ralf Lämmel that describes the relation MapReduce has to "map" and "reduce" at http://citeseerx.ist.psu.edu/viewdoc/summary?doi= . I liked this paper much better and found it the most informative functional explanation of MapReduce (note: it's in Haskell).

I think MapReduce is really part of a more general pattern where you have an (in more Haskell-y terms) unfold (anamorphism) to a foldr (catamorphism). If your operations on the items in your intermediate set of data in the MapReduce are associative/commutative, you can work out parallelization more or less for free. It's pretty cool stuff, and really not that complicated when you sit down and think about it.
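The unfold-then-fold shape is easiest to see in a word count, the canonical MapReduce example. This sketch shows only the pattern, not Google's API:

```javascript
// Word count in the MapReduce shape: map emits (key, value) pairs,
// shuffle groups them by key, reduce folds each group.
const mapPhase = (doc) =>
  doc.split(/\s+/).filter(Boolean).map((word) => [word, 1]);

const shuffle = (pairs) => {
  const groups = new Map();
  for (const [k, v] of pairs) {
    if (!groups.has(k)) groups.set(k, []);
    groups.get(k).push(v);
  }
  return groups;
};

// Summation is associative and commutative, so each key's reduction
// (and the map over documents) could run on separate workers.
const reducePhase = (groups) => {
  const out = {};
  for (const [k, vs] of groups) out[k] = vs.reduce((a, v) => a + v, 0);
  return out;
};

const docs = ["to be or not to be", "to do"];
const counts = reducePhase(shuffle(docs.flatMap(mapPhase)));
console.log(counts); // { to: 3, be: 2, or: 1, not: 1, do: 1 }
```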

JonnieCache 19 hours ago 3 replies      
Something that I only realised the other day which made me feel kinda embarrassed: in ruby, the reduce method is called inject.

For years I've been doing MapReduce functions, without realising it. MapReduce was in my mental pile of "genius things that cleverer people than me do, must be looked into when there is time."

For info on inject: http://blog.jayfields.com/2008/03/ruby-inject.html
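Side by side (a sketch of my own, not from the linked post), the rename really is just a rename:

```ruby
nums = [1, 2, 3, 4]

# inject and reduce are two names for the same Enumerable method:
nums.inject(0) { |sum, x| sum + x }   # => 10
nums.reduce(0) { |sum, x| sum + x }   # => 10

# Chained after map, that's the whole "MapReduce" shape in miniature:
nums.map { |x| x * x }.inject(0) { |sum, x| sum + x }   # => 30
```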

sthatipamala 19 hours ago 1 reply      
This article shows that Javascript is truly the common man's functional programming language. Despite its ugliness, it got lambdas/anonymous functions right.
ajays 18 hours ago 2 replies      
He gives reduce as an example of the claim that "purely functional programs have no side effects and are thus trivially parallelizable", but reduce by definition is not trivially parallelizable.
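A quick illustration of the point (my example, not the commenter's): a reduce only splits cleanly across workers when the operation is associative, and nothing in reduce's signature guarantees that.

```ruby
xs = [10, 4, 3]

# Subtraction is not associative, so regrouping changes the answer:
xs.reduce { |a, b| a - b }     # => 3, i.e. (10 - 4) - 3, the left-to-right result
10 - (4 - 3)                   # => 9, what a naive split-and-combine might compute

# Addition is associative, so any grouping agrees:
(10 + 4) + 3 == 10 + (4 + 3)   # => true
```
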
tybris 16 hours ago 0 replies      
Yup, any language I've worked with, including Java and C, can do that just fine. They just spread the verbosity differently. Organizing large projects is a pain in JavaScript, trivial in Java. Using anonymous functions is a pain in Java, trivial in JavaScript.

(Not so) fun fact: The public MapReduce services by Google and Amazon do not (directly) support JavaScript.

gaius 15 hours ago 0 replies      

The very fact that Google invented MapReduce, and Microsoft didn't, says something about why Microsoft is still playing catch up trying to get basic search features to work

I don't believe this is true, and that's easy to prove: SQL Server 2000 already had parallelism for SELECTs. So there is a part of MS that is perfectly happy with the idea, even if another bit of MS isn't. They just need to talk more...

chuhnk 19 hours ago 8 replies      
Has anyone else just read this and realised they need to go off and learn some form of functional programming? I ignored it for such a long time because I felt it wasn't relevant to my current situation, but I was wrong. You gain some incredible fundamental knowledge that you would otherwise be completely oblivious to.

Is lisp really the way to go though?

rivalis 14 hours ago 0 replies      
Even when I'm working in a language that doesn't have first class functions, I find it easier to lay out my code by writing functional pseudocode and then "unrolling" maps into loops, closures into structs/objects, compositions into a sequence of calls, etc. It probably leads to idiomatically awful Java, but I find it easier to read and write, and nobody else needs to deal with my code. So...
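A small sketch of that unrolling (mine; in Ruby for brevity, though the commenter presumably has Java in mind):

```ruby
words = ["map", "filter", "fold"]

# Functional version: a map with a block.
lengths = words.map { |w| w.length }    # => [3, 6, 4]

# The same map "unrolled" into an explicit loop, the shape you would
# write in a language without first-class functions:
unrolled = []
for w in words
  unrolled << w.length
end
# unrolled now equals lengths
```
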
SpookyAction 1 hour ago 0 replies      
"Look! We're passing in a function as an argument.
Can your language do this?"

Umm, yes it can....


  sub cook_time {
      my ($hours, $min) = @_;
      return "$hours hours and $min minutes\n";
  }

  sub animal {
      my $animal = shift;
      return $animal;
  }

  sub cook_animal {
      my ($get_animal, $get_time) = @_;
      return "Cook $get_animal for $get_time";
  }

  print cook_animal( animal("cow"), cook_time(5, 23) );

  # ...and the subroutine itself can be passed around as a code reference:
  my $cooker = \&cook_animal;
  print $cooker->( animal("pig"), cook_time(1, 5) );

pmr_ 13 hours ago 0 replies      
Today I tried to explain to someone what exactly boost::mpl::fold does and how it is supposed to be used (for those unfamiliar: boost::mpl is a collection of compile-time metaprogramming mechanisms for C++).

It took me a while to realize that the person I was explaining it to had few problems with the templates and compile-time part, but close to no idea what a fold or a lambda is.

Not knowing the basics of functional programming can keep a person from understanding so many different things; I have encountered them in different fields (e.g. explicitly in formal semantics, or implicitly in different theories of morphology).

I think the real point here is that different paradigms offer you new views onto the world and enhance your understanding, all the programming language things aside.

hasenj 7 hours ago 0 replies      
I think this article was my first introduction to functional programming.

Yea, don't look at me like that. My university mostly taught us Java/C++; we only did functional programming in one course.

justwrote 18 hours ago 0 replies      
Yes, it can! Scala:

  def Cook(i1: String, i2: String, f: String => Unit) {
    println("get the " + i1)
    f(i2) // actually invoke the function that was passed in
  }

  Cook("lobster", "water", x => println("pot " + x))
  Cook("chicken", "coconut", x => println("boom " + x))

  List(1,2,3).map(println) // or more verbosely: List(1,2,3).map(x => println(x))

ericf 3 hours ago 0 replies      
I implemented these examples in Ruby 1.9, would love to know if there are more efficient ways of doing some of these:

    def cook(p1, p2, f)
      puts "get the " + p1.to_s
      f.call(p2)  # invoke the passed-in lambda
    end

    cook("lobster", "water", lambda { |x| puts "pot " + x })
    cook("chicken", "coconut", lambda { |x| puts "boom " + x })

    @a = [1, 2, 3]
    @a.map { |x| puts x * 2 }
    @a.map { |x| puts x }

    def sum(a)
      a.reduce(0) do |acc, x|
        acc + x
      end
    end

    def join(a)
      a.reduce("") do |acc, x|
        acc.to_s + x.to_s
      end
    end

    puts "sum " + sum(@a).to_s
    puts "join " + join(@a)
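On the "more efficient ways" question, a few shortcuts (my suggestions, all standard Array/Enumerable methods available in 1.9):

```ruby
a = [1, 2, 3]

a.reduce(:+)          # => 6      the symbol form replaces the whole sum block
a.join                # => "123"  join is built in; no reduce needed
a.map { |x| x * 2 }   # => [2, 4, 6]  map returns the new array; prefer each when only printing
```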

becomevocal 20 hours ago 1 reply      
I think this could also be called 'can your brain think like this?'... Many programmers shy away from thinking at massive scale and tend toward problems with similar, familiar codebases.
cincinnatus 13 hours ago 1 reply      
I don't like the way inline functions hurt the readability of code. Is there anything out there that solves that issue?

Also I haven't had an excuse to use it yet but F# seems to have great syntactic sugar for parallelizing things in a more natural way than the typical map reduce.

svrocks 19 hours ago 1 reply      
Does anyone else think it's a travesty that the AP Computer Science curriculum is taught in Java? Java was my first programming language and I've spent the past 8 years trying to unlearn most of it.
hdragomir 17 hours ago 0 replies      
I remember my days as a CS student.

The single most mind-opening course I took was functional programming, where I learned LISP and Prolog.

That knowledge today is crucial as it deeply changed my mindset when tackling most any problem.

ScotterC 5 hours ago 0 replies      
Last time I used FORTRAN was all of 11 months ago. Thank god I've moved on to O-O and can actually declare functions.
leon_ 18 hours ago 0 replies      
Yes, Go lets me do this. Though I don't like passing anonymous functions too much, as the code becomes hard to read rather soon.
bluehavana 17 hours ago 2 replies      
It's funny that he mentions Google as an example of a company that gets the paradigm, but most of Google is C++ and Java. C# has better functional paradigm support than both of those.
mkramlich 6 hours ago 0 replies      
ahhh... Joel at his best. great piece of writing. and a gem about programming languages and abstraction.
buddydvd 19 hours ago 1 reply      
Can Xcode 4 compile code using Objective-C blocks into iOS 3.x compatible binaries? This article made me realize how much I miss anonymous functions/lambda expressions from C# and JavaScript.
mncolinlee 12 hours ago 0 replies      
The moment I read this, I immediately thought of the work I performed on Cray's Chapel parallel language. Chapel has an elegant way of expressing functional parallel code like this that is much more difficult to write in Unified Parallel C and High Performance Fortran. In fact, one Google search later and I found a student's presentation on Chapel and MapReduce.


nickik 16 hours ago 1 reply      
WOW, welcome to the year 1959.
jasonlynes 19 hours ago 1 reply      
i'm smarter for reading this. need more.
mariusmg 10 hours ago 0 replies      
So are we supposed to be impressed by closures now? Or are we supposed to be impressed that the "great" Joel Spolsky (an ex-manager on the Excel team!!!!) writes about them.
Great jquery tutorial jqfundamentals.com
184 points by bzupnick  2 days ago   11 comments top 9
kmfrk 2 days ago 0 replies      
Rebecca also teaches the jQuery Fundamentals course on CodeLesson: http://codelesson.com/courses/view/web-development-with-jque....

I don't have that much experience with CodeLesson, though, so it would probably be best to hear what others have taken away from using the service.

pdelgallego 2 days ago 1 reply      
It's a great resource. I'm looking forward to seeing whether Rebecca creates some Dojo (or, even better, framework-agnostic) learning materials.

It's great to see someone pushing us beyond the jQuery omnipresence.

Her blog [1] is full of good screencasts, and the last entry about Modern JS is very valuable for the community.

[1] http://blog.rebeccamurphey.com/

ichilton 2 days ago 0 replies      
It's worth noting for anyone new finding this that she is discontinuing maintainership of this and handing it over to the jQuery project.

Full details here:

wh-uws 2 days ago 0 replies      
I like that it starts with a general javascript overview first.

This is a great JS beginner's guide.

marckremers 2 days ago 0 replies      
Just what I've been looking for: a no-fuss explanation of the basics and beyond. Cheers.
swah 2 days ago 1 reply      
After using Mootools for a while, I was once more comparing it to Jquery, and was pretty lost trying to find out how to do date parsing and number formatting with the latter.

A search returns loads of plugins with no documentation; you don't know which one satisfies your requirements and is maintained.

In Mootools, it's intuitive to find those at http://mootools.net/docs/more/Types/Date and http://mootools.net/docs/more/Types/Number.Format, and you know they are good to use because they are official.

What is the usual approach Jquery developers take when they need something like this?

joakin 2 days ago 0 replies      
Really nice resource. A neat guide that I'm going to hand to my Java partners.
donniefitz2 2 days ago 0 replies      
It would be nice to read this on my Kindle.
nateberkopec 2 days ago 0 replies      
Awesome. With jQuery becoming the default in Rails 3.1, I'm definitely going to have to be reading this.
Recommended Readings in AI - a list by Russell and Norvig berkeley.edu
179 points by fogus  3 days ago   34 comments top 6
Dn_Ab 3 days ago 1 reply      
Ever since Euclid listed (collected?) his axioms, the march towards AI has been inevitable. People don't realize how much AI "failures" have contributed to computing, programming and society in general. Some examples include Lisp, functional programming, garbage collection, object-oriented programming and Walmart.
swannodette 3 days ago 2 replies      
This is a goldmine. Thanks for posting this. Interesting to see how much Prolog and Lisp texts dominate the programming section :)
SeanLuke 3 days ago 2 replies      
Sadly, it's three editions in and AIMA is still hopeless regarding stochastic optimization (genetic algorithms, simulated annealing, ant colony optimization, hill climbing, and the like), and gradient-based optimization.

It seems they don't want to be bothered about the significant difference between search and optimization. So they stick all the stochastic optimization stuff into a section called "local search and optimization", and place it underneath the Search chapter (it's not search, and almost none of it is local). And then separate out optimization methods like gradient descent etc., placing them under "local search in continuous spaces", as if (1) they were search and (2) stochastic optimization wasn't applied to continuous spaces.

And if this wasn't muddled enough, their recommended books for stochastic optimization aren't under the Search chapter at all -- they've been placed under the Machine Learning chapter. And it's a strange collection.

I'm pretty disappointed with AIMA's seemingly poor understanding of this area. Well, I guess at least it's better than their cursory treatment of multiagent systems.
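For readers who haven't met these methods, a toy hill climber (my sketch, nothing to do with AIMA's treatment) shows the perturb-and-keep-if-better shape the comment is arguing belongs under optimization rather than search:

```ruby
# Toy stochastic hill climber maximizing f over the integers: repeatedly
# try a random neighbor and keep it only if it scores strictly better.
def hill_climb(f, start, steps = 1000)
  x = start
  steps.times do
    candidate = x + [-1, 1].sample          # random neighboring point
    x = candidate if f.call(candidate) > f.call(x)
  end
  x
end

f = lambda { |x| -(x - 7)**2 }   # unimodal objective with its single peak at x = 7
hill_climb(f, 0)                 # climbs to the peak on this easy landscape
```

There is no goal test and no path being sought, only a score being improved, which is roughly the search-vs-optimization distinction the comment wants the book to draw.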

dvse 3 days ago 2 replies      
I never understood the popularity of their AI text - the discussion of topics other than the most basic search methods is uniformly obtuse, and the authors hardly ever make any of the important connections with literature outside their "field", e.g. between reinforcement learning and classical control (see for example Russ Tedrake's notes on OCW [1]).

1. http://ocw.mit.edu/courses/electrical-engineering-and-comput....

Killah911 3 days ago 0 replies      
Any AI book list that includes "On Intelligence" as part of the reading list is good with me...
gbrindisi 3 days ago 1 reply      
For your downloading needs: http://gen.lib.rus.ec/
       cached 24 April 2011 04:11:01 GMT