hacker news with inline top comments    27 Apr 2011 Best
2
Showoff showoff.io
791 points by jgv  6 days ago   213 comments top 47
1
pstack 6 days ago  replies      
When I see new projects and ideas that take off, my response is usually a curmudgeonly observation about how stupid and pointless it is or how it's so obvious and trivial that I can't believe anyone would waste their time making it.

This is not one of those. This is one of those cases where I'm a bit jealous, not because something so obvious or inane or dumb took off and became Twitter, but because it's just so damn fantastic. This is one of those things that is so elegant and smart that it makes me feel like a complete idiot.

This is going to do very well.

2
jsdalton 6 days ago 3 replies      
Awesome. I'm definitely a potential paying customer.

Two suggestions and a question:

* How about a free trial for unlimited access or some other kind of cancellation guarantee?

* Minor one, but the dark contrast on your website UI is about equivalent to the background on a typical lightbox... i.e., it reads "disabled" to me at first glance.

One question:

I assume I could map a CNAME record to the showoff URL? (So cookies work, etc.)
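(For reference, such a mapping is a single DNS record; demo.example.com and myapp.showoff.io below are hypothetical names, assuming the service serves your tunnel on a stable subdomain:)

  demo.example.com.  IN  CNAME  myapp.showoff.io.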

3
moe 6 days ago 8 replies      
If you have a public facing server then this can be had for free:

  ssh -nNT -R 8080:localhost:3000 myserver.com

Et voilà, myserver.com:8080 now points to localhost:3000.
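One caveat worth adding: by default sshd binds remote forwards to the loopback interface only, so for myserver.com:8080 to be reachable from the outside, the server's sshd_config generally needs

  GatewayPorts yes

(or "GatewayPorts clientspecified" together with an explicit bind address in the -R argument).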

4
Xk 6 days ago 1 reply      
There's an XSS on the page:

Try to log in or create an account. Enter this as your username or password (as long as it's not valid: you need an error), then hit submit. Error + XSS.

  </script><script>alert(1);</script>

You escape quotes, which is good, so I can't break out of the JSON request. But you have to remember how the HTML parsing of a page works. </script> will break out from within a javascript string.
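For anyone curious about the fix: when embedding JSON inside a script element, the usual mitigation is to also escape the forward slash, so the literal sequence </script> can never appear in the output. A minimal sketch in Python (the helper name is made up):

  import json

  def json_for_script(value):
      # "\/" is a legal JSON escape, so the result stays valid JSON and
      # valid JavaScript while making "</script>" impossible to form
      return json.dumps(value).replace('</', '<\\/')

  # json_for_script({'user': '</script><script>alert(1);</script>'})
  # -> '{"user": "<\/script><script>alert(1);<\/script>"}'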

5
hinathan 6 days ago 1 reply      
This looks like a slick expansion of the idea here https://github.com/progrium/localtunnel/
Will be sure to kick the tires with a side project or two.
6
lutorm 6 days ago 1 reply      
I think your web page is very unclear.

"Share a project on your laptop" doesn't really specify that I can share a web server on my laptop. Maybe that's implicit in the HN community, but my first thought was that it was a tool for automatically posting updates of my screen to a web site, which I thought was an awesome idea. Then I read the comments here, and with some disappointment concluded that it's forwarding a network port...

7
dholowiski 6 days ago 3 replies      
Wow, that's brilliant. I'm definitely a potential paying customer. At $5 a month, that's cheaper than spinning up an Amazon EC2 micro instance to develop on.

I'm not a big fan of dark gray background with darker gray text though.

8
ionfish 6 days ago 1 reply      
The usability of the payments form is poor. It doesn't state which fields are required (if it's all of them, it should say so at the top). I understand not remembering the credit card number or CVN between requests, but it also forgets the expiry date, and the country. I had to resubmit several times because of this. Fortunately in my case I had the patience to go through it regardless, but given the amount of research indicating people's willingness to drop out of payment processes halfway through, this is something really worth fixing.
9
mtogo 6 days ago 5 replies      
Why would I use this over a VPS? A VPS is a similar price and has similar functionality, but doesn't pose a security risk to my home workstation and comes with a slew of other benefits.

The tech looks reasonably cool, but what is the use case?

EDIT: Your site also makes it sound like it is HTTP(S) only. If it's not, you might want to clarify that.

10
larrik 6 days ago 3 replies      
I love this idea, and the pricing seems spot-on.

However, it's not obvious whether this is Ruby-only. I don't use Ruby, so that's kind of a big deal.

11
chapel 6 days ago 2 replies      
At first I thought this was Ruby-based since it is installed via gem, but the footer says it is built using Node.js. This is a fine example of the power of Node.js, as well as its versatility.

On a side note, this reminds me that isaacs (creator of npm) has pushed npm to be for development and not for deploying to end users. Hence using gem instead of npm.

12
johnrob 6 days ago 2 replies      
Since this uses SSH, how do you deal with the fact that any user could log in to the remote server? Is there a custom daemon running that implements the SSH protocol, or does the client use the real SSH daemon? If it's the latter, someone could easily get a command shell. In that case, there would have to be some sort of sandbox to make sure that user can't do anything dangerous. I'd love to hear how the creators deal with this - great product btw!
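For what it's worth, stock OpenSSH can be locked down a long way for this use case without a custom daemon. A hedged sketch of an authorized_keys entry that permits tunneling but no shell or pty (key material elided); whether Showoff actually does it this way is speculation:

  command="/bin/false",no-pty,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... tunnel-user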
13
nikcub 6 days ago 2 replies      
I just use dyndns.org. Free and easy to use and update.
14
huherto 6 days ago 0 replies      
I am not picky about design. But please change the colors. There is not enough contrast. I am really making an effort to read just because everybody says it is worth it.
15
sarenji 6 days ago 1 reply      
Genuinely curious: Could anyone tell me what the difference is between showoff and localtunnel[1], aside from a payment plan? Great looking site, by the way, but I keep bracing for a JavaScript popup with the page so dark.

[1]: https://github.com/progrium/localtunnel/

16
Erwin 6 days ago 1 reply      
Does this use SNI ( http://en.wikipedia.org/wiki/Server_Name_Indication ) to allow for multiple vhosts on same IP? (since they use a *.showoff.io wildcard certificate). As I understand it, SNI does not work on IE7 on XP (Konqueror apparently has trouble with it too).
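(For anyone who wants to test SNI support by hand, openssl's client can send the extension explicitly; the subdomain below is just a placeholder:)

  openssl s_client -connect myserv.showoff.io:443 -servername myserv.showoff.io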
17
dools 6 days ago 1 reply      
I'd much rather be able to do this over an SSH port forward than having to install a Ruby gem.

I think the http://browserling.com/ guys are working on a solution where you can proxy your local web server over an SSH tunnel for previewing your website in multiple browsers without deploying it.

18
hardik988 6 days ago 0 replies      
The website looks really great, and I've had great success using localtunnel and I'm very happy with it. Kudos to the folks over at Twilio for opensourcing localtunnel.

Not to take anything away from Showoff, the execution looks near perfect.

19
hugh3 6 days ago 1 reply      
I don't understand why it says "laptop". Why not my desktop machine?
20
IgorPartola 6 days ago 1 reply      
This looks awesome. I would not use something like this because I have lots of ways to make it happen without paying extra (as in I already have a bunch of servers I can use for ssh tunneling). However, I can see how you would get a bunch of dedicated users. Good luck.

My only question is: won't this service be completely obsoleted by IPv6? I already use IPv6 instead of having to VPN into a NAT'ed office from my NAT'ed home. Just giving someone who has IPv6 connectivity a URL that uses one of my IPv6 addresses will accomplish something similar, will it not?

21
seiji 6 days ago 0 replies      
Why is your site so pretty? I feel lucky when I style a <ul> enough to not make me gag.
22
sciurus 6 days ago 0 replies      
Is the source code for the server running at showoff.io available? One thing I like about pagekite is that both the client and server are open source. If you have a team of people who regularly need to allow access to local servers, setting up your own pagekite server seems attractive.

https://github.com/pagekite/PyPagekite

23
swolchok 6 days ago 0 replies      
I'm a little surprised that no one has mentioned webfsd (http://linux.bytesex.org/misc/webfs.html), which I use to solve this problem (specifically, as webfs -r .).

Caveats:

- I'm sure showoff is better if you're firewalled.

- It's probably slightly easier to share just one file in a directory with showoff, but this could be remedied with a script that made a temporary webfs root and symlinked the shared files thither.
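(For comparison, Python's standard library has a similar one-liner, serving the current directory on port 8000; this is the Python 2 spelling, current at the time:)

  python -m SimpleHTTPServer 8000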

24
voidfiles 5 days ago 0 replies      
So, the short story here is to use localtunnel?

https://github.com/progrium/localtunnel

25
dpritchett 6 days ago 1 reply      
I love it and I'll probably buy a few day passes here and there.

I also think Fortune 500 security teams are going to be blacklisting this domain soon just to keep their devs from opening holes in the firewall.

26
dools 6 days ago 0 replies      
This is similar to http://proxylocal.com/ that was posted on HN a few months ago.
27
jamesgeck0 6 days ago 0 replies      
I've used Opera Unite like this occasionally, via this [proxy plugin](http://unite.opera.com/application/272/). Unite's fallback proxy servers are usually on the slow side, though. It also doesn't support HTTPS.
28
huhtenberg 6 days ago 1 reply      
May want to fix this - http://i51.tinypic.com/2a00yyv.png - this is in FF4 on Windows XP.

(edit) I meant the text overflow, not the lack of the background image (which appears to be a problem on my side).

29
seats 6 days ago 1 reply      
Here's the one question/problem I see-

How often do you really want to show someone something badly enough that you are willing to block your local dev environment to do so?

I could see it working if you are interactively talking to someone in chat, but over email, oftentimes you could be waiting hours for someone to actually try out what you want them to see. You could certainly get around this by setting up a separate local 'dev-show' environment, but if you are going to all that effort you really could/should just set up a dev version on a vps/cloud server.

If you are talking to someone interactively, chances are they are another dev, which means they really don't need to see your environment running; they should be capable of using their own.

Also, in this thread I really feel people are exaggerating the level of effort to set up a 'showable' environment on a cloud instance. For rails as an example, I can go from server launch to a running app with db in less than 15 minutes (easily) with manual config and I really doubt I'm the exception here. Remember, this is a 'dev' environment still, your apache/nginx config doesn't have to be airtight, it just has to work for the single user you are showing it to (all defaults on everything is going to work at least as well as the built in rails server).

Not to mention the fact that if you aren't thinking about automating deployments yet, it's a really good investment of time and would work just as well for this case.

30
kaffeinecoma 6 days ago 2 replies      
Perhaps I missed it in the FAQ, but how do you turn it off? CTRL+C?

I understand that you're aiming for simplicity here, but it would be handy to have a menubar icon (or whatever is appropriate for the given platform) to indicate whether it's running or not.

Or maybe you actually have that, but it's not clear that you do from the website.

31
geuis 6 days ago 1 reply      
One note about the setup. When you have multiple ssh keys available, the "pick one" menu looks like this:

  Choose the public key you'd like to use:
  [0] id_dsa.pub
  [1] id_rsa.pub
  [q] Quit

This should start with 1, not 0.

32
thomasdavis 6 days ago 0 replies      
Man, the whole site was a pleasant experience to browse. I was browsing it because it was just fun to.

The idea is perfect; I encounter this problem daily and am always too lazy to bother setting up dynamic DNS.

Signing up for an unlimited plan right now.

Well done.

33
andrewl 6 days ago 0 replies      
Very nice tool. I expect to use it, and happily pay for it.

I like subtle, minimalist design, and the look grabbed me. But I also have somewhat diminished vision, and I had to blow the page up a lot to read it.

34
Travis 6 days ago 1 reply      
Text is hard to read. I'm still not exactly sure why I would use this over a VPS (maybe it's for devs who don't use VPS/EC2 as their dev machine?)

Is this for live demos or for sharing documents? If the former, why wouldn't I just turn on apache and give them my IP?

35
Dramatize 6 days ago 0 replies      
I really like the design of the site. Nice work.
36
pixeloution 6 days ago 1 reply      
I've got MAMP Pro running on my notebook and have a half-dozen "local" URLs (only valid URLs for me) - is there a way to point this service at those URLs in addition to localhost?
37
erik_p 6 days ago 0 replies      
Great idea - definitely a "why haven't I thought of this" pain point that I run into all the time. No fussing with DynDNS, opening ports, or scrambling to deploy a public, interweb-accessible version of dev code.

Kudos, man.. kudos.

38
mkrecny 6 days ago 0 replies      
Great idea, yes. Anyone heard of tunnlr? They've been doing this for ages.
39
andrewheins 6 days ago 2 replies      
I was looking for exactly this service a few weeks ago and would be a paying customer, but I develop on Windows.

A future market, if you're interested.

40
allanscu 4 days ago 0 replies      
The concept is so simple and needed. I've run into this problem so many times. I like it. Wow!
42
maheswaran 6 days ago 2 replies      
I haven't tried showoff.io, and I'm a little curious about internal reference translations - for example, CSS files.

Does this

  <link rel="stylesheet" href="localhost:3000/mystyle.css"/>

translate to

  <link rel="stylesheet" href="myserv.showoff.io/mystyle.css"/>

???

43
teyc 4 days ago 0 replies      
The design is absolutely beautiful too. Congrats. Did you do the design yourself?
44
southpolesteve 6 days ago 1 reply      
This is so completely awesome. Just yesterday I was looking for something similar. I will start using this immediately.
45
meow 5 days ago 0 replies      
Gr8 idea, but I wonder how many would be willing to use it (especially when there is the option of firing up an Amazon EC2 instance when needed).
46
josiahq 6 days ago 0 replies      
Happiness is a warm socket.
47
blahed 6 days ago 1 reply      
rock & roll!!!!!!!!
3
Amazon's $23,698,655.93 book about flies michaeleisen.org
662 points by rflrob  4 days ago   139 comments top 26
1
siegler 4 days ago  replies      
A similar thing happened to me as a seller. I saw that one of my old textbooks was selling for a nice price, so I listed it along with two other used copies. I priced it $1 cheaper than the lowest price offered, but within an hour both sellers had changed their prices to $.01 and $.02 cheaper than mine. I reduced it two times more by $1, and each time they beat my price by a cent or two. So what I did was reduce my price by a few dollars every hour for one day until everybody was priced under $5. Then I bought their books and changed my price back.
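The runaway prices in the article fall out of exactly this kind of rule pair. Here's a toy replay in Python; the starting prices and the penny undercut are assumptions, and 1.270589 is the markup factor quoted elsewhere in this thread:

  # Seller A undercuts seller B by a cent; seller B reprices at
  # 1.270589x A's price. With no sanity ceiling, both prices grow
  # by roughly 27% per repricing round.
  a, b = 10.00, 12.00
  for day in range(1, 11):
      a = b - 0.01
      b = 1.270589 * a
      print("%2d  a=%10.2f  b=%10.2f" % (day, a, b))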
2
Qz 4 days ago 2 replies      
A quick visit to amazon:

http://www.amazon.com/gp/search/ref=sr_adv_b/?search-alias=s...

Multiple books priced upwards of $600m. One has a Kindle edition for $9.99.

3
lsc 4 days ago 3 replies      
oh man. reminds me of the days when I sold books on amazon and half.com. I wrote a script that took the 'nickel less than the other guy' approach.

These things are /wonderful/ when it comes to making the market more efficient. Really, though, the shipping costs eat up most of the efficiency. Amazon needs an easy way to say "I want these 10 books, used. Find me the lowest price (including shipping)" - the idea is that the more books you could buy from one seller, the less shipping friction would be involved, but amazon isn't really set up that way, which makes it much less efficient for the low end used books.

4
anigbrowl 4 days ago 1 reply      
This explains a lot. I wrote a small book back in the 90s that had few sales and was unlamented when it fell out of print. A couple of years ago I saw it as the subject of an amazon.com sidebar ad and was astonished to find that it was listed at $100 or thereabouts, and couldn't imagine how or why it might have become collectible. The idea that this was the result of competing pricing algorithms makes a great deal more sense to me.

Now, if you'll excuse me I'm off to weep over my broken dreams of belated celebrity.

5
benvanderbeek 4 days ago 1 reply      
I work at a mid-sized Amazon 3rd party seller. We reprice automatically. With thousands of SKUs there's no other way to be competitive. There are many layers to consider though; you definitely can't let your whole catalog auto-reprice. Usually sellers just focus on their top X% of SKUs and let the rest auto-calculate.

I wonder if we have a price ceiling setup...?

Edit: Yes we have a floor and ceiling. I have no idea what I was thinking.

6
jonnathanson 3 days ago 0 replies      
Very funny and enlightening analysis. I've seen the fringes of algorithmic pricing a few times in other categories -- especially out-of-print DVDs or VHS tapes.

Example: don't ask me why I enjoy the movie My Dinner With Andre, but for whatever reason, I do. A few years back, I wanted to buy a copy of the DVD and checked Amazon. It turned out that the DVD was long out of print, and new copies were going for $400 apiece. I figured this price was high, but nevertheless, it was nothing that couldn't be explained by actual rarity and supply/demand metrics. Rare DVDs have been known to climb into the hundreds of dollars, especially if new and unopened. But I came back a few days later, and the price was $1932.78 (or something unusual to that effect). The next day, $3500 and change. Which struck me as odd, to say the least. Was some nefarious Goldman trader attempting to fix the market for Wallace Shawn's back catalog?

Needless to say, I didn't love the movie quite that much. So I passed. These days, Criterion has released a new DVD version of the film, and accordingly, everything's dropped back down to about $30 per copy -- including the price of the original, OOP version. I feel sorry for anyone who actually might have taken the plunge at $400, which is not out of the realm of possibility. That's a lot of money for a film about two guys having dinner.

7
hellasbook 3 days ago 0 replies      
I have been an on line book dealer for the past 10 years. Originally the rules about having actual possession of the goods offered for sale were quite strict.

Then came the megalisters - agencies with software that listed all books in print and a contract with publisher's warehouses/library supply services/factors to arrange delivery of in print books to customers.

Then came the "phantom listers" - agencies with software that spiders through the listings of legitimate on line book dealers looking for titles with few (or no) copies listed in Amazon. They then list them at inflated prices. Some of these have more than one alias on Amazon.
I think of them as the Piranhas of the Amazon, in the way that they consume the smaller fish.

If and when they receive an order, they then try to purchase a copy from one of the "real" owners of the book. [Some are cheeky enough to request as well a "trade discount" on the dealer's price.] Small businesses listing on Amazon and abebooks.com must keep their "fulfillment" rating high, so they can ill afford to refuse to supply such parasites. In fact, when I requested once that I NOT be obliged to support what I think of as an unethical selling and pricing model, I was told

1] that the buyer was a valued long time customer
and that
2] I could approach the agency in question and - at THEIR discretion - request that they stop ordering books from me.

Since AZ and ABE take out some % of the list price for the books sold on their site (as well as monthly listing fees, closing fees, and sometimes a portion of the shipping charge), they can and do make a much greater profit from the high-priced phantom listers than from the legitimate, reasonably priced offerings of small book dealers; consequently they are not very interested in aggressively policing the situation.

Now there is "Monsoon" and other software to automatically adjust prices on line. In practice most often works to REDUCE the price below the lowest price already listed - a rush to the bottom where books get listed for mere cents.

One tactic I recommend, to search for books and see the widest range of options, is to use "addall.com". It comes in 2 flavours, "New" and "Used", and searches around 30 book listing services. You can compare prices (ascending or descending) and also see the kind of dealers who sell the books.

Many listings have boilerplate descriptions ("we ship fast", "books may have...", etc.) which indicate that no human may have examined the object being listed. Other listings have descriptions of content and condition that clearly demonstrate that it is a "real" book from a "real" dealer.
When in doubt, look for THAT dealer's OWN website to ask questions and get personal service.

8
jaysonelliot 3 days ago 1 reply      
I can't begin to describe how much this pleases me. It's straight out of a William Gibson novel, but happening right now.

God, I love living in The Future.

9
geekfactor 3 days ago 1 reply      
Hmmm. I wonder if this would work in reverse? Suppose I want to buy a book that has several new copies for sale on the Amazon Marketplace for say $60. If I post a new copy for sale for $10, perhaps one of these algorithms would kick in to reduce the price?
10
kbrower 4 days ago 1 reply      
I have a book with 1 sale and 6 sellers including amazon. They definitely do not have the book.
11
Jach 4 days ago 2 replies      
I'm interested where the 1.270589 number comes from. sqrt(golden ratio) is close: 1.27201965
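(A quick check of that guess:)

  import math
  phi = (1 + math.sqrt(5)) / 2   # golden ratio, ~1.6180339887
  print(math.sqrt(phi))          # 1.2720196..., vs. the observed 1.270589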
12
cookingrobot 3 days ago 3 replies      
Automatic pricing is super common these days, even on more expensive things than books. I'm actually working on a startup called shopobot.com that wants to help people use the volatility to their advantage. We see rapid $50-100 swings on things like SLRs, so it's actually pretty significant.

Ironically our site is down right now because we're based on Amazon's web services. Karma? :)

13
derrida 3 days ago 0 replies      
It's all innocent when it is Amazon. But I am sure similar phenomena are taking place in the financial markets, though the algorithms that determine pricing there are more complicated than multiplying by a simple factor.
14
originalgeek 3 days ago 2 replies      
> My preferred explanation for bordeebook's pricing is that they do not actually possess the book.

There's only one problem with this theory. The Amazon TOS requires sellers to have inventory before they can sell inventory. And they will ban you for life for violating that.

15
russellallen 4 days ago 1 reply      
Sanity checks, people. Sanity checks.
16
keeran 4 days ago 1 reply      
I'm surprised the author (and no-one here) hasn't worked out the starting date of the automated price war based on the original RRP of the title :)
17
pitdesi 4 days ago 3 replies      
Someone should try selling a book that there are limited copies of (i.e. one that can't be procured easily) to see what happens if someone buys from a seller who doesn't have it.
18
JonnieCache 4 days ago 1 reply      
It's gonna be quite a giggle when this happens with the stock market.
19
guynamedloren 3 days ago 2 replies      
[Disclaimer: I've never looked into this before and I have absolutely no idea how the Amazon reseller market works, so this might be impossible or prohibited.]

What's stopping somebody from relisting every product that already exists in the reseller market, but raising the price a few bucks? Even if people are more likely to buy the lower-priced product, the seller has nothing to lose since they don't actually have any products on hand or skin in the game. Better yet, they could relist the products for less than the competitors and jack up the shipping prices to skim a few bucks of profit off the top of each sale (this would work if somebody sorted by item price only, instead of price + shipping).

20
maxxxxx 3 days ago 0 replies      
Sometimes you can use this for trade-ins to Amazon directly. Last week I traded in a book that several sellers had priced at $100.
Amazon took it for $27. On other sites you can get it for $24.
21
atakan_gurkan 3 days ago 0 replies      
Pricing something high can actually increase sales. There is an example in Cialdini's book "Influence", near the beginning of chapter 1. A jewelry store owner was trying to get rid of some items, instead of marking the price down, she marked them up (by a factor of 2), and they were rapidly gone.

It is a nice book to read, and will probably make you feel uneasy when you realize the abundance of manipulation tactics around us.

22
Figs 3 days ago 2 replies      
I've seen lots of books on Amazon priced at $0.01 or $0.02 before. Maybe a result of similar processes? (I always assumed they were some kind of scam, since why the hell would someone sell a book so cheap?)
23
jeffdavis 3 days ago 0 replies      
"they have a huge volume of positive feedback"

Is that a pun or something?

24
anactofgod 2 days ago 0 replies      
Heh.

I suspect that this is happening in a less transparent way with automated commodities trading systems.

25
xudir 3 days ago 0 replies      
awww
26
njharman 3 days ago 0 replies      
I see this all the time. On Ebay too.

Hmmmmm, maybe I'm spending too much time trolling for things to buy?

4
Sony: All personal data stolen from PSN playstation.com
587 points by estel  6 hours ago   202 comments top 41
1
cryptoz 5 hours ago 6 replies      
I wonder how many times a company can install trojans on your computer, destroy your OS's security, secretly watch all your actions, then proceed to not properly protect your data when you voluntarily give it to them...before going out of business.

Sony's size and momentum must be pretty crazy. Or maybe it's our society. I just can't imagine a small record store in the 1960s, after being caught spying through the bedroom windows of its customers, ever staying in business.

I feel terrible for anyone caught in this. But maybe, just maybe, Sony isn't the company to do business with anymore?

2
OmarIsmail 5 hours ago 3 replies      
This is a much much bigger deal than the Gawker security breach. Sony had substantially more information on its users than Gawker could ever hope to dream of. Specifically information on real names, addresses, and potentially credit cards.

This is a big F'N deal and I wouldn't be surprised if it cost Sony more than Microsoft's infamous 1 billion dollar write-down with the Xbox 360's Red Ring of Death.

I don't think this will kill PSN or the PS3, but it's going to significantly dent things. I'm curious to see how much media attention this gets and if we'll see a macro shift towards the Xbox 360 and Wii.

If I was Nintendo and particularly evil I would leverage this opportunity to tout the new system and emphasize the cutting-edge online modes with rock solid security. And MS can also talk about their great track record in the online world.

3
hebejebelus 5 hours ago 3 replies      
There were sixty million[0] PSN accounts. This is impressive, and amounts to (judging by a quick search) the largest-scale ID (and possibly credit-card) theft ever [Not so, see child comment]. Not even factoring in credit card details, the usernames, emails, addresses, ages, passwords, mother's maiden names, and favourite pets of sixty million people are worth a hell of a lot.

I have to wonder how much data that is, in terms of storage. How could you even take that without someone noticing?

Hats off to whoever it was. Now, I'm off to change my passwords. Thank Christ I had the sense not to use a credit card to buy from PSN.

[0] http://www.derangedshaman.com/2011/01/06/sonys-60-million-ps...

[edit] On a related note, paypal refuses to let me change my password to something longer than 20 characters - or have spaces in my password. Why is this the case? Surely the only thing that an upper limit on the length of a password does is help the attacker.

4
dfischer 3 hours ago 0 replies      
http://psx-scene.com/forums/f177/sony-has-been-bad-boy-ridic...

"A well known hacker i don't want to reveal here had all the Sony PlayStation Network functions 100% decrypted as well as providing some nice info about how Sony dealing with PSN members privacy in their online servers.

Apparently, Sony server gathered everything they can from the PSN connected PS3 console. When i said everything, i meant it. Here, i make all the list of what they squeezed from the IRC chat logs conversation between the hackers.

Sony monitors all messages over PSN.
All connected devices return values sent to Sony server returns TV, Firmware version, Firmware type, Console model
They also collects data in your USB attached device.
Credit card sent as plain text, example:

  creditCard.paymentMethodId=VISA&creditCard.holderName=Max&creditCard.cardNumber=4558254723658741&creditCard.expireYear=2012&creditCard.expireMonth=2&creditCard.securityCode=214&creditCard.address.address1=example street%2024%20&creditCard.address.city=city1%20&creditCard.address.province=abc%20&creditCard.address.postalCode=12345%20
*The best part of all, the list is stored online and updated when u login PSN and random.

But, that's not all, with the PSN functions fully decrypted, this hacker can use the function to get all games, DLC, you name it, from PSN store without paying anything."

5
mrcharles 5 hours ago 3 replies      
This seems like a really big argument for never allowing your data to be stored by a 3rd party.

Does anyone see any reason why these companies should do anything other than store the data locally on your system, encrypted/obfuscated, and then only ever send once, via encrypted connection, and then immediately delete the info remotely?

I mean, if someone breaks into my house and steals my PS3, they already have access to all of that information.

6
dman 5 hours ago 1 reply      
Holy cow! This has to be one of the most serious breaches I can remember in recent times. While I don't work in security and my security fu is weak, it appears that they did not have a strong layered security apparatus in place? Is it just a coincidence that this breach and the geohot exploit happened around the same time?
7
redthrowaway 4 hours ago 1 reply      
"Although we are still investigating the details of this incident, we believe that an unauthorized person has obtained the following information that you provided: ... PlayStation Network/Qriocity password and login,"

Seriously? Even Sony is keeping passwords in plaintext? There wasn't a single competent person involved in the design of PSN who might have mentioned that was a terrible idea?

8
norova 5 hours ago 4 replies      
FTA: we believe that an unauthorized person has obtained the following information that you provided: ...PlayStation Network/Qriocity password and login...

I'm curious if this means they store everyone's password in plain-text, or if by "password" they really mean a hash of some sort.

9
unexpected 5 hours ago 3 replies      
This is unreal. What bothers me the most is that when this happened to me once before, the company in question paid for a year of credit monitoring services.

In this case, Sony is too cheap to do even that, pointing you towards where you could download your credit report online. Ridiculous.

10
lotusleaf1987 5 hours ago 2 replies      
Thanks for waiting a week to tell me my credit card info has been stolen, Sony.

I am not a big fan of MSFT usually, but the next time I am buying a console I'm not buying a PS4.

11
michaelchisari 5 hours ago 0 replies      
So how do we sign up for the class action lawsuit?
12
moondowner 4 hours ago 1 reply      
Notice how they never apologize? The closest thing to apology, but it's not an apology, is:

> "We thank you for your patience as we complete our investigation of this incident, and we regret any inconvenience."

Sony apologizes only to Chuck Norris.

13
maximilianburke 5 hours ago 2 replies      
I'm disappointed but not surprised. When I had to change my password a few months ago on the Sony developer's network site I was told that my new password was too similar to the last ones. I was wondering how they knew that, aside from storing the passwords in plain-text, something I'd assume they'd be too smart to do.

I guess I gave them too much credit.

14
dman 5 hours ago 2 replies      
Funnily enough, the stock doesn't seem to have moved at all as a result of this news - http://www.google.com/finance?q=sne
15
famousactress 4 hours ago 1 reply      
These malicious actions have also had an impact on your ability to enjoy...

Interesting. Is it not fair to also say the negligent actions that made these malicious ones possible had an impact?

I'm completely sick of the way these press releases sound.

16
ares2012 5 hours ago 2 replies      
So it is as bad as we feared. The only silver lining I can see is that Sony made the difficult business decision to turn off the network until they were sure it was secure. While that doesn't make me feel better as a PSN user I do respect their honesty and commitment to fixing it.

Time to get a new identity! =)

17
aeontech 5 hours ago 1 reply      
Does anyone else find it odd that they "strongly recommend that you log on and change your password" instead of just force-resetting everyone's password and sending them an email with an activation link? Out of 60M subscribers, I'm certain that a large proportion will never see this message.
18
parfe 5 hours ago 1 reply      
What the hell Sony? I just tried logging into http://us.playstation.com/psn/playstation-home/ the SSL connection to https://store.playstation.com gave Error code: sec_error_unknown_issuer.

It's like you're actively trying to make me never trust you again.

19
Splines 5 hours ago 0 replies      
It's too bad we don't know what's going on inside the sausage factory. It'd make for a very interesting post-mortem.
20
estel 5 hours ago 2 replies      
Giant Bomb is reporting that passwords are supposedly secure (of course, "no way" is clearly false), so I'm guessing there's at least a decent salted hash: http://www.giantbomb.com/news/good-news-psn-back-maybe-withi...
21
jasonneal 5 hours ago 1 reply      
How could they have gained access to passwords? Do they mean, rather, gained access to your secure password hash, or did they simply store passwords in an unencrypted format? Being a member of PSN, this has me concerned. I'm making it a point to change my security questions and passwords across all the websites I use.
22
ams6110 2 hours ago 0 replies      
I think that what we're seeing here is evidence that there's just too many ways to screw up handling personal information on line. The sane stance is to now assume that any profile you provide to any website will eventually become public, and proceed accordingly.
23
dirtbox 5 hours ago 0 replies      
Interesting fact for the day: 75 million accounts is a new world record for information theft.
24
rkon 5 hours ago 3 replies      
Payback for GeoHot or what? Haven't heard anything about the source of the attack since the DDoS that Anonymous took credit for...
25
wilschroter 5 hours ago 1 reply      
I can't even begin to fathom the magnitude of this considering how many people likely use the same login credentials for all of their sites.

The problem you run into is that communicating both the nature of the breach and convincing people to respond accordingly is incredibly hard.

This will continue to happen across many sites. I think after enough of these breaches, though, people will start to think about the protection of their online identities a lot differently, which is good, albeit at a painful cost.

26
blhack 5 hours ago 1 reply      
Wow, this sounds really, really bad. As much as I dislike Sony's actions in the Geohot case, and as much as I want to say "this is what you get for failing at security", I feel pretty bad for them right now (and even worse for all of their customers).

>To protect against possible identity theft or other financial loss, we encourage you to remain vigilant, to review your account statements and to monitor your credit reports.

>We have also provided names and contact information for the three major U.S. credit bureaus below. At no charge, U.S. residents can have these credit bureaus place a “fraud alert” on your file that alerts creditors to take additional steps to verify your identity prior to granting credit in your name.

27
guelo 4 hours ago 2 replies      
We've seen several examples recently of Japanese corporate culture's secrecy and lack of candor. Toyota, the TEPCO nuclear plant, and now Sony: the same pattern of not wanting to admit to the problem. I wouldn't bet on their long-term competitiveness.
28
nwatson 4 hours ago 0 replies      
This is a case in point for centralized log-archival-and-analysis tools like SenSage. No matter how secure you make your infrastructure, in situations like this you want evidence of all activity on your networks, computers, DBs, app servers, apps, etc. Storing log data related to this activity can consume petabytes over a multi-year span.

I don't know what kind of forensic tools Sony's using, hopefully they have something like SenSage.

29
chrischen 4 hours ago 1 reply      
Any idea who's behind the data theft? I'm much more interested in that...
30
unwantedLetters 5 hours ago 1 reply      
If someone has had their data stolen, are there any steps that they can take to ensure that they are not fleeced or does this mean that it's only a matter of time (or perhaps luck)?
31
pdenya 3 hours ago 0 replies      
"We greatly appreciate your patience, understanding and goodwill" - I'm all out of good will for Sony. I already canceled the credit card I had on file with them, hopefully nothing happens with my personal info.
32
sdkmvx 5 hours ago 1 reply      
They say passwords were stolen. This must mean they are not properly hashing passwords with salts stored outside of the database.

How many times does this have to happen before people realize that passwords are never to be stored in plaintext? The only exception is a client-side program that needs to log you in and in an ideal world that would be handled by a Kerberos-like ticket system.
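For reference, a minimal sketch of salted hashing with the py-bcrypt library (one option among several; note that bcrypt embeds the per-user salt in the hash itself rather than storing it separately):

  import bcrypt

  def hash_password(password):
      # gensalt() produces a fresh random salt; hashpw() embeds it
      # in the returned hash string
      return bcrypt.hashpw(password, bcrypt.gensalt())

  def check_password(password, stored_hash):
      # hashpw() reuses the salt embedded in stored_hash
      return bcrypt.hashpw(password, stored_hash) == stored_hash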

33
neilalbrock 3 hours ago 0 replies      
Frankly I'm in shock. That a company as large and experienced as Sony would allow this to happen, well it beggars belief. The contempt shown to customers, not just by Sony but by other large tech companies (I'm looking at you Apple) is disgusting.

I choose not to be part of Facebook because I'd rather they didn't know every detail of my life. Now I have to consider if I want to use products from Sony because of concerns that they can't even protect my private data, which they force me to give them in order to use their services.

Unbafuckinglievable.

34
Hominem 5 hours ago 3 replies      
I haven't really been following this but there have been rumblings all week that a hacked firmware was released that allowed anyone who installed it, and twiddled with some other things, access to the PSN development and testing network. Anyone know more?
35
idheitmann 4 hours ago 0 replies      
For many folk who may not use PSN much or recently, the first concern I imagine would be to recall whether they ever provided Sony with the most sensitive things on that list.

A quick gmail search tells me that they had my mailing address and full name, but I have no idea if I ever gave them my CC or DOB or SSN or Gitmo prisoner bar code or whatever else.

I'm glad I use lastpass because I have a nice list of sites to update password info, but I imagine this process is going to take quite a while. Too bad I repeated that password so many times.

36
alexknight 5 hours ago 0 replies      
While I don't know the details of how this happened, it's a sure fire bet that they were not doing something right when it came to securing their infrastructure. How many times have we heard of big name companies running un-patched operating systems and SQL databases or even weak passwords? From the consumer end, this really sucks. Especially if their personal data was compromised.
37
GrandMasterBirt 5 hours ago 1 reply      
Wait, "Password" was stolen? WTF they store unencrypted passwords?!?!?!!?! I sure hope they meant password hashes otherwise upset many people should be.
38
allending 2 hours ago 0 replies      
"+ OreoPoptart on April 26th, 2011 at 12:58 pm said:
JUST STOP! FIX THE GOD DAMN PSN FIRST THEN POST THIS CRAP UP GEEZ"

Heh.

39
rickdale 3 hours ago 0 replies      
Thats ridiculous. Sony should feel in debt to their customers for such a security breach. I hope they catch the people responsible and give them jobs!
40
edtechre 3 hours ago 0 replies      
"Wait, ENCRYPT credit numbers? I thought you said decrypt!"
41
vipivip 5 hours ago 1 reply      
Sad day. What's next for PlayStation owners, Sony?
5
Amazon Web Services are down amazon.com
550 points by yuvadam  5 days ago   332 comments top 45
1
timf 5 days ago 5 replies      
Some quotes regarding how Netflix handled this without interruptions:

"Netflix showed some increased latency, internal alarms went off but hasn't had a service outage." [1]

"Netflix is deployed in three zones, sized to lose one and keep going. Cheaper than cost of being down." [2]

[1] https://twitter.com/adrianco/status/61075904847282177

[2] https://twitter.com/adrianco/status/61076362680745984

2
yuvadam 5 days ago  replies      
Current status: bad things are happening in the North Virginia datacenter.

EC2, EBS and RDS are all down on US-east-1.

Edit: Heroku, Foursquare, Quora and Reddit are all experiencing subsequent issues.

3
asymptotic 5 days ago  replies      
Amazon's EC2 SLA is extremely clear - a given region has an availability of 99.95%. If you're running a website and you haven't deployed across more than one region then, by definition, your website will have 99.95% availability. If you want a higher level of availability, use more than one region.
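The arithmetic behind that advice, assuming failures in separate regions are independent:

  p = 0.9995                 # availability of a single region
  both_down = (1 - p) ** 2   # probability both regions are down at once
  print(1 - both_down)       # ~0.99999975, i.e. well past "four nines"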

Amazon's EBS SLA is less clear, but they state that they expect an annual failure rate of 0.1-0.5%, compared to commodity hard-drive failure rates of 4%. Hence, if you wanted a higher level of data availability you'd use more than one EBS volume in different regions.

These outages are affecting North America, and not Europe and Asia Pacific. That's it. Why is this even news? Were you expecting 100% availability?

4
ig1 5 days ago 1 reply      
A couple of hours into the failure, and no sign of coverage on Techcrunch (they're posting "business" stories though). It shows how detached Techcrunch has become from the startup world.

Edit: I tweeted their European editor about it and he's posted a story up now.

5
mcritz 5 days ago 0 replies      
This feels the same way as hearing that the whole Internet just got shut down.
6
kylec 5 days ago  replies      
I guess this is one Reddit outage that can't be blamed on poor scaling
7
mtodd 5 days ago 2 replies      
Why is ELB not mentioned at all on the Service Health Dashboard?

We're experiencing problems with two of our ELBs, one indicating instance health as out of service, reporting "a transient error occurred". Another, new LB (what we hoped would replace the first problematic LB), reports: "instance registration is still in progress".

A support issue with Amazon indicated that it was related to the ongoing issues and to monitor the Service Health Dashboard. But, as I mentioned before, ELB isn't mentioned at all.

8
dsl 5 days ago 3 replies      
4/21/2011 is "Judgement Day" when Skynet becomes self aware and tries to kill us all. http://terminator.wikia.com/wiki/2011/04/21

I am just a little freaked out right now.

9
jws 5 days ago 1 reply      
Silver lining: Hopefully I can test my "aws is failing" fallback code. (my GAE based site keeps a state log on S3 for the day when GAE falls in a hole.)
10
helium 5 days ago 2 replies      
I just launched a site on Heroku yesterday and cranked the dynos up in anticipation of some "launch" traffic. Now, I can't log in to switch them off. Thanks EC2, you owe me $$$s
11
powdahound 5 days ago 0 replies      
I'm seeing 1 EBS server out of 9 having issues (5 in one availability zone, 4 in another). CPU wait time on the instance is stuck at 100% on all cores since the disk isn't responding. Sounds like others are having much more trouble.
12
jedberg 5 days ago  replies      
Yes, they are. :(
13
espeed 5 days ago 0 replies      
Quora is down, and evidently "They're not pointing fingers at EC2" --
http://news.ycombinator.com/item?id=2470119 -- I was going to post a screen shot, but evidently my Dropbox is down too.
14
paraschopra 5 days ago 2 replies      
http://venuetastic.com/ - feel bad for these guys. They launched yesterday and down today because of AWS. Murphy's law in practice.
15
smackfu 5 days ago 1 reply      
So when big sites use Amazon Web Services for major traffic, do they get a serious customer relationship? Or is it just generic email/web support and a status page?
16
potomak 5 days ago 2 replies      
Quora says: "We'd point fingers, but we wouldn't be where we are today without EC2."
17
alexpopescu 5 days ago 0 replies      
Instead of enumerating who's down, I'd be more interested to hear about those that survived the AWS failure. We could learn something from them.
18
mathrawka 5 days ago 3 replies      
I think this is a good example of how the "cloud" is not a silver bullet for keeping your site always up. AWS provides a way to keep it up, but it is up to each developer to use AWS in a way that lets their site handle problems in one availability zone.

I think we will see big users of AWS focus more on how to create a redundant service using AWS. Or at least I hope we will!

19
olegp 5 days ago 2 replies      
Assuming the problem is indeed with EBS, I would say this should be a warning sign to anyone considering going with a PaaS provider, which Amazon is quickly becoming, instead of an IaaS provider like Slicehost or Linode.

The increased complexity of their offering makes it more likely that things will break, leaving you locked in.

I did a 15 minute talk on the subject, which you can check out here: http://iforum.com.ua/video-2011-tech-podsechin

EDIT: here are the slides if you can't bother watching the video http://bit.ly/eqDNei

20
dmuth 5 days ago 0 replies      
Holy crap. An Amazon rep actually just posted that SkyNet had nothing to do with the outage:

https://forums.aws.amazon.com/message.jspa?messageID=238872#...

21
smtm 5 days ago 2 replies      
AWS/S3 has become the new Windows - a great SPOF to go for if you want to attack. This space needs more competition.
22
oomkiller 5 days ago 1 reply      
They better start writing their explanation now. Multiple AZs affected?
23
ig1 5 days ago 0 replies      
Given that Heroku's parent company (Salesforce) owns a cloud platform, it seems kinda inevitable now that Heroku will, perhaps sooner rather than later, switch back-ends (or at least use both).
24
frekw 5 days ago 0 replies      
It's a bit ironic that Amazon WS has become a SPoF for half the internet.
25
antonioe 5 days ago 0 replies      
Had our blog go down. Didn't realize it was AWS-wide... did a reboot. Now I am in reboot limbo. Put an urgent ticket in to Amazon. They just said they are working urgently to fix the issues. Let's see how long this goes.
26
tybris 5 days ago 0 replies      
All hosting services go down occasionally. If you want to stay up you need to build a fault-tolerant distributed system that spans multiple regions and potentially multiple providers.

Also, Amazon should fix EBS.

27
vnorby 5 days ago 0 replies      
From EngineYard: "It looks like EBS IO in the us-east-1 region is not working ideally at this point. That means all /data and /db Volumes which use EBS have bad IO performance, which can cause your sites to go down."
28
jjm 5 days ago 1 reply      
Everyone talks about SLAs, but I believe that doesn't consider the fact that the EBS vols are still up (not on fire, and available) yet are phantom-writing, or that the network is queued up the wazoo so writes don't even happen in the timely manner you'd expect.
30
ck2 5 days ago 0 replies      
So what percentage of the top 1000 sites are now crippled by this?
31
swedegeek 5 days ago 0 replies      
In case anyone is late to the party and missed the non-green lights on the AWS status dashboard, here is the page as of about 9:30 EDT...

http://screencast.com/t/p69xAoDJRSer

32
piramida 5 days ago 0 replies      
Today, April 21st 2011, according to the "Terminator", Skynet was launched... No wonder AWS is down
33
antonioe 5 days ago 0 replies      
1:23 EST and Reddit is back up. Quora/4SQ still down. My site still down.
34
mathrawka 5 days ago 3 replies      
So do we get some credit on our AWS accounts? I haven't really read their SLA for EC2.
35
singlow 5 days ago 0 replies      
It's definitely a limited outage. My three instances seem to have operated all night with no problem. Two of them are EBS instances.
36
dmuth 5 days ago 1 reply      
Being unable to get much done here, my co-workers have found other things to do in the office: http://www.youtube.com/watch?v=u1-oGxDHQbI :-P
37
xlevus 5 days ago 1 reply      
Ruh Roh. The service I'm using to acquire accommodation seems to be dependent on AWS. Guess I'm going to be homeless tomorrow if it doesn't get fixed. :X

Note to self. Don't ever build a service reliant on AWS.

38
pextris 5 days ago 0 replies      
reddit.com is down, but luckily http://radioreddit.com is not.
39
greaterscope 5 days ago 1 reply      
Wish we were able to download our EBS snapshots, which are supposedly hosted on S3. What does everyone else do?
40
hcentelles 5 days ago 0 replies      
It seems like availability zone us-east-1c is working; I can launch an EBS-backed instance right now.
41
kennethologist 5 days ago 0 replies      
Thankfully, my major clients are using the Asia EC2 Instances!
42
hendi_ 5 days ago 4 replies      
Yay for relying on the cloud \o/
43
marjanpanic 5 days ago 0 replies      
Amazon down - just one more reason to try out BO.LT and their amazing CDN and page sharing services...

Just launched:
http://techcrunch.com/2011/04/21/page-sharing-service-bo-lt-...

44
jwr 5 days ago 2 replies      
It's really mostly EBS failures, so the title is overly dramatic. And EBS has been known to have issues.
45
Jun8 5 days ago 0 replies      
This is like witnessing your parents having sex as a kid: you sort of knew this was a possibility, but it is a devastating blow to your belief system nevertheless.

The amount of services I use that depends on Amazon is amazing. They have really become the utility company of the Web.

6
Dear Dr. Stallman: An Open Letter alexeymk.com
485 points by AlexeyMK  3 days ago   199 comments top 39
1
emilsedgh 3 days ago  replies      
A few points I would like to remind everyone who criticizes rms about.

1) rms is a radical guy. You cannot change that. He fights for what he thinks is right. He is not the kind of person you can ask to censor himself.

If he thinks the U.S. government is to blame for 9/11, no matter how childish saying it in a lecture seems, he will say it.

If you invite rms for a lecture, he is coming with his radicalism. That is to be expected. You cannot invite rms and expect steve jobs.

2) rms is a practical guy. Stop acting like he's a madman who knows nothing. He started GNU and wrote Emacs, glibc, gcc, and probably others. He created the concept of free software and wrote a license as good as the GPL to defend it.

He also managed to gather a community around this very crazy idea of free software.

3) rms doesn't want people only to use free software. He wants people to value their freedom and, as a result of that, use free software.

It doesn't really matter if the whole world uses Android instead of iOS. The point is that, these days, most people involved in the open source community do not even care about free software and the freedom it offers.

Most people are interested in technological advancements or the effects of an open source project on the market. Neither of these is a concern for rms.

And what I said above is just what I interpreted from his actions; these are not facts.

2
danieldk 3 days ago 5 replies      
Years back, I used to be an FSF member. Not that I liked the GPL much (in fact, I mostly use the Apache License), but they raised important issues, and had a track record of investing in fine software (GNU) that I benefited from a lot.

However, their campaigns were getting so off-target that much of my sympathy dwindled, and I ended my membership. Childish 'anti' advertising such as 'BadVista', and DDoSing Apple's genius bars (gee, that will convince anyone who was visiting an Apple Store), only made the whole free software movement look bad, childish, and unsocial. To this day, they seem to put their energy into almost hilarious campaigns (Windows 7 Sins? Seriously?).

This open letter is on the mark, their current course only marginalizes the FSF and part of the FLOSS community. Whatever happened to relying on your own strengths, rather than caricaturizing the competition?

3
mbateman 3 days ago 3 replies      
Just a quick thought from skimming this thread: It seems like there are two issues, radicalism and eccentricity.

Saying that the government may have caused 9/11 is eccentric. It's crazy and "radical" in an uninteresting way, and most intelligent people will ignore it.

But the idea that one shouldn't use Google docs if one values freedom is radical. I think RMS is completely and totally wrong, but the radicalism or apparent impracticality of the idea is not what I object to.

The cheesy campaigns and slogans have elements of both. They are radical ideas presented in an eccentric way. I think the 7 sins stuff, Swindle, etc., are stupid and childish and really have no upside.

But contrary to what the OP seems to suggest, while many people are turned off by radicalism, radicals are influential way out of proportion to what one would be led to expect by making a quick survey of people's negative reactions to them.

It can be hard to separate what's radical and what's merely eccentric. Especially if you're the one trying to figure out how to present radical ideas.

4
andywood 3 days ago 3 replies      
I dislike the idea that every single leader in the world must only conduct themselves according to Dale Carnegie, as it were. There is an over-abundance of people doing just that, and judging from this post and many of the comments, a lot of people seem to want others to conform to that sort of uniform "persuasive" behavior. I'm not saying it isn't effective, but surely not everybody needs to do that. Isn't there room in this big world for a few genuine personalities?
5
michaelpinto 3 days ago 1 reply      
If Stallman wasn't a crazy hippy he wouldn't have been into this cause years before even the first dot.com boom. It's unfair for someone to insist that a visionary go corporate all these years later because you feel uncomfortable. If you want to be the next generation's spokesman then become that, but don't waste time trying to make a zebra shed his stripes.
6
rjbond3rd 3 days ago 1 reply      
This man is arguably the greatest hacker of all time. He's hacking the culture, he's been incredibly successful.

It's shocking and irresponsible that people are commenting on "what rms says" based only on hearsay, speculation and mis-quotes. At least take the time to Google before condemning the man for things he never said.

7
sliverstorm 3 days ago 1 reply      
On the flipside, if Stallman was less crazy I wouldn't have my second-favorite comic strip of all time:

http://xkcd.com/225/

8
jrockway 3 days ago 2 replies      
I don't think this guy gets it. Clearly, he has drawn the line at "proprietary software is fine, as long as it's useful to me". To RMS, though, that's not where the line is: he simply refuses to use software he can't tweak or audit. That's not like calling Obama Hitler or saying global warming is a scientific fraud. It's just an ideology, like not driving a car or only eating foods that don't come from animals. Nothing wrong with that, so why all the hate?

This article is sillier than calling the kindle "swindle".

9
elwin 3 days ago 0 replies      
I hear this opinion a lot, and I think it slightly misses the point. The open-source world already has plenty of socially conventional advocates promoting their products. If the FSF became an ordinary open-source software promoter, it wouldn't have nearly as much influence as, say, the Ubuntu marketing team.

But there aren't many organizations trying to derive software principles from objective logic instead of subjective cost-benefit analysis, who insist that freedom and controlling your own computing is not just another feature but a vital issue. RMS may not convert many Windows users, but he does come up with valuable insights. If no one else is going to be a vocal, uncompromising advocate for software users, I can cringe through Windows 7 Sins and jokes about letting presidents drown.

10
omouse 2 days ago 0 replies      
Also, Stallman has been taking baby steps for a very very long time. He didn't start with a completely free software machine at all. He used a proprietary compiler at first, a proprietary editor, etc. in order to build everything up.

In today's world, it is possible to do all of your work using free software. It may not be Google Docs, but LibreOffice is pretty awesome and you can just use Empathy or Pidgin IM to talk to people over an XMPP network.

At some point you have to pry yourself away from proprietary software and the sooner you do it, the less likely you are to cave in and use more proprietary software.

Stallman has been repeating the same ideas for years. It's hard to sound persuasive or charismatic when you're not trying to be a leader and when you've been saying the same damned thing over and over and having to adapt it to the realities of today (which could have been avoided if only more people had listened in the first place...)

11
mark_l_watson 3 days ago 1 reply      
I don't think that Stallman should tone down his message. Sure, he can be rough, but so what. (I've experienced this in email with him when he asked about re-releasing some of my early Lisp books under the FSF doc license, but that is OK.) The world needs people with strong contrarian opinions and even if I don't always agree I value what they say.

Way off topic, but: I can imagine a future world where there is an underground using free software, private but linked ad-hoc networks, etc. The victories of the super rich over the rest of us in the last decade actually have me looking at fiction like the world in Gibson's 'Johnny Mnemonic' as a real possibility for the future.

12
leoc 3 days ago 1 reply      
Eben Moglen http://emoglen.law.columbia.edu/ is a more winning spokesman for the FSF these days. I hope he'll forgive me for mentioning that there's plenty of him on YouTube http://www.youtube.com/results?search_query=eben+moglen :)
13
danenania 2 days ago 1 reply      
Ignoring the particulars of the issue, I think it's sad that non-mainstream views on 9/11 automatically make someone a valid target for ad hominem attacks, even in educated and intellectual circles. This form of social exclusion is essentially nationalistic and a dangerous trend.

While conspiracy fetishism is unfortunate, it's good to have people questioning the official line, and it isn't as if all the facts around 9/11 are clear cut--yes, there is much unsupported speculation that can be thrown out immediately, but simply positing government involvement on some level is not a priori outlandish.

I don't think we should expect anything different from Stallman, or any radical thinker. Controversy is part of the job.

14
bad_user 2 days ago 0 replies      

    today's proprietary stuff isn't marijuana;
    it's heroin, and it's really, really good

I beg to differ -- today's proprietary software is exactly like marijuana.

When it comes to freedom, there is no black & white classification, only shades of gray. And the whitest of them all are the BSD-like licenses, which are frowned upon by the FSF.

And the reason I think proprietary software is like marijuana is that you CAN be careful when using it: you can take it in small doses where it makes sense, and you only have to use it responsibly to avoid ending up hooked, with your freedoms lost.

And of course, for some people marijuana usage turns into heroin -- but everybody fears heroin, and heroin comes with a hefty price for the junkies, which isn't that good in the eyes of consumers (the best things in life tend to be cheap ;))

Also, leaving this analogy behind -- let's take as example GIMP.

Gimp is awesome for what it does and I actually think its grotesque interface is the reason why I ended up working with masks, really grokking effects like smart-sharpening.

But if you're deep into photography, using Gimp is unacceptable. First, it has serious limitations like 8 bits per color (which means you're losing color info when importing from RAW formats or when composing layers -- and as a practical consequence, correcting over/under-exposed photos becomes a nightmare). CMYK support is hackish at best, and photographers (professional or passionate amateurs) do print their photos (whether for selling or for exhibits). Then, while the UI forces you to learn about the inner workings of digital image processing, it becomes a pain in the ass to quickly retouch hundreds of photos (unless you can do them as a batch with some scripting -- but since every photo is unique, no, you can't).

So comparing a product like Adobe Photoshop, which provides real/measurable value to photographers, with a drug that you can get rid of -- that's stretching it a lot. I also view a product like Photoshop as something that gives you more freedom for expression -- also saving you bucks which you'd spend otherwise on extremely high-priced gear.

15
lell 3 days ago 1 reply      
Proponents of the FSF desire the hegemony of free software with the same uncompromising fervor as revolutionaries in Russia desired communism in 1917. And the analogy does not end there. In many ways, free software is a communist's dream.

Marx hoped that technology would make human labour redundant through mechanisation of production, allowing humans to spend all their time doing r&d (or r&r) -- he hoped that stuff like food would become free to create. With software, it's already possible to make this a reality, as programs can be replicated without cost. All other things being equal, this should lead to great benefits to society, a prospect that has attracted RMS and others.

That being said, it is as useless to ask RMS to compromise as it would have been to ask those revolutionaries in Russia in 1917.

Furthermore, asking them to take baby steps is condescending, and they will ignore this advice. The reason is that their motivation differs from the majority of the hackernews readership. Sure, if they took baby steps and focused on PR and focused their agenda, free software might become more mainstream, and many entrepreneurs and small companies would benefit. But they don't want entrepreneurs and small companies to benefit, esp. if it means making these compromises.

Essentially, this is why I find articles like this condescending. The point of FSF is to improve society by advocating universal adoption of free software. Entrepreneurs indirectly benefit from these endeavors. Entrepreneurs then complain that the FSF could be more effective if they compromised their platforms. But this is sort of disingenuous because it's essentially the entrepreneurs telling the FSF to redirect effort that would benefit all of society to effort that would benefit the entrepreneurs. Granted, the former efforts are harder than the latter, but it is no one's place to tell the FSF how to direct their charity and advocacy, especially not someone who stands to gain from the reallocation that they themselves suggest.

16
Typhon 3 days ago 1 reply      
Can somebody tell me when exactly Stallman said that someone who used proprietary software was a hater of freedom?
In the last interview I read, he seemed able to understand that almost nobody would go as far as him on the side of software freedom.

http://www.networkworld.com/news/2011/031411-richard-stallma...

17
3dFlatLander 2 days ago 1 reply      
When I first started to become computer savvy, Stallman had already moved into the activist stage, and further away from programming. I had always heard that he was a great programmer. But, I've never actually seen any code he's written. The earliest software versions I can find on gnu.org's FTP are from 1994--I'm guessing most of the projects had multiple contributors by this time.

Anyone happen to know where some pure Stallman code can be found?

18
AlexeyMK 1 day ago 1 reply      
I definitely hadn't expected as much feedback and discussion as the post got; thanks, everybody! In case you're curious, I got an email back from RMS: http://alexeymk.com/dear-dr-stallman-the-aftermath
19
st3fan 3 days ago 1 reply      
When you talk about the risk of software as a service, you can mention that the US gov't is attempting to collect identifying user data from the Wikileaks Twitter account, or the recent domain name seizures of PokerStars and other online gambling websites.

These are practical consequences of a lack of Free Software

Huh Wut!?

How is free or open software going to prevent any company from receiving a court order to disclose data about its users?

This has nothing to do with technical implementation of a service.

20
gnufs 2 days ago 0 replies      
Relevantly, FSF's new executive director is asking for feedback, and specifically criticism:
http://lists.fsf.org/archive/html/info-member/2011-04/msg000...
21
msutherl 2 days ago 1 reply      
What bothers me about Dr. Stallman is that he is concerned with particular freedoms, such as the freedom to tinker with software, and brands this as the freedom. This is ideology. For me the most important freedom is the freedom to get my work done with the best available tools. Often the best available tools are not "free" software, especially if you do anything other than web or systems programming.

I actually prefer to see him marginalize his views -- it helps intelligent people who are not interested in buying into other people's ideology realize that he is an egomaniacal cult leader without needing to waste time considering the soundness of his ideas.

I prefer the "open source" guys.

22
omouse 2 days ago 0 replies      
Saying "Big Brother" is simply telling the truth. When the US government and other governments feel it's okay to illegally wiretap people and the TSA subjects you to full body searches and cellphones make you easy to locate and the US government uses drones to kill people in other and the NSA wants backdoors into encryption schemes and ways to break them, I think it's safe to call them Big Brother.
23
angus77 3 days ago 1 reply      
I pretty much agreed with everything except the ridiculous idea that Stallman should try out Google Docs so he could see how "good" it is.
24
jberryman 3 days ago 0 replies      
I really appreciated the tone of this piece. Respectful, well written and convincing.
25
derrida 2 days ago 1 reply      
I've always found a connection between brilliance and "yelling at cars on the side of the road." Take the good with the bad.
26
gsf 2 days ago 0 replies      
I wonder how many 24-year-old CS students have given this same advice to Stallman in the last 25 years. Not that Alexey shouldn't voice his thoughts, but it's well-trod ground.
27
cgray4 3 days ago 1 reply      
I really don't think the signs that are used to illustrate this article are comparable. The Kindle/Swindle sign isn't making up a new name for the Kindle. It is saying that this thing in the sign is a swindle. You shouldn't buy it because it makes false promises. If a person made up a sign with a bottle of Coke and put "Tastes Great" beneath it, that person wouldn't be calling Coke "Tastes Great".

Sure, it's negative advertising, but that doesn't put it on the level of Lyndon Larouche advertising. It might be on the level of the people who called Microsoft M$ on Slashdot 15 years ago, but I don't really think it is. I didn't see the talk, so I don't know if he called it a swindle during the talk but if he did, then I would put the remark in the latter category.

I'm even less sure what the objection to the other sign is. Is it the word "sins"? They want you to go to their website to see the things that they don't like about Windows 7. Mainly, I would guess, in the way that it restricts your freedom. What is a short word that is less incendiary that means things-I-don't-like-about-a-thing-that-restricts-my-freedom?

Finally, "baby steps"? In this day and age? I've used almost exclusively free software for over ten years. It's really not that hard. I prefer it. So start using free software or don't. I don't care. But don't pretend it's a big hassle that someone told you that you should.

(To be clear, I'm not a total apologist for RMS. He has said some distasteful things about women and from what I hear his hygiene isn't the greatest either.)

28
originalgeek 2 days ago 0 replies      
It's too late for RMS. Once a pickle, never a cucumber.
29
smellyboy 3 days ago 0 replies      
Whilst I'm against negative campaigning, rms has been and still is the conscience of free software. We would be in a very bad place if not for him. Yeah, sometimes he's a dick, but then we all are.
30
autarch 3 days ago 0 replies      
I think Stallman needs to read this book - http://www.amazon.com/gp/product/159056233X

Activism doesn't need to be mysterious, there's lots of psychological research that you can look to when you ask "how can I convince people to {go vegan, support software freedom, support gay rights}?"

31
cosmok 3 days ago 0 replies      
It is almost impossible for me to be like Stallman and shun a lot of Hardware/Software. But I do not wish for Stallman to be any less radical than he is: by being radical he gains my attention, some of his thoughts and ideas stick with me, and it has made me think about 'freedom' while buying any piece of Hardware/Software.

I would never want to work with him on anything - I watched him tear apart people while responding to their concerns - but people like him are essential to the Free Software movement.

32
pgbovine 3 days ago 1 reply      
minor nit: i don't think rms is a "Dr.", since he didn't get a Ph.D. (unless he has a secret M.D.)

From wikipedia:
"Stallman then enrolled as a graduate student in physics at MIT, but abandoned his graduate studies while remaining a programmer at the MIT AI Laboratory. Stallman abandoned his pursuit of a doctorate in physics in favor of programming."

perhaps he has an honorary doctorate?

33
FrojoS 2 days ago 0 replies      
I respect RMS a lot, too, but I have to agree with this piece. He clearly tries to be a PR/sales person first of all: an advocate for free software in all industries and all other parts of society. Why, then, doesn't he play by the rules of society where it makes obvious sense? As much as I find his stubbornness personally sympathetic, I doubt he is doing the FSF a favor at this point.

As an example, I know from the people at MIT OpenCourseWare that even though they share a very similar view on free information, and RMS can show great achievements, it was really hard to use RMS as an effective advocate for OCW. The key audience was simply turned off by his appearance, as smug as that might sound.

In reality, you can hardly put people into sales when they refuse to wear a tie every so often.

34
Tichy 3 days ago 1 reply      
Damn you, Photoshop (presumably the only piece of closed software some people just can't do without).
35
6ren 3 days ago 1 reply      
> When we asked, you mentioned that you do not write much code anymore.

This may be partly why he seems out of touch with programmers.

36
flocial 2 days ago 0 replies      
Isn't marginalization the whole point of the FSF? They want to attract people committed to free software, not appeal to popular culture. Dressing RMS up and giving him public speaking lessons will not change the ideology. His image as it stands is consistent with that ideology -- the complete opposite of Steve Jobs.
37
kraemate 2 days ago 0 replies      
I had submitted something about Stallman and his genius some days back: [http://news.ycombinator.com/item?id=2471620]
38
ralfd 2 days ago 0 replies      
Sorry, I can't read something about RMS without remembering his horrific foot incident:

http://www.youtube.com/watch?v=I25UeVXrEHQ

39
poink 2 days ago 0 replies      
I think RMS's most likely response to this article would be, "If I hadn't been so crazy, you wouldn't have written about my talk at all."

If I were in his shoes, I'd be crazy too.

8
Building a Web Application that makes $500 a Month - Part I tbbuck.com
408 points by mootothemax  5 days ago   115 comments top 25
1
mootothemax 5 days ago 6 replies      
Hi everyone, blog post author here. Over the last 18 months I've learned an incredible amount from the users here on Hacker News, and I figured that it's about time I started giving something back.

None of my web apps make serious money, but they bring in enough to make a difference to my life, and I'm hoping that my articles will help others get on the same path - if not somewhat more successfully! ;)

Please feel free to ask me any questions. I'm aiming to have the next part written and live by early next week.

2
shazow 5 days ago 2 replies      
I love the tone of your writing (both in the post and on TweetingMachine), it's modest yet confident.

Some feedback:

* [Edit: Looks like it's right there in the center and I'm just blind. :)] I couldn't find the pricing after the 10 day trial anywhere. This frustrated me and made me not want to try it at all without knowing what I was getting into.

* Twitter frowns upon auto-follow/unfollow services. You should be very careful with this. I recently had to make changes to my own service because the Twitter API policy team didn't like me telling people who unfollowed them.

Great work, I look forward to the second part!

3
MicahWedemeyer 5 days ago 0 replies      
Wow, it's like reading my own history, especially the beginning: knowing that gazillions of people will be clamoring for what you build. It's tough when you find that 99% of reactions are "meh..."
4
revorad 5 days ago 1 reply      
Hey I remember when you launched! I even gave you some "business" advice - http://news.ycombinator.com/item?id=1166641.

Well done, waiting anxiously for the rest of your story (you drama queen :-P).

5
zacharyz 5 days ago 0 replies      
I absolutely love reading articles like this that are grounded in reality, where the point of the app is to actually make money. The fact that you are making any money at all lends weight to what you have to say, and all of us can learn from it. I think most entrepreneurs will agree that the first sale is the hardest part. I look forward to your next article!
6
nikcub 4 days ago 1 reply      
Get around the blogger problem by generating a referral token for each blogger to use in linking to your site, and show them the token/link in their account page.

Make the rules so that a blog post referral only requires a single click - that way you can verify that the blog post is from who they say they are

Once you have that infrastructure, you can use it as a more generalized referral system for people who want to tweet out a link to your product, etc. For example, every tweet-referral signup gets you a free month.

There are hordes of freelancers out there who do nothing but affiliate marketing. For those types, you want to send them 30-50% of revenue. There are entire products to do this for you - cj.com et al. Their referrals made up over 50% of new users on a product I worked on previously.
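
A minimal sketch of the token scheme in Python (names and endpoint are hypothetical, and the HMAC-secret approach is my assumption, not necessarily how any production system does it):

  import hashlib
  import hmac

  SECRET = b"keep-this-out-of-source-control"  # assumed per-site secret

  def referral_token(blogger_id):
      # Stable, unguessable token derived from the blogger's id
      mac = hmac.new(SECRET, blogger_id.encode(), hashlib.sha256)
      return mac.hexdigest()[:12]  # short enough for a clean URL

  def referral_link(blogger_id):
      # Assumes the site reads ?ref= and drops a cookie on the first click
      return "http://example.com/?ref=" + referral_token(blogger_id)

  def verify(blogger_id, token):
      # Constant-time compare so tokens can't be guessed byte-by-byte
      return hmac.compare_digest(referral_token(blogger_id), token)

Credit the signup to whichever token is in the cookie at purchase time, and the blogger gets paid even when the click and the signup happen days apart.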

7
djb_hackernews 5 days ago 1 reply      
It looks like what you are getting at is that the design is important. I don't think your original design is a bad starting point, much better than any designs I've come up with for my projects.

It'd be nice if you got into the details of integrating a ThemeForest theme with an existing code base. A reason I've avoided buying a theme is that I don't know what I'd do with it once I had it.

8
phlux 4 days ago 3 replies      
Ok, maybe I'll be the lone dissenter - and a torn one at that. I really think this is a great article and that this is information that should certainly be shared...

I have one nitpick with what the author states that bothers me greatly.

Bear with this statement - I am not against what he did, but this is the biggest worry in hiring any outside dev to do any work for you, and thus it poses a clear and real issue for the market and industry overall:

The only thing that I feel is a little weird is that we have a developer who says he saw a bunch of requests for an application that does X.

He then turns around and builds the application to do X himself.

This in and of itself is fine -- but it will certainly wind up pissing people off/alienating people entirely.

The point is that if the valley mantra is "ideas are worthless, only execution matters" -- then it makes all devs look like they are just sitting around waiting to leech off others' ideas and execute on them themselves for personal profit.

I don't think there is a clear win-win situation here. I just want to point out that, the way this article is written, it seems to prove out the worry of the non-dev side of the world: "What is the risk of trying to recruit a developer for my idea, only to have them turn around and build it for themselves?" (MZ comes to mind.)

This means that ideas ARE NOT worthless; they do have value. Sure, only if you can manage to execute on them - but if ideas were so worthless in and of themselves, why aren't the millions of developers constantly outdoing themselves with utterly amazing works? They aren't. It takes a great idea AND great execution to matter.

9
karterk 5 days ago 1 reply      
Thanks for posting this. What do you use for handling the recurring billing?
10
bane 5 days ago 1 reply      
Thanks for writing this up! We're currently going through a similar experience. The good news is that we went from 1 signup a month to quite a few per day (with a few days at over 100 new signups) -- mostly by following things that we've learned here on HN.

A few weeks ago, my co-founder was getting pretty bummed out and was starting to look at throwing in the towel...then we managed to get covered in a few places simultaneously (finally, months of contacting the editors of various popular sites paying off) and we had more new traffic than we knew what to do with.

Exciting stuff!

11
taylorbuley 5 days ago 1 reply      
I'm happily surprised that the word "lifestyle" hasn't been dropped yet in this thread.
12
dpcan 5 days ago 4 replies      
I'm not sure why you are using prgmr over Linode, Slicehost, or the Rackspace Cloud servers. The others have really simple backup tools.
13
jamespacileo 5 days ago 2 replies      
Hey guys,

not sure if anyone has realized, but for SaaS projects the Extended version of the theme is required. So $1000 is what he should have paid for the admin panel...

I hope no one involved with ThemeForest finds out :)

14
dr_ 5 days ago 1 reply      
This is great, look forward to reading the second part.

At first, the price seems expensive, but when I think about it, it's really not. You are offering a lot of functionality when it comes to scheduling tweets and that's a feature people will find super useful (unfortunately a lot of those people may be spammers, but at the end of the day, as far as you're concerned, your business is your business and theirs is theirs).

I'm not a huge advocate of freemium and I would not suggest lowering the price; however, you may want to extend the free trial period a little longer.

15
davidmat 5 days ago 2 replies      
Hey everyone, I have a semi-related question (since it's briefly mentioned in the article): is it possible to make a decent salary exclusively by doing freelance work through vWorker (rentacoder) and the like?
16
josefresco 4 days ago 0 replies      
Love how he cites the crappy design as the main reason why the app sucks... then proceeds to put the minimum effort into design to address the problem. Give a designer a chance to work for you and you will end up with a result that is better but, more importantly, unique.
17
eggbrain 5 days ago 1 reply      
Great article, but I wish it hadn't been pulled apart into three separate posts.
18
moe 5 days ago 0 replies      
I like how you build the tension towards the cliffhanger. ;-)
19
delvan07 5 days ago 0 replies      
This is interesting. In the next piece, can I request you go into a bit more detail about how you increased traffic to your site... basically, what marketing did you do?
20
eurohacker 4 days ago 0 replies      
it would be good if you also describe

what kind of architecture you are using for the app and why - what frameworks and languages (Django, jQuery)

also what other tools you are using for development/testing and why (GitHub, Eclipse, Firebug, etc.)

21
sushumna 4 days ago 0 replies      
That's an inspiring story with informative tips. I'm on my way to building one small app, and I expect at least half of your earnings. That's also big for me... as pocket money. Waiting for your next part.
22
liamgooding 4 days ago 0 replies      
Thanks, an interesting read that puts a lot of the other "I'm so amazing because my rails app makes me and my uni buddies £50k a month" posts into perspective.

But I'd echo the comments above - it would be good to see the next posts go up soon, in particular what you actually started doing differently that saw revenue increase. Maybe others here will be able to build on that to give some advice to turn that $500 into $5k :)

23
csomar 5 days ago 0 replies      
Great read. However, I wanted to mention that this is actually a story and not a guide.
24
ramynassar 5 days ago 1 reply      
Will you continue this story?
25
myearwood 5 days ago 2 replies      
The fact that this article is so popular is sort of sad. That shows that a lot of people on HN are not making $500 a month from their web apps. We should have higher goals than this.
9
iPhones and 3G iPads log your location in an unencrypted file on the device oreilly.com
410 points by petewarden  6 days ago   189 comments top 37
1
allwein 6 days ago  replies      
So after doing a quick analysis of the data on my iPhone, I've come to the conclusion that this isn't a huge issue at all.

First, I'll start with the WiFi data (WifiLocation table):
Among the information captured is MAC, Timestamp, and Lat/Long. I have a total of 118,640 records in my table. I did a "SELECT DISTINCT MAC FROM WifiLocation", and got... 118,640 records. This tells me that it's not "tracking my every move" via Wifi location since there's a single entry for each MAC. The question might be, is it updating the Timestamp when I'm near a specific Wifi Network? My guess is no. I did the backup and analysis this morning, April 20th. Yet the last entries in my database are from April 16th. This tells me that it's not an always on tracker and that it's not updating timestamps.

Next, I looked at the CellLocation table:
The same thing held true with this table. The last entry on my phone was from April 16th. Also, I have 6300 entries in my CellLocation table. I decided to start restricting the precision of the Lat/Long to see if there were duplicates that would indicate "tracking". At 5 decimal points, there were no duplicates. At 4 decimals, there were a handful that had 2 dups. At 3 decimals, there were more dups, with the most being 6. At this point I still had 5672 uniques. At 2 decimals, the most had 89 and I had 2468 uniques. At 1 it really went down, obviously, and I was down to 253 uniques. The other thing I noticed was that there was no regular timing of entries, and that when there were entries, a large number of them had the same timestamp.

So based on my analysis, this isn't a feature that enables detailed tracking of a user. It will allow you to see if a user has been in a certain location the first time, but that's the extent of it. For instance, I could see that I made a trip to Washington DC in late October of last year. But you can't really tell my movements around my home town with any amount of precision. My assumption, like others, is that Apple is using this to enable easier use of Location based services. I assume (which I'm going to test), that whenever a user enables a Location Based app (Google Maps, FourSquare), iOS updates this database with all local cell towers/wifi locations and the Latitude/Longitude. The more comprehensive the local database is, the quicker/easier it is for Location Based Services to help pinpoint a users location. Instead of waiting for GPS to spin up and get a satellite lock, it will be able to get a more accurate lock off of cell tower/wifi triangulation.
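
For anyone who wants to repeat the analysis, this is roughly the sort of thing I ran (Python plus the sqlite3 module; MAC, Latitude and Longitude are the column names in my copy of consolidated.db, so double-check them against yours):

  import sqlite3

  db = sqlite3.connect("consolidated.db")  # pulled from an unencrypted backup

  # One row per MAC => a cache of seen networks, not a movement log
  total = db.execute("SELECT COUNT(*) FROM WifiLocation").fetchone()[0]
  macs = db.execute("SELECT COUNT(DISTINCT MAC) FROM WifiLocation").fetchone()[0]
  print(total, macs)  # 118640, 118640 on my phone

  # Bucket cell fixes by lat/long precision to look for repeat visits
  for places in (5, 4, 3, 2, 1):
      uniques = db.execute(
          "SELECT COUNT(*) FROM (SELECT DISTINCT ROUND(Latitude, ?), "
          "ROUND(Longitude, ?) FROM CellLocation)", (places, places)).fetchone()[0]
      print(places, uniques)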

2
runjake 6 days ago 5 replies      
I didn't know this was news. Other security researchers, law enforcement, and I have known about it for a while. I assisted in one court case where the data was used as evidence.

I suspect the slick-looking iPhoneTracker app finally made it interesting to the media.

Edit: There was a similar deal on iOS 3 but it seemed more like a bug, not a feature. Data would be purged at some unpredictable interval. I can't recall the file path and don't have an iOS 3 device handy.

3
petewarden 6 days ago 4 replies      
I'll be checking in here for technical questions. The github direct link is http://petewarden.github.com/iPhoneTracker/
4
ceejayoz 6 days ago 1 reply      
> We're not sure why Apple is gathering this data, but it's clearly intentional, as the database is being restored across backups, and even device migrations.

My understanding is that all data and files is persisted in that manner. Not sure why they're implying this file has been singled out.

5
desigooner 6 days ago 1 reply      
It might not be directly related, but there was a news story on CNET [1] yesterday about cops in Michigan using a device from Cellebrite to download information from the phones of people they stopped for violations, including contacts, phone logs, messages, photographs and location history.

Does Apple's decision to store such information on the phone unencrypted make it easier for such devices? The device claims to subvert phone passwords, though.

[1]http://news.cnet.com/8301-17938_105-20055431-1.html

6
awakeasleep 6 days ago 2 replies      
I wish this wasn't presented as sinister.

The fact is that phone companies store all that data for EVERY cell phone, and it's always available to government agencies and divorce attorneys after a subpoena.

All this does is raise the common man's awareness, and possibly provides an afternoon of fun looking at your travel history. If you want your iPhone data kept secret, iTunes prompts you to encrypt your backups when you first plug the phone in.

7
tomkinstinch 6 days ago 1 reply      
For those with jailbroken iPhones and SSH, the data can be accessed or copied directly. The information is stored in this file:
/private/var/root/Library/Caches/locationd/consolidated.db

The file can be viewed with any ol' SQLite browser, and the location information is stored in the "CellLocation" table.

After using an iPhone 4 since release day, I have ~1400 entries.
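
If you don't have a SQLite browser handy, a few lines of Python do the job once the file is copied off (the timestamps appear to be seconds since the 2001-01-01 Mac epoch -- that part is my assumption):

  import sqlite3
  from datetime import datetime, timedelta

  db = sqlite3.connect("consolidated.db")  # copied over via scp
  rows = db.execute("SELECT Timestamp, Latitude, Longitude "
                    "FROM CellLocation ORDER BY Timestamp").fetchall()
  print(len(rows), "entries")

  # Most recent fix, converted from the (assumed) Mac epoch
  ts, lat, lon = rows[-1]
  print("latest:", datetime(2001, 1, 1) + timedelta(seconds=ts), lat, lon)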

8
justsee 6 days ago 0 replies      
The same community that would generally react very negatively to reports of a company storing passwords unencrypted in a database seems to effortlessly explain away Apple's approach to storing a significant amount of personal tracking data unencrypted, not on one pretty inaccessible server but on multiple easily-accessible devices. Fascinating.
9
chadp 6 days ago 1 reply      
Someone should make an app for jailbroken phones to disable this location logging (or delete it regularly). Many would likely pay for it!
10
pieter 5 days ago 0 replies      
Of course, Apple would know your location most of the time anyway, whether or not this file exists. You send the ID's of cell towers and wifi points to Apple, which returns you the location of those points. Apple could always have been storing your location based on that interaction alone.

In fact, keeping a database like this could actually give Apple LESS information about your location, as you don't have to request a new location if you already have the info for all the nearby IDs in your database. I'm not sure if this actually happens though.

The same, of course, can be said for any Android device and Google's A-GPS database; you have no guarantees that Google isn't logging your location whenever you're using location services.

11
gpambrozio 6 days ago 0 replies      
Apple has been known to collect this information for a while now [1] but storing all this information in a database should not be required for this.

If you think about how much information you have on your phone, if somebody has access to it or to your backups, I think your location history is the least of your problems. But I do agree that it should not store this information, encrypted or not...

[1] http://news.cnet.com/8301-31021_3-20010948-260.html

12
cube13 6 days ago 1 reply      
Could this be related to the MobileMe "Find My iPhone" feature that Apple added in 4.0?

If so, this is probably a non-story. I'd be interested if it still logs if Location Services are off, too.

13
ck2 6 days ago 2 replies      
BTW, all cellular devices are recorded as they move through tower locations while they are on, and police don't feel they need a warrant for such data, so your location is pretty much available without that file.
14
yardie 6 days ago  replies      
I can sort of understand the outrage but I don't see the utility of it. Apps that are written for the App Store don't have access to this data without the permission of the user. And the only way an app would be allowed access to a file outside the sandbox is if the device is jailbroken.

I'm not familiar with the in and outs of iOS LocationManager but it generally gives you the immediate coordinates at the time you request and nothing more. As for why the database of locations? It's entirely possible they are using it for QoS.

As for access to device backups. If someone has unauthorized control of your desktop computer you have bigger problems.

15
mirkules 6 days ago 0 replies      
Funny, I had to go to a location without internet access, but where I periodically have to "mark" where I am so I can reference it later. I was about to write my own app for this purpose when I saw this post. To boot, I had my iPhone on me the last few days anyway, so this will definitely come in handy.

Despite the utility I got out of this, I wish we had been told about it...

16
tlear 6 days ago 1 reply      
This is perfect timing for promoting the PlayBook and BlackBerry security. I am sure RIM will miss the opportunity though.
17
zenocon 6 days ago 1 reply      
About 6 months ago, I left an ipad on a plane. Unsurprisingly, all my attempts to recover it led to dead ends. I didn't have the mobileme / findmyiphone app installed on it. I understand privacy concerns, but I'd actually like it if Apple did have a copy of this db, and they allowed me to proxy through them / law enforcement so that I could locate this lost device. I know someone has it b/c I can see they were using my Netflix account.
18
edw 6 days ago 5 replies      
Does no one else agree with me that this is awesome? I love being able to visualize my comings and goings. It's the story of the last year or so of my life, in colored dots.

I hope Apple doesn't respond to the "outrage" by no longer collecting this data. To a first order approximation, I am with Scott McNealy over in the "Privacy?! Get over it" camp:

http://www.wired.com/politics/law/news/1999/01/17538

As an aside, can real outrage even exist anymore in this age of the easy forum post or re-tweet or tumblr entry or Facebook post? And if it does, how do you identify it? And if you can identify it, what does it mean?

19
aj700 6 days ago 0 replies      
Okay, but do the devices do this if 'Location Services' are turned off?

And I assume Cydia will now get an app that forces them off if the OS ignores the setting.

20
serialx 6 days ago 1 reply      
Created a GPX file generator. Use it to convert the database into a GPX file format. Open it up with Google Earth.

https://github.com/serialx/iphonegpx
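
The conversion itself is small if you'd rather not pull the repo -- a rough sketch, trimmed to the bare GPX tags Google Earth accepts (column names assumed to match the CellLocation table mentioned elsewhere in this thread):

  import sqlite3

  db = sqlite3.connect("consolidated.db")
  points = db.execute("SELECT Latitude, Longitude FROM CellLocation "
                      "WHERE Latitude != 0 AND Longitude != 0").fetchall()

  with open("locations.gpx", "w") as f:
      f.write('<?xml version="1.0"?>\n<gpx version="1.1" creator="sketch">\n')
      for lat, lon in points:
          f.write('  <wpt lat="%.6f" lon="%.6f"/>\n' % (lat, lon))
      f.write('</gpx>\n')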

21
pgio 6 days ago 0 replies      
This was noted last September by C. Vance here:

http://blog.csvance.com/?p=39

Good detail on how and why it is generated.

22
plainOldText 6 days ago 0 replies      
I can imagine a jealous spouse saying now to the other: "I love you so much, honey, and from now on I will do your iPhone backups. Just to make sure everything is safe for you." Then the jealous spouse downloads the iPhone tracker visualization tool: "So honey, where were you last night? Really? Don't you dare lie to me" :)
23
ljdk 6 days ago 0 replies      
In addition to cell tower and Wi-Fi hotspot locations, iTunes keeps a backup of all text messages and recent calls. A while ago I even made a small web app to chart it - http://datalysed.com/?p=130
24
acrum 6 days ago 3 replies      
The simple solution is to select "encrypt backups" in your iTunes options. If my computer or phone got stolen, I'd have more important things to worry about than whether the thief can find a list of locations I've been to. It's fun/interesting to see it mapped out though.
25
nicklovescode 6 days ago 0 replies      
Apple is simply building a mandatory Foursquare competitor; it's not a big deal, guys
26
xsmasher 6 days ago 1 reply      
I assume Apple collects this data to pass back to Skyhook so they can update their database of wifi-to-geolocation data. Must be nice to have millions of sensors roaming around collecting data for you.
27
jstn 6 days ago 1 reply      
Whether or not this is true, Apple should add something like File Vault to iOS. Encrypting your backups is redundant if you're already encrypting your whole home directory, but none of that matters if they have access to your unencrypted phone. Check out the police downloader devices the ACLU is investigating: http://www.aclumich.org/issues/privacy-and-technology/2011-0...
28
templaedhel 6 days ago 0 replies      
From what I understand, at least with Google, this data (the data sent anonymously) is used, among other things, for the Maps traffic feature. If a fair number of phones are traveling below the speed limit on a road, it can be assumed that the traffic is bad on that road. Not sure if the Apple data is used for that, or if they get the traffic data from Google, but it is one legitimate use.
29
kovar 6 days ago 0 replies      
Apple license agreement covering the collection of location data - http://pastebin.com/EdFJr6iU
30
Limes102 6 days ago 0 replies      
When I read this I simply had to try it out for myself and quickly plot the data. It's a nice reminder of the places I have been over the past year.

I don't mind that Apple have saved the information on the device, what I mind is that they haven't given us an option to clear the logs or to actually visualise the data directly from the phone.

31
sambeau 6 days ago 1 reply      
If you have a 3G device the cell towers already know this and the data is already tracked. So what is new here?
32
polar 5 days ago 0 replies      
Not news at all to someone in the digital forensic community: https://alexlevinson.wordpress.com/2011/04/21/3-major-issues...
33
dgulino 6 days ago 0 replies      
34
ramynassar 6 days ago 0 replies      
This has been happening for a long time, has it not?
35
jawngee 6 days ago 1 reply      
Jailbreak + cron + rm
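
Spelled out, using the path reported elsewhere in the thread (an untested sketch -- and locationd presumably just recreates the file, so this only trims the history):

  #!/usr/bin/env python
  # Assumed hourly crontab entry (cron installed via Cydia):
  #   0 * * * * /usr/bin/python /var/root/wipe_location_log.py
  import os

  CACHE = "/private/var/root/Library/Caches/locationd/consolidated.db"

  if os.path.exists(CACHE):
      os.remove(CACHE)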
36
BigZaphod 6 days ago 2 replies      
If the man really wants your location, he can just ask the phone company.
37
uptown 6 days ago 0 replies      
All of this from a device which prevents you from ever removing its battery.
10
Tell HN: Please bring back comment scores
380 points by Maro  12 hours ago   159 comments top 54
1
blhack 11 hours ago  replies      
People can be wrong about things, and comment scores are useful information that helps us know if they are.

"that is not how mysql works" with 2 Points
And
"that is not how mysql works" with 102 points
Are not the same piece of info.

I don't see any benefit to hiding this from people. It also helps newbies understand the customs here (they can get feedback on other people's comments)

edit now that I'm not on my phone:

When you google things, you probably skip over results like daniweb, about.com, or expertsexchange.com, and hope for a stackexchange page.

The little URL at the bottom of the description tells you "hey, this is from $foo source" and you're a smart human being that can put this information to good use.

Of course, you could make a case that this is bad, because you should read each one of the results and judge it based on its merits. Maybe we should even strip all of the identifying information away from the page, and just let it stand on its own (this would be a neat experiment, actually [and that's the experiment that I think we're performing here]).

The point counts on the comments act just like the URL does on google results. It's not saying "this is definitely 100% accurate", but it is useful piece of information that we can put to good use. Depriving us of this information doesn't break the comments, but I have certainly found myself reading comments a bit less lately as a result of it (instead of actually reading comments, I'm usually just skimming them now). With comment scores, things seem to have a bit of order to them, without, it just feels like a lot of people shouting at one another.

(Maybe this was the point?)

Naturally, I'm never going to stop reading HN; it is by far my favorite website on the internet. Complaining about the lack of comment points here is like complaining that my favorite bar switched to a new, very slightly different glass. I can see the difference, but it's not really going to change my habits.

2
tokenadult 10 hours ago 1 reply      
Since I registered a username here on HN 890 days ago, I've seen a lot of comments about comment karma and about upvoting and downvoting. The most significant statement I have seen about comment voting here on HN was posted recently by pg, the founder of HN, in a thread-opening post 22 days ago titled "Ask HN: How to stave off decline of HN?"

http://news.ycombinator.com/item?id=2403696

He wrote, "The problem has several components: comments that are (a) mean and/or (b) dumb that (c) get massively upvoted."

So the founder of HN thinks that before the recent experiment there was a comment voting problem: (a) mean comments were getting too many upvotes, and (b) dumb comments were getting too many upvotes, and (c) too many of the comments that got the most upvotes were either mean or dumb or both. Let's stop and think about what that means. That means that, according to pg posting as of that moment, comment karma scores were often NOT reliable signals of good comments, comments worth finding rapidly when skimming a thread.

With that condition of HN less than a month ago in mind, how do the highest-voted comments visible in the bestcomments list

http://news.ycombinator.com/bestcomments

look to all of you recently? Are there fewer mean comments than before? Are there fewer dumb comments than before? Are the comments that are "massively upvoted" since the experiment began mostly comments that are reasonably kind and well-informed, helpful comments on the whole? In most of the treads you visit, do helpful, thoughtful comments seem to rise to a position of prominence, while mean or dumb comments gray out?

A link and comment in another recent metadiscussion thread largely sums up the back-and-forth about visible comment scores as a signal on comments in active threads:

http://news.ycombinator.com/item?id=2465357

>> Please bring back the comment scores. It helps a lot in parsing the comments and assigning a proportional weight to each when reading them.

> I had to think about this a bit, and I disagree so far. I'm finding that I'm not pre-judging comments as much. It's nice to be able to read someone's comment without knowing first that 70 or 80 or 3 other people thought it was worthwhile.

My impression too is that even with comment scores not visible, it is still convenient to browse threads to find thoughtful, informative comments, but now there is less anchoring bias

http://www.sciencedaily.com/articles/a/anchoring.htm

of most votes on a comment converging to one score level that shows up early in a thread's development, and more engagement by readers of HN in actively reading comments and upvoting (or downvoting) based on each comment's characteristics in light of the context of the thread. So far I can still find good comments quite readily here on HN. Indeed, I think that since the experiment began I am seeing more good comments more readily than before.

The main motivation stated by pg for the current experiment with making comment karma scores less visible is to "stave off decline of HN," and that is what will decide if the experiment was successful. If the previous visibility of comment karma scores led too many casual readers of HN to upvote mean or dumb comments, and too few readers to upvote thoughtful, informative comments and to downvote mean or dumb comments, the arguments on the side of reader convenience aren't going to be convincing. It isn't convenient for ANY reader of HN if the comment scores are a poor signal, and if bad comments become more prominent and good comments get skimmed right over by readers in a hurry. If a change of rules here makes every reader read comments more carefully and more thoughtfully, and vote based on comment inherent quality rather than on crowd appeal, that is a feature rather than a bug. For comment scores to be a good guide to every reader here, every reader can help by actively upvoting informative, helpful comments, and also by downvoting comments that are either mean or dumb--and especially comments that are both. As I recall, the experiment has also involved some changes in the effects of flagging, so flagging inappropriate comments is also helpful.

After edit: many comments in this thread ask about the karma rules and voting rules imposed by the software. We can all read the news.arc software ourselves

https://github.com/nex3/arc/blob/master/news.arc

if we would like to see what the rules do (except I think that maybe a few aspects of the current experiment are hidden from the current distribution of the source code), as previous HN threads have pointed out.

http://news.ycombinator.com/item?id=1307128

http://news.ycombinator.com/item?id=2034449

3
awakeasleep 11 hours ago 4 replies      
I really like the lack of comment scores. Things are still sorted, so the cream floats to the top, and it made me realize I felt group-impulses based on the score.

Now, there is little to no incentive to one-up someone, and I don't consider people refuted based on their score, but rather based on what I think of their comment. That last part has nurtured my curiosity, I find myself exploring thoughts I didn't on the 'old' HN

4
jplewicke 12 hours ago 3 replies      
The only place I really miss having them is in older articles and on searchyc.com. I feel a much greater compulsion to engage each comment on its merit when voting and when reading, and I feel like reading HN has become more intellectually stimulating.

However, this is more of a problem when I'm trying to assess information in areas that I'm not already familiar with. If I'm searching for information on which DNS providers are best and I find an Ask HN from 4 months ago, I can no longer tell what the true community consensus on it is. I expect the current masking of comment scores will provide less biased voting, so I think displaying comment scores on stories that are more than two months old would eventually provide the best of both worlds.

5
lotusleaf1987 11 hours ago 1 reply      
I disagree. It forces people to read the comments and judge them on their own merit. Oftentimes the highest-voted comment seemed to be the highest-voted comment simply because it was the first comment and kept being upvoted for being upvoted by others.

Also, the ordering of the comments does the same thing as having the comment scores!

I do wish there was a way to still search for the highest rated comments on searchyc.com, but I still think it's a small sacrifice for an overall better community/environment. I have definitely seen less iOS/Android/Windows flaming, so I think the site is already benefitting from the changes.

6
losvedir 11 hours ago 0 replies      
Nah, I like not seeing the comment scores.

But today I did think of a different improvement I'd like to see, which for lack of better place to put it, will say here:

When I click on "reply" to a comment, it takes me to a page with just that comment and a text box. I'd like to see that comment's parents all the way to the OP, to give me some more context as I frame my reply.

I think it would improve discussion as you'll see the context in which the person you're replying to replied, and might interpret their words a little differently.

7
achompas 10 hours ago 0 replies      
I'm struggling to understand why you need comment scores to know what to think about something. I can make up my mind about a comment's quality using the info available to me right now.

Let's take the oft-repeated example in other comments: two comments on MySQL, one with a handful of points and the other with a lot of points. There are a number of indicators of comment quality:

>> comment sorting works very well

>> if a low-vote comment is controversial, you will surely find spirited discussion below it--the volume of discussion would be an indicator of community disagreement

>> hit up Stack Overflow and find out if MySQL works as stated in the low-vote comment or if it works like the high-vote comment describes.

The arguments for visible points boil down to "I don't know how to think about this statement, so I need external confirmation." Indicators of comment quality still exist, and karma gamesmanship looks like it has decreased a lot. Finally, you really want to avoid groupthink--hiding scores accomplishes that pretty well.

8
iamdave 12 hours ago 1 reply      
I found the comment scores a good motivation for thinking about what I'm typing before hitting that 'add comment' button. Good imperative to participate instead of troll.

The opposite is true for others, I'm sure.

9
hooande 11 hours ago 1 reply      
The most important aspect of the comment scores was that they let me know what the HN community thought of a particular point or argument. I'm capable of making up my own mind about any topic. I find it interesting and useful to see what other people think. As a geek I miss being able to see that "people agreed with this side of the argument at +20 as opposed to that side at +10".

If some people want to treat it as a "who can get more points" game, then so be it. I find that I can learn a lot from looking at which way public opinion is leaning.

10
iterationx 12 hours ago 3 replies      
I like the new system. It discourages winning an argument with numbers: "20 people upvoted the previous comment, so that guy must be right."
11
thought_alarm 10 hours ago 0 replies      
Removal of the comment scores is a great innovation that has improved the quality of Hacker News.

With scores visible, most "discussions" end up as little more than opinion polls.

12
siddhant 11 hours ago 2 replies      
Can't we have a "showcommentscores" option for displaying comment scores? Personally, I really (really) miss seeing comment scores, but it's apparent that there are a lot of people who like HN the other way.
13
msluyter 10 hours ago 0 replies      
FWIW, I find myself up/down-voting less without the scores. I guess I mostly tend to vote to rectify imbalances. If a comment has a lot of upvotes already, I probably won't upvote (I figure once it's near the top, it doesn't matter much anyway). But if its popularity seems unwarranted, I may be more likely to downvote.

Conversely, I tend to upvote mostly what appear to be underrated comments that are low on karma. Not saying there's anything admirable about this approach -- what am I, some kind of Karma Robin Hood? -- but the new system definitely discourages it.

14
RuadhanMc 11 hours ago 1 reply      
Without the comment scores we're confronted with a wall of text that is hard to filter. Should I have to read every single comment just to find the gems? Comments scores have their downsides but they make filtering out noise much easier.

Unfortunately I don't think everyone has time to judge each comment on its own merits -- there are simply too many comments -- so we need a little crowd-sourced ranking. It does lead to some groupthink at times but that's a (relatively) small price to pay.

15
thekevan 11 hours ago 0 replies      
I don't usually want to post, "me too" posts but...

I really miss the comment scores.

Sometimes I am not totally familiar with whatever the original post is talking about, often the top rated couple of comments give me some good insight or jumping off points to look into it further.

I respect the HN community and have learned a lot here. I generally trust their judgement and I have found if a comment is rated highly, it most likely adds a lot of value to the discussion.

Sometimes I disagree with the highest rated comment(s). I then see my opinion is in a minority and maybe I re-examine it or stand firm and make a comment to the contrary.

16
jongraehl 1 hour ago 0 replies      
Ignorance is bliss. I'm happier not knowing that the crowd disagrees with my judgment of a comment's worth.

I also feel like people are avoiding posting crappy comments with the intent of tapping into a popular vein for a high score.

This could be a placebo, or perhaps, if real, it's instead caused by an improvement in voting.

17
brk 10 hours ago 0 replies      
Another thing (and some might accuse me of "doing it wrong" this way) is that lack of score showing changes my motivation for voting overall.

Some comments are so awesome, they deserve 50+ upvotes. Other comments are pretty good, and deserve maybe 8. I do not/did not personally try to upvote every single comment. I try to add upvotes to the comments that seem "best" in a particular thread discussion, and allocate votes in this way.

Perhaps my behavior is something that pg was attempting to fix with this change, but I have a feeling I'm not alone in this regard.

18
jbail 11 hours ago 0 replies      
Seeing comment scores helps me to quickly scan comments to find the ones of most interest. It's not about "who's right" or "who's wrong" --- but what comments are more insightful and interesting. This is what the community of HN provides in voting people's comments up and down. This concept is missing now.

All in all, not showing comment points is a step backwards in helping people get the most out of the site in the most efficient way possible.

19
grandalf 10 hours ago 0 replies      
I think that people are missing the point. A comment's quality is not measurable by its score. If anything, that is a rough aggregation. By this logic, Ke$ha is a better musician than Joshua Bell.

Top comments on HN were becoming more "top 40" and something had to be done before people started posting links to Trollface, etc.

One approach would be to use category-based voting, which adds a lot of complexity.

One approach would be to implement some sort of vote weighting system (time based, reputation based, context based), but that's ad hoc and may not fix the problem.

And one approach is to simply hide numerical comment scores from all but each user's own comments. This turns the quest for high karma into a personal battle against one's self, not a sport.

PG wisely chose to make Karma a personal Everest for each individual to care about (or not).

20
dexen 11 hours ago 0 replies      
Please do.

But also, please split upvotes from downvotes. There's a huge difference between a +25 / -24 comment (apparently a controversial one) and a +1 / -0 comment (probably a mediocre one).

Or perhaps display only upvotes, and use some weighted form of (UPVOTE * U_WEIGHT - DOWNVOTE * D_WEIGHT) for positioning the comment among the others.
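
For example (weights invented purely for illustration):

  U_WEIGHT = 1.0
  D_WEIGHT = 1.5  # let a downvote count a bit more against position

  def position_score(upvotes, downvotes):
      # Used only to order comments; the displayed number could show upvotes alone
      return upvotes * U_WEIGHT - downvotes * D_WEIGHT

  # The controversial +25/-24 comment now sorts below the mediocre +1/-0 one
  print(position_score(25, 24), position_score(1, 0))  # -11.0 1.0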

21
brk 12 hours ago 0 replies      
One of the things I liked about the Slashdot comment system was that you assigned a rating "Insightful" "Funny" "Helpful", etc.
22
fr0sty 8 hours ago 0 replies      
No one on this thread has picked up on the 'range' suggestion by Maro yet, so I'll add a thought:

For my purposes, displaying actual scores within the range -4 to 10 would be sufficient. The low end is already capped, and the high end could either just have a ceiling of 10 or show a score of 10+.

I am occasionally a "someone is Wrong on the internet!"[1] type, and my inclination to wade in is directly proportional to the perceived traction of the inaccuracy. Without such a heuristic the choices are: reply to all, none, or a random sample, which result in "poor information", "needless pedantry", and "undefined behavior" respectively.

[1] http://xkcd.com/386/
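
The display rule itself is tiny either way -- a sketch:

  def display_score(score):
      # Clamp the visible score to the -4..10 band described above
      if score > 10:
          return "10+"
      return str(max(score, -4))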

23
huhtenberg 11 hours ago 0 replies      
Ah, no, don't. Stop fixating on the score and trying to write comments that other people like instead of writing what you actually have to say.
24
lwhi 10 hours ago 1 reply      
How about allocating a 'hotness' quotient to comments?

At the moment, a comment that's down-voted past zero becomes lighter. Perhaps very popular comments could be made more visible, or highlighted?

I think this kind of fuzzy indication of popularity might be a good compromise.

25
mcn 10 hours ago 0 replies      
Removing comment scores seems to have increased the amount of mediocre/poor comments around contested topics: I am noticing more sibling comments that basically reiterate each other, and more debates that veer to uncontrollable levels of indentation when the key points were already covered in the top-level post and first child.

The relative absence of these black holes of discussion is one of the things that brought me to HN in the first place, and I think that showing comment scores discouraged them on multiple levels. Public upvoting lets people express their view on the topic without posting points similar to those already expressed. When two comments have a lopsided point spread it lets one "side" of the debate feel more comfortable letting the other have the last word.

26
rexreed 10 hours ago 0 replies      
Disagree. I believe that I can make a fair judgement of the quality of a comment by simply reading it. I don't do TL;DR on comments that I care about, so using scores as a proxy for quality doesn't mean much for me.
27
joshfinnie 11 hours ago 0 replies      
The only issue I see with not being able to see the scores of comments is that joke or off-the-cuff comments are probably getting a lot more points.

If a comment made me laugh (while sticking to the point), I would be more likely to upvote it; but if it already had 10+ upvotes, a laugh on my part probably didn't justify another one.

28
uptown 11 hours ago 0 replies      
I'd prefer a system that reveals the score of comments you've already either replied to, or voted on. Gives you some kind of feedback on where the rest of the community's mind is with regard to that comment.
29
apl 11 hours ago 0 replies      
One observation: I think that comment ordering is an inadequate substitute for numerical scores. A lot of interesting information goes missing when reducing a scale from interval to ordinal.
30
ignifero 10 hours ago 0 replies      
When I am interested in the subject, I usually read ALL the comments. Having them in order of popularity helps, but does not really discourage me from reading on. Scores don't really matter.

There is a tendency for short comments to sink down, regardless of how informative they are, simply because people spend less time on them, so they're less likely to hit that upvote button.

Also, like all forums, the first upvoted comments get more replies, creating a positive feedback loop, not necessarily because they are the best, but because people know their replies will be more visible.

It would be interesting to see statistics on the number of upvotes vs. the position of the comment on the page.

31
jashmenn 11 hours ago 0 replies      
I really miss comment points on book recommendations. I can't tell you how many books I've purchased over the years based on a highly rated HN comment.

I second what others have mentioned that it would be good to re-display comment scores on older posts. This way we could at least see the community consensus after some time has passed.

32
ghotli 8 hours ago 0 replies      
I found it particularly hard to read the recent Amazon Outage thread. There was so much information to sift through. It had me missing the comment scores.
33
ck2 11 hours ago 0 replies      
Remove scores/points for people entirely.

That way only posts/comments get points/scores, not people.

34
bakhlawa 10 hours ago 0 replies      
I understand the minimalist theme at HN, but would a simple toggle switch to show/hide comment scores be a terribly bad idea? It could be set by logged-in users (it wouldn't apply to drive-by or anon users).
35
hanifvirani 10 hours ago 0 replies      
With the comment score not being displayed, I find myself commenting less often for some reason. Others have echoed a similar sentiment in some of the earlier threads.
36
kqueue 4 hours ago 0 replies      
It's interesting to see an 8-hour-old post with 374 points and 156 comments sitting on the second page instead of the first.
37
patrickk 11 hours ago 1 reply      
My initial reaction was also "just bring em back".

On reflection, I think a good idea might be to show the score after you vote.

This way you get the feeling of making some difference i.e. immediate feedback, but also the knowledge that your vote wasn't subconsciously affected by a visible score beforehand.

The main downside of this would be people voting out of curiosity to see what a comment's current score is. Perhaps displaying the score of a comment once it reaches a certain age (maybe three or four days old) would mitigate this.

38
pclark 11 hours ago 0 replies      
I have been quite surprised at how my enthusiasm for contributing to Hacker News has diminished at the removal of comment scores. Not necessarily a bad thing for anyone.
39
acrum 11 hours ago 1 reply      
I like the lack of comment scores (to avoid everyone piling on one comment), but I think I would like it more if I knew what went into how high a comment was on the page. Is it a fact that the first displayed comment will be the one with the highest score? I know some different inputs are used, such as the karma of the submitter, how new it is, etc. but I guess we don't know "for sure".

I don't think the solution is to bring back scores, though. A possible "simple" solution could be to color/star a comment above 50/100 points, etc. Comment scores could also be displayed as percentage or on a scale of 0 to 1, 0 to 10, etc. I'd be more likely to read a comment with a score of 95% than one with a score of 20%. This way you at least get an indication of the helpfulness of the comment other than just its position on the page.
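(A minimal sketch of the percentage idea; normalizing against the thread's top score is an assumption made here for illustration, not something specified in the comment.)

    # Display a comment's score as a percentage of the thread's best
    # comment. The normalization choice (thread maximum) is assumed.
    def percent_score(score, thread_max):
        if thread_max <= 0:
            return 0
        return max(0, min(100, round(100 * score / thread_max)))

    print(percent_score(19, 20))  # 95 -- likely worth reading
    print(percent_score(4, 20))   # 20 -- probably skippable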

40
elbenshira 11 hours ago 1 reply      
I wonder how Hacker Monthly (http://hackermonthly.com/) will pick out the "best" comments.
41
ambirex 9 hours ago 0 replies      
I would like it if the comment rating were only available in an HTML data attribute (e.g. data-rating="10"). That way my old user script would still be able to sort and highlight comments.

You would have to go out of your way to see the score, but it could still be used by those of us hackers who like to customize our experience.

42
Symmetry 10 hours ago 0 replies      
I wonder if showing the rating of a comment only after you had voted on it would work? That would prevent some level of groupthink by forcing people make their own evaluations before seeing what others thought. That would require a +0 vote option, though, to prevent some obvious failure modes.
43
iworkforthem 11 hours ago 0 replies      
By removing comment scores, does it increase/decrease traffic to HN?

My gut is telling me that traffic is likely to be lower. Reality might be different of course.

44
jsherry 11 hours ago 1 reply      
Hidden comment scores help us avoid groupthink.
45
known 11 hours ago 0 replies      
Previously, I used to read the comments first and then the article.
Now I'm reading the article first and ignoring the comments/vote.
46
sktrdie 7 hours ago 0 replies      
I'm not going to read all the comments when there are more than 50. Finding insightful comments is hard without any number next to them. But I understand that it might bring more karma to "stupid comments" instead of "really insightful comments"... but who cares; the insightful comment is still there, and probably going to get more karma than it would without any number next to it.
47
techtalsky 10 hours ago 0 replies      
I think a middle ground would be good. I understand the reasons for dropping comment scores, but it makes it harder for me to get a quality experience out of the site and easily find the information I need. The scores mean something to me.

I liked someone's suggestion of basing the comment score on "upvotes per view" so older comments don't dominate, and I also like the idea of using a dark-to-light gradient (dot) instead of a concrete number.

Just sorting good comments to the top (kinda) makes it hard to wade in, and makes me less likely to take a look at a topic I know little about and would like to see a couple of definitive words on. It may be groupthink to some extent, but this is a damn smart group.

48
citricsquid 11 hours ago 1 reply      
http://hackerne.ws/item?id=2477527

My comment here, explaining that it has made me comment less, has 40 points. I think a lot of people agree.

49
chanux 10 hours ago 0 replies      
Button to make comment scores visible, please.
50
oscardelben 10 hours ago 0 replies      
I would make it an option.
51
coffeedrinker 11 hours ago 0 replies      
Comment scores help me get to the best points (even if they are in disagreement) without spending a lot of time reading the whole page.

I'm reading a lot less now because there are no scores; I just skim the top and then move on.

Comment scoring allows the community to reveal quality.

52
AndyNemmity 7 hours ago 0 replies      
I much prefer it without. I like it like this.
53
sibsibsib 12 hours ago 0 replies      
I didn't even notice they were gone at first...
54
vipivip 12 hours ago 0 replies      
+
11
"... so now I will jiggle things randomly until they unbreak" is not acceptable' gmane.org
348 points by signa11  5 days ago   126 comments top 16
1
j_baker 5 days ago 5 replies      
I can't help being reminded of this hacker koan:

A novice was trying to fix a broken Lisp machine by turning the power off and on.

Knight, seeing what the student was doing, spoke sternly: "You cannot fix a machine by just power-cycling it with no understanding of what is going wrong."

Knight turned the machine off and on.
The machine worked.

2
coderdude 5 days ago  replies      
I always get a kick out of how Linus talks to people. It's so direct, and he never sugarcoats his arguments to make them easier to swallow. You could learn a lot about not bullshitting from that guy.
3
akent 5 days ago 0 replies      
It gets better later in the thread:

Yinghai, we have had this discussion before, and dammit, you need to understand the difference between "understanding the problem" and "put in random values until it works on one machine".

"There was absolutely _zero_ analysis done. You do not actually understand WHY the numbers matter. You just look at two random numbers, and one works, the other does not. That's not "analyzing". That's just "random number games".

4
gregschlom 5 days ago 6 replies      
Am I mistaken in thinking that in this case, it might also be a cultural/communication problem?

When Yinghai answers:

  We did do the analyzing, and only difference seems to be:
  good one is using 0x80000000
  and bad one is using 0xa0000000.

he clearly didn't understand what Linus meant by "think and analyze".

I don't know Chinese culture well enough (assuming Yinghai is from China), but I am under the impression that it emphasizes results (fix the problem) over process (understand why the fix works, in order to be sure nothing else breaks).

Am I wrong?

5
vog 5 days ago 3 replies      
Once again, a great Linus Torvalds statement! I especially like the last paragraph, which has so much truth in it, and can be applied to small as well as large software projects:

Don't just make random changes. There really are only two acceptable models of development: "think and analyze" or "years and years of testing on thousands of machines". Those two really do work.

6
7
smcl 5 days ago 2 replies      
I think Linus forgets that at one point he too was inexperienced and liable to make these hit-and-hope fixes. I agree with his point in general, but the dickish manner in which it's delivered isn't particularly helpful ("Why don't we write code that just works?")
8
kemiller 5 days ago 0 replies      
I was all set to defend the "jiggle randomly" school of development with something along the lines of "sounds like someone who has never had an external deadline to worry about" but then I got really sad, and didn't.
9
benwerd 5 days ago 0 replies      
Well, there goes my development methodology.
10
rams 5 days ago 1 reply      
Programming by coincidence, as the PragProg book calls it (someone has already posted the link). It's extremely common here in most Indian companies, especially with freshers.
11
lindvall 5 days ago 0 replies      
Another aspect of this thread that I find very refreshing is that the fact that the previous implementation may have used magic numbers doesn't reduce anyone's desire to actually understand what is going on going forward.

The realization that a magic number was already being used could have caused one of two outcomes:

a) justification for replacing one magic number with another

b) realization that more research needed to be done to understand exactly what was going on in the first place

I appreciate seeing (b) as the option chosen. We should all strive to be this diligent.

12
hobbes 5 days ago 3 replies      
Well, that approach worked fine for the evolution of complex life-forms.
13
mv1 5 days ago 0 replies      
I call this "poke it with a stick" debugging. Let's poke the code like this and see if it works now. This approach is so wrong yet so common it's infuriating.
14
ciupicri 5 days ago 0 replies      
That patch reminds me of a nouveau bug[1] I had a couple of weeks ago. According to one of the developers behind nouveau, it was caused by a new memory mapping/allocation scheme that broke things on systems with more than 4 GB of RAM. Some device memory (registers, etc.) was mapped above 4 GB, and some devices don't like this. So he built a new kernel that reverted the change, and the problem was "miraculously" fixed.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=689825

15
enjoy-your-stay 5 days ago 0 replies      
Linus seems to be (correctly) railing against what is classic cargo cult behaviour.

Making changes until something seems to "work".

16
aangjie 5 days ago 0 replies      
Reminds me of some of my last work experiences...
12
AWS is down, but here's why the sky is falling justinsb.posterous.com
341 points by justinsb  5 days ago   80 comments top 15
1
mdasen 5 days ago 6 replies      
Amazon has probably correctly designed core infrastructure so that these things shouldn't happen if you're in multiple Availability Zones. I'm guessing that means different power sources, backup generators, network hookups, etc. for the different Availability Zones. However, there's also the issue of Amazon's management software. In this case, it seems that some network issues triggered a huge reorganization of their EBS storage which would involve lots of transfer over the network of all that stored data, a lot more EBS hosts coming online and a stampede problem.

I've written vigorously (in previous comments) for using cloud servers like EC2 over dedicated hosting like SoftLayer. I'm less sure about that now. The issue is that EC2 is still beholden to the traditional points of failure (power, cooling, network issues). However, EC2 has the additional problem of Amazon's management software. I don't want to sound too down on Amazon's ability to make good software. However, Amazon's status site shows that EBS and EC2 also had issues on March 17th for about 2.5 hours each (at different times). Reddit has also just been experiencing trouble on EC2/EBS. I don't want this to sound like "Amazon is unreliable", but it does seem more hiccup-y.

The question I'm left with is what one is gaining from the management software Amazon is introducing. Well, one can launch a new box in minutes rather than a couple hours; one can dynamically expand a storage volume rather than dealing with the size of physical discs; one can template a server so that you don't have to set it up from scratch when you want a new one. But if you're a site with 5 boxes, would that give you much help? SoftLayer's pricing is competitive against EC2's 1-year reserved instances and SoftLayer throws in several TB of bandwidth and persistent storage. Even if you have to over-buy on storage because you can't just dynamically expand volumes, it's still competitively priced. If you're only running 5 boxes, the server templates aren't of that much help - and virtually none given that you're maybe running 3 app servers, and a replicated database over two boxes.

I'm still a huge fan of S3. Building a replicated storage system is a pain until you need to store huge volumes of assets. Likewise, if you need 50 boxes for 24 hours at a time, EC2 is awesome. I'm less smitten with it for general purpose web app hosting where the fancy footwork done to make it possible to launch 100 boxes for a short time doesn't really help you if you're looking to just have 5 instances keep running all the time.

Maybe it's just bad timing that I suggested we look at Amazon's new live streaming and a day later EC2 is suffering a half-day outage.

2
akashs 5 days ago 2 replies      
Amazon makes it pretty clear that Availability Zones within the same region can fail simultaneously. In fact, a Region being down is defined as multiple AZs within that Region being down, according to the SLA. And since that 99.95% promise applies to Regions and not AZs, multiple AZs within the same region being down will be fairly common.

Edit: One more point. In the SLA, you'll find the following: “Region Unavailable” and “Region Unavailability” means that more than one Availability Zone in which you are running an instance, within the same Region, is “Unavailable” to you. What this implies is that if you do not spread across multiple Availability Zones, you will have less than 99.95% uptime. So spreading across AZs should still reduce your downtime, just not beyond that 99.95%.

http://aws.amazon.com/ec2-sla/

3
justinsb 5 days ago 1 reply      
A quick tl;dr: Availability Zones within a Region are supposed to fail independently (until the entire Region fails catastrophically). Any sites designed to that 'contract' were broken by this morning's incident, because multiple AZs failed simultaneously.

I've seen a lot of misinformation about this, with people suggesting that the sites (reddit/foursquare/heroku/quora) are to blame. I believe that the sites were designed to AWS's contract/specs, and AWS broke that contract.

4
risotto 5 days ago 1 reply      
These outages are very rough. Clearly a lot of the Internet is building out on AWS, and not using multiple zones correctly in the first place. But AWS can have multi-zone problems too as we see here. Nobody is perfect.

But what people forget is: AWS has a world class team of engineers first fixing the problem, and second making sure it will never happen again. Same with Heroku, EngineYard, etc.

Host stuff on dedicated boxes racked up somewhere and you will not go down with everyone else. But my dedicated boxes on ServerBeach go down for the same reasons: hard drive failure, power outages, hurricanes, etc. And I don't have anyone to help me bring them back up, nor the interest or capacity to build out redundant services myself.

My Heroku apps are down, but I can rest easy knowing that they will bring them back up with out an action on my part.

The cloud might not be perfect but the baseline is already very good and should only get better. All without you changing your business applications. Economy of scale is what the cloud is about.

5
jpdoctor 5 days ago 5 replies      
Every time someone bitched at me for not having a "cloud-based strategy", I kept asking how many 9s of reliability they thought the cloud would deliver.

We're down to 3 nines so far. A few more hours to 2 nines.
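(For reference, the annual downtime each level of "nines" allows; a quick back-of-the-envelope calculation added here, not part of the original comment.)

    # Annual downtime budget implied by N "nines" of availability.
    for nines in (2, 3, 4):
        availability = 1 - 10 ** -nines
        hours_down = (1 - availability) * 365 * 24
        print(f"{availability:.2%} -> {hours_down:.1f} hours/year")
    # 99.00% -> 87.6 hours/year
    # 99.90% -> 8.8 hours/year
    # 99.99% -> 0.9 hours/year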

The cloud is not for all businesses.

6
weswinham 5 days ago 0 replies      
I'd say your choice between Quora's engineers being incompetent or AWS being dishonest/incompetent is a completely false dichotomy. Anyone who has been around AWS (or basically any technology) will agree that the things that can really hurt you are not always the things you considered in your design. I just can't believe that many of the people who grok the cloud were running production sites under the assumption that there was no cross-AZ risk. They use the same API endpoints, auth, etc so it's obvious they're integrated at some level.

Perhaps for Quora and the like, engineering for the amount of availability needed to withstand this kind of event was simply not cost effective, but I seriously doubt the possibility didn't occur to them. It's not even obvious to me that there are many people who did follow the contract you reference who had serious downtime. All of the cases I've read about so far have been architectures that were not robust to a single AZ failure.

As for multi-AZ RDS, it's synchronous MySQL replication on what smells like standard EC2 instances, probably backed by EBS. Our multi-AZ failover actually worked fine this morning, but I am curious how typical that was.

7
endergen 5 days ago 1 reply      
Read how @learnboost, who use AWS, were not affected by the AWS outages because of their architecture design:
http://blog.learnboost.com/blog/availability-redundancy-and-...
8
EGreg 5 days ago 1 reply      
This is again the problem with centralized vs. distributed services, not just Amazon's infrastructure.

http://myownstream.com/blog#2010-05-21 :)

9
grandalf 5 days ago 1 reply      
It's pretty wild that this stuff happens. Similar to today's nasty outage, Google has had some massive problems with its app engine datastore...

I'm curious if anyone has any predictions about what the landscape will be like in a few years? Will these be solved problems? Will cloud services lose favor? Will everything just be designed more conservatively? Will engineers finally learn to read the RTFSLA?

10
cafebabe 5 days ago 0 replies      
Relations. From the viewpoint of a non-cloud-user, this is a pretty normal situation. Systems fail. Maybe we should think about the cloud as a service that is managed somewhat differently (to enable easier access to our wallets and budgets) but eventually fails the same way standard services do. That's how I saw it when the first headlines about cloud services appeared in front of me a couple of years ago.
11
ww520 5 days ago 2 replies      
One data point. I have one of my clients' servers in the east-1d availability zone. East coast region, zone d. So far things are holding up, no crash or no slow down. Fingers crossed.
12
wslh 4 days ago 2 replies      
I use DreamHost and have never had a failure like the Amazon one.

It's ironic.

13
parfe 5 days ago 0 replies      
Reddit goes down when a butterfly in India flaps her wings.
14
KeyBoardG 5 days ago 0 replies      
The ending of this article came off as very slanderous rather than just a report of why the problem occurred. Keep it.
15
delvan07 5 days ago 1 reply      
Crazy how that crashed and brought down other sites like Reddit, Quora, etc.
13
My National Security Letter Gag Order (2007) washingtonpost.com
303 points by boredguy8  5 days ago   60 comments top 14
1
nbpoole 5 days ago 4 replies      
Since that editorial was published (back in 2007), the person who wrote it, Nicholas Merrill, has been "partially un-gagged": he is now able to talk publicly about portions of the case.

A followup Washington Post article: http://www.washingtonpost.com/wp-dyn/content/article/2010/08...

He also did an IAmA post on reddit, which has a lot of information: http://www.reddit.com/r/IAmA/comments/fjfby/iama_director_of...

(Since reddit is down right now, here's the cached Google version: http://webcache.googleusercontent.com/search?q=cache%3Ahttp%...)

---

Edit: Wanted to add a link to a later followup post he made on Reddit, talking about his plans to start a "Non-profit ISP and Teleco": http://www.reddit.com/r/reddit.com/comments/fkndx/update_nat...

(And the Google cached version: http://webcache.googleusercontent.com/search?q=cache%3Ahttp%...)

---

Edit: And in case people are curious about the actual court case: http://en.wikipedia.org/wiki/Doe_v._Ashcroft

2
ck2 5 days ago 1 reply      
Fun fact: under Obama, the rate of "national security letters" has only increased, as has the number of whistleblowers prosecuted.

Not saying he personally directed the FBI to increase their use, just saying it has increased and nothing has stopped it.

But he has personally sought to expand NSL powers.

some background:

http://www.eff.org/deeplinks/2009/10/obama-sides-republicans...

http://www.nytimes.com/2010/06/12/us/politics/12leak.html?_r...

http://www.salon.com/news/opinion/glenn_greenwald/2010/04/16...

http://www.webwire.com/ViewPressRel.asp?aId=135889

http://www.nonprofitquarterly.org/index.php?option=com_conte...

And Manning is in serious, serious trouble under Obama. I will be amazed if he gets only life, because they purposely just added an "aiding the enemy" charge, which carries a death sentence:

http://www.salon.com/news/wikileaks/?story=/opinion/greenwal...

http://www.cnn.com/2011/POLITICS/04/21/obama.interruption/

3
Cushman 5 days ago 1 reply      
Outrageous proposal time: NSL DDoS.

Let's say I own a business. Every week, I get two dozen letters purporting to be from the FBI requesting information on my customers. Some of the requests are clearly ridiculous; others might be genuine. If they are genuine, I'm forbidden from discussing them over the phone; the requests aren't a matter of public record, so I can't look them up; and I don't have a secure fax (I run an internet company). I could tell my lawyer about it, but he'd be subject to the same restrictions as me.

My only options are either to submit an individual request for verification for each letter by delivery service, or to comply with every request I receive, deluging the FBI with frivolous documents. Either way, with thousands of companies attempting to comply with dozens of such requests every week, the secret police system would quickly grind to a halt.

To be followed shortly by a lengthy prison sentence (if they're lucky) for anyone participating in the fabrication of government documents, of course. Still, it's a fascinating prospect.

4
Zak 5 days ago 1 reply      
I have to wonder whether a person is legally obligated under such an order to actively hide the existence of the NSL request. Does he really have to lie to his clients, friends and family when asked directly about it, or would "I can't answer that" satisfy the letter of the law while giving the asker a strong clue as to the answer to their question?
5
jdp23 5 days ago 0 replies      
Remember that several clauses of the PATRIOT Act will sunset unless they're renewed by the end of May. Once Congress returns (the week of May 2), expect floor fights in both the House and Senate.

It's a great opportunity to introduce reforms -- including to NSLs and gag orders. EFF has more at https://secure.eff.org/site/Advocacy?cmd=display&page=Us...

6
__david__ 5 days ago 0 replies      
Having never seen the contents of a national security letter, I wonder what the ramifications would be if you opened it and read it aloud for the first time in front of a large (or small) group of people. Or perhaps had it read aloud to you in front of a large group of people. Certainly you can't be expected to know that it is going to gag you until you've read it once, and by that time it would be too late.

Is it worded such that the whole group of people would be gagged? There's got to be some interesting way to circumvent it.

7
zacharypinter 5 days ago 0 replies      
Here's a video of a talk Nicholas Merril gave about the gag order:

http://www.youtube.com/watch?v=-6xsv4azzpc

8
imrehg 5 days ago 2 replies      
As a non-lawyer, I wonder: what would be the situation if this person were asked in court about some of their actions that are explained by the existence of the gag order? Would "the truth, the whole truth and nothing but the truth" override the gag order, or would they have to somehow withhold that information?
9
Volscio 5 days ago 0 replies      
Please date old articles in the subject line. i.e. "My National Security Letter Gag Order (2007)"
10
megamark16 5 days ago 0 replies      
I must be getting old. After reading this article I wrote an email to my representative. I'm pretty sure that's a checkbox on the form I fill out when I get a physical:

Have you ever sent a strongly worded letter to an elected official? []Yes []No

11
binarymax 5 days ago 0 replies      
A sign similar to this was proposed by librarians:

"The FBI has not served this library a national security letter. Please watch for removal of this sign."

12
jeffreyg 5 days ago 0 replies      
(2007)
13
shareme 5 days ago 0 replies      
A comparison: a person entering the US military and getting the lowest level of clearance faces less punishment if caught disclosing than these NSLs carry.
14
viggity 5 days ago 2 replies      
this is not hacker news.
14
Working with the Chaos Monkey codinghorror.com
292 points by CWIZO  1 day ago   40 comments top 14
1
kragen 1 day ago 0 replies      
> And that's why, even though it sounds crazy, the best way to avoid failure is to fail constantly.

This is my biggest concern with things like large nation-states, large banks, large reinsurance companies, large RAIDs, and large nuclear plants: we centralize resources into a larger resource pool in order to reduce the chances of failure, but in doing so we make the eventual failure more severe, and we reduce our experience in coping with it and our ability to estimate its probability. In fact, we may not even be reducing the chances of failure; we may just be fooling ourselves.

Consider the problem of replicating files around a network of servers. Perhaps you have a billion files and 200 single-disk servers with an MTBF of 10 years, and it takes you three days to replace a failed server.

One approach you can use is to pair up the servers into 100 mirrored pairs and put 10 million files on each pair. Now, about 20 servers will fail every year, leaving ten million files un-backed-up for three days. But the chance that the remaining server of that pair will fail during that time is 3/3650 = 0.08%. That will happen about once every 60 years, and so the expected lifetime of the average file on your system is about 6000 years.

So it's likely that your system will hum along for decades without any problems, giving you an enormous sense of confidence in its reliability. But if you divide the files that will be lost once every 60 years (ten million) by the 60 years, you get about 170 thousand files lost per year. The system is fooling you into thinking it's reliable.

Suppose, instead, that you replicate each file onto two servers, but those servers are chosen at random. (Without replacement.) When a server fails (remember, 20 times a year), there's about a one in six chance that another server will fail in the three days before it's replaced. When that happens, every three or four months, a random number of files will be lost --- about 10 million / 200, or about fifty thousand files, for a total data loss of about 170 thousand files a year. You will likely see this as a major problem, and you will undertake efforts to fix it, perhaps by storing each file on three or four servers instead of two.

This is despite the fact that this system loses data at the same average rate as the other one. In effect, instead of having 100 server pairs to store files on, you have 19,900 partition pairs, each partition consisting of 0.5% of a server. By making the independently failing unit much smaller, you've dramatically increased your visibility into its failure rate, and given yourself a lot of experience with coping with its failures.
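(The arithmetic in the two scenarios above checks out; here is a small script reproducing the figures from the comment.)

    # Reproduce the back-of-the-envelope numbers from the comment.
    failures_per_year = 200 / 10      # 200 servers, 10-year MTBF
    p_partner_dies = 3 / 3650         # second failure in a 3-day window

    # Mirrored pairs: the partner is one specific server.
    pair_events = failures_per_year * p_partner_dies
    print(1 / pair_events)                    # ~61 years between losses
    print(pair_events * 10_000_000)           # ~164k files lost per year

    # Random pairing: any of the 199 other servers may be a partner.
    rand_events = failures_per_year * 199 * p_partner_dies
    print(12 / rand_events)                   # a loss every ~3.7 months
    print(rand_events * 10_000_000 / 199)     # same ~164k files per year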

In this case, more or less by hypothesis, the failure rate is independent of the scale of the thing. That isn't generally the case. If we had a lot of half-megawatt nuclear reactors scattered around the landscape instead of a handful of ten-gigawatt reactors, it's likely that each reactor would receive a lot less human attention to keep it in good repair. When it threatened to melt down, there wouldn't be a team of 200 experienced guys onsite to fight the problem. There would be a lot more shipments of fuel, and therefore a lot more opportunities for shipments of fuel rods to crash or be hijacked. And so on.

But we might still be better off that way, because instead of having to extrapolate nuclear-reactor safety from a total of three meltdowns of production reactors --- TMI, Chernobyl, and Fukushima --- we'd have dozens, if not hundreds, of smaller accidents. And so we'd know which design elements were most likely to fail in practice, and how to do evacuation and decontamination most effectively. Instead of Chernobyl having produced a huge cloud of radioactive smoke that killed thousands or tens of thousands of people, perhaps it would have killed 27, like the reactor failure in K-19.

With respect to nation-states, the issue is that strong nation-states are very effective at reducing the peacetime homicide rate, which gives them the appearance of substantially improving safety. Many citizens of strong nation-states in Europe have never lived through a war in their country, leading them to think of deaths by violence as a highly unusual phenomenon. But strong nation-states also create much bigger and more destructive wars. It is not clear that the citizens of, say, Germany are at less risk of death by violence than the citizens of much weaker states such as Micronesia or Brazil, where murder rates are higher.

2
DanielBMarkham 1 day ago 1 reply      
I think we're going to be seeing a lot more of Chaos Monkey.

CM is a form of active TDD at the system architecture level. This might evolve into setting up partition tests as a prerequisite to instantiating the deployment model (Translation: before you start putting something on a cloud instance, write code that turns the instance off and on from time to time) This assures that the requirements for survival are baked into the app and not something tacked on later after some public failure like the Amazonocolapse.
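(A toy sketch of such a partition test; stop_instance and start_instance are hypothetical stand-ins for whatever your provider's API actually offers.)

    # Toy "turn it off and on" test loop. The two instance-control
    # functions are hypothetical placeholders, not a real cloud API.
    import random
    import time

    def stop_instance(instance_id):
        print(f"stopping {instance_id}")   # stand-in for a real API call

    def start_instance(instance_id):
        print(f"starting {instance_id}")   # stand-in for a real API call

    def chaos_loop(instance_ids, interval=300):
        while True:
            victim = random.choice(instance_ids)
            stop_instance(victim)          # the system must survive this
            time.sleep(30)
            start_instance(victim)         # ...and reconverge afterwards
            time.sleep(interval)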

I was reading on HN the other day a guy talking about Google. He said he saw engineers pull the wires from dozens of routers handling GBs of data -- all without a hitch. The architecture was baked enough that failure was expected.

Many times failure modes like this are burned into hardware, but that kind of design is a long, long, long way from most people's systems.

3
sp332 1 day ago 0 replies      
The Guinness World Record for most steps in a Rube Goldberg device was just set at a competition at Purdue University. The device has 244 steps to water a flower! Now, if you saw the Mythbusters' Christmas episode with the Rube Goldberg device, you know it's really hard to make all those steps go right. But in this one, the engineers used a "hammer test": at any point during the operation of the machine, an engineer could tap the side with a hammer. If it screwed up, that stage was redesigned. http://www.popularmechanics.com/technology/engineering/gonzo... The end result was the most complex machine of its kind, but it runs very reliably.
4
pwim 1 day ago 1 reply      
When I first read about the Chaos Monkey, I had assumed it was used on their development/staging environment, but this article implies it is on their production system. Does anyone know which is correct?
5
jamii 1 day ago 1 reply      
Here is an Erlang version of the chaos monkey:

    potential_victim(Minions) ->
        fun (Pid) ->
            not(pman_process:is_system_process(Pid))
            and not lists:member(Pid, Minions)
        end.

    death_from_above(Minions) ->
        Pids = lists:filter(potential_victim(Minions), erlang:processes()),
        case Pids of
            [] -> none;
            _ ->
                %% pick a random victim, record its registered name, kill it
                Victim = lists:nth(random:uniform(length(Pids)), Pids),
                Name = pman_process:pinfo(Victim, registered_name),
                exit(Victim, kill),
                {ok, Victim, Name}
        end.

The idea is to run it during load tests. Afterwards run your normal unit tests to check that nothing got permanently broken. It's good for finding broken supervisor trees.

6
augustl 1 day ago 0 replies      
Akin's Laws of Spacecraft Design [1], law no. 2:

To design a spacecraft right takes an infinite amount of effort. This is why it's a good idea to design them to operate when some things are wrong.

[1] http://spacecraft.ssl.umd.edu/old_site/academics/akins_laws....

7
neebz 1 day ago 1 reply      
Netflix is turning out to be my favourite tech company. Just a week ago, in an extensive interview, they mentioned that to provide a consistent interface across so many platforms they ended up porting their own version of WebKit. And now the Chaos Monkey. It's amazing how technically sound they are, considering they were just an online DVD rental company at the start.
8
cpeterso 1 day ago 1 reply      
The Chaos Monkey reminds me of some papers I've read about "crash-only software" and "recovery-oriented computing". With this approach, server software is written assuming the only way it would shutdown is a crash, even for scheduled maintenance. The software must be designed to recover safely every time the service is started. Instead of exercising recovery code paths rarely, they are tested every day.
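(A toy illustration of the crash-only idea: there is no clean-shutdown path at all, so the recovery code runs, and is therefore tested, on every single start. The filename and record format here are invented for the sketch.)

    # Crash-only toy: startup IS recovery; there is no separate
    # clean-shutdown path left to rot untested.
    import json, os

    LOG = "journal.log"   # made-up filename

    def recover():
        state = {}
        if os.path.exists(LOG):
            with open(LOG) as f:
                for line in f:
                    try:
                        entry = json.loads(line)
                    except ValueError:     # torn final write from a crash
                        break
                    state[entry["key"]] = entry["value"]
        return state

    def write(state, key, value):
        with open(LOG, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())           # durable before applying
        state[key] = value

    state = recover()                      # the only way to start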

http://www.armandofox.com/geek/past-projects/recovery-orient...

http://www.usenix.org/events/hotos03/tech/candea.html

9
guelo 1 day ago 0 replies      
Wow cool idea, but I don't think I'd be able to convince my company to do this.
10
andrewcooke 1 day ago 1 reply      
argh! so why was the server crashing? you can't leave me in such suspense....!
11
tokenadult 1 day ago 0 replies      
A decade or so ago, I heard computer programming described as a very good occupation for a person who had Asperger syndrome or perhaps limited social skills. I also recall reading then that some surveys of programmers working in that era suggested that those programmers were much more introverted than the general population. But I used to notice when I installed new programs on my Microsoft Windows computer, even after installing Windows 95, that sometimes installing one program would disable another program. That made me wonder if maybe social skills are an essential element of good programming skills. Now when software may have to run in the cloud, interacting with other software hosted on other hardware, all attempting to operate synchronously, wouldn't "software social skills" be rather important for any developer to understand?
12
adamc 1 day ago 5 replies      
Building things this way strikes me as expensive. At Netflix's scale, it pays off, but for systems that don't serve as many requests I'm forced to wonder whether just avoiding the cloud might be more cost-effective.
13
ankimal 1 day ago 1 reply      
Even Linux has a chaos monkey of sorts: http://linux-mm.org/OOM_Killer
14
buddydvd 1 day ago 2 replies      
The blog post seems to imply Stack Exchange is working with the Chaos Monkey when it really isn't. They didn't really build a system that randomly shuts down servers or services. The difference is subtle but important.
15
7% of Americans Subscribe to Netflix, Now Larger than any Cable Company cnn.com
283 points by mmcconnell1618  1 day ago   141 comments top 11
1
ShabbyDoo 1 day ago  replies      
The cable companies' competitive response to Netflix has been laughable, although no worse than any other established industry's response to disruption. The service Netflix provided from the beginning is a wide selection of content with little of the customer's time required. Going to the video store, picking out titles, and remembering to return them was a huge timesink. Netflix fixed that, albeit at the expense of elapsed time. Then, Netflix drastically cut the elapsed time with streaming content. That the cable companies did next to nothing to combat this disruption is astounding given that they already owned and controlled a pipe into the customer's house!

Imagine if, in 2002, the cable companies had offered a little, Roku-esque box which could hold two almost-DVD-quality movies. A consumer would have attached a Y-splitter in front of his existing cable box and then added this new box in parallel. The box then would have connected to the TV's analog inputs. One would have loaded up the box by going to the cable company's website and picking movies from a large catalog -- one movie for slot "A" and another for slot "B". Each slot would have taken 8 hours to download over the cable line and then could have been watched using a minimalist remote to pause, rewind, etc. -- just like a VCR. Consumers could have gone to their local cable company office and picked up this box along with unlimited movie service for, say, $20/month.

I've purposefully suggested the crappiest, most minimalist implementation I could imagine as a thought experiment. Would this have been sufficient to combat Netflix? Presuming identical catalogs, why would I fiddle with snail mail, scratched-up DVDs, etc.? Such an offering certainly would have created an uphill battle for Netflix.

It is not as if video-on-demand services had not been discussed since at least the early 90s. What I have proposed is worse than what was suggested even back then, right? So, why didn't the cable operators do anything? I bet it had to do with fear of cannibalizing existing revenue streams. Why pay for HBO (sans original content, of course) if you could pick ANY movie? How many industries have hurt themselves by not cannibalizing their existing revenue streams with sufficient aggressiveness?

2
portman 1 day ago 1 reply      
Correction: now larger than any cable company was last year.

The latest subscriber numbers from Comcast [1] are from September 2010, for Q3 2010, when they reported 22.937M cable video subscribers.

It's conceivable that Comcast has been growing as well and has as many subs as Netflix.

But the media loves a horse race!

[1] http://www.cmcsk.com/releasedetail.cfm?ReleaseID=523403

3
bambax 19 hours ago 1 reply      
> 7% of Americans Subscribe to Netflix...

I subscribe to Netflix (streaming only, obviously), am not American and not located in America; they do check the IP but not as aggressively as Hulu, so if you use a proxy they don't seem to mind.

I wonder how many of us there are?

4
ronnier 1 day ago 7 replies      
An even stronger reason for cable companies such as Comcast to enforce their 250 GB monthly transfer limit.
5
rickdale 12 hours ago 0 replies      
I dream of a day when I can turn on my television or whatever, select a season, select an episode and, boom, I am watching the show. For me, OnDemand is a miniature of what it could be. I have more shows on my HD than any cable company would offer OnDemand. I will say Netflix has a lot of content, but in terms of actually enjoying a movie you find on Netflix, IMHO the movie will be B-grade at best.
6
ck2 19 hours ago 0 replies      
I am sure the cable companies will soon be lobbying Congress for laws against Netflix, or for permission to throttle or charge them.

It's far too good to last. I really hope I am completely wrong.

7
codex 6 hours ago 0 replies      
How does the number of subscribers to Netflix compare to the number of subscribers to all cable companies? In most areas, cable companies have a local monopoly, and so they tend to not go national for antitrust reasons.
8
clistctrl 1 day ago 2 replies      
I'm actually surprised... I thought the number would be far larger than 7%
9
karolisd 1 day ago 10 replies      
Is there a Netflix competitor? Is there room for another Netflix?
10
chopsueyar 1 day ago 1 reply      
Die Comcast, die!
11
rickdale 1 day ago 6 replies      
Netflix is terrible. You spend more time looking for stuff to watch than actually watching stuff. I know a movie is bad if it's available for streaming on Netflix.
17
A minimalistic desk to handle cables and electronic clutter elegantly elzr.com
280 points by elzr  3 days ago   43 comments top 15
1
jrwoodruff 3 days ago 1 reply      
Wow, this thing is beautiful. I love the cost to have it built: $190 USD. I'm fairly certain it would cost me three times that to build it myself here in Michigan.

Is it solid wood or veneer? Also, anything you would do differently?

2
johnohara 3 days ago 2 replies      
3
sigzero 3 days ago 0 replies      
The binder clip cable catcher ... so simple ... so perfect!
4
sliverstorm 3 days ago 2 replies      
Have you considered adding something like recessed USB ports on the desk front for flash drives and other transient USB accessories? I always thought that would be pretty slick, but I haven't had a chance to try it, and this desk, being custom designed, seems like a great place to do so.
5
trickjarrett 3 days ago 1 reply      
This looks fantastic! I may incorporate some ideas in my standing desk when I get it built later this year.
6
chromejs10 3 days ago 2 replies      
That is a beautiful desk. I also found the link to the underdesk with the pegboard to be ingenious. I currently use a glass-top corner desk. The glass is pretty awesome because I can use it as a whiteboard :D. However, a smaller corner desk is kind of awful for the 3 monitors I work with.

One problem I see is that if you had the desk up against the wall, it would be a huge pain to access everything to, say, remove drives or add new stuff. Though judging from the picture, he doesn't keep it by the wall.

Fantastic design though! Too bad I'm sure it would cost a fortune more to have it done here in Cali :(

7
hartror 2 days ago 1 reply      
All I can think of when I look at that desk is constantly losing things into the gap, pens being the main loss.
8
ilikepi 3 days ago 2 replies      
I really like the look of the slot. My only concern would be small stuff (e.g. writing utensils) falling into it, but I suppose as long as it wasn't backed up against the wall it wouldn't be an issue.
9
mr_november 2 days ago 2 replies      
Any chance you are planning on selling this commercially? I would definitely purchase. Awesome work.
10
mikerg87 2 days ago 0 replies      
Well done. My thought is to fashion a vanity panel for the back to obscure the cable and device clutter if you are going to keep the desk out in the open. Something like two panels covering the back, affixed by magnets, maybe one in each corner.

Another possibility would be to put the shelf back on a hinge affixed to the underside of the desktop, so that you could access the gear on the shelf from the front. If the desk were ever positioned against a wall, it would be painful to access that back area to install new gear or retrieve the odd pen that fell into the tabletop slot.

A very cool design to be sure

11
dfischer 3 days ago 1 reply      
I've done something similar with a standing desk. I'll clean it up and post pictures soon.
12
jaxn 2 days ago 1 reply      
Unfortunately this wouldn't work for an office where other people may be sitting on the other side of the desk for meetings.
13
carbonx 3 days ago 0 replies      
I love it. A simple, elegant solution to a seemingly complex and frustrating issue.
14
stevedekorte 2 days ago 0 replies      
Instead of changing your desk, get rid of your electronics. A phone and a laptop are all you (probably) need.
15
lobster_johnson 2 days ago 3 replies      
That's clever, but I wish you had chosen a nicer-looking veneer. For all that work, the result is hilariously cheap- and cheesy-looking. Jennifer Newman's slot desk, by contrast, is gorgeous; I love the firetruck red (although the grey is also lovely).
18
2-D Glasses 2d-glasses.com
276 points by hammock  6 days ago   67 comments top 19
1
ElbertF 6 days ago 4 replies      
I see we've finally come to a full circle. Here's a device to view the entire world in 2D: http://i.imgur.com/BjY53.jpg.
2
storborg 5 days ago 1 reply      
You can hack these up yourself from the "Real 3D" glasses given out at most 3D theaters.

    1. Obtain one pair of glasses
    2. Pry apart at the seam with a putty knife or small flathead
    3. Remove one of the gel lenses
    4. Flip it over and put it back in, noting the areas that need to be trimmed
    5. Trim with scissors
    6. Reinsert the gel and stick the plastic back together
There will be a slight bit of ugliness where the gel filter doesn't quite take up the entire cutout area in the plastic frame, but they work well.

3
kgermino 6 days ago 0 replies      
I just hope 3D never becomes so pervasive that I actually have to buy one of these... Bookmarked just in case.
4
TeHCrAzY 6 days ago 3 replies      
I really hate it when designers use Flash for basic text elements without some sort of backup text behind it. My proxy blocks Flash, and this results in a very broken website.
5
ivank 6 days ago 3 replies      
Their FAQ says "Do 2D Glasses work at IMAX theaters?
Alas, no. IMAX uses a different technology than normal movie theaters so 2D-Glasses will not work at an IMAX theater."

Anyone know how IMAX is different?

6
afhof 6 days ago 2 replies      
Obligatory 1D Glasses: http://jpgdump.com/files/5798.png
7
Groxx 6 days ago 1 reply      
Thinkgeek has some as well, a fairly recent addition: http://www.thinkgeek.com/interests/looflirpa/e8be/
8
drivebyacct2 5 days ago 0 replies      
Flash for menus? Tacky.

http://k.min.us/ik7yRo.png

9
hammock 6 days ago 0 replies      
This is not mine, I just thought it was such an obvious and useful innovation the second I saw it.
10
PanMan 5 days ago 0 replies      
Can't you just wear polarised sunglasses for the same effect?
11
ddrmaxgt37 5 days ago 0 replies      
- Watch a movie
- Get two pairs of 3D glasses
- Take them home and hack them into one
- Voilà, free 2D glasses
12
blameslz 5 days ago 1 reply      
This is great. I have amblyopia (lazy eye) and am practically blind in one eye. When I have 3D glasses on, I only see red stuff. But now, with these glasses, from what I understand I will be able to watch 3D movies (even though I won't get the 3D experience) when there's no 2D version of them.
13
gohat 5 days ago 0 replies      
This is the type of breakthrough idea that you look at and wonder why you didn't come up with it.

This could really help fight the rising incidence of dysphoria associated with watching 3-D media.

14
chalgo 5 days ago 0 replies      
15
jcarreiro 5 days ago 1 reply      
Wouldn't wearing these reduce the perceived intensity of the screen quite a bit?
16
ikamal 5 days ago 0 replies      
So it's like closing one eye while wearing 3D glasses?
17
grantg 5 days ago 0 replies      
2-D Glasses == Sunglasses
FTFY
18
yhlasx 6 days ago 1 reply      
Sounds like a joke.
19
cypherpunks01 6 days ago 0 replies      
Relevant xkcd: http://xkcd.com/880/
19
Linus Torvalds on Garbage Collection (2002) gnu.org
268 points by AndrewDucker  4 days ago   203 comments top 26
1
ekidd 4 days ago  replies      
Shortly before Linus wrote this article in 2002, I wrote an XML-RPC library in C that used reference counting. By the time I was done, I'd written 7,000+ lines of extremely paranoid C code, and probably eliminated all the memory leaks. The project cost my client ~$5K.

The standard Python xmlrpc library was less than 800 lines of code, and it was probably written in a day or two.

Was my library about 50 times faster? Sure, I could parse 1,500+ XML-RPC requests/second. Did anybody actually benefit from this speed? Probably not.

But the real problem is even bigger: Virtually every reference-counting codebase I've ever seen was full of bugs and memory leaks, especially in the error-handling code. I don't think more than 5% of programmers are disciplined enough to get it right.

If I'm paying for the code, I'll prefer GC almost every time. I value correctness and low costs, and only worry about performance when there's a clear business need.

2
barrkel 4 days ago 2 replies      
Reference counting is GC; a poor form if it's the only thing you rely on, but it is automatic memory management all the same.

Generational GC will frequently size its youngest generation to the (L2/L3) cache, meaning it shouldn't suffer from the pathologies talked about by Linus here.

What GC really gives you, though, is the freedom to write code in a functional and referentially transparent way. Writing functions that return potentially shared, or potentially newly allocated, blobs of memory is painful in a manual memory management environment, because every function call becomes a resource management problem. You can't even freely chain multiple invocations (y = f(g(h(x)))) because, what if there's a problem with g? How do you then free the return value of h? How do you cheaply and easily memoize a function without GC, where the function returns a value that must be allocated on the heap, but might be shared?

Writing code that leans towards expressions rather than statements, functions rather than procedures, immutability rather than mutability, referentially transparent rather than side-effecting and stateful, gives you big advantages. You can compose your code more easily and freely. You can express the intent of the code more directly, letting you optimize at the algorithm level, while the ease of memoization lets you trade space for speed without significantly impacting the rest of your program. Doing this without GC is very awkward.

GC, used wisely, is the key to maintainable programs that run quickly. You can write maintainable yet less efficient programs, or highly efficient yet less maintainable programs, easily enough in its absence; but its presence frees up a third way.

3
jfr 4 days ago  replies      
> A GC system with explicitly visible reference counts (and immediate freeing) with language support to make it easier to get the refcounts right [...]

To be a little pedantic on the subject, such a system (reference counting and immediate freeing) is a form of automatic memory management, but it is not GC in any way. Garbage collection implies that the system leaves garbage around, which needs to be collected in some way or another. The usual approach to refcounting releases resources as soon as they are no longer required (either by free()ing immediately or by sending them to a pool of unused resources), thus doesn't leave garbage around and doesn't need a collector thread or mechanism.

There are partial-GC implementations of refcounting, either because items are not free()d when they reach zero references, or to automatically detect reference loops which are not handled directly.

I agree with Torvalds on this matter. GC as it is promoted today is a giant step that gives programmers one benefit, solving one problem, while introducing an immeasurable pile of complexity to the system, creating another pile of problems that are still not fixed today. And to fix some of these problems (like speed) you have to introduce more complexity.

This is my problem with GC. I like simplicity. Simplicity tends to perform well, and being simple also means it has little space for problems. Refcounting is simple and elegant; you just have to take care of reference loops, which also have a simple solution: weak references. I can teach a class of CS students everything they need to know to design a refcounting resource management system in one lesson.
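(A toy sketch of the scheme described above: the resource is freed the instant its count reaches zero, and back-pointers are kept weak to break loops. The class names are invented for illustration; real systems wrap this in language support, e.g. C++'s shared_ptr/weak_ptr.)

    # Immediate-freeing refcounting in miniature; no collector pass.
    import weakref

    class Handle:
        def __init__(self, resource):
            self.resource = resource
            self.count = 1

        def incref(self):
            self.count += 1

        def decref(self):
            self.count -= 1
            if self.count == 0:
                self.resource.close()   # freed right away, deterministically

    # Weak back-pointers break parent<->child reference loops:
    class Node:
        def __init__(self, parent=None):
            self.children = []
            self.parent = weakref.ref(parent) if parent is not None else None
            if parent is not None:
                parent.children.append(self)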

GC is the opposite: it is big, complex, and a problem that the more you try to fix it, the more complex it becomes. The original idea is simple, but nobody uses the original idea because it performs so badly. To teach the same class how to design a GC system that performs as well as we expect today, an entire semester may not be enough.

4
famousactress 4 days ago 2 replies      
We should really encourage each other to put the date in the title when submitting old articles to HN. It's a total brainf*k to read through the entire article and not realize the context it was written in, or to just glance at the title and assume the topic is a current one. Just saying.

[Edit] Not that I have a problem with older posts, btw.. I actually really like them most of the time. But the date would give everyone a better opportunity to evaluate whether they want to read the article, and would be reading it with reasonable context.

5
loup-vaillant 4 days ago 6 replies      
So. Programs that use Garbage Collection tend to be slow.

Cause: hardware doesn't like it.

Solution: fix the hardware?

Seriously, I'm afraid we're stuck in a local optimum here. It is as if machines are optimized for the two dominant C/C++ compilers out there, and we then have to optimize our programs against that, closing the loop. Shouldn't compilers and hardware be designed hand in hand?

6
jasongullickson 4 days ago 3 replies      
What he's advocating sounds a lot like how things work in the iOS world, in my experience.
7
wladimir 4 days ago 4 replies      
[2002]

Though his argument about cache does still hold.

8
sklivvz1971 4 days ago 3 replies      
It's 2011, FFS.
This kind of mindset is really self-defeating in the long term.
Sure, hand optimizing is better. Having a gazillion lines of shit legacy code and technical debt to fix because you hand optimized for the '90s is not so great.
I'll keep my GC and sip a Mojito on the beach, while Linus keeps on fixing Linux's "optimizations" ten years from now.
9
iskander 4 days ago 1 reply      
I'm very suspicious of anyone (even Linus) claiming that gcc is slow because of its memory management. The codebase is crufty and convoluted--- it's probably slow for a thousand different reasons. If you refactored it into a clean design and rewrote the beast in OCaml (or any other language with a snappy generational collector), you'd probably get a large performance boost.
10
__david__ 4 days ago 0 replies      
I like the way the D language approached this. It's garbage collected but it also has a "delete" function/operator. That way you can use garbage collection if you'd like, or you can manually free memory when you think it's worth it.

That seems like a reasonable compromise and I'm surprised that more languages don't do it.

11
albertzeyer 4 days ago 1 reply      
When I read this, I immediately thought about std/boost::shared_ptr. This is a bit ironic since Linus hates C++ so much.

shared_ptr is a really nice thing in C++. (For those who don't know: It is a ref-counting pointer with automatic freeing.) And its behavior is very deterministic. In many cases in complex C++ applications, you want to use that.

12
joeyespo 4 days ago 0 replies      
I think this is another case of everybody thinks about garbage collection the wrong way: http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047...

From the article: "Garbage collection is simulating a computer with an infinite amount of memory. The rest is mechanism."

Whether or not it's reference counting or generational, the goal is still to simulate infinite memory. That way, you can focus on the high-level problems instead of the technical memory-related details. So it's not necessarily a bad mindset to have.

13
manveru 4 days ago 1 reply      
Might be worth mentioning Tcl in this context, as it uses reference counting for the GC [1].

It also doesn't allow circular data structures, which are quite hard to implement if all you have are strings anyway.

[1]: http://wiki.tcl.tk/3096

14
joshhart 4 days ago 2 replies      
Here are a couple of reasons why I think it's not so clear cut:

1. If garbage collection was that damaging to the cache, Haskell wouldn't be nearly as fast as C.
2. Copy-on-write data structures are nice because the immutability allows for concurrent access without locking.

Granted, this was from 2002 and Linus may no longer feel so strongly about the topic.

15
KirinDave 4 days ago 0 replies      
That was 2002. Here is the state of the art in 2008: http://cs.anu.edu.au/techreports/2007/TR-CS-07-04.pdf

Unsurprisingly, things have changed. Many of Linus's complaints were valid, and we've learned how to address them.

16
ww520 4 days ago 1 reply      
This is like arguing assembly is better than high level languages because it's faster with explicit control. The thing is 99% of the time it doesn't matter.

In most cases, GC-based programs have good enough performance to get the job done. For the 1% case, sure, use C/C++/assembly to get the explicit control and performance. Doing things in non-GC systems because of a potential caching problem sounds like a case of premature optimization.

17
Vlasta 4 days ago 2 replies      
I like him mentioning the programmer's mindset associated with GC being a big danger. Some people consider GC a magic bullet and refuse to think about what's happening under the hood. I do not consider that a good habit.
18
kerkeslager 4 days ago 0 replies      
> In contrast, in a GC system where you do _not_ have access to the explicit refcounting, you tend to always copy the node, just because you don't know if the original node might be shared through another tree or not. Even if sharing ends up not being the most common case. So you do a lot of extra work, and you end up with even more cache pressure.

It's possible that things were different in 2002, but I don't really think this is the case now. In general, I make the node immutable and never copy it (copying an immutable object makes no sense). In a well-designed code base, mutations happen within the function where the data is created (read: on the stack, where cache locality is a given). Immutability also addresses Linus' concerns with thread-safety. And that's not accounting for concerns which Linus DOESN'T mention, such as increased development speed and correct program behavior.
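
A sketch of that sharing in JavaScript (hypothetical list type; Object.freeze stands in for language-level immutability):

    function cons(head, tail) {
        return Object.freeze({ head: head, tail: tail }); // node never mutates after creation
    }

    var shared = cons(2, cons(3, null)); // built once
    var a = cons(1, shared);             // both lists reuse `shared`...
    var b = cons(9, shared);             // ...no defensive copy, no extra cache pressure
    console.log(a.tail === b.tail);      // true: structural sharing instead of copying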

I'm not the only one saying this. Josh Bloch, for example, recommends immutability and cites cache reasons (http://www.ibm.com/developerworks/java/library/j-jtp02183/in...). And many languages (Haskell, Clojure) are designed heavily around avoiding mutation and sharing nodes within data structures.

This talk of copying nodes to avoid your objects changing out from under you sounds a lot like what I call "writing C in Java". Linus is looking at this from the perspective of, "If they took away explicit memory management from C, this is how I would do it." But OF COURSE if you just bolt a feature like GC into a language that didn't have it before, it won't work well. Effective cache usage in a GCed system requires other language constructs (like immutability).

Now, after all that, I won't make the claim that immutability in a GCed language like Java or C# is faster or even as fast as C with explicit memory management: it would take a lot of profiling code and comparing its functionality to make that claim with any kind of certainty. But it doesn't seem like Linus has done that profiling and comparison either.

19
mckoss 4 days ago 1 reply      
Didn't Linus forget

    newnode->count = 1;

20
teh 4 days ago 1 reply      
Slightly related: He mentions that when the containing structure of a substructure goes away, you can free all the resources. The guys behind Samba 4 developed talloc [1], which is built around that idea.

[1] http://talloc.samba.org/talloc/doc/html/index.html

21
mv1 3 days ago 0 replies      
I find it sad that, to this day, one has to spend so much time worrying about memory management to get decent performance. I've yet to work on a performance oriented project where I didn't need to write at least a couple custom allocators to reduce memory management overhead.

GC systems are no better in this regard. I was told of an interesting hack in a Java program that implemented a large cache of objects by serializing them into a large memory block so that the GC saw it as one big object and didn't traverse it. This resulted in dramatically reduced GC pause times (10x+). When needed, objects were deserialized from the array. Disgusting, but effective.
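
For what it's worth, the same dodge exists in other GCed runtimes; a rough JavaScript analogue (illustrative only, not the Java hack described above) packs records into one typed array so the collector sees a single object instead of millions of small ones:

    var FIELDS = 2;                               // say, [id, value] per record
    var pool = new Float64Array(100000 * FIELDS); // one big object as far as the GC cares

    function write(i, id, value) {
        pool[i * FIELDS] = id;
        pool[i * FIELDS + 1] = value;
    }

    function read(i) { // "deserialize" a record on demand
        return { id: pool[i * FIELDS], value: pool[i * FIELDS + 1] };
    }

    write(42, 7, 3.14);
    console.log(read(42).id); // 7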

22
LarrySDonald 4 days ago 0 replies      
So.. Essentially man the F up and live without GC in the parts that are going too slow instead of saying "Oh it's cool, just wait ten years and hardware will be fast enough to run this anyway". Use GC for stuff that needs to be simple and is fast enough anyway, don't bog down code that's too slow with it.
23
earino 4 days ago 0 replies      
Guy who writes kernel code cares about performance, film at 11.
24
mmcconnell1618 4 days ago 1 reply      
I'm quite sure the machine code generated by my compiler isn't nearly as good as it could be if I hand coded it, but the efficiency of not writing in machine code far outweighs any potential performance gains.
25
VladRussian 4 days ago 0 replies      
"All the papers I've seen on it are total jokes."

Couldn't agree more. We were actually laughing in the office when an office mate brought up such a paper many years ago.

"I really think it's the mindset that is the biggest problem."

Linus is a superhero: 20+ years working on the supertask of changing people's mindsets.

26
mfukar 4 days ago 1 reply      
Hacker News, another place where 10-year-old emails are submitted as news.
20
Never say “no,” but rarely say “yes.” asmartbear.com
266 points by SRSimko  1 day ago   35 comments top 15
1
abalashov 19 hours ago 1 reply      
On a related, though orthogonal, note: never assume anything overly logical about the customer's math. A lot of pricing issues come down to something psychological or presentation-related, rather than objective and quantitative.

I long ago noticed--as many consultants know--that there is only so high that hourly rates can go for an individual or a very small company, and it's not really that high. Beyond that, people start goggling their eyes, making incredulous expressions and hyperventilating. They have a certain metric in their head of how much a glorified typist like yourself should be making, and if you exceed that metric, they start comparing your contract rate to (1) their own salary (which is obviously fallacious, since they are not contractors) or (2) other contract rates they've been exposed to lately, like when they tried to offshore the project they are now trying to get you to rescue after it flamed out spectacularly (http://en.wikipedia.org/wiki/Recency_effect#Recency_effect), and (3) the billing rates of their attorneys and accountants.

"Obviously," in their mind, you can't be charging as much as attorneys and accountants. You're in a different sociological category from people whose very profession has always been defined by high costs and high-rolling swagger in popular imagination; you're just some kind of computer guy. And if you're charging 8x what the folks in Bangladesh quoted for a team of 10, are you really worth it? And $250/hr? Good heavens, that's almost 8x the ~$34/hr I make on my $70k salary as the lead IT guy! Who do you think you are?

The point is, once the impressive-sounding hourly rate has grabbed people by their egos, you aren't going to be able to re-center them and capture pricing based on value. You've lost to the dark side of dick size contests and pissing matches, and as often as not, you're up against some relatively pedestrian actor who is determined to put you in your place, not a key fiduciary stakeholder and/or decision-maker. There's really no way to recover from that nosedive.

So, in practical terms, you can't really get beyond $100-$200/hr rates in the small to medium contract IT and software development services sector without crossing a number of irrational psychological barriers. Obviously, this doesn't apply to the enterprise segment, and doesn't apply to any company that has successfully fostered the expectation of either elite expertise or high overhead--ideally both. If you're Accenture or IBM, you definitely don't have to worry about this, but if you're part of a well-known ninja specialist squad, you probably don't either.

Notwithstanding outliers, I was very surprised to discover that quoting a flat price is a good way of keeping the customer centered on the value they stand to gain and away from judging your swashbuckling billing. There are some ostensible reasons for this: it soothes the feelings of customers burned by open-ended enterprise-style darken-the-skies-with-people hourly billing, and demonstrably puts the risk on the consultant. But it also gives them a fairly opaque number to work with instead of sizing up whether you're "the kind of person that ought to be charging that." Most importantly, and most relevantly, it completely disrupts their mathematical circuitry; it is amazing how many times I've run into customers that get in a tizzy when you quote them "10 hours at $250/hr" but happily sign on the dotted line when you tell them it's going to be $2500 flat and be done in two days. Even weirder, they'll still sign even if you _tell_ them it's "about 10 hours of implementation time." I don't think it's because they can't do division in their head; I think when the conversation starts out with a fixed-price premise, that computation just gets routed to a different psycho-social/emotional bucket. This is dissonant to the basic governing intuition of the hacker brain, but that's often how it goes.

So, I don't quote projects in hours anymore. I usually provide some idea of estimated implementation time so they can do the division themselves, being ethically reluctant about a price-support scheme that is reliant solely on opacity of cost basis and/or what the implementation actually involves, but it doesn't seem to matter. The results are undeniably and resoundingly positive. Something about human nature... I don't know what it is.

All this to say: If you're feeling shaky on quoting something "insane" in hourly terms, quote it as a fixed-price project. You might be surprised by the disparity in outcomes.

2
patio11 1 day ago 1 reply      
Something which is easy to say but non-trivial to execute on (I know I have to get better at this and it has quite recently cost me thousands of dollars): do not negotiate against yourself. You'd be surprised -- I've been surprised -- at how high clients can go when they're sufficiently motivated to solve a problem. Their conception of money is totally different than yours or mine. Don't sell yourself short by e.g. assuming that there is a "fair" price which you can't go above. (Mutual agreement makes any price fair.)
3
boredguy8 1 day ago 0 replies      
One note: if you're charging enough to cover a hire, make sure you have someone in mind willing to take the position. Otherwise you'll easily end up hiring someone you don't really want in order to cover a commitment you made.
4
luckyisgood 1 day ago 4 replies      
I remember the first time I discovered this principle. There was a web project I really didn't feel like doing. A big demanding client wanted a project done in a very short amount of time. I quoted at 3x our price, and it was immediately accepted by the client. The project was successful and we did a great job.

It made me realize one important, no, crucial! thing: If you're mad at your client for any reason, it only means you're not charging him enough. It's very hard to be mad at a client who's paying you a ridiculous amount of money for something you're fabulous at.

5
pg 1 day ago 2 replies      
Also the motto of unscrupulous VCs.
6
btilly 1 day ago 1 reply      
One piece of advice that I've seen people learn painfully.

If you're going to name a high price to do what you don't want, make sure that the price is truly high enough for the grief you'll suffer. Don't just name something that is high enough that you think they won't take it, because they just might.

7
alextp 1 day ago 0 replies      
This is a great strategy. It also trivially extends to show that there is no such thing as "too many customers, need to hire out", as you can always increase the price.

The one problem is that this doesn't transfer easily to things that are likely to be undoable no matter the resources (e.g., "I want 100% uptime on my blog", or "the project needs to get done in a month, and I saw a guy on the internet claiming he could do it in a weekend", etc).

8
mcantor 1 day ago 1 reply      
Is there any way to apply this to a full-time salaried job where you can't change your "price", but are still asked to evaluate commitments of various sizes over the course of your position?
9
Travis 1 day ago 0 replies      
The graph of the two different revenue streams is interesting (taking the job for a lower cost gets you more jobs, but higher cost gets you more per job). At some point, I bet you can find an equilibrium where gross revenue is the same with the higher cost jobs.

In addition, if you price yourself so that your schedule isn't booked, you'll be more open to new clients, which gives you a better opportunity to find even higher priced clients.

Seems like as a freelancer, at least, your best long term strategy is to price yourself out of a decent amount of work.

10
xbryanx 1 day ago 0 replies      
I've had this very same strategy for a while, but never really noticed I was doing it. Thanks for putting a name to something I should reinforce in my practice!
11
jodrellblank 1 day ago 1 reply      
Does this imply a heuristic such as:

"To earn more money, I should look for jobs that I can do, but don't like doing so much that I will charge a lot more than my normal rate, then quote for a lot of those jobs"?

12
jlees 22 hours ago 0 replies      
Love the summary. A qualified yes such that if they agree, you're happy, and if they decline, you're still happy.
13
espinchi 1 day ago 0 replies      
It reminds me of the saying "Everyone has their price".

I do like this principle, and I already identified several occasions in the past where we said "No" when we should've said "Yes: the price will be X", where X is an amount that would certainly have been bigger than we initially thought.

14
RBr 1 day ago 0 replies      
Similarly, one of the most important words I was taught never to use was "guarantee".
15
suarezkop 1 day ago 0 replies      
Always been a fan of long term strategy over short term benefits. Completely agree with your views.
21
Doom engine code review fabiensanglard.net
257 points by franze  4 days ago   27 comments top 5
1
hapless 4 days ago 4 replies      
It's amazing that this would run halfway well on a 33 MHz 486. Doom had a 35 fps cap, and ran at 320x240 (square pixels):

2.7 million pixels per second at 35 fps (the cap).

1.4 million pixels per second at 18 fps (~50% of cap).

At the more realistic target of 18 fps, you have 24 clock cycles per pixel. A 486 averaged about 0.8 instructions per clock, so you're looking at 19 instructions per pixel. With a 33 MHz memory bus and the DRAM of the day, you're looking at about 5 clocks for memory latency. That looks like an upper bound of no more than 4 memory operations per pixel.

A convincing 3d renderer averaging 19 instructions and 4 memory operations per pixel. And we're not even counting blit/video delays here. Good lord is that savage optimization work. Carmack is famous for a reason.

P.S. The really scary thought is that Doom would hypothetically run on any 386 machine -- can you imagine painting e.g. 160x120 on a cacheless 20 MHz 386 laptop?
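
A quick back-of-envelope check of those numbers (same assumed values as above, nothing measured):

    var pixels = 320 * 240;            // 76,800 pixels per frame
    console.log(pixels * 35);          // 2,688,000 pixels/s at the 35 fps cap
    console.log(pixels * 18);          // 1,382,400 pixels/s at 18 fps
    var clocks = 33e6 / (pixels * 18); // ~23.9 clock cycles per pixel
    console.log(clocks * 0.8);         // ~19 instructions at 0.8 instructions/clock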

2
thibaut_barrere 4 days ago 2 replies      
The first (and currently only) comment brought me back years ago! A major performance trick back then was to ensure the code and data would remain in the (very small) cache, as well as preferring structures that would be read in order.

====================

"Because walls were rendered as columns, wall textures were stored in memory rotated 90 degrees to the left. This was done to reduce the amount of computation required for texture coordinates"

The real reason is faster memory access when reading linearly on old machines -- fewer CPU cache misses. It's an old trick used in smooth rotozoomer effects in the demo scene years ago.
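
A sketch of the access-pattern difference being described (hypothetical texture size; the point is the stride, not the numbers):

    var W = 64, H = 128;
    var rowMajor = new Uint8Array(W * H); // texel (x, y) lives at y*W + x
    var rotated  = new Uint8Array(W * H); // texel (x, y) lives at x*H + y

    function columnSumRowMajor(x) {       // stride of W per step: cache-hostile
        var sum = 0;
        for (var y = 0; y < H; y++) sum += rowMajor[y * W + x];
        return sum;
    }

    function columnSumRotated(x) {        // stride of 1: sequential, cache-friendly
        var sum = 0, base = x * H;
        for (var y = 0; y < H; y++) sum += rotated[base + y];
        return sum;
    }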

3
light3 4 days ago 0 replies      
4
Luc 4 days ago 1 reply      
Michael Abrash' "Zen of Graphics Programming" has a good overview of many of the tricks used during that era: http://www.amazon.com/Zen-Graphics-Programming-Ultimate-Writ...

(now, of course, mainly to be read for nostalgic reasons).

5
Tyrant505 4 days ago 0 replies      
Any qed users? I liked it for map editing most.
22
The Guantanamo Files wikileaks.ch
250 points by dsplittgerber  1 day ago   56 comments top 6
1
earl 1 day ago  replies      
Congrats America. We officially torture randoms (some Afghans were given up to the equivalent of a year's salary to turn folks in; quite a nice way of getting rid of some SOB and getting paid). We also violate our laws and constitution, then sit around and circle jerk over whether dunking someone's head under water a couple hundred times until he's just this side of drowning is torture or not. Then, because it's definitely not torture, the cia deletes the videos. Oh, and apparently we now do indefinite detention as well, without legal representation except in front of a kangaroo court, maybe, eventually. Finally, now that we definitely know some people were innocent... we leave them to rot in a cell in Guantanamo.

Good job.

2
dpritchett 1 day ago 3 replies      
Don't miss the detainee population visualizations put together by the NYT with help from CoffeeScript/Underscore/Backbone wizard Jeremy Ashkenas:

http://projects.nytimes.com/guantanamo

3
alexqgb 1 day ago 0 replies      
The New Yorker's Amy Davidson did a really good write-up on this.

http://www.newyorker.com/online/blogs/closeread/2011/04/wiki...

The Hindu takes the prize for the most blistering, incandescent (and richly deserved) attack on the US Administration.

http://www.thehindu.com/news/international/article1767369.ec...

And Al Jazeera hosts some especially pointed commentary about what's NOT included in the leaks.

http://english.aljazeera.net/indepth/features/2011/04/201142...

tl;dr: Between them, Clinton and Bush amassed an odious and vile legacy of extrajudicial policy. Obama is too scared of House Republicans to say so, has no way to deal with them effectively, and cannot counter their political threats. Now that the story has gotten ahead of the Presidency, America is (quite fairly) getting slapped around the world for being stupidly violent, prone to extreme over-reaction, and disturbingly lawless.

4
ck2 1 day ago 1 reply      
the document dump sheds light on cases of accidental detentions of innocent or seemingly harmless men, including an Afghan shepherd who spent three years at Gitmo after being arrested near the scene of a roadside explosion
5
blhack 1 day ago 3 replies      
Is this still part of the Bradley Manning leak?
6
mrcharles 1 day ago 2 replies      
I was hoping the data would be in a text format so that I could build a tree of blame -- so basically you could see exactly how 'information' from someone who may not be giving accurate information impacted future prisoners.

Sadly, it's all just embedded PDF files.

23
Thank HN: 127 days since I asked for your advice.
250 points by throwaway267  5 days ago   44 comments top 12
1
danilocampos 5 days ago 2 replies      
Out of curiosity: Damn, $6k per month? What kind of work are you doing to have that much left over for debts?
2
pgbovine 5 days ago 1 reply      
awesome news! keep up the good work.

on a side note, perhaps this is a benefit of attending college that a lot of the anti-education crowd here on HN might not realize --- connections with alumni from your alma mater. of course, you still need to take massive initiative like OP did, but at least that option is available to you.

3
alain94040 5 days ago 2 replies      
Would you mind grabbing a cup of coffee

Agreed, this is key. Meeting people in person has amazing power. Call it serendipity if you will. For it to work, make sure you tell the other person that you are not looking for a job at their company.

I know I'm repeating myself, but http://letslunch.com does exactly that for entrepreneurs and tech people. Serendipity, no hidden-agenda get-togethers.

4
leelin 5 days ago 3 replies      
Awesome story and congrats. I second pgbovine's point that maybe the diplomas that grant instant kinship to fellow alums are worth quite a bit. I wonder if we can hack the kinship part without the tuition part?

Another question. Paying down your debt at $6K/month is fantastic (implying you are earning above and beyond your and your spouse's living expenses and the interest on the debt). However, based on your last post that debt was $70K credit card and $30K IOU to co-founder.

Now that you are back on your feet, have you considered declaring bankruptcy or defaulting on the credit cards? It'll ruin your credit and stress you out, but saving $70K while building a $6K a month nest-egg seems worth considering.

I hate suggesting that people walk away from their debt, but 7 years of bad credit vs. $70K today is the trade. Bankruptcy / consumer credit laws exist to help the little guys; the ABS traders and investors who package and buy your debt price in a certain delinquency / default rate in all their credit card deals. That comes from my own experience working on a hedge fund ABS desk. You'll probably miss the $70K a lot more than the people who have an unsecured claim on it.

Edit: OP responded with a firm "NO" to defaulting on the debt. Congrats again, much respect!

5
kloncks 5 days ago 1 reply      
1. Congrats. It's amazing to hear and read a story like this; I'd love to read a post with even more details down the road!

2. Why is your name in green on HN?

7
willheim 1 day ago 1 reply      
Hey! That's awesome! Thanks for coming back and letting us all know how you did and that I played a small part in it. (small as in that was one tiny blurb I mentioned and all the hard work was done by you alone). It made my day to read that.

Enjoy Philly!

8
wyclif 5 days ago 1 reply      
If you're going to @PhillyTechWeek, which I expect you are, hit me up via email.
9
sebkomianos 4 days ago 1 reply      
Without any intention to be rude or to pry into your personal life: you don't mention her at all. And I guess she played a major role in your "recovery", no?
10
ffumarola 3 days ago 1 reply      
I'm from Philadelphia, too! What neighborhood are you in?
11
Hisoka 5 days ago 3 replies      
Just curious, do you think this same strategy could work for finding someone to date? Cold emailing asking for a cup of coffee? Do I need to sneak in a hidden agenda?
12
3JBill 4 days ago 0 replies      
CONGRATS!
I thought I would appreciate some input on the resume help you got. I'm almost on the edge of having to convert to corporate as you did. PM me please.
24
The Merge Button github.com
249 points by tmm1  1 day ago   18 comments top 7
1
timdorr 1 day ago 1 reply      
And from 4 days ago: http://ejohn.org/blog/pulley/

All that hard work for nothing. Damn, another company listening to the needs of their users. What are they thinking...

2
macrael 1 day ago 3 replies      
I like git a lot, but find that the more I hear about it, the more pitfalls I discover. The actual user interface often does not seem to have good defaults and just generally does things differently from how I expect.

So. I love that github is working to take you out of the command line. It is fabulous that they are working to make git easier to use at the same time as providing hosting space.

3
jvoorhis 1 day ago 3 replies      
It's a nice feature. I had a chance to use it today, and it worked beautifully. Its biggest downside is that you don't have an opportunity to run tests without first pulling the changes into your working copy, so I don't see it coming into play very often in my workflow.
4
pilif 14 hours ago 1 reply      
I haven't tried it yet and trying it out would be quite a lot of work (set up a test repo, fork it, send pull request, merge), so I'm asking here first in the hope that somebody has already tried it:

is it possible to select the email address you want to make the merge commit as? My main email address I'm registered with github isn't the address I would want to make merge commits as.

5
yuvadam 1 day ago 0 replies      
Amazing! This feature was long overdue.
6
flexterra 1 day ago 0 replies      
I really like how Github is always pushing new features.
7
neworbit 1 day ago 0 replies      
This is an excellent idea and I look forward to actually using it. Thanks guys for making the world of distributed development and open source a better "place" to be!
25
SurveyMonkey to buy Wufoo (YC W06) for $35m allthingsd.com
243 points by sriramk  1 day ago   59 comments top 21
1
patio11 1 day ago 1 reply      
That's awesome news. Congratulations to the Wufoo team -- it is well deserved. (Their product is awesome.)

It is also great news for SaaS startups generally, since Wufoo is a little of column A and a little of column B on the typical grow-via-revenues vs. get-investment-and-grow-massively dichotomy. That's a data point in favor of at least some investors making investments in companies which have a projected trajectory where massive success results in a company on the scale of 37Signals/FogCreek/Wufoo rather than resulting in a company on the scale of Zynga/Groupon. $35 million won't exactly have VCs salivating but, oh well, if they don't invest they don't get a vote -- the angels and employees of Wufoo have to be happy as clams at this outcome.

2
webwright 1 day ago 1 reply      
SO happy for these guys! Well earned outcome.

The dollar-figure makes me wonder if there is a "valley multiplier" on startup valuations. These guys built a great business with epic growth outside of the valley with very little funding. Aside from their SaaS offering, they're doing $200k of commerce transactions PER DAY.

If they were in the Valley and had taken a few million in funding, would the price tag be different (i.e. much higher)? When I compare this exit to other 30-40M exits, Wufoo seems head-and-shoulders above the rest in terms of revenue/profit, proven growth, proven team, etc.

3
GavinB 1 day ago 0 replies      
Hopefully this is one of those acquisitions where the acquired company ends up taking over and reinventing the acquiring company.
4
vaksel 1 day ago 0 replies      
Wufoo is 5 years old?

Time sure does fly...if you asked me how old Wufoo was, I'd probably say a year...two tops.

5
jmtame 1 day ago 1 reply      
"Wufoo has helped people collect over $100,000,000 worth of revenue for the users and about $200,000 in payments per day."

I really hope you guys continue to kill it in the form space.

6
mtw 1 day ago 6 replies      
I don't get it; where did SurveyMonkey find $35m? Is there so much money to be made in online surveys?
7
mtogo 1 day ago 1 reply      
Link to original article, without blogspam: http://news.ycombinator.com/item?id=2481610
8
revorad 1 day ago 0 replies      
PG must be very pleased. I think he invested in Wufoo (in addition to the original YC investment).
10
ja27 1 day ago 0 replies      
Interesting that they are (soon to be were) in Tampa, FL - not exactly a hotbed of tech startups. Definitely cheaper to be there, but is it easier or harder to draw talent?
11
jasonlbaptiste 1 day ago 0 replies      
Congratulations guys. I'd normally never want to see this happen, but survey monkey is a great home and it's a great fit. I'm proud that we pay for your service.
12
savrajsingh 1 day ago 0 replies      
What's interesting is Wufoo didn't take a lot of funding at the start, and they grew over time.

It's a great product that I've been happy to evangelize over the years. It does what it says, and it does it well. Congrats Wufoo!

13
p0ppe 1 day ago 1 reply      
Wufoo is YC W06.
14
marcamillion 18 hours ago 0 replies      
So I am going to go out on a limb and take the Wufoo guys at their word.

I used Survey Monkey a bit on some grad projects, but wasn't too impressed by what I saw. I mean, it was ok...but wasn't Wufoo-esque.

I love Wufoo and the founders have tremendous credibility in my eyes. It is clear that they are VERY passionate and extremely intelligent. The thought and effort they put into everything they do - from Particle Tree to Treehouse to Wufoo - is awe-inspiring.

Given that they have not raised a ton of money, I can only assume that they decided to do this deal on their own initiative and not based on pressure from both PG & PB (both of their angels).

So it would seem that they know what they are doing.

I hope that this works out for the best. Although I am just a free user of their product, I have a lot of respect for the team and have learnt a lot from them.

I truly wish them all the best - even if it means they needed to sell.

15
6ren 1 day ago 1 reply      
> customers frequently use Wufoo's forms to process online transactions.

> a process that previously required hours of work by web developers can now be done by anyone with web access in a matter of minutes.

Can I ask a question of the Wufoo team? Do you attribute your success primarily to: allowing anyone to do what only web developers could do (targeting non-consumption); reducing the work from hours to minutes; or processing online transactions?

16
slackerIII 1 day ago 1 reply      
Any idea how many employees Wufoo has? And how many are engineers/designers?
17
atourgates 1 day ago 0 replies      
A strangely perfect fit.

I use both Survey Monkey & Wufoo daily at work, and consider both of them to be fantastic tools which could both use some love on the visual customization end.

Survey Monkey's visual customization is limited to changing colors and uploading a logo. With Wufoo - it's possible to generate custom CSS (or do completely custom forms with their API) - but much more difficult than (I feel) it should be.

So - congrats to both companies, and hopefully they'll now move in the direction of more & easier visual customization.

18
dantheman 1 day ago 0 replies      
Congrats Wufoo!
19
tbrooks 1 day ago 2 replies      
Snarky 37signals post in t-minus...
20
staunch 1 day ago 0 replies      
Wuhoo for Wufoo!
21
neilxdsouza 1 day ago 0 replies      
Wonder if SurveyMonkey is interested in buying us for only USD 500K :)

http://sourceforge.net/projects/xtcc

Man is this going to get me downvoted or what?!!

26
IBM's infamous "Black Team" t3.org
231 points by shawndumas  5 days ago   51 comments top 15
1
sp332 5 days ago 4 replies      
Does anyone have a link to the version where the Black Team member found a bug in rigourously (mathematically) proven code? edit Ah, here it is: http://www.penzba.co.uk/GreybeardStories/TheBlackTeam.html
2
diiq 5 days ago 2 replies      
Searching for "the black team ibm -mustaches -infamous" on google returns (nearly) nothing. I find it astonishing that there is no record of such a team that doesn't mention mustache twirling.

I suspect that some hacker wanted a version of the Black Watch to look up to, so he invented one. I don't object to the invention of legends, but we should include some traditional legend-flagging phrases: "long ago", "never heard from again", &c.

3
trickjarrett 5 days ago 0 replies      
This reminds me of the article about the developers and testers who worked on software for the shuttle. In all the years, they only had 6 bugs ever reported on the shuttle, all of which were fairly minor as I recall.

It's so true that bugs are now simply part of life, and it has to do with the speed at which development must happen. I wonder what the Black team of old would think of today's web development wild west sort of approach.

Here it is: http://www.fastcompany.com/node/28121/print - They Write the Right Stuff (2007)

4
gchucky 5 days ago 0 replies      
Does anyone know what became of the Black Team? Presumably it's a defunct group, but when did that happen, and under what context?
5
bioh42_2 4 days ago 0 replies      
Reading this story makes me sad.

There's also another story (google fails me) about a legendary IBM programmer around whom IBM built an entire team of testers, documenters, etc, all to keep this one guy's way above average productivity going. That story also makes me sad.

These stories make me sad because I know how huge a difference the environment makes to everyone's job.

The key points about the black team:

1. A few individuals that happen to be a bit above average at finding defects.

2. Bring them together, create a team.

3. Support them, but mostly just get out of their way and don't distract them with management B.S.

Very little change and support results in a huge jump in their productivity!

Same thing with the single legendary programmer: simply relieve him of tedious non-programming tasks, give him enough support staff to keep up with his output, and again you get a HUGE productivity boost.

What's so sad about this is that it so rarely happens.
I think most people are capable of having this productivity jump, if only they'd get the same support. OK, let me back off a bit from "most" and be more precise: you should be at least a bit above average.

But why does this so rarely happen?
Sadly I think for most sizable companies minor process changes are a huge obstacle.

The bright side of this? Startups.
Startups are like these kinds of teams within a behemoth like IBM, except without the behemoth. Or actually, a startup ought to be like that, because that is one of the key advantages a small business should have over the big ones.

6
dauwk 5 days ago 0 replies      
Having worked on both the hardware and the software for most of IBM's tape drives, from the '60s era 2400s through the '70s 3480s, which cover the time of the Black Team, I find this story difficult to believe. On all these drives adjusting the start/stop mechanism required the engineer to be inside the enclosure with hands on the the very read head area whilst running many patterns of start/stop/rewind/fast forward. If I had felt any significant enclosure movement it would have indicated to me a major problem.
7
wglb 4 days ago 0 replies      
I am thinking that this might just be humbug. Possibly motivational humbug, however.

But I did witness first hand some shenanigans done by the Field Engineers on XDS tape drives in the 1970s. They did use a kind of resonant thing to test the limits of how well a particular tape drive was working. It would do a lot of rewinding, stopping, reversing and the like. These drives had long vacuum (work with me here) chambers, one on each side where a loop of tape would be suspended. Thus, a fast back-and-forth operation could be performed on a short section of the tape without moving the reel. The goal was to try to get the tape moving in such a way that it would pop out of the vacuum chamber and fault the tape drive.

Somewhat like the Black Team's efforts are alleged to do, the net result was that all the tape drives, after adjustment, were able to pass this tough diagnostic.

8
aeontech 5 days ago 0 replies      
Lots more comments on the previous discussion: http://news.ycombinator.com/item?id=985965 as well as http://news.ycombinator.com/item?id=994358
9
sinamdar 5 days ago 1 reply      
Nice article. This "Black Team" is cited as an example in the book 'Peopleware: Productive Projects and Teams'.
10
seles 5 days ago 0 replies      
It would be nice if there was info about the methods they developed for testing, rather than just how effective it was.
11
dkersten 5 days ago 4 replies      
Pity the website's text takes up only 20% of the width of my monitor... text looks cramped and awkward to read while 80% of my monitor is blank white space...
12
wcchandler 5 days ago 0 replies      
As a hardware tester at IBM, this makes me happy. I never see any adoration. People think of it as a 9-5; not as a chance to "best" somebody.
13
mikerg87 5 days ago 1 reply      
I remember hearing about the Black Team from a training manual given to new hires who worked at Sperry/Univac in the late 70's - early 80's. There was a passage where the Black Team considered it a "failure" when they couldn't identify a defect during a testing round. And conversely they considered it a "success" when they identified a problem. It's almost as if they were doing TDD before anyone knew what to call it.
14
BasDirks 4 days ago 0 replies      
If anyone doubts whether they should read this article, let me quote:

"Team members began to affect loud maniacal laughter whenever they discovered software defects. Some individuals even grew long mustaches which they would twirl with melodramatic flair as they savaged a programmer's code."

15
aangjie 4 days ago 0 replies      
All throughout reading about the Black Team, I couldn't help but recall the Stanford Prison Experiment (http://en.wikipedia.org/wiki/Stanford_prison_experiment) and how the article says a lot about how people behave in groups... Hmm... odd, given the goal of the article.
27
SETI Institute suspends search for aliens mercurynews.com
227 points by sage_joch  20 hours ago   92 comments top 17
1
javanix 12 hours ago 2 replies      
Just in case anyone didn't bother reading the article - SETI is not calling off all of their searches, just shutting down one of its main radio arrays in Mountain View. Their other time-shared operations sound like they'll continue.

Not really sure why people feel like misrepresenting articles like this all the time, but this is one of those cases.

2
worldvoyageur 19 hours ago 4 replies      
This is sad to read, but perhaps an opportunity in disguise.

It is a risk for any firm to be highly dependent on one customer for most of its revenue and SETI appears to have been highly dependent on US government funding.

It is dangerous to make conclusions based on a newspaper article, but it seems they found $50 million in private donations way back when to build the network. Now they can't find $2.5 million per year to run it?

If nothing else, something is seriously wrong if donors that ponied up $50 million to build the Ferrari no longer want to shell out $2.5 million to run it.

The quote: "if everybody contributed just 3 extra cents on their 1040 tax forms, we could find out if we have cosmic company." suggests that the organization is focused on government funding, rather than individual donors - and continues this focus even after an evident failure of that model to keep things running. Meeting the needs of government funders and meeting the needs of donors are entirely different things. In my experience, you can have one or the other, but not both.

They should step back and rethink how they can make their non-profit work primarily based on voluntary donations. The world is full of non-profits that manage it with budgets much larger than the $2.5 million/year SETI appears to need.

Link to send SETI money: http://www.seti.org/page.aspx?pid=1468

3
MichaelApproved 16 hours ago 0 replies      
"There is a huge irony," said SETI Director Jill Tartar, "that a time when we discover so many planets to look at, we don't have the operating funds to listen."

That sounds more unfortunate than ironic. It would be ironic if funding went directly from SETI to the satellite which found the planets.

4
urbanjunkie 19 hours ago 3 replies      
Although I think the seti@home project was one of the groundbreaking distributed/crowdworking projects, my personal opinion is that the actual search for ETI is a waste of time (not factoring in any advances in science/engineering that have been derived from the project).

I'm a firm believer in the most common solution to the Fermi Paradox (http://en.wikipedia.org/wiki/Fermi_paradox) - ie there's nothing else out there - because for any civilisation that's even slightly more advanced than us (for galactic values of slightly), the effort to make a noticeable impact on the galaxy is reasonably trivial.

5
bfe 17 hours ago 0 replies      
The beginning of the article is incorrect in identifying the Allen Array as the array in the film "Contact". That was the Very Large Array.
6
kordless 11 hours ago 3 replies      
This seems like massive mismanagement of money. Paul Allen contributed $50M to build the array. Whoever was managing that project should have thought about setting aside 30% or so for running it after it got built. Didn't have enough to build all the scopes? Build a few at a time.

Presumably this is what happens when you don't have clear goals and objectives laid out - and oversight to make sure you don't screw it up.

Pretty sad. They should be asking for new leadership and $20M to run it for 10 years.

7
russell_h 19 hours ago 2 replies      
Searching for aliens seems like the ultimate high risk/high reward investment opportunity. If someone secretly made contact with an advanced extraterrestrial civilization the potential for profit would be huge. On the other hand, good luck executing on that.
8
stevenj 19 hours ago 3 replies      
This needs to be crowd-sourced.

Surely there are 5 million people who'd be willing to donate a dollar each for this.

9
erik_p 16 hours ago 0 replies      
This sounds like it has the potential to be the largest Kickstarter-funded project to date!
10
barisme 19 hours ago 1 reply      
A better use of funds would be grants or investments in non-government space exploration / cargo / communications / research. That could be nonprofits or companies like Elon Musk's SpaceX. The latter is actually producing something we need.

SETI will detect radio signals from an intelligent source, sure. "We have detected extraterrestrial intelligence, and it is us." It's better for US interests if the source is speaking English, not Russian (only program capable now of delivering astronauts to ISS) or Chinese. So don't mourn the loss of SETI funding. Celebrate when you find out that it's going to something more productive.

11
ck2 19 hours ago 1 reply      
Part of me hopes the secret reason for this is that they've actually discovered something, so they don't feel they need to keep looking right now.

Paul Allen is still worth billions, can't he help a little more?

12
ignifero 14 hours ago 1 reply      
My worst fear comes alive, people; they walk among us, suspending our search projects.
13
AllenKids 13 hours ago 0 replies      
Google to the rescue!

Seriously: first, they are both located in Mountain View. Second, it is not that much money to begin with, at least for Google. Third, it totally fits Google's pet project description - wildly awesome but realistically little chance of producing results.

14
HelloBeautiful 17 hours ago 0 replies      
Seth Shostak's podcast - http://radio.seti.org/ . Highly recommended ;-)
15
growt 10 hours ago 0 replies      
Some TV channel on Alpha Centauri just lost a viewer. Or maybe not, since we haven't found the channel yet.
16
mcorrientes 16 hours ago 0 replies      
I feel really sorry about it and I hope they'll make it.

Maybe they should consider writing applications people appreciate more.

They could write applications that involve more people in the search.

I don't know what could reduce the time to find ETI, but I would bet SETI could get many volunteers if they just provided a social network application (e.g. on Facebook).

There must be more than just distributing the calculations.

17
suprafly 7 hours ago 0 replies      
They should 'open-source' this project.
28
Finally someone makes sense of JavaScript's this keyword javascriptweblog.wordpress.com
225 points by legomatic  3 days ago   52 comments top 16
1
statictype 3 days ago 7 replies      
I used to always run into trouble when trying to use 'this' inside a closure defined within an object's method.

Now, I just put

    var that = this;

at the top of every method and use 'that' everywhere in the function.
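
For example (a sketch of the usual failure and the fix):

    function Counter() {
        this.n = 0;
        var that = this;          // capture the constructor's `this` lexically
        setInterval(function () {
            that.n++;             // a bare `this` here would be the global object
        }, 1000);
    }
    var c = new Counter();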

2
djacobs 3 days ago 0 replies      
Douglas Crockford does a good job of explaining this in JavaScript: The Good Parts. That book comes highly recommended.
3
jessedhillon 3 days ago 2 replies      
Douglas Crockford has said that he considers Lisp and Javascript to be similar languages. For that reason, I think it would help those confused/frustrated with 'this' to read the chapters of SICP that relate to the environment model, and to 'eval' and 'apply':

Environment model of evaluation, http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-21.html...

Metacircular evaluator, http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-26.html...

The environment model is crucially helpful. Applying it to Javascript, think of bound functions (ones which are methods on instantiated objects) implicitly having a 'let this = instance' statement prepended to their bodies.

4
gruseom 3 days ago 3 replies      
I simply made a pact with myself never to understand that monstrosity and so never to use it (except when forced to, e.g. by some library). My voyages in Javascript have been delightful ever since.
5
aneth 3 days ago 0 replies      
Sense could be an overstatement.
6
stewbrew 3 days ago 0 replies      
Quite frankly, I personally fail to see the sense of this.

Interesting read though.

7
mdpm 3 days ago 3 replies      
dear coffeescript,

thank you for =>

8
d_r 3 days ago 0 replies      
I also found JavaScript Garden (posted on HN a few weeks ago) incredibly useful in un-confusing my understanding of JS.

http://bonsaiden.github.com/JavaScript-Garden/

9
hackermom 3 days ago 0 replies      
For the pragmatists, a simpler and less cluttered example of the constructor, illustrating what 'this' does and how you can pass the instance as an argument:

  function value()
  {
      this.v = 32;
  }

  function twice( n )
  {
      n.v += n.v;
  }

  number = new value();
  document.write( number.v + ' ' );

  twice( number );
  document.write( number.v );

10
pom 2 days ago 0 replies      
I had trouble with this at first, but an easy thing to remember is that "this" is scoped dynamically; as many have noted before, you can easily create a lexical "that" or "self" reference from it, but the other way around is impossible. This dynamic scope is a bit strange when everything else has local scope but it can be quite powerful.

When "this" gives you a hard time, the Function.bind() method is often a good solution too.

11
nitrogen 3 days ago 1 reply      
What is the benefit to having 'this' defined based on where a function is executed rather than where it is created? Is this mainly an artifact of the ability to assign a function created in one place to an object created elsewhere?
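
Concretely, the behavior being asked about (a minimal sketch):

    function whoAmI() { return this; }
    var a = { whoAmI: whoAmI };
    a.whoAmI() === a; // true: `this` is supplied by the call site
    whoAmI() === a;   // false: called bare, `this` is the global object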
12
Breefield 3 days ago 0 replies      
I now feel fortunate to have started out with Actionscript 2. If there's anything that language drives home it's "this," due to the fact that you're usually writing code specific to an instance object on the stage. So the concept of var self = this; makes too much sense.
13
14
discopalevo 3 days ago 1 reply      
Nothing about the context when the function is called inside an expression:

    var a = { b: function(){ /* ... */ } };
    a.b();         // this === a
    var c = false;
    (c || a.b)();  // this === window

When called that way, the context is the global object.

15
tocomment 3 days ago 0 replies      
I still don't get it :-(
16
kwamenum86 3 days ago 0 replies      
duh.
29
Joel Spolsky: Can your programming language do this? joelonsoftware.com
214 points by krat0sprakhar  3 days ago   110 comments top 28
1
onan_barbarian 3 days ago 8 replies      
I think there's some reasonable stuff buried in here, I really do.

But... having actually spent some time in the trenches dealing with a hard problem on a massively parallel machine - more than once - I find it hard to believe that something like map/reduce or the like - or any given small-scale language feature is going to be particularly significant in terms of parallelizing any goddamn thing that's actually hard to do. I see a lot of fatuous claims that language feature X is the missing link in parallelism for the everyday working programmer but I don't see a lot of new solutions for anything hard as a proof of concept.

We've only had OpenMP and all sorts of other kludgy extensions for Fortran and C for what, about 15 years? I'm not saying that they're great or elegant or anything, but so many of the things that are hard about parallel programming are NOT FUCKING SOLVED BY MAP-REDUCE. Oops, sorry, shouty. But anything that can be solved by map-reduce wasn't enormously hard to begin with. Map-reduce was itself initially publicized in terms of 'less useful but easier for mere mortals than generalized parallel prefix' which made sense to me.

What doesn't make sense for me is all this frenzied enthusiasm for dragging parallel programming into the never-ending programmlng language abstraction wars; at least when the topics being discussed only touch on the very shallowest things needed by parallel programming. You want some respect, solve something hard.

Yes, you can do the same thing to each element of an array. Whaddya want, a cookie?

2
kragen 3 days ago 8 replies      
> Correction: The last time I used FORTRAN was 27 years ago. Apparently it got functions.

FORTRAN had user-defined functions since FORTRAN II in 1958; see http://archive.computerhistory.org/resources/text/Fortran/10... on page numbers 5, 14, and 15.

Joel unfortunately completely misses the point of why C and Java suck at this stuff: you can use functions as values in them (anonymous inner classes in Java) but they aren't closures. And his comment about automatically parallelizing "map" is a little off-base; if you take some random piece of code and stick it into a transparently parallel "map", you're very likely to discover that it isn't safe to run multiple copies of it concurrently, which is why languages like Erlang have a different name for the "parallel map" function. The "map" in MapReduce is inspired by the function of that name in Lisp and other functional languages; it isn't a drop-in replacement for it.

As usual, while Joel's overall point is reasonably accurate, most of his supporting points are actually false to the point of being ignorant nonsense. I think someone could tell as good a story in as entertaining a way without stuffing it full of lies, although admittedly my own efforts fall pretty far short.

3
grav1tas 3 days ago 1 reply      
I think it might be important to note that while the terms map and reduce do come from Lisp, they're not one-to-one with what these functions do in Lisp. The original MapReduce paper mentions the borrowing, but doesn't really go into specifics. There's a good paper by Ralf Lämmel that describes the relation that MapReduce has to "map" and "reduce" at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.104.... . I liked this paper much better and found it the most informative functional explanation to MapReduce (note, it's in Haskell).

I think MapReduce is really part of a more general pattern where you have an (in more Haskell-y terms) unfold (anamorphism) to a foldr (catamorphism). If your operations on the items in your intermediate set of data in the MapReduce are associative/commutative, you can work out parallelization more or less for free. It's pretty cool stuff, and really not that complicated when you sit down and think about it.
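
A sketch of why associativity buys parallelism (JavaScript standing in for the Haskell; the workers are imaginary):

    var add = function (x, y) { return x + y; };  // associative, so splitting is safe
    var xs = [1, 2, 3, 4, 5, 6, 7, 8];

    var sequential = xs.reduce(add, 0);
    var left  = xs.slice(0, 4).reduce(add, 0);    // could run on one worker
    var right = xs.slice(4).reduce(add, 0);       // could run on another
    console.log(sequential === add(left, right)); // true: 36 either way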

4
JonnieCache 3 days ago 3 replies      
Something that I only realised the other day which made me feel kinda embarrassed: in ruby, the reduce method is called inject.

For years I've been doing MapReduce functions, without realising it. MapReduce was in my mental pile of "genius things that cleverer people than me do, must be looked into when there is time."

For info on inject: http://blog.jayfields.com/2008/03/ruby-inject.html

5
sthatipamala 3 days ago 1 reply      
This article shows that Javascript is truly the common man's functional programming language. Despite its ugliness, it got lambdas/anonymous functions right.
6
ajays 3 days ago 2 replies      
He gives reduce as an example of "purely functional programs have no side effects and are thus trivially parallelizable.", but reduce by definition is not trivially parallelizable.
7
tybris 3 days ago 0 replies      
Yup, any language I've worked with, including Java and C, can do that just fine. They just spread the verbosity differently. Organizing large projects is a pain in JavaScript, trivial in Java. Using anonymous functions is a pain in Java, trivial in JavaScript.

(Not so) fun fact: The public MapReduce services by Google and Amazon do not (directly) support JavaScript.

8
gaius 3 days ago 0 replies      
FTA:

The very fact that Google invented MapReduce, and Microsoft didn't, says something about why Microsoft is still playing catch up trying to get basic search features to work

I don't believe this is true, and that's easy to prove: There was parallelism of SELECTs in SQL Server 2000. So there is a part of MS that is perfectly happy with the idea, even in another bit of MS isn't. They just need to talk more...

9
chuhnk 3 days ago 8 replies      
Has anyone else just read this and realised they need to go off and learn some form of functional programming? I ignored it for such a long time because I felt it wasn't relevant to my current situation, but I was wrong. You gain some incredible fundamental knowledge that you would otherwise be completely oblivious to.

Is lisp really the way to go though?

10
rivalis 3 days ago 0 replies      
Even when I'm working in a language that doesn't have first class functions, I find it easier to lay out my code by writing functional pseudocode and then "unrolling" maps into loops, closures into structs/objects, compositions into a sequence of calls, etc. It probably leads to idiomatically awful Java, but I find it easier to read and write, and nobody else needs to deal with my code. So...
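
For instance, the map-to-loop step of that unrolling looks like this (a sketch in JavaScript):

    var prices = [1, 2, 3];

    // functional version:
    var doubledA = prices.map(function (p) { return p * 2; });

    // unrolled for a language without first-class functions:
    var doubledB = [];
    for (var i = 0; i < prices.length; i++) {
        doubledB.push(prices[i] * 2);
    }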
11
justwrote 3 days ago 0 replies      
Yes, it can! Scala:

  def Cook(i1: String, i2: String, f: String => Unit) {
    println("get the " + i1)
    f(i1)
    f(i2)
  }

  Cook("lobster", "water", x => println("pot " + x))
  Cook("chicken", "coconut", x => println("boom " + x))

  List(1,2,3).sum
  List(1,2,3).mkString
  List(1,2,3).map(println) // or more verbose List(1,2,3).map(x => println(x))

12
pmr_ 3 days ago 0 replies      
Today I tried to explain to someone what exactly boost::mpl::fold does and how it is supposed to be used (for those unfamiliar: boost::mpl is a collection of compile-time metaprogramming mechanisms for C++).

It took me a while to realize that the person I was explaining it to had little trouble with the templates and compile-time part, but close to no idea what a fold or a lambda is.

Not knowing some basics of functional programming can keep a person from understanding so many different things, and I have encountered these basics in different fields (e.g. explicitly in formal semantics, or implicitly in different theories of morphology).

I think the real point here is that different paradigms offer you new views onto the world and enhance your understanding, all the programming-language things aside.

13
becomevocal 3 days ago 1 reply      
I think this could also be called "can your brain think like this?"... Many programmers shy away from thinking at a massive scale and tend to approach problems with similar, familiar codebases.
14
svrocks 3 days ago 1 reply      
Does anyone else think it's a travesty that the AP Computer Science curriculum is taught in Java? Java was my first programming language and I've spent the past 8 years trying to unlearn most of it
15
hasenj 3 days ago 0 replies      
I think this article was my first introduction to functional programming.

Yea, don't look at me like that. My university mostly taught us Java/C++; we only did functional programming in one course.

16
cincinnatus 3 days ago 1 reply      
I don't like the way inline functions hurt the readability of code. Is there anything out there that solves that issue?

Also, I haven't had an excuse to use it yet, but F# seems to have great syntactic sugar for parallelizing things in a more natural way than the typical map/reduce.
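
One low-tech fix, sketched in Ruby (assumes 1.9 for the -> lambda literal): bind the function to a name once, so the call site stays short.

  double = ->(x) { x * 2 }  # name the lambda...
  [1, 2, 3].map(&double)    # ...and keep the call site readable: [2, 4, 6]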

17
hdragomir 3 days ago 0 replies      
I remember my days as a CS student.

The single most mind-opening course I took was functional programming, where I learned LISP and Prolog.

That knowledge is crucial today, as it deeply changed my mindset when tackling almost any problem.

18
buddydvd 3 days ago 1 reply      
Can Xcode 4 compile code using Objective-C blocks into iOS 3.x-compatible binaries? This article made me realize how much I miss anonymous functions/lambda expressions from C# and JavaScript.
19
bluehavana 3 days ago 2 replies      
It's funny that he mentions Google as an example of a company that gets the paradigm, but most of Google is C++ and Java. C# has better functional paradigm support than both of those.
20
ericf 3 days ago 1 reply      
I implemented these examples in Ruby 1.9; I'd love to know if there are more efficient ways of doing some of these:

  def cook(p1, p2, f)
    puts "get the " + p1.to_s
    f.call(p1)
    f.call(p2)
  end

  cook("lobster", "water", lambda { |x| puts "pot " + x })
  cook("chicken", "coconut", lambda { |x| puts "boom " + x })

  @a = [1, 2, 3]
  @a.map { |x| puts x * 2 }
  @a.map { |x| puts x }

  # use the method's own argument (the originals read the @a instance
  # variable and shadowed `a` inside the block)
  def sum(a)
    a.reduce(0) do |acc, x|
      acc + x
    end
  end

  def join(a)
    a.reduce("") do |acc, x|
      acc.to_s + x.to_s
    end
  end

  puts "sum " + sum(@a).to_s
  puts "join " + join(@a)

21
nickik 3 days ago 1 reply      
WOW, welcome to the year 1959.
22
leon_ 3 days ago 0 replies      
Yes, Go lets me do this. Though I don't like passing anonymous functions too much, as the code becomes hard to read rather quickly.
23
mncolinlee 3 days ago 0 replies      
The moment I read this, I immediately thought of the work I performed on Cray's Chapel parallel language. Chapel has an elegant way of expressing functional parallel code like this that is much more difficult to write in Unified Parallel C and High Performance Fortran. In fact, one Google search later, I found a student's presentation on Chapel and MapReduce.

http://www.slidefinder.net/l/l22_parallel_programming_langua...

24
mkramlich 3 days ago 0 replies      
ahhh... Joel at his best. great piece of writing. and a gem about programming languages and abstraction.
25
SpookyAction 2 days ago 0 replies      
"Look! We're passing in a function as an argument.
Can your language do this?"

Umm, yes it can....

  #!/usr/bin/perl
  use strict;
  use warnings;

  sub cook_time {
      my ($hours, $min) = @_;
      return "$hours hours and $min minutes\n";
  }

  sub cook_animal {
      # $get_time is a code reference: the function itself is the argument
      my ($animal, $get_time) = @_;
      return "Cook $animal for " . $get_time->(5, 23);
  }

  print cook_animal('cow', \&cook_time);

26
ScotterC 3 days ago 0 replies      
Last time I used FORTRAN was all of 11 months ago. Thank god I've moved on to O-O and can actually declare functions.
27
jasonlynes 3 days ago 1 reply      
i'm smarter for reading this. need more.
28
mariusmg 3 days ago 0 replies      
So are we supposed to be impressed by closures now? Or are we supposed to be impressed that the "great" Joel Spolsky (an ex-manager on the Excel team!!!!) writes about them?
3
Joel Spolsky is doing an IAmA on reddit reddit.com
208 points by chrisboesing  6 days ago   52 comments top 4
1
chrisaycock 6 days ago 5 replies      
Every time you feel like you've made the world better by upvoting a story about injustice, you're just making yourself feel smug. Forget the upvotes... go work on making the world a better place.

He was writing about how stories of social injustice get a ton of upvotes, but nobody actually goes out and does anything to fix the situation. I'm sure there's a lesson here for HN.

2
euroclydon 6 days ago 12 replies      
I really want to learn C, like he says. I get plenty done without knowing it, and I have few doubts I can continue to find decent work without knowing it, but I haven't been able to gain any traction when I try to learn it.

I've got the books sitting in front of me, and I've written some trivial visualizations of sorting algorithms using terminal output, but damn if I can find a way to use C as a web developer. If there were just some use case where C would help me get something done, I'd be all over it.

3
ceejayoz 6 days ago 2 replies      
A dedicated ama.stackexchange.com could be an interesting experiment.
4
Apocryphon 6 days ago 4 replies      
He mentions how functional programming is valuable, something that many graduates lack. Does JavaScript count as a functional language?
       cached 27 April 2011 02:11:01 GMT