hacker news with inline top comments - 28 Jul 2012 - Best
Google Fiber Plans & Pricing fiber.google.com
1072 points by stevewilhelm  1 day ago   451 comments top 108
api 1 day ago  replies      
Thank GOD they're disrupting this industry. All the telcos are worthless "Soviet bureaus" that deserve death.

It's really sort of amazing to think about. In all my years on this planet I have never been pleased with a telecom company. Never. I have always felt like I was paying far too much for inferior service, contemptible customer support, and endless efforts to further "monetize" me through harassing phone calls trying to sell me more stuff, intrusive DNS systems that redirect me to their crappy web sites, etc.

There is not a single other industry I can say that about, certainly not one that comes to mind so quickly. Hell, I can't even say that about the government.

As if that's not bad enough, this industry seems to spend a lot of money lobbying to destroy the open Internet, which is like a car company lobbying to increase the number of annoying traffic regulations in order to make it less enjoyable to drive.

Xcelerate 1 day ago 6 replies      
When I was at Georgia Tech, I got 650 Mbps. At that point you realize the bottleneck isn't your connection -- it's everyone else's, which means a lot of sites still load just as slowly. The big sites have optimized data centers, though, so it was pretty cool downloading an entire OS in a few seconds (although I think my hard drive's write speed limited that a bit too).
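
A rough back-of-the-envelope sketch of the bottlenecks Xcelerate mentions. The 4 GB ISO size and 120 MB/s disk write rate here are illustrative assumptions of mine, not figures from the comment:

```python
# Back-of-the-envelope transfer times at various link speeds.
# Assumed numbers: a 4 GB OS image and a ~120 MB/s spinning-disk write rate.

def transfer_seconds(size_gb: float, link_mbps: float, disk_mb_per_s: float) -> float:
    """Time to fetch size_gb, limited by the slower of the link and the disk."""
    size_mb = size_gb * 1000          # decimal GB -> MB
    link_mb_per_s = link_mbps / 8     # megabits/s -> megabytes/s
    effective = min(link_mb_per_s, disk_mb_per_s)
    return size_mb / effective

for mbps in (15, 650, 1000):
    print(f"{mbps:>5} Mbps: {transfer_seconds(4, mbps, 120):.0f} s")
```

Under these assumptions, 650 Mbps fetches the image in under a minute, and at a full gigabit it is the disk (120 MB/s = 960 Mbps), not the link, that becomes the limit -- consistent with the commenter's hunch about hard drive write speed.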

My father works in the fiber optics industry and has told me that if fiber was brought directly to each home, every person would have more bandwidth than they knew what to do with. One thin, tiny fiber can carry an INSANE amount of information. The problem is the processing circuitry that converts these light signals into digital signals. These NICs have a much lower throughput than the fiber itself, but if the fiber infrastructure was already in place everywhere, upgrades would be much cheaper and quicker. (In other words, Google Fiber has easy upgrade potential to 10, 100, ... Gbps).

dhughes 14 minutes ago 1 reply      
Being in Canada, this is going to suck immensely: people just across the border will have 1Gbps while I pay $100 for 15Mbps down / 950Kbps up (cable TV included).

Pretty soon I bet I'll come across someone saying, "Here, look at this file... what's wrong? It's only 9GB, you should be able to download this no problem."

Or on Xbox: oh crap, I'll forever be stuck with racist Scottish ten-year-olds, since I'll never get matched to my own region when everyone else is on 1Gbps.

recoiledsnake 1 day ago 6 replies      
While I am super happy that the cable industry is getting disrupted as it deserves to be (the wireless industry sucks too; maybe a tech company can buy T-Mobile or Sprint), I feel uneasy about the following two points:

1) Profitability: I wonder how profitable and sustainable this is for Google? They can subsidize it for only so long before deciding to discontinue it, as has been happening to many of their services recently.

2) With Google as your ISP and TV channel provider, they will have the potential to know everything about you, probably even more than you know about yourself! Imagine you use all their services: they will know your surfing habits (via ISP, Google searches and clicks, G+ buttons, Google ads, Google Analytics, etc.), your location (Android location services/GPS, Google Maps), your TV viewing habits, Gmail, your social network (Google+), your phone calls (Google Voice), and so on. It will also provide malicious and state actors with a one-stop shop to steal or request information from.

pizza 1 day ago 6 replies      
Usually when people in Silicon Valley use the word "amazing", I roll my eyes.

But $0 internet is very amazing.

On top of that, $120/mo for cutting edge consumer entertainment is just a slap in the face to other service providers.

corford 1 day ago 1 reply      
I might just be overcome with the sheer impressiveness of this, but it feels like the real reason behind everything Google has done over the last 5 or so years (YouTube, Android, Chromebook, buying dark fibre, Chrome, G+, tablets etc.) has suddenly come into crystal-clear focus. They're going for the jugular with a grand unified strategy the likes of which the tech industry has never seen before.
InclinedPlane 1 day ago 0 replies      
If this pans out it'll be a huge boost for civilization for one reason:

The distributed internet. Today the internet as a transport layer is a multiply redundant mesh, to some degree, but the content of the internet is still highly centralized.

Now imagine a world 25 years from now where everyone has >100 Mbit/s internet connections and multi-terabyte SSD (or memristor) drives are commonplace and cheap. The idea of caching huge chunks of the internet not only starts making sense but becomes eminently practical. Web sites will be viewed less like applications and more like git repos with front-ends.
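
An illustrative sketch (my own, not from the comment) of the kind of content-addressed caching a "git repo with a front-end" web might use: content is keyed by its hash, so identical bytes have one key and any nearby peer holding them can serve the request.

```python
# Minimal content-addressed blob store, the building block of git-style caching.
import hashlib

class ContentStore:
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        """Store bytes under their SHA-256 digest; storing twice is idempotent."""
        key = hashlib.sha256(data).hexdigest()
        self._blobs[key] = data
        return key

    def get(self, key: str):
        """Return the bytes for a key, or None if we don't hold them locally."""
        return self._blobs.get(key)

store = ContentStore()
key = store.put(b"<html>hello</html>")
assert store.get(key) == b"<html>hello</html>"
```

Because the key is derived from the content itself, a cache miss can be satisfied by any peer at all: the hash lets the client verify the bytes regardless of who served them.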

tokenadult 1 day ago 0 replies      
I browsed around the site, and although I have friends and relatives who live in the Kansas City metropolitan area, I don't live there myself. So as the rollout happens there, I'll be glad to hear from HN participants who live there what's happening in the local Internet access market. Meanwhile, I'll still be dealing with the same SMALL subset of providers who reach my neighborhood as I have for years. Please let us know what you think of Google Fiber when it arrives in your neighborhood, okay?

neilk 1 day ago 0 replies      
Does this alter the trend towards mobile? Maybe not in terms of the kind of device you use - it might still be a tablet or a phone-like device - but in terms of data delivered over mobile telephony networks?

Perhaps part of the reason that my iPhone can compete with my desktop is that home internet is just so terrible. Maybe there will be a whole new class of bandwidth hungry apps that can't be duplicated in anything going over a phone network.

In places like Korea I believe they have high bandwidth fiber everywhere; what's the experience there?

topherjaynes 1 day ago 2 replies      
Was so excited, then saw it was limited to just Kansas City. What a tease! But glad to see they're 'shaking' things up.
pwny 1 day ago 1 reply      
I guess living in Montreal means I'm not getting a piece of this action any time soon.

It had the merit of making me reconsider how I live my life, though. If you think about moving to another country, even for just one quick second, for an internet service, it's probably way too central to your way of life to be healthy.

babar 1 day ago 7 replies      
The cable channel lineup looks limited (no ESPN? can you add HBO?), so I wonder if they will get much adoption there.

Internet pricing looks fantastic.

spiffworks 1 day ago 0 replies      

Read this first, and read it well.

patrickod 1 day ago 1 reply      
I'm hoping that this becomes available in the bay area in the coming years. $70/month for gigabit connection is an absolute no brainer.
Kell 1 day ago 0 replies      
As a matter of comparison. Here a regular situation in France :

I pay €63 for a nominal 100 Mbps fiber connection, plus free calls from my landline (to other landlines in most of the world: more than 100 countries, including all the big ones, but neither Africa nor the Middle East; and to mobile phones in France and the US), plus 170 television channels (that I don't watch anyway), plus a femtocell device, plus a subsidized smartphone (2-year contract) with unlimited calls to other mobiles or landlines in France, free SMS and MMS, and 1GB of "high speed" (on average 2 Mbps) 3G, after which it switches to low speed (512 Kbps). Oh, and I get Spotify Premium. All that for €63 a month (that's 78 US dollars), though I'm on a 24-month contract. It's the usual "quad play" offer (television + phone + internet + mobile phone) here in France. There are cheaper ones.

So yeah, at $200 somebody is being ripped off. But the Google $120 option doesn't seem too bad: it's a little less than twice the price I pay here in France, but with great goodies like the tablet and an incredible 1 Gbps, ten times my official speed. So the markup is deserved.

To French people, I have the SFR MultiPack offer with the Carré Web + the Spotify Premium at 5€ option, and the Classic Fiber option (not the Evolution one) cf. http://www.sfr.fr/mobile/multi-packs-de-sfr-mobile-et-adsl.j...
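
A quick sanity check on Kell's "a little less than twice" claim, using the comment's own €63 ≈ $78 conversion (the exchange rate is the commenter's, not mine):

```python
# Compare Google Fiber's $120/mo plan with the French quad-play offer above.
french_usd = 78.0    # commenter's own conversion of EUR 63
google_usd = 120.0

ratio = google_usd / french_usd
print(f"Google plan costs {ratio:.2f}x the French offer")  # prints 1.54x
```

At these figures the ratio is closer to 1.5x than to 2x, so the $120 plan compares even more favorably than the comment suggests (though it bundles different services).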

jiggy2011 1 day ago 0 replies      
1 Gigabit Download and Upload!

I have always believed that the internet would have been a fundamentally different place if we had had fast upload from the beginning.

Rather than having everything in the cloud (i.e. big corporate datacenters), people would have gotten used to running small servers at home. After all, wasn't the web originally designed to work more in this way?

This would have fostered a much better culture of privacy and freedom: when you have ultimate control of what you put online, you get freedom from TOS etc.
Sure, there are advantages to big datacenters, such as backup and fireproofing, but is every piece of data really that precious?

yumraj 1 day ago 0 replies      
Their free tier is better (with a higher upload speed) than the max AT&T can provide my home, via DSL, in San Jose, Capital of Silicon Valley.
Go figure....
dkhenry 1 day ago 0 replies      
Nexus 7 as the remote. This could really be a game changer for tablets, and for google.
swalsh 1 day ago 0 replies      
I think Google has another unmentioned monopoly: some of the smartest people in the world are employed by them. They have experience in AI, and as the self-driving car shows, they also know how to do practical things with it.

Google, being the kind of company that is opposed to hiring an army of manual laborers, could very well create an army of digging robots to lay the fiber for them. I can see Google scaling this faster than anyone else.

zupreme 1 day ago 0 replies      
Here's a good quote from a friend of mine, Craig Clausen of New Paradigm Resources Group (a market research firm specializing in telecom):

"This is more of a strategic play for Google. As they mentioned, there are three essential components to the Internet (i.e. Google's business model): computing power, cloud storage and access. The first two have worked in Google's favor. The third hasn't kept pace. The telcos and other access providers (like the cablecos) don't care about what's good for Google, and broadband speeds aren't where Google would like them to be. As they've done in similar situations (such as wireless and energy), Google is signaling to the telcos that, while they'd prefer not to be in the telecom business, they are certainly capable of it. Picking Kansas City, smack dab in the middle of the country, has less to do with Kansas City showing the most interest and more to do with being able to distribute that message evenly between the coasts. It's a different type of investment for Google, one in which they hope to spur the access providers into stepping it up or risk having the guy with the deepest pockets step in for them."

michaelhoffman 1 day ago 3 replies      
Why start this in Kansas City? Is this like trying to produce a future Broadway show in New Haven or San Diego, so you can work out the kinks before you go to the big time?
dr_ 1 day ago 0 replies      
Very interesting. All the packages sound great but it's really the free internet that's going to prove to be disruptive.
I, as many others I'm guessing, have moved away from traditional television, watching a lot of stuff on my laptop and iPad. There are a couple of TV shows I enjoy, but I'm not bound to them and don't mind purchasing the episodes at a later time. And you have to imagine that Apple and Google will strike their own deals with content creators at some point.
jrwoodruff 1 day ago 0 replies      
Does it surprise anyone else that they're offering television with this package?

Clearly it makes sense, and seems like a great way to give Apple TV/Roku etc. a run for their money. Plus, if they expand this program, which they presumably will, it could put Google TV in a lot of homes... although it doesn't explicitly state that the 'tv box' will be running Google TV.

cygwin98 1 day ago 0 replies      
I looked at their Terms of Service:

> Resale and Redistribution
> The Fiber Services are intended for the personal use of you and others with whom you share your residence (including, within reason, guests who are visiting you). You agree not to resell or repackage the Services for use by people other than those with whom you share your residence. If you wish to use the Google Fiber Services to provide Internet service to others, you must enter a separate agreement with Google Fiber that specifically authorizes you to do so.

I do have some questions though. I get that a Fiber user cannot resell the bandwidth like a VPS provider. But what about hosting your own web server running an HN clone site?

romaniv 1 day ago 0 replies      
This would be awesome if it wasn't coming from Google. If they control the browser, a good chunk of websites/services and the delivery mechanism, there are just too many opportunities for abuse. Although, maybe this will force other telecom providers to stop sucking so much.
rplnt 1 day ago 5 replies      
Free internet. That's rather creepy from Google. If they roll this out big it will hurt other providers, which in itself isn't bad, but having only Google as a choice is. Can they even do that, by the way (anti-monopoly regulations)?
blinkingled 1 day ago 1 reply      
Kansas City, KS and Kansas City, MO - what's with the Kansas City fixation, Google?

This sounds good, but if they are going to take more than a few years just to move beyond Kansas City, I am not sure it will even be relevant in the grand scheme of things.

drivingmenuts 1 day ago 0 replies      
I want this in Austin. Now. Yesterday.

Seriously, the customer service is a non-issue. You can't do any worse than the telcos/cablecos other than having zero customer service ever.

If this forces the entrenched companies to man up and throw down some fiber, it's worth whatever it costs as long as no one is getting physically injured.

27182818284 1 day ago 0 replies      
Even their homepage blows away the others. Simple and flat. It tells me the Mbps for each plan and whether there are data caps. Contrast that with, say, Windstream, where you have to go searching to find out the speed of "Merge".

The homepage alone is going to do a lot of good for the industry.

mikemoka 1 day ago 0 replies      
I'm surprised that I wasn't able to find the word "privacy" mentioned anywhere on the first page.

I think that Google is clearly losing the "don't be evil" credo, whatever you may think about a specific industry.

Google is trying to do everything. Google is not satisfied with ads and is trying to remove the middlemen, all of them.

They tried to attack the fashion e-commerce niche with Google Boutiques, they tried to attack Groupon with Google Offers, and now they're becoming an Internet provider. But why? Here are my two cents:

- they will be able to know everything, and I mean everything, their users do online, and they are one of the few organizations actually able to parse and exploit data at this scale
- they will be able to know the identity of their users
- they may be able to control the speed or stability of the connection to specific sites; they may even hijack specific banner ads without anyone noticing

______ 1 day ago 0 replies      
The country of Australia is spending over $31 billion on a project called the National Broadband Network (NBN) to bring fiber directly to 97% of homes. That project is enormous in scale compared to the Kansas City experiment.
azakai 1 day ago 5 replies      
Does anyone know how they make money from the free option?
delackner 1 day ago 0 replies      
Can't believe no one has mentioned that the TV option does not include AMC, HBO, ESPN, Disney Channel, TNT, CNN, or Cartoon Network (list from The Verge).

So the only channel it has that I would actually watch is The Daily Show (I mean Comedy Central), which I can watch on the net for free anyway (in a much better extended-length version).

Spreading real modern internet infrastructure to the masses in the US though, yes that is a very positive thing.

xutopia 1 day ago 0 replies      
This reminds me of when Bill Gates was purported to have said that he'd enter the RAM market if prices didn't go down. Google has nothing to lose here. They want a faster web because it means more chances to serve ads. If ISPs aren't willing to provide it, they will... and once they start, the ISPs will have to follow suit.
kayzee 1 day ago 1 reply      
Is anything actually 'free' in this world? I wonder how much of my data they will use without my permission to serve me ads... although I guess targeted TV commercials are a heck of a lot better than the random stuff I see now.

Double-edged sword if you ask me.

gojomo 1 day ago 0 replies      
This is another demonstration of why federal network neutrality regulations would be premature.
soccerdave 1 day ago 0 replies      
Comcast in my area has been hurting due to the local power company offering fast fiber speeds. This is going to be much worse for those local companies. I'm not saying it's bad, that's what happens when you spend 10 years without innovating.
DASD 1 day ago 0 replies      
This might just make Kansas City the tech center scene for colocation startups. Run this out of your bedroom. Amazon watch out!

Note to self: Cross KC off my list of colo locations if an offer appears with outrageously low pricing.

normalfaults 1 day ago 0 replies      
The rally concept is pretty neat: if your fiberhood gets enough registrations, you get the next install batch, plus all schools, government buildings and other public establishments in your fiberhood get gigabit.
funthree 1 day ago 0 replies      
> Free internet at today's average speeds

> $300 construction fee (one time or 12 monthly payments of $25) + taxes and fees

Google is coming full circle. I wonder whether the cost to Google of giving internet to people who wouldn't otherwise have it (and at no charge to them) will actually pay dividends, because those people will immediately start generating new revenue for Google through the inevitable use of Google search and AdWords. I think John D. Rockefeller would tip his hat.

engtech 21 hours ago 0 replies      
Does anyone know what hardware companies they are using for their network?

Are customers going to have an optical transceiver in their house?

calinet6 1 day ago 0 replies      
Who wants to bet these have HDs (or SSDs) in them and are set up for peer-to-peer content caching on the local city network?

They'd be stupid if they didn't do that.

bfe 1 day ago 0 replies      
I hadn't seen the rabbit mascot before, but I hope it's meant as a nod to Mr. Rabbit from Vernor Vinge's Rainbows End.
e28eta 1 day ago 0 replies      
I really want this to succeed. I have never had a great experience with high speed Internet.

I'd even consider adopting a KC home, and paying some or all of their $300 for free Internet if it meant it would come to the San Jose area sooner.

Maybe a Kickstarter to wire a fiberhood?

quellhorst 1 day ago 2 replies      
Meanwhile Verizon FIOS (fiber) is available in hundreds of cities. http://www.consumerfiber.com/fios-availability
daimyoyo 1 day ago 0 replies      
When this comes to Vegas I will buy a TV specifically to take advantage of this. If anyone at Google is reading this, please come here as soon as possible. Seriously, the internet quality out here is atrocious and I REALLY want to support this business model.
jshowa 1 day ago 0 replies      
Read through a couple of comments and all I see is people bitching about phone and cable companies.

The problem is, you can only get Google Fiber if enough people in your neighborhood want it and you meet a preregistration goal. Then you can choose the crappy DSL-speed free internet, or the gigabit one for $70/month, which is comparable to the Comcast rate, except with faster speeds. Of course, the more people choose gigabit, the more demand there is to justify a price increase.

And, to be honest, my employer runs on fiber with a link speed of 1 Gbps, and for Internet browsing there isn't that much of a difference compared to home.

antonioevans 1 day ago 2 replies      
How would they get this into SF or NYC? Verizon is currently testing 300 Mbps, and I am sure they will scale that up a bit in the next 1-2 years.
zumth 1 day ago 0 replies      
In the FAQ :

> Can I run a server from my home?
> Google Fiber is intended as a residential Internet service. Our Terms of Service prohibit running a server.

This scares me.

cocoflunchy 1 day ago 1 reply      
What's the average price for an average connection in the US?

Here in France, I get Internet + TV + free phone calls to 50+ countries for about $40/mo.

I don't think there is anything above £60-80/mo here (and that would include all of the above + mobile + paid TV channels), so $120/mo seems really kind of expensive to me!

pbhjpbhj 1 day ago 0 replies      
One day Google will launch something new and there will be a highlighted note at the top: "based on your IP, this release doesn't apply where you are [and probably never will]". Then I won't need to stare blankly for 10 seconds at an address form, working out how it's possible to enter a real [read: my] address into that weird configuration.

But then again, maybe not.

lawdawg 1 day ago 0 replies      
The $120/mo plan is incredible for what you get, considering 1TB on Google Drive is normally $50/mo. So jealous ...
jacoblyles 1 day ago 0 replies      
"fiber" is a horrible name for a suite of products! (But this still looks awesome)
JL2010 1 day ago 0 replies      
Did anyone else catch the "2 year contract" bullet point? Despite that, of all the Telcos out there, I suppose I would be more willing to sign one with Google than any of them.
denzil_correa 1 day ago 1 reply      
The most important piece of information I see, even more than the speeds, is "No Data Caps". Will this mean the end of Fair Usage Policies? I am anxiously hoping this is the beginning of the end of the evil FUP policies.
neduma 9 hours ago 0 replies      
The whole Comcast experience is really bad. We bought a new HDTV and went to the Comcast store to get an HD cable box to watch the 2012 Olympics tonight, but they gave us the wrong box and said they were sorry.
Trufa 1 day ago 0 replies      
Coming from a 3rd-world country, free Internet sounds like fiction: my expensive connection is slower than the free one Google is offering! Though I'm happy I will be one of the first homes to have fiber in the country.
dysoco 1 day ago 0 replies      
Living in Argentina, and paying quite a lot for a 3Mb/s connection, now I hate America a bit more :D
thenomad 1 day ago 0 replies      
So, anyone know if this is coming outside the US any time soon? I'm getting that "nose pressed against the glass" feeling...
hoka 1 day ago 0 replies      
Having spent the summer on true gigabit internet, it's a shame people won't even understand how big a difference this makes. Their site makes a cute attempt at it, but I don't think it does it justice. Gig internet just has to be experienced.
nsxwolf 1 day ago 0 replies      
Is Google going to pull a Google with this and keep a copy of every packet you transmit and receive, holding onto it for ever and ever?
cmelbye 1 day ago 0 replies      
It's still limited to just Kansas City? Hm.
bmuon 1 day ago 0 replies      
I'm astonished! A free 5Mbps/1Mbps connection? I have to pay close to US$40/mo for 3Mbps down in Argentina.
normalfaults 1 day ago 0 replies      
During the live event it was mentioned that premium channels will be available at an extra charge per month. Google wanted to simplify the cable line up as much as possible.
catfish 1 day ago 0 replies      
Please GOOGLE, please come compete with AT&T and Time Warner in San Diego. I am ready to pay....
nsxwolf 1 day ago 0 replies      
There's no point even checking to see if it's coming to my city. I already know the answer is no.
javert 1 day ago 0 replies      
Note to telcos: If all your customers hate you, you are begging to be made obsolete. Looks like your wish is about to be granted.
pasbesoin 1 day ago 0 replies      
Google: The moment you're in my area, I'll sign up.

Unless sonic.net beats you to it. ;-)

(Please do whatever you can to speed up planning and deployments in additional areas!)

wildmXranat 1 day ago 0 replies      
We Canadians need this so badly it's not even funny. I would honestly sign a 5-year contract if that's what it takes.
pradeep89 1 day ago 0 replies      
I wish this could come to India soon; the Internet is worse here. If you want high-speed internet, the charges are too high for the common man to afford. I remember five years ago I used to download at 2 kbps :( — now it's better, but not cheap.
lurchpop 1 day ago 0 replies      
Cool, but do they have a plan that doesn't freely give my data to law enforcement as part of a regular dragnet?
EdwardTattsyrup 1 day ago 0 replies      
I'm sorry, but this is not actually happening in a meaningful way for the vast majority of people. Your city will never be on the list. There's way too much local-government corruption and cable-co monopoly. Please stop wet dreaming on your HDTV. You'll void the warranty.
kennethcwilbur 1 day ago 0 replies      
If Google Fiber catches on, and you consider combining it with Google TV, Youtube content and Google TV Ads, the strategic possibilities are fascinating.

I wish I knew whether enough consumers are willing to pay to justify a large-scale infrastructure rollout. There is a whiff of that old saying here: Google engineers are great at building products for Google engineers.

ryana 1 day ago 0 replies      
I noticed that there are no FOX Broadcast Company channels on their channel list. That's a shame because I'd really want to keep Fox Soccer Channel if I was switching.

I would guess that has something to do with Google and Rupert Murdoch's well publicized icy relationship. It'll be interesting to see who blinks first if this takes off.

conradfr 1 day ago 0 replies      
So does it cut off your Internet and your TV when your Google account gets suspended?
andyl 1 day ago 0 replies      
Wow - impressive. Can't wait to say goodbye to comcast.
mrschwabe 1 day ago 0 replies      
No thanks. Google already harvests enough data on me that I don't need to pipe my entire internet & TV connection through them.
captaintacos 1 day ago 0 replies      
After living in Australia for about two years (probably the worst internet in the developed world) and now living in Japan with gigabit speeds, I can confidently say:
do thank Google for this, and do give them all your money!
SjuulJanssen 1 day ago 0 replies      
In comparison the pricing of my local provider: http://www.onsbrabantnet.nl/internet-weert
dakrisht 1 day ago 0 replies      
Kansas City - yeah, that'll do it. I'm sure everyone in Kansas City needs Gigabit Ethernet.
laserDinosaur 1 day ago 0 replies      
Could this ever come to Canada, or would Google be stuck behind the same Canadian ownership requirements that nearly blocked Wind?
tapsboy 1 day ago 0 replies      
Good for KC. Here, 3 blocks from Empire State building in Midtown Manhattan, the max I can get is 10/2 from TWC. FiOS is still negotiating with building authorities.
mikebracco 1 day ago 0 replies      
With this announcement by Google, we're at the stage in the game where cable company execs hear this --> http://youtu.be/_hHDxlm66dE #finishhim #mortalcombat
Grepsy 1 day ago 0 replies      
I really like the little SVG animations on this site. Any ideas on how you would go about authoring these?
dgudkov 1 day ago 0 replies      
It seems like Google TV, now done right.
antihero 1 day ago 0 replies      
Anyone read Snow Crash?
Brock_Lee 1 day ago 0 replies      
With the "Free Internet" plan, what happens if I pay the construction fee, then move out of my "fiberhood" into a house that isn't part of a fiberhood?
agumonkey 1 day ago 0 replies      
OnLive must be very excited.
jps359 1 day ago 0 replies      
I don't live in KC, but I'm very excited to see what kind of splash this will make in the industry. A lot of neat things might come from this.
ditoa 1 day ago 0 replies      
God I hope they bring this to the UK!
chaselee 1 day ago 0 replies      
This is exploding with awesome at every seam.
cma 1 day ago 0 replies      
DDoS will never be the same
dropshop 1 day ago 0 replies      
Anybody else thinking of moving there just for fiber?
knodi 1 day ago 0 replies      
Holy shit, I want this.
akennberg 1 day ago 0 replies      
One of these gigabit subscriptions is enough for a few houses / apartment floor. :-)
brh_jr 1 day ago 0 replies      
How is Google going to get past all the state and city monopolies?
orphol 1 day ago 0 replies      
Anyone know when SoCal is in the pipeline?
samstokes 1 day ago 0 replies      
Only available in Kansas City?
blktiger 1 day ago 0 replies      
I'm so sad that I can't get this where I live... :(
sebastianavina 1 day ago 0 replies      
Why Kansas and not Durango, MX?
robotment 1 day ago 0 replies      
It is great, but I can never use it.
zx1986 1 day ago 0 replies      
WOW! What great news!
I hope Google keeps pushing this quickly in other countries.
We here in Taiwan need REALLY FAST Internet!
discountgenius 1 day ago 4 replies      
So... I can get gigabit internet for either $70/month locked in for 2 years or $25/month locked in for 1 year. I'm not seeing the point of the $70 pricing option.
suyash 1 day ago 0 replies      
Google is taking over our lives :(
Metrop0218 1 day ago 0 replies      
This is awesome.
azinman2 1 day ago 0 replies      
tubbo 1 day ago 0 replies      
Aww, I wish they were around Philly.
thiagodotfm 1 day ago 0 replies      
Holy shit.

That's a lot of porn.

The Marco.org Review of John Siracusa's Review of OS X 10.8 Mountain Lion marco.org
629 points by iSimone  3 days ago   118 comments top 15
JonLim 3 days ago  replies      
> "In my testing, reading the 10.8 review took approximately 128 minutes. But I walked my dog briefly in the middle."

> "At medium brightness, my iPad (3rd-generation) battery fell from 73% to 56% while I read the review on it."

I... I think I love Marco Arment.

A great way to start my day, thanks for the chuckles, Marco.

lukeholder 3 days ago 8 replies      
Marco has perfected the follow-up blog entry as HN spam. Even if it is hilarious and witty.

He jumps on a hot topic and gets the page views.

frou_dh 3 days ago 2 replies      
It's just sad that the actual review, which is tremendously interesting and substantial, has noticeably fewer votes than this, the #1 link.

Is the oft-mentioned decline of HN a bunch of tiny cuts?

aw3c2 3 days ago 4 replies      
I'll be honest: I don't get it. Is the reviewed review focused on meaningless metrics, or why is this funny? I skimmed through a couple of pages and didn't see any of that; in fact, it seemed comprehensive, if a bit subjective and fanboyish.
unreal37 3 days ago 2 replies      
Love the graph comparing word lengths of previous OS X reviews.
bitsoda 3 days ago 3 replies      
Christ does the HN community give Marco a lot of grief. He's like the LeBron James of this place.
king_jester 3 days ago 3 replies      
Has anyone reviewed this review review yet? I don't want to waste my time over here.
binaryorganic 3 days ago 1 reply      
"which Apple invented completely on their own." priceless.
motoford 3 days ago 1 reply      
Has anyone noticed yet in the real Siracusa review that marco.org is the first link in a screenshot of Siracusa's reading list?

Page 8

Killswitch 3 days ago 1 reply      
Best review of a review ever. A+++ would read again!
mapgrep 2 days ago 0 replies      
"There have been a few architectural changes to John Siracusa's OS X reviews as well. Siracusa has detailed the process in his separate explanatory blog post, because the review wasn't long enough and he had more to say." HA
stephengillie 3 days ago 0 replies      
It's highly legible!
Cyranix 3 days ago 1 reply      
As amusing as the parody is, it's tough to beat the original. Honestly, who cares about an update to Chess?
seivan 2 days ago 0 replies      
I love the fact that his review included code, just something small and trivial like that. Made my day more than slightly better. Can go to bed with a giant smile on my face.
vide0star 3 days ago 0 replies      
A review of a review?
John Siracusa's OS X 10.8 Mountain Lion Review arstechnica.com
483 points by thisisblurry  3 days ago   134 comments top 26
blinkingled 3 days ago 5 replies      

"In some ways, Mountain Lion is a refinement, enhancement, and yes, a major bug-fix for Lion. But the changes and additions are significant enough that they will inevitably come with their own set of bugs. Let's not forget Snow Leopard, which promised no new features but still brought plenty of bugs in its 10.6.0 release."

I hope they fixed memory management - Lion is too swap-happy and slow on a decent Core 2 MBP with 3 GB of RAM. Windows comparatively flies on the same machine, but of course suspend/resume is flaky, and other Windows-Mac integration annoyances mean it's not a win-win.

keithpeter 2 days ago 3 replies      
I've suggested this before (and not done much about it), but that is not going to stop me suggesting it again.

How about a crowd-sourced 'Siracusa style' review of Ubuntu 12.04? Or Debian Wheezy, or Fedora (large integer, preferably 18 because that is the version that RHEL 7 will be based on)? Or the safe and conservative CentOS/SciLi/PUIAS version 6.3, now being targeted by Oracle. Or your favourite GNU/Linux, or BSD?

So many of the free software reviews are shallow.

I think the Penguins have done enough work to merit something with depth.

nicholassmith 3 days ago 2 replies      
This is almost the best part of every OS X release.
swdunlop 2 days ago 2 replies      
Just don't put it on your development workstation yet; Xcode 4.4 is blocking the install of the Command Line Tools for some developers. This means Homebrew doesn't work, Vim doesn't work (no /usr/include, so no pyconfig.h), and things are generally horked unless you want to play end user in the App Store.

Welcome to release day, Apple style. :)

kyleslattery 3 days ago 1 reply      
Some info from Siracusa about the different ways to read the review: http://siracusa.tumblr.com/post/27978338524/about-my-mountai...

Sounds like if you buy the Kindle version when it comes out ($5), he'll get a direct cut, which is nice.

thought_alarm 2 days ago 1 reply      
The new iCloud support is interesting. I just edited the same iCloud document in TextEdit on two different machines, and I gotta say that is really damn cool.

On the other hand, this new iCloud support sadly highlights another unrelated new feature in Lion, and not in a good way: Automatic Termination.

You must use TextEdit to browse your iCloud documents, which is fine, but TextEdit keeps fucking disappearing on you because it has no open documents. Command+Tab away for a split second and it's gone, forcefully removed from your Dock and Command+Tab list.

Automatic Termination is a feature that is meant to serve only the most novice of Windows users who are coming to OS X for the first time, while it breaks an existing feature of the OS that has been fundamental to Mac OS X and NeXTSTEP for the last 23+ years. And it's not even an option.

It is hugely frustrating and hugely disappointing.

vdm 2 days ago 2 replies      
TL;DR full screen with multiple monitors is still broken http://arstechnica.com/apple/2012/07/os-x-10-8/18/#full-scre...
kristofferR 3 days ago 1 reply      
I find it fascinating that he "wrote" this piece using Dragon Dictate, a dictation program.
da_n 3 days ago 1 reply      
Epic review as usual from John Siracusa. Dan Benjamin volunteered to do a reading of this for release in audio format, but I think John Siracusa ruled it out. Understandable, as I think there were too many problems to overcome (images, footnotes, licensing, etc.), so it was just a dream really. Would happily have paid for that though. Hope they can sort out the Kindle release; definitely going to purchase that.
taylorwc 3 days ago 0 replies      
"Once the OS has been out for a while, try asking a Mac-using friend who is not obsessively reading multi-thousand-word operating system reviews on the Internet if he has noticed anything different about scrolling in Mountain Lion."


gaius 3 days ago 10 replies      
Am I missing an obvious single page link, or do they really want you to pay not to have to click through 25 pages?
abruzzi 2 days ago 0 replies      
My biggest pet peeve with 10.7 (besides the loss of arrows on my scroll bars) is the dumbing down of the Network control panel. In 10.7 you couldn't configure your WiFi to connect to 802.1x networks (i.e. WPA Enterprise); you had to download the iPhone Configuration Utility, build a config file, then import it. 10.6 would let you set the configuration in the Network control panel. Even iOS doesn't require the ICU to connect. I didn't see anything about it in the review, so I assume 10.8 is still set up the screwy 10.7 way.
m_st 3 days ago 1 reply      
And there goes my spare time... Every OS X release I'm more eager for Siracusa's review than the actual disk (or now download) itself :-)
agumonkey 3 days ago 3 replies      
Definition of a review, by ArsTechnica : http://imgur.com/RdohV
tbonnin 2 days ago 0 replies      
Marco made a review of John Siracusa's review ;) http://www.marco.org/2012/07/25/siracusa-mountain-lion-revie...
jacoblyles 2 days ago 2 replies      
I just want to know if it will be easier or harder to compile ruby and python libraries with native extensions that depend on, say, libxml2.

I upgraded to Lion and getting lxml installed took 8 hours. But nokogiri was still nearly impossible.
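For anyone hitting the same wall: before fighting with build flags, it helps to check whether the linker can see libxml2 at all (lxml's own docs also describe `STATIC_DEPS=true pip install lxml` as a way to sidestep the system libraries entirely). A minimal probe; the library names below are the usual lxml/nokogiri dependencies:

```python
from ctypes.util import find_library

# find_library consults the platform's standard search paths; if it
# returns None, a native-extension build against that library (lxml,
# nokogiri) will fail before it even starts compiling.
for lib in ("xml2", "xslt", "z"):
    path = find_library(lib)
    print(f"lib{lib}: {path if path else 'NOT FOUND - install the dev package'}")
```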

outworlder 2 days ago 0 replies      
Is it just me, or did anyone else notice an increase in graphics performance? I haven't measured it, but it must be significant, since I'm noticing it in normal usage.

I'll hook it up to an external monitor later to try and stress it a little bit more.

I have a late-2009 white MacBook with 8GB and an SSD.

ditoa 2 days ago 0 replies      
Siracusa really does an outstanding job with these reviews.
xutopia 2 days ago 1 reply      
The review doesn't say how long it takes to install once downloaded. Is this a lunch break install or a "wait-till-evening" installation?
thisisblurry 3 days ago 0 replies      
The quickest/easiest way is to subscribe: http://arstechnica.com/subscriptions/

You can pay $5 for a month and then cancel it. For the amount of work Siracusa puts into this thing, it's worth it.

locusm 2 days ago 1 reply      
Why is it that when you enable dictation it needs to send your contacts to Apple?
shimonamit 3 days ago 1 reply      
Isn't there a TL;DR;TL;DR;TL;DR^25 somewhere?
wkral 2 days ago 0 replies      
If you need to properly prepare for this review checkout this video: http://patdryburgh.com/blog/preparing-for-john-siracusas-rev...
yskchu 2 days ago 0 replies      
For me, the update to the messages beta app alone was worth it: iMessage on the Mac is so useful to keep in touch with friends, and the beta app was buggy as
hiddenstage 2 days ago 1 reply      
What's with his obsession with iOS? He makes it sound like iOS is the quintessential operating system.
abc_lisper 2 days ago 0 replies      
> But I walked my dog briefly in the middle.

Don't take the article too seriously. If you did, you probably missed the point.

What I Learned From Increasing My Prices extendslogic.com
350 points by kanamekun  1 day ago   41 comments top 14
j45 1 day ago 0 replies      
Great post, way to walk the walk about the scary thing called making money with your startup.

As an aside, posts like this make me wish HN had a separate section/tag simply called 'results', separate from opinions.

I enjoy the variety of geek-interested content here, but this kind of a post for me is real signal.

orangethirty 1 day ago 2 replies      
Thanks for sharing.

I don't think it was so much the raising-prices part as the rebranding of your product to resonate with the market. This is something a lot of people out there don't get. That is why you don't purchase used BMWs anymore, but certified pre-owned BMWs. The branding is very important.
The awesome thing is that you took a structured approach to it, and then used the data to refocus your brand on the market that will buy your product. The money will continue to pile up if you keep using this approach. I would suggest looking into offline marketing tools to broaden your horizons.

Good luck.

thibaut_barrere 1 day ago 2 replies      
I cannot find a clickable link to the author's product on the blog post. I think you are leaving money on the table right now!
hrabago 1 day ago 0 replies      
I appreciate the insight about naming the segments. It reflects on "benefits, not features" where you describe what you offer based on the context of the user, not the context of your application.
codegeek 1 day ago 2 replies      
I really liked reading this. Giving relevant names/titles to pricing tiers can be very effective. I immediately understood the difference between Freelancer and Studio, instead of Basic vs. Premium.
zumda 1 day ago 2 replies      
A very interesting post! Thanks for sharing that!

I have one question though: did you grandfather your old customers in? That is, do you still charge them the old price and give them all the features?

DenisM 21 hours ago 0 replies      
Interestingly, raising or lowering prices for iPhone apps has not changed my revenue in the slightest. I wonder if that tells us anything about the iOS app market?
ARobotics 1 day ago 1 reply      
Great post. One thing that wasn't clear to me - how did your pricing changes affect existing customers?

Did former "basic" accounts get automatically changed to "Freelancer" and start getting billed the extra $10 the next month? If so, how did you handle notifying users and was there much complaint about the change?

haydin 22 hours ago 1 reply      
Honest question: Is sending an e-mail just to show a sample somehow more efficient? Why not just provide a link to the sample? I tried 6 times to get an e-mail sent and finally got one on the last try. Almost gave up.

Edit: No, I still don't have an e-mail and no information as to where I can find a sample

zobzu 1 day ago 0 replies      
I think it's simple and recurrent:

- have clear names, not stuff that "sounds trendy, like blehmium"

- price by comparing market prices and the targeted customer (hint: it's the basics at business school).
Aka niche market? High prices. Large distribution? Low prices. And there are many middles in between. Just don't start out thinking you should "ask zillions" or "make it super cheap".

Think first.

djt 1 day ago 0 replies      
(off topic but thought others might be interested)

What is the difference between BidSketch and Quoteroller?

I ask because I tried a trial of Quoteroller and it didn't work, and I had never heard of your site before now.

spiredigital 1 day ago 0 replies      
Raising prices a few years ago with my eCommerce site led to an instant 30% increase in profits, so it really is powerful and has the potential to do amazing things for your business. Great post, and congratulations on all your new dough!
wesbos 1 day ago 1 reply      
Getting a timeout when I try to send myself a sample.
bm1362 1 day ago 1 reply      
Seems the sample is timing out - just an FYI.
Gabe Newell Wants to Support Linux, Because Windows 8 is a 'Catastrophe' kotaku.com
350 points by checoivan  2 days ago   189 comments top 22
recoiledsnake 2 days ago  replies      
I think the headline of the article is pretty misleading as he means that Windows 8 is a catastrophe for Steam because of the Windows 8 app store. Looks like everyone's upvoting the article because they seem to think that Gabe meant that Windows 8 will be a failure.

Instead of having to go through Steam's distribution, games will have the option of going directly to the Windows 8 app store and getting featured there, not to mention Xbox Live coming to Windows.

Anyone know what Steam's cut for game devs is? Microsoft is charging between 20 and 30%, so Steam seems very worried about its revenue stream and is thus supporting Linux as a hedge.

Of course the regular desktop Steam client will keep working, but not on Windows RT ARM devices. Also, doesn't WinRT support full DirectX?

Ref. http://www.joystiq.com/2009/10/07/randy-pitchford-on-steam-v...

Says Pitchford, "It would be much better if Steam was its own business." If Valve spun off the content delivery system, it would also remove the perceived conflict of interest Pitchford takes umbrage with. "Steam helps us as customers, but it's also a money grab, and Valve is exploiting a lot of people in a way that's not totally fair," Pitchford says. "Valve is taking a larger share than it should for the service it's providing. It's exploiting a lot of small guys."
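The revenue-share question is easy to make concrete. A back-of-the-envelope sketch (the 20% and 30% cuts are the estimates quoted in this thread; the $1M gross and the `developer_take` helper are invented for illustration):

```python
def developer_take(gross, store_cut):
    """Developer revenue after the store takes its percentage cut."""
    return gross * (1 - store_cut)

# Hypothetical $1M in gross sales under the cut estimates quoted above.
gross = 1_000_000
for label, cut in [("20% cut", 0.20), ("30% cut", 0.30)]:
    print(f"{label}: developer keeps ${developer_take(gross, cut):,.0f}")
```

A ten-point difference in the cut is $100k per million in sales, which is why the store's percentage is the whole fight.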

trotsky 2 days ago 2 replies      
I'm all for Linux support, but clearly he's more worried about a future with Microsoft taking their digital delivery business, not usability.
ryanisinallofus 2 days ago 2 replies      
Steam has little to worry about in regards to the Windows App Store. They just need to keep pushing platforms, keep securing great titles, and RELEASE ANOTHER DAMN HALF LIFE 2 EPISODE. Exclusively on Steam of course.

Steam is pretty great and it's not easy to replicate. See "Google Play."

megaman821 2 days ago 4 replies      
I don't understand this sentiment. If you don't like Metro, launch the desktop and forget it even exists. For rarely launched apps, hit the Win key and start typing the name and for frequently used apps, pin to your taskbar. Steam still works exactly the same on Windows 8.
Donito 2 days ago 6 replies      
The title should be: "Gabe Newell Wants to Support Linux, Because Windows 8 is a 'Catastrophe' [for steam]".

Steam is, from a user perspective, a video game marketplace on the PC. In Win8, it will have to compete with the built-in Windows marketplace, where most games will be published directly. In other words, Win8 is a catastrophe for them as it endangers their business.

cs702 2 days ago 0 replies      
Along with another HN submission ("Changing My Mind On Microsoft"[1]), Newell's comments are important in and of themselves because of what they represent: a tectonic shift in the business community's perception of Microsoft.

Business executives are now openly doubting the future relevance of the Windows platform!


[1] http://news.ycombinator.com/item?id=4295711

nas 2 days ago 0 replies      
My gut feeling is they have more compelling reasons to port Steam to Linux. Look at the growth in the number of relatively powerful personal computing devices (smartphones, tablets, etc) that don't run Windows. It's also been suggested that Valve is looking at developing their own game console. It is reasonable to spend some effort making their system less tied to the Windows API.
ilaksh 2 days ago 2 replies      
I use Linux. I am on Ubuntu right now. I just wish I could figure out how to get the Radeon/whatever graphics drivers right on this Lenovo Ideapad. I gave up.

I am rooting for Linux, WebGL, and other stuff. Whatever can help take down Windows is great. If Windows can actually take down Windows, even better.

fpp 2 days ago 3 replies      
After the Vista candy shop, it's toy blocks now...

- I haven't looked under the hood of W8 yet, but if MS wants to limit what I can install on my computer or take over boot/BIOS control, this one will certainly find no home on any of my computers.

Call me old fashioned, but if I go to a shop and buy a computer (or have it sent to my place) I actually want to own it, and not find a clause on line 432'678 of the license agreement saying it's now completely legal that a Seattle corp has pwned my computer.

lazugod 2 days ago 2 replies      
It's a catastrophe for Valve if third party markets like Steam don't fit seamlessly into the future of Windows.
sakopov 2 days ago 2 replies      
Back in my more involved gaming days, Steam was a catastrophe for PC gaming. Other than picking up a game at a good price and automatically getting game patches from Steam, there was really nothing great about it. They had virtually no support; your best bet was the forums. Throttled downloads were really fun too, especially when you download a 5-gig game pack and have to sit there and resume your download every 5 minutes. Constant startup issues. I was patiently waiting for Battlefield 3 and was delighted to find out that EA decided to ditch Steam for this release; however, their own Origin platform wasn't much better. Honestly, if I had to pick between slightly higher prices at the stores and Steam, I'd pick the stores. With app stores rolling in all over different platforms, I'm afraid there is going to be very little room for Steam, and this is probably a good thing. I'd rather see a good remake of Counter-Strike and Half-Life from Valve than a new gaming platform.
petitmiam 2 days ago 1 reply      
and saying he thinks the future of interaction will be through computerized wristbands

How far off are computerised wristbands?

chj 2 days ago 0 replies      
Love the fact that he votes with his feet.
gnufied 2 days ago 0 replies      
It is one thing porting Steam to Linux and quite another making actual games available on Linux. Take OS X, for example, a platform where Steam has been available for a while now, and yet around half of the most-played titles on Steam are not available on OS X. What's more, Dota 2, Valve's own game, is not available.
Produce 2 days ago 1 reply      
The year of the Linux desktop?
valgaze 2 days ago 0 replies      
"According to Newell, there's a guy in Kansas making virtual hats for $150,000 a year. $150,000 a year."
JVIDEL 2 days ago 0 replies      
It makes sense: with Linux support Valve could roll Steam into a barebones distro, creating their own OS (SteamOS?) to run on top of the rumored SteamBox, with better control of resources and no need to pay licenses to MSFT.
farinasa 2 days ago 1 reply      
Many people say they have this or that application that keeps them clinging to Windows, and for me it has always been games. If it weren't for games, I would never use Windows on any of my personal machines again, let alone purchase it. Linux distros don't even sell their products, yet somehow they are better thought out than any version of Windows.
pbreit 2 days ago 0 replies      
Has anyone been able to get Windows 8 to work on VirtualBox for Mac? I got it installed but can't figure out how to make it work. Running W8 in a VM really demonstrates how silly hiding the Start menu in the bottom-left pixel is.
flannell 2 days ago 0 replies      
Hmmm, is the secret Valve TV/games console Linux-based?
tcarey83 2 days ago 0 replies      
I love this. Gabe rules.
Being a Developer Makes You Valuable. Learning How to Market Makes You Dangerous talsraviv.com
349 points by talsraviv  1 day ago   83 comments top 24
patio11 1 day ago  replies      
I really cannot emphasize enough how the intersection of these two fields is a) extraordinarily rare, b) extraordinarily capable of producing directly attributable, measurable improvements across an entire business and as a direct consequence c) very, very richly valued by the market right now.
klbarry 1 day ago 1 reply      
Note: I am not a developer. My experience is in e-commerce marketing and my degree (est. May 2013) is in Statistics.

I've loved the field of human persuasion for years, and I think I can add something that I have written in the past to this conversation. I essentially made a list of marketing "truths", based on my research and experience, and have tested it against others. What remains of the list is what no one has been able to refute, so I think it's decently close to a list of universal irrefutable "rules of marketing."

The Truths of Marketing

1) Ethos (your perceived character) is the most important.

2) People make judgments by comparison/anchoring.

3) People process information best from stories.

4) People are foremost interested in things that affect them.

5) Breaking patterns gets attention.

6) People look to other people's decisions when making decisions.

7) People will believe things more easily that fit their pre-existent mindset. The converse is also true.

8) People handle one idea at a time best.

9) People want more choices, but are happier with fewer.

10) People decide first, then rationalize - If people are stuck with something, they will like it more over time.

11) Experience is memory, the last part of the experience is weighted heavily.

* Keep in mind that this should not necessarily be used as a checklist; see what the director of a large creative agency says on the subject:

"I think that in broad strokes these truisms are accurate, but they aren't really how I personally get to the bottom of the marketing equation when working on a brand.

Of them, I think 1 and 4 are probably the closest, but I think the biggest problem is the same problem you find in how any analysis of consumers, or what is usually called "consumer behavior" is used -- it is, by definition, one step removed from what you're trying to analyze, yet it's treated like the consumers themselves.

Because consumers are often perceived as black boxes to marketers, there's a temptation to analyze their behavior and then market to that analysis instead of to them. Maybe this is because I'm on the creative side, but for me the most useful role of research is to inform and guide what is a form of empathy for our consumer. To not just analyze what drives them, but to genuinely feel it yourself.

Reading research about twelve-year-old girls' purchase decisions and focus group transcripts is not the same thing as thinking like one. I have a client in that market, and I read everything when I'm working on something -- research, web sites, fan magazines, television -- but none of it is a substitute for sitting in a dark room and genuinely trying to imagine the trials of what it must be like to actually be a twelve-year-old girl from a first person perspective.

It sounds absurd, but that's how you come up with great ideas -- to do your best to become a twelve-year-old girl, and then develop things that you would enjoy.

So I think truisms like yours are useful as long as they remain a means to an end, and not, as they so often do, a checklist, or worse, the end itself."

cek 1 day ago 1 reply      
One of my favorite sayings: Ideas are worthless. Execution is everything. Getting people to pay for what you executed on is more everything.

If there are 1000 people who are 'idea people', 100 of them can execute on an idea; actually build something. 10 can get that thing sold.

Those 10 are the dangerous ones.

Great post.

plinkplonk 1 day ago 2 replies      
But isn't this just stating the obvious? Being a Developer + learning $cool_skill makes you $positive_adjective.

Well duh. Also, the sun rises in the east.

$cool_skill element_of {marketing, design, negotiation, leadership skills, design, writing ....}

$positive_adjective element_of {dangerous, rockstar, ninja , pirate, extraordinary, one in a million, cool, killer ... }

mix and match and you have a dozen or so catchy titles.

Throw in links to the top books on the topic and links to some blogs, and most importantly an ad for an ebook somewhere in the post. Profit.

Repeat a few dozen times. Landing pages for SEO. Hey you are a guru!

Maybe I am just feeling too cynical about such 'how to do a startup' porn. Feel free to ignore me.

OTOH HN generally has good discussions, even when the submission is spammy/low quality.

mindcrime 1 day ago 2 replies      
Being a Developer Makes You Valuable. Learning How to Market Makes You Dangerous

I hope this is true! I've been developing for 12+ years, and always had an interest in marketing, but never really studied marketing. Now, I'm one of three tech co-founders of a startup that does not yet have a dedicated marketing person. So, I've finally been diving into really trying to learn and understand marketing (and sales).

OK, wait, I did take "Marketing 101" at the local community college a couple of years ago, but that's the only formal education on the topic I've had.

Now, I have a huge stack of sales and marketing books I'm working through. So far I've mainly focused on the Jack Trout, Al Ries, et al. stuff... Positioning, Re-positioning, The 22 Immutable Laws of Marketing, etc., and I've been going through a video series from Chet Holmes (of The Ultimate Sales Machine fame).

Looks like some good resources in the linked article, so looking forward to chewing through some of that as well. And here's a pre-emptive "+1" for more good startup marketing related links on HN!

gersh 1 day ago 0 replies      
Plain old marketing is simple. You gotta reach your audience. You gotta figure out what they look at, and you gotta get their attention. If your audience is searching Google, you hit them with SEO and AdWords. If they are watching TV, you gotta get on the shows they are watching. Once you get their attention, they know about you, and then it is a matter of sales whether they buy.

For good branding, it helps to have a good name, logo, etc. Although, in the end it helps if they like your product.

If you're in a crowded space, marketing is going to be harder, because your message gets crowded out. Even so, I've never really found marketing to be that complicated.

Who is struggling with marketing? Let me know. I don't really think it is that hard.

Alan01252 1 day ago 1 reply      
It's amazing actually how many programmers don't concentrate on marketing at all. I've recently got a few emails from fellow developers asking if they need a blog/twitter/google+ account in order to get customers as a freelancer.

The answer is yes, yes you do!

The proof: three months into my freelance career, I'm already getting customers via blogging and Google search results. Heck, I've even been lucky enough to get one customer as a direct result of being front-paged on Hacker News.

I have no idea whether I'm approaching "dangerous" yet. But I know for certain, I've still got a lot of marketing effort to go, and one hell of a lot left to learn.

yesimahuman 1 day ago 2 replies      
I'm trying to dig into Marketing and really figure out how to hack it. I've found patio11 extremely helpful, and also some of Peldi's posts on the Balsamiq blog: http://blogs.balsamiq.com/product/2008/08/05/startup-marketi.... I will definitely check out some of the recommendations here.

Does anyone have any other actionable suggestions for reading? I've picked up a few books but I'm looking to really dig in and get better at selling SaaS stuff online.

tuhin 1 day ago 0 replies      
I would really extend this to the following qualities that people in the field of building products should try, to the best of their abilities, to be good at. They might not become the best at each one, but good enough to become potent, and at the very least better than they were before trying:

-Understand technology. What it takes to build, scale, maintain and fix things. What are the affordances and benefits of one approach over the other.

-Understand design. Not just pretty pixels (in fact that is the least important part, IMHO). But the WHY. How it affects people, what the goals are, why the current system is broken or why it does not exist.

- Understand Marketing. How do you convince others that your product (idea or finished) is THE one they should vouch for? Why should they invest time and money into it? How can you make them believe that you know what you are doing, despite any bad experiences they have had with products of that kind? Selling/marketing is not evil. It is a necessary (evil). Many humble and smart people fail to understand that.

I totally understand it is very hard for one person to be great or even good at all three. But hey, most don't even try, so you are already better than them the first day you try.

paraschopra 1 day ago 0 replies      
I started as a developer and coded the first version of Visual Website Optimizer myself, but quickly discovered the importance of marketing. I agree with the OP's assessment that marketing is every bit as juicy as coding. Discovering a channel, executing a successful marketing campaign, crunching hard ROI out of it, and seeing 10 customers because of it is as exciting (if not more so) as learning the wonderful node.js and implementing a chat server on your own.

That said, I'd say after a certain scale, it becomes incredibly hard to do both: a) coding; and b) marketing. There's so much to do in both fields that you cannot do justice to both _at the same time_. So you have to eventually build a team and decide between coding or marketing (but the great part is that by that time you can afford to do this).

ph0rque 1 day ago 2 replies      
This is a topic near and dear to my heart; I am a developer who tried to take up marketing but gave up.

So this post's call to action is to read/listen/watch various resources; in sum they are probably dozens of hours of learning. With a little searching, I'm sure the actual material available stretches into hundreds of hours.

So my question, similar to the one I posted almost a year ago [0], is: what is the best available material out there, where I can learn 80% of marketing in 20% of the material?

0. http://news.ycombinator.com/item?id=2967010

mrbrianholland 1 day ago 0 replies      

I own http://drivingtests101.com/ . It is a free driving test prep website covering ~400 tests across 11 countries, all states/provinces and vehicle types. It was not until we spent time on SEO and pushed to get into the media a few months later that we truly began to realize the fruits of our labor. A product or service can often go unmonetized without a push from marketing, aside from the true viral one-off hits. Do not count on this!

Marketing is key and I would encourage all business owners, not just developers, to learn this invaluable skill.

Good luck!

tlogan 1 day ago 0 replies      
So true.
Reading books about marketing is good, but they are too 'vague'.
From my experience, executing marketing/selling is much harder (and more fun) than you'd think, and doing it is the only way to learn. Just get out of the building and start embarrassing yourself (literally).
at-fates-hands 1 day ago 2 replies      
The best part about being a developer interested in marketing is that you have all kinds of opportunities to learn and tinker.

You can build a mobile app, submit it to the app stores and see how it sells. Track your sales, dive into the numbers and find out what works and what doesn't. Does a good design help? What keywords help your app get found? What about your UI? Do people love it? Hate it? Why?

You can build a static website, attach Google Analytics and see where your traffic comes from. How is your SEO working? What mobile users are on your site? What pages are they viewing? What's your conversion rate?

Build an e-commerce website and see what sells. Does the position of the items matter? What colors are people buying? What are your buyers preferences? Are your price points too high? Too low?

There's so many cool things you can learn when you start to get interested in marketing. If you're hacker, you can have a lot of fun and learn a ton of stuff just by creating simple things and seeing how the public/users react and use it. It really is completely fascinating.
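The conversion-rate tinkering described above boils down to very little code. A minimal sketch (the variant names and visit/signup counts are invented for illustration):

```python
def conversion_rate(conversions, visits):
    """Fraction of visits that converted; 0.0 when there's no traffic yet."""
    return conversions / visits if visits else 0.0

# Invented numbers: two landing-page variants from a hypothetical experiment.
variants = {
    "variant A": {"visits": 1200, "signups": 30},
    "variant B": {"visits": 1180, "signups": 42},
}
for name, d in variants.items():
    print(f"{name}: {conversion_rate(d['signups'], d['visits']):.1%}")
```

The hard part isn't the arithmetic, of course — it's getting enough traffic for the difference between variants to mean anything.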

jawr 1 day ago 0 replies      
Although having great marketing skills is invaluable, I have come to the conclusion that having a deeper understanding of design is crucial to modern developers; it seems to me that more and more developers are cropping up and the demand has shifted from them to people with great design capabilities.

I guess the moral of the story is to learn everything you can!

patrickambron 1 day ago 0 replies      
I've always thought the person running the development team should understand marketing, and the person running the marketing efforts should know how to develop. The two are so intertwined, but at the same time, it's important to separate the efforts. Just like in engineering, the devil is in the details, not the conceptual idea. You want a marketing person who knows how to get everything just right: how to get in front of customers, sign people up, etc.
loeschg 1 day ago 1 reply      
Feel free to call me a n00b or uncultured, but I don't get your anti-spam measures to subscribe to your posts via email.

Prove humanity by completing the lyric: "Blame..."

jamesmcn 1 day ago 3 replies      
Speaking as a developer, I think I can see the analogy between dev and marketing. But from a learning perspective, software development has a big advantage. You can build everything from a hello world app to your own toy OS without being negatively impacted by existing software out there.

As a marketer, it seems like you always have to be on the cutting edge. What worked in the 1980s is unlikely to work in 2012. Even what was cool in 2008 is unlikely to be effective today. The next problem is that marketing is, by its nature, a public activity. Doing it badly is embarrassing and likely limits your ability to even give it a second try.

atomical 1 day ago 1 reply      
I'd like to hear more about non-web marketing like direct mail. I would think small businesses would be more receptive to that. After all, they might not realize they need your tool until you put an advertisement in front of them.
toeknee123 1 day ago 0 replies      
Definitely agree here. I have a Marketing background but have been learning to develop for the past 3-4 months.

Marketing = Principles/Thoughts
Software Development = technical/tangible skill

Marketing = Ideas; Software Development = Execution of those marketing ideas.

h2s 1 day ago 1 reply      
How to market on HN:

    - Write an interesting blog post and post it to HN
    - Include a link to something you're selling
      somewhere near the end of the post

There are so many people doing this it's unreal.

rshlo 1 day ago 1 reply      
The aim of a founder is not to be the best marketer or developer. The goal should be knowing just enough to hire the best people in every field and lead the team. That's what makes a great founder.
frankphilips 1 day ago 1 reply      
I'd like to call these marketers Growth Hackers. They're a different breed of hackers, and they're just as important as the engineers. A startup should have at least ONE growth hacker on the team for it to be successful.
chadhietala 1 day ago 0 replies      
I support this message as a Marketing/Finance major turned software engineer.
The Mac App Store's future of irrelevance marco.org
341 points by rrreese  2 days ago   234 comments top 39
cletus 1 day ago  replies      
There are two issues that are coming up here: sandboxing and paid upgrades. They are quite different.

As a consumer I am completely for sandboxing for myself and for other consumers. In a world where malware is increasingly a problem sandboxed apps will become the norm. That's the reality we live in. Sandboxing being a requirement means that I can fairly safely install anything from the (future) Mac App Store.

The OP correctly points out that certain system utilities cannot be sold this way. He is correct but consider the alternative: to not require sandboxing means no one will bother implementing it. Of course Apple could make effort to promote apps that do (or hide apps that don't) but this puts a considerable education burden on the consumer. I'm with Apple on this one: it's simpler and better this way.

Now paid upgrades I have mixed feelings about.

On the one hand paid upgrades can produce the wrong incentive on the developer: I've seen good apps go from 18 month major version upgrades, to 12 months to 6 months with no reduction in upgrade price. I've also seen old versions abandoned for pretty lame reasons.

IMHO having all users on the same version is better for the developer and the consumer. It makes support easier. It creates a consistent experience.

But on the other hand I do feel like there is a place for paid upgrades.

Are in-app purchases a possibility here? I honestly don't know what's possible with the Mac App Store here.

I think developers do get too concerned with turning a user into a perpetual revenue stream, however. That is an old business model, and an increasingly outdated one.

Steam provided the first evidence of this that I can recall. Some years ago they started selling older games for $5 and under. In some cases IIRC the revenue for discounted sales exceeded release date sales at the premium price. More: [1]

The iOS App Store produced and continues to produce further evidence that lower prices and a higher volume can often be a better result than selling the "old" way (higher price, fewer units, which typically also involves paid upgrades).

Often content producers (and I include developers who sell software in this) don't always know what's best for them. This all sounds remarkably like Netflix in many ways. Netflix has provided a means of monetizing old and less popular content yet Hollywood seems to view them as the enemy.

Perhaps another model worth considering is to start the price of your app low and as it improves and gains popularity, steadily (and predictably) raise the price.

Has anyone tried this? Did it work?

EDIT: sandboxing goes beyond "malware". I increasingly don't want apps making arbitrary changes to my system. Some changes may be what I want, but most won't be. This ranges from installers where you forget to untick the checkbox that adds some browser toolbar, to (on Windows anyway) apps making arbitrary (and sometimes wrong) changes to local policies, registry entries, etc. (so the Mac equivalent of that).

EDIT2: as a consumer, I want to buy through the App Store. Apple has my payment details. I have a common place to get updates. When I buy from a third-party site I have to deal with:

1. Registration;

2. A payment gateway that may or may not work;

3. Despite an automatic payment a human may need to email me a license file and/or download link that in some cases has taken days;

4. Whether or not to trust your site with my information; and

5. A completely separate process for updating.

So anecdotally as one consumer, if your app can be on the Mac App Store and isn't I'm simply not buying it with very few exceptions (eg I'd still buy Photoshop even if I don't want to).

[1]: http://www.gamasutra.com/view/news/174587/Steam_sales_How_de...

jawngee 1 day ago  replies      
I'm pulling my video editor Shave (http://shavevideo.com) out of the Mac App Store for these reasons, though most specifically the lack of paid upgrades. For a $10 video editor that, quite frankly, beats the pants off a lot of crap out there for day to day editing tasks, I can't afford to devote the time and resources it requires to maintain it without it having a baseline level of income. Not having an upgrade path to a major version number bump is ridiculous.

Also, the lack of direct contact with customers is obnoxious at best, crippling at worst. And the sandboxing entitlements kill the best bits of functionality, so I don't have much choice but to say "fuck it" and move on.

I doubt Apple will miss me, though mine is the only other worthwhile editor on there besides iMovie and Final Cut in terms of actual editing functionality. The video category is a giant pile of shit save a few select pieces, mine included.

Bobby_Tables 2 days ago  replies      
The iOS App Store does the same thing with changing the rules in the middle of the game, but it is not subject to potential irrelevance because it is the only place to get software for iOS devices. So the obvious solution for Apple is to make the Mac App Store the only place to get software for MacOS.

I'm actually a little surprised this didn't happen in 10.8.

MarkMc 1 day ago 0 replies      
The accounting software I develop [1] is a year out of date on the Mac App Store because of the restrictions that Apple have introduced.

And yet I disagree with Marco: this is a one-time problem as Apple tightens requirements for publishing on the Mac App Store. In a few years users will have forgotten this problem (if they ever noticed it in the first place) and the benefits of the Mac App Store will make it the dominant distribution platform. Those benefits: it's easy to make a purchase, easy to see reviews for a product, easy to find a product, and easy to install on multiple machines.

Eventually I'll have to change the software to meet Apple's stricter requirements so it can be published on the Mac App Store again.

[1] http://www.solaraccounts.co.uk

tomflack 2 days ago 1 reply      
I have nightmares about endless apps shipping their own updaters. Adobe and Microsoft updaters really are the culprits here. It's this that makes me wish the App Store would drop these product-killing restrictions. I don't trust every app developer to write a good updater.
moron 1 day ago 1 reply      
Marco is accusing Apple of having made a critical strategic error. Where are all the people who call him a member of the Apple cult and all that crap? Funny how they disappear when it comes time to talk about what Apple is doing wrong.
djbender 1 day ago 3 replies      
I really think he's missing the part where laypeople will see the Mac App Store as their main portal for software. Sure, us geeks will always know what the best option is, but what about the majority of common consumers?
crag 1 day ago 1 reply      
Again, Apple doesn't really care how developers feel. What matters are the masses, who don't care about these issues. The App Store is easy to use, accessible, and lets them download their apps on multiple Macs. It's a win-win from their point of view.

Apple's thinking is simple: "You (the developers) don't like it? PayPal is that way." Just that simple. And since the App Store is where most non-developer Mac users get their apps, we have a simple choice.

Do I like it? Hell no. Will I play by Apple's rules? Yes.

crazygringo 1 day ago 2 replies      
All these problems could be solved if there were a way for companies to transfer your existing non-App-store licenses to the App Store, allow for paid upgrades on the App Store, and allow App Store licenses to be associated with real serial numbers (or something else) that could be used to "take" your license with you, in the event the App Store suddenly becomes too restrictive.

Basically make it seamless, monetarily and "upgradily", to go to/from the app store.

Of course, Apple will never let this happen.

zemo 1 day ago 2 replies      
this whole article goes over my head, because it assumes the reader is already familiar with the sandboxing issue. I'm not a desktop OS X developer; I'm not familiar with the topic. I searched around and the only articles I could find are blog posts about how the sky is falling, with no sources from Apple that plainly state the app sandboxing requirements.

If you're going to write an article about how some upcoming thing is a doomsday event, please explain the event clearly and link to the proper sources, especially if your article is a criticism of a technical specification. That one or a few applications are backing out of the app store is not, in itself, evidence that they are justified in doing so.

This article has taught me nothing, but it has made me more afraid.

grayprog 2 days ago 2 replies      
As a Mac developer who's also affected by the mandatory sandboxing requirement, I fully agree.
One of our applications, Trickster, doesn't work sandboxed (being a system utility) as is and we're in this situation where we have customers on the Mac App Store who can't receive updates anymore. Needless to say, neither us nor our customers (who'll mostly blame us) are happy about this.
fecklessyouth 1 day ago 0 replies      
Somewhat related anecdote in light of 10.8: I never properly upgraded from OS 10.5. I pirated 10.6 and skipped 10.7, partially because I'm a cheap college student and partially because the changes between versions never seemed that dramatic to me.

I had to re-install my OS and thus downgraded from 10.6 to 10.5. All of a sudden, none of my iLife apps are working, plus a host of others, and I can't download Apple ones designed for 10.5. I lose momentum scrolling, and realize that, hey, an upgrade once every three years is a good idea. I wait for 10.8 to come out, so I can buy it like a good, normal consumer, and soon discover I need 10.6 to install it.

So I'm buying two upgrades? Alright...wait, I can't buy 10.6 anymore. 10.8 is the only OS available, and I don't have 10.6 on a disc. So it looks like I'm pirating again.

tnorthcutt 1 day ago 0 replies      
I think I agree with everything Marco says in this article except for

"Apple can never require an App-Store-only future"

I think they can, and will, regardless of their sandboxing or other policies.

guelo 1 day ago 3 replies      
I'm going to put this comment here even though it doesn't belong because I wanted to make people aware and there's nowhere else to put it. Some HN moderator is censoring stories about Twitter being down, probably the biggest tech story of the morning.

First, this story was on the front page and it got killed https://news.ycombinator.com/item?id=4296416

Then this other story, https://news.ycombinator.com/item?id=4296811, appeared briefly on the front page. But interestingly it wasn't killed it just doesn't show up on the front page.

If you scroll through the new story submissions you'll find a bunch of dead Twitter stories.

franzus 1 day ago 0 replies      
I tend to buy small utility apps in the range of $10 on the app store. But I would never buy "serious" software on the Mac app store.

The MAS already has this flea market feeling to me and so I'd rather have a real license from the original vendor than this app store receipt thingy.

jharrier 23 hours ago 0 replies      
Marco's follow-up post to this one is only technically correct because he uses words like "most", "many", "probably", and "nearly". In fact, do a Google search for "most many probably nearly" and his post is #6!

He should have just stood by his words. Instead of taking criticism, he reverts to treating readers as idiots who didn't understand his post. We understood. And many disagreed. It happens. From his follow-up post (quotes from original post):

I've gotten a lot of feedback on my Mac App Store post this morning, and I'd like to clarify some points and respond.
I did not say or intend to suggest any of these:

1. I will not buy anything from the Mac App Store again.

"But now, I've lost all confidence that the apps I buy in the App Store today will still be there next month or next year. The advantages of buying from the App Store are mostly gone now. My confidence in the App Store, as a customer, has evaporated.

Next time I buy an app that's available both in and out of the Store, I'll probably choose to buy it directly from the vendor."

2. Most Mac users will stop shopping in the Mac App Store.

"And nearly everyone who's been burned by sandboxing exclusions " not just the affected apps' developers, but all of their customers " will make the same choice with their future purchases. To most of these customers, the App Store is no longer a reliable place to buy software."

3. Most developers will stop putting apps in the Mac App Store.

"And with reduced buyer confidence, fewer developers can afford to make their software App Store-only.

This even may reduce the long-term success of iCloud and the platform lock-in it could bring for Apple. Only App Store apps can use iCloud, but many Mac developers can't or won't use it because of the App Store's political instability."

jsz0 1 day ago 0 replies      
How about a content rating system? People are familiar with this from movies, television and music. A developer could declare their un-sandboxed application the equivalent of NC17 while most users live happily in their equivalent of a PG13 universe.
tadhgk 1 day ago 3 replies      
Future of irrelevance to whom, exactly? The 1-in-10,000 users who like to play with root-level access and scripts and stuff, perhaps, but do the other 9,999 even notice the change? I'm thinking not.

Granted it's a big change from earlier times, but the strategy of sandboxing works really well for security and general OS integrity (as iOS has proved) in a world that has no patience for fragmented and exploitative software. Perhaps we lose a little something in the process, but it's in the service of a better overall integrated service and experience.

Oh and by the way: Microsoft is going the same way too with Metro.

kevindication 1 day ago 0 replies      
Contrast this end-user experience with Steam's. These are essentially the same kind of platform, except that Steam has 1) carefully cultivated a huge, multi-platform ecosystem of apps and 2) never willingly broken an application's environment.

Steam is arguably the only game distribution platform of significance (yes, yes, battle.net and its limited selection of widely played titles) and will continue to dominate because of the careful thought they've put into distribution.

Seems like Apple should take a lesson, no?

cageface 1 day ago 0 replies      
"This even may reduce the long-term success of iCloud and the platform lock-in it could bring for Apple."

Apple is never going to have the kind of platform dominance that would make iCloud really compelling. At least the other vendors realize that they need to operate in a polyglot world.

As much as I like my MBA and iPad I refuse to lock my essential data into a system I can only really access with hardware from a single vendor.

fpgeek 1 day ago 0 replies      
I'd like to believe Marco is right, but let's not kid ourselves. There's been a large, high-profile, five-year experiment on exactly this, and the disappointing results are in: [almost] nobody gives a sh*t about Freedom 0.
m_st 2 days ago 1 reply      
As a customer I totally agree. I wouldn't care about simple $5 tools being removed. But if more expensive tools like PDFPen Pro (which I bought) get removed, then I will certainly be very upset. And precisely because this danger is present, I'm being careful and no longer buying any expensive software from the Mac App Store whenever given the choice (except Apple software, of course). That's a pity for both developers and customers.
Aloisius 2 days ago 4 replies      
What software besides Postbox has pulled out? The author makes it sound like there is a mass exodus.

I've bought most of my software outside the App Store, so I'm not tuned in.

caf 1 day ago 0 replies      
It seems like there's a market opening here for a third party App Store equivalent, that offers the single-point-of-payment / ease of upgrades advantages for the consumer, but eliminates the developer pain points (sandboxing and paid upgrades).

This is viable for as long as third party applications can still be downloaded and installed outside of the Apple App Store.

jusben1369 1 day ago 0 replies      
I think PG highlighted that Apple doesn't get software back in 2009: http://www.paulgraham.com/apple.html

In that particular case he was lamenting the App Store approval process and how much it alienated developers. Here we are 3 years later with no change to that policy and no slowing down the App Store juggernaut.

Garwor 1 day ago 2 replies      
The neckbeards will buy from the developer. The ordinary customers Apple has been relentlessly targeting for a decade will buy from the Mac App Store. Which group is bigger?
zurn 1 day ago 0 replies      
The thing that kept me from installing even free apps from the Mac App Store is the same thing that keeps me away from Market/Play on Android: it requires an Apple ID.
mikejarema 1 day ago 2 replies      
I'm wondering how soon until we see OSX being jailbroken.

I suspect system level choices like this are bound to continue in order to push users in the direction of iCloud and AppStore usage (ie. vendor lock-in).

Though such moves appear draconian from our perspective, I believe Apple times such moves around when some internal metrics indicate a tipping point has happened, namely the developer backlash won't be substantial enough to affect Apple's bottom line.

It's only a matter of time before the iPhone dev team puts some effort towards serving the OSX crowd that wants to jump ship but not give up "OSX" completely.

smoody 2 days ago 3 replies      
It seems to me that there's an opportunity for someone to create a knock-off of the Mac App Store without the restrictions. I'd be a customer.
pkamb 2 days ago 0 replies      
Several of my apps won't receive any new updates because they use the Accessibility API, which isn't sandbox-compatible. It's a real shame, because developing for the Mac App Store for the past year has been awesome. Unfortunately the money/discovery isn't there for sole devs to go outside the Store.
Caballera 1 day ago 0 replies      
I don't know if it's been said or discussed before, but why don't developers just release yearly revisions to sell on the Mac App Store, or even the iOS App Store? You could have a GenericProductName '12, then next year release a '13 that has the new features and remove the prior version.
carson 2 days ago 1 reply      
I think the idea that Apple can't require MAS is wrong. Maybe they can't for everyone but I would bet they can for a lot of people and probably plan to. At some point developers will be buying the developer version of the OS that gets them around the sandbox.
js4all 1 day ago 1 reply      
The lack of paid upgrades is irrelevant. I never understood what people are missing here. Apple has shown how to do it: Phase out the old version and offer the new version as a new product.
olleicua 1 day ago 0 replies      
I think the first sign that the App Store wasn't trustworthy was Apple leaving gcc out of the free version of Xcode. I never trusted the App Store. I think that without Steve Jobs, Apple has lost its ability to pretend it's a decent company. Why doesn't someone do what Red Hat did, except for consumers?
jusben1369 1 day ago 1 reply      
Can anyone give a 101 explanation specifically around what the "sandbox" issue is?
mej10 2 days ago 1 reply      
I hope they will weaken this restriction, and will just not have non-sandboxed apps show up without searching for them, or without switching some "power user" option on.

Some types of applications simply cannot exist with these types of restrictions. They are removing entire classes (e.g. window managers, system utilities) of applications from the App Store with this.

soapdog 1 day ago 0 replies      
For those looking for an alternative app store for Mac, look at http://appbodega.com/ - I think it is great. It looks good, it's easy to interface with, and they appear to be quite responsive.
mzuvella 1 day ago 0 replies      
Seriously? Do you remember what company this is? Apple does this every time and then corrects it down the road. Why? Because they receive more press that way. Come on guys, they are not stupid.
Vecrios 1 day ago 0 replies      
I may have an incorrect understanding of the word "sandbox" in this context. So to clarify: what does the author of this post mean by "the sandbox is restricted"?
Google Fiber fiber.google.com
337 points by benackles  2 days ago   210 comments top 34
kevinalexbrown 2 days ago  replies      
This move might be good for Google, but I find it somewhat alarming as a consumer. Those who control or curate content (Google, NBC/Comcast, Facebook) should not control infrastructure. This was less of a concern when Google was acting as a conduit of information, but as Google moves to treat personal (if aggregate) data as property and profits from user-generated data like YouTube or Google+, the lines become significantly blurred.

There may be nothing wrong intrinsically with controlling content and infrastructure, but it seems to be bad for consumers generally, as exhibited by Comcast[0]. And while a high-speed competitor to local cable monopolies is exciting, I worry about trading one dictator for another just because there's some competition in the transition period from the old guard to the new.

Is there some safeguard in place Kansas City and other fiber recipients have arranged to prevent Google from having the power to exert cable-esque control? Not rhetorical, genuinely curious.

EDIT: I'd like to make it clear that I'm not anti-Google, I don't think they're evil, or abuse consumer data, or are actively trying to become the next Comcast. I am, however, expressing concern over the position of power they will find themselves in if Fiber takes off. So far I've seen a lot of comments suggesting Google is not evil, to which I agree, but I haven't seen any indicating that there are adequate checks in place to (relatively easily) prevent the abuse of power.


revelation 2 days ago 2 replies      
There is not much perceptible difference to a normal customer between 50 Mbps and 1 Gbps - the former handles two 1080p YouTube streams perfectly well, with quite some headroom for extra misc traffic. Torrenters will always crave more bandwidth, but even there it's going to be tough to fill a 1 Gbps downstream.
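
The arithmetic behind that claim is just division; a quick sketch, where the ~5 Mbps per-stream figure is an assumed era-appropriate number, not something from this thread:

```python
def concurrent_streams(link_mbps: float, stream_mbps: float) -> int:
    """How many simultaneous streams a link can carry, ignoring
    protocol overhead and contention."""
    return int(link_mbps // stream_mbps)

# Assumption: ~5 Mbps for a 1080p stream.
for link in (50, 1000):
    print(f"{link} Mbps link -> {concurrent_streams(link, 5)} streams")
```

At an assumed 5 Mbps per stream, 50 Mbps already carries ten streams, which is why the jump to a gigabit is hard for one household to perceive.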

But all of that is missing the point. The internet is not just a better TV with cats - it has true full duplex communication!

It's hard to believe because the monopolies in control of the last mile will generally offer you tons of downstream with little to no upstream. In some cases, the upstream is merely enough to send TCP ACKs when you saturate your downstream. It's a natural move for these big old companies, because upstream traffic from customers is more expensive for them and they are still stuck in the mindset of the "media consumption machine".

Google, of course, realizes that 1 Mbps of upstream is bad news for Google Hangouts and terrible for uploading 1080p video to YouTube. And all kinds of distributed systems benefit tremendously from matching upload and download bandwidth.

jtchang 2 days ago 4 replies      
I am absolutely ecstatic that Google is attempting to build the next generation wired network.

Why does this matter? Well owning the pipe is always a good move especially in the face of net neutrality. But there are all sorts of other tie ins. For one, payments.

Our current payment infrastructure is based around private leased lines. If you really wanted to take on the payments industry you have to start with the infrastructure. Otherwise you are always at the mercy of a credit card processor just shutting you off. When you own the network where the transactions actually flow it is a different story.

nilsbunger 2 days ago 3 replies      
We have google fiber at home - Stanford faculty homes have had it the past year, like Kansas City.

It's pretty cool. My record so far is to consume 400Mbps, using 4 computers downloading from about 10 places, all wired through gigabit switches.

In practice, though, it doesn't make much difference compared to a 30Mbps cable modem for most consumers:

most streaming video is < 10 Mbps;

large file downloads are generally limited by a server (or somewhere else in the network?), so it's hard to exceed 30-40Mbps download speed;

web browsing feels about the same, because it's limited by round trips of DNS and http requests, not by bandwidth (spdy will help here?);

many consumer-grade NAT boxes (linksys and friends) are capable of only 100-200 Mbps

The one place it's made a big difference so far is uploads. For example, backup to a cloud backup service (backblaze) often goes 50Mbps or higher. But I did have to try several backup services because some were limited on the server side to a few Mbps.

Running services from home could be a use case too, but then you get into reliability of power/etc, and the fact that so far you can't get a static IP address through google fiber.

So for now Google Fiber is mostly a fast cable modem from a "don't be evil" provider. I think the real disruption will come from new services that don't exist yet. What kinds of new things can be built if there's a big enough audience?
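
The pattern in these observations, that per-source caps rather than the local link set the ceiling, can be sketched in one line; the 35 and 40 Mbps caps below are illustrative numbers, not measurements from the comment:

```python
def achieved_mbps(link_mbps: float, per_source_caps: list[float]) -> float:
    """Aggregate throughput when each remote source caps its own rate:
    the sum of the caps, bounded by the local access link."""
    return min(link_mbps, sum(per_source_caps))

print(achieved_mbps(1000, [35]))       # one server-limited download: 35
print(achieved_mbps(1000, [40] * 10))  # ten parallel sources: 400
```

Which matches the anecdote: a single server-limited download barely touches a gigabit link, while many parallel sources start to approach it.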

kennywinker 2 days ago 1 reply      
Background for those who aren't following this: http://en.wikipedia.org/wiki/Google_Fiber
antimatter15 2 days ago 2 replies      
Going to the 404 page reveals a menu which hints (well, it says that there's going to be some plural number of cities, not too much else) at what is being announced tomorrow: http://fiber.google.com/savethedate/404

The main menu has links to "About", "How to get it", "Plans & Pricing", "Cities", "Help" and a button "Pre-Register"

Also, since that page doesn't seem to indicate the time, the Google fiber blog (http://googlefiberblog.blogspot.com/) says it's at 11AM CDT. Also an impressive stat is that they've apparently laid down over a hundred miles of fiber.

robomartin 2 days ago 2 replies      
...until Google's algorithms shut down your account; now you lose Gmail, Docs, Drive, etc., etc., etc., and fiber.

OK, I am kidding, to some extent. Maybe not. I'd sure like to see them move in a direction that assures users that all services will not be cut off without recourse for unknown algorithmic violations.

Nrsolis 2 days ago 1 reply      
Honestly, the more network is out there, the better.

There is probably an upper limit on the number of networks any given city can support, so it's going to be natural for some combination of content sources to be partnered with fiber distribution networks to revenue share.

It's enormously costly to deliver "wireline" fiber to the home service. Most of this is labor cost, but you also can't discount the impact of property taxes and upkeep on infrastructure that is supposed to weather the elements and wildlife (including human) for upwards of 30 years.

Fiber is a great solution, but wireless is still the most economical way to deliver access for the "last mile". If we ever find a way to provide low-cost, high-bandwidth wireless service within a one-mile radius that doesn't make the NIMBY types have an aneurysm, then fiber-to-the-home will seem as quaint as an individual copper pair to every residence.

disclaimer: In 1997-1998 I worked at a municipally owned city utility that was able to deliver 10mbps symmetric Internet access to a development of homes in FL. Bellsouth subsequently had laws passed to prohibit political subdivisions from engaging in the provisioning of telecommunication services.

rachelbythebay 2 days ago 2 replies      
Honest question: what sort of support can you expect when something goes wrong? Fiber attracts backhoes (buried routes) and bored hunters with shotguns (aerial ones). It's a fact of life. Who's going to do customer service, Google, or some other agency? What's their track record like?

(As the old gag goes, this is why you should carry a short piece with you, in case you are stranded on a desert island. Bury it and when the backhoe shows up, get a ride back to land with the driver).

bcherry 2 days ago 1 reply      
Unlikely to be any news about service beyond their long-planned Kansas City project. The skyline in the teaser is a dead ringer for the Kansas City skyline (for example, http://4photos.net/blog/wp-content/uploads/Kansas_City_Skyli...)
dkasper 2 days ago 2 replies      
So I'm all for getting better broadband in this country, and good on Google for trying to make it happen. But allow me to play devil's advocate for a minute. This is like AOL 2.0, right? Isn't it a really bad idea for one of the largest sites on the internet to also be your ISP?
sakopov 2 days ago 0 replies      
As a Kansas City resident, this is going to be incredibly interesting. We have a pretty decent tech scene here. I'm just wondering how this will impact it, if at all.
Legion 2 days ago 1 reply      
Man, was I pissed when Austin lost out to Kansas City for the Google Fiber for Communities pilot project. The lack of FIOS in a city like Austin is painful. Austin is "AT&T territory" and AT&T's UVerse service isn't even worth talking about.

I sure hope this announcement is, "Fiber is coming, and Austin's in the first wave!"

ricardobeat 2 days ago 3 replies      
1 Gbps is not as impressive as it was two years ago... 100 Mbps broadband is already becoming affordable here in Brazil; it's a given that 1 Gbps will be widely available in 2-3 years.
corkill 2 days ago 0 replies      
In Turkey at the moment.

Just got a 20 Mbps fibre plan for $30 US per month, no contract; they also provide the router and modem. I did have to pay $70 or so upfront for the no-contract option.

They also have a 1000 Mbps plan. First-world countries are really dragging their feet on fibre networks.

dmishe 2 days ago 1 reply      
Any chance that in the future, this project will drive ISP prices down? Internet in the US is ridiculously expensive compared to, say, Eastern Europe.
csense 2 days ago 0 replies      
What does Google hope to accomplish by making fiber available?

They want to enable entirely new applications.

For example, online video was an application enabled by the widespread availability of broadband Internet. Before broadband, downloading videos was possible, but streaming was not. Simply put, the (average) rate at which you download frames of video has to be greater than the rate at which frames are displayed for streaming to work.
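The streaming constraint described above is simple arithmetic. Here is a hedged back-of-the-envelope sketch (the function names and numbers are illustrative, not from the comment):

```python
# Streaming works only if the average download rate exceeds the playback
# (bit)rate; otherwise the player's buffer drains and playback stalls.
def can_stream(download_mbps: float, video_bitrate_mbps: float) -> bool:
    """True if the connection can sustain real-time playback."""
    return download_mbps > video_bitrate_mbps

def prebuffer_seconds(download_mbps: float, video_bitrate_mbps: float,
                      duration_s: float) -> float:
    """If the link is too slow, how long must you download before starting
    playback so it never stalls? (Assumes constant rates throughout.)"""
    if download_mbps >= video_bitrate_mbps:
        return 0.0
    # Total shortfall in megabits over the whole video, pre-downloaded
    # at the link rate before playback begins.
    deficit_mbit = (video_bitrate_mbps - download_mbps) * duration_s
    return deficit_mbit / download_mbps

# A hypothetical 1.5 Mbps DSL line cannot stream 5 Mbps HD video in real
# time; it would need to pre-buffer for 8400 s of a one-hour video.
print(can_stream(1.5, 5.0))                  # False
print(prebuffer_seconds(1.5, 5.0, 3600.0))   # 8400.0
```

With fiber the first check passes for essentially any of today's video bitrates, which is the qualitative shift the comment is pointing at.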

The most interesting changes were not quantitative, but qualitative.

Google -- and most HN readers -- probably believe that higher broadband speeds are an inevitability, although the process has been going much slower in the US than most of us would like. And new ways of using the Internet will be enabled as speeds get faster. And, if it offers fiber, Google will be at the forefront of that wave, which will help Google by:

(1) Accelerating the change, pushing those new markets to be created sooner than they would have been created without Google Fiber
(2) Putting it in a good position to capture the new markets -- i.e. if Application X is eating a lot of bandwidth on Google Fiber, that might be an early signal that the Application X space is a growth market and Google should find a way to get involved.

snorkel 2 days ago 2 replies      
1 Gbps upload and download speed. Wonderful. Unless you can give me a static IP address and let me serve whatever I want from that IP, I'm not falling out of my chair over this.
donbronson 2 days ago 1 reply      
Google will have to offer cable TV for this to work. I really hope they do. The MSOs deserve some competition.
salimmadjd 2 days ago 0 replies      
As Google gets larger and larger, it becomes slower and less nimble.

But given the other competitors in this market, I'm glad Google is making its move. The US is falling behind other countries when it comes to broadband access, and this will open up so many new business opportunities in the US.
However, as others said, I'll be curious about Google's neutrality when it comes to content. Will they block Vimeo in favor of YouTube?

jervisfm 1 day ago 0 replies      
The announcement is now streaming live at https://www.youtube.com/watch?v=6uZVqPuq81c&feature=play...
vampirechicken 1 day ago 0 replies      
> Those who control or curate content (Google, NBC/Comcast, Facebook) should not control infrastructure.

And those who were granted a monopoly in order to build the infrastructure should not control the content.

ww520 2 days ago 0 replies      
This is pretty amazing. It's a transformative move if Google can pull it off. Google might be best served by being the wholesale pipe provider rather than customer facing ISP.
spaghetti 2 days ago 0 replies      
What happens when a large company that Google deems a competitor tries to buy bulk use of the fiber? I could see Facebook lobbying for government regulation of Google's fiber in an attempt to secure competitive pricing.
DonnyV 2 days ago 0 replies      
I work for a GIS services company, and I know one of the major reasons we haven't gone full cloud is that we deal with really large data sets. If we had access to a large fiber pipe, we could easily dump all our servers. I would imagine any engineering, graphics, or video production office runs into the same problem. I see this as completing the cloud story and alleviating businesses of having to run their own data centers.
jebblue 2 days ago 0 replies      
If Google doesn't do it who will? Who else has the vision? As long as they keep not doing evil everything is cool. If they do start doing evil we are screwed but we will have high bandwidth to get on the Internet and read about how messed up we are.
andyjsong 2 days ago 0 replies      
How does Google Fiber compare to offerings in other countries, like S. Korea and Japan, where bandwidth is much higher? We're #1? USA?! USA?! err Kansas City?!
allbombs 1 day ago 0 replies      
This confirms Google can't win the TV race without owning internet access
conradfr 2 days ago 0 replies      
I'm so dumb I thought for a moment that Google would be announcing some kind of new clothing fabric.
radarsat1 2 days ago 0 replies      
Google is coming out with a breakfast cereal?
lizzard 2 days ago 0 replies      
For about 3 seconds I thought maybe this was about yarn.
ebtalley 2 days ago 0 replies      
please santa, please oh please oh please.
suyash 2 days ago 1 reply      
how is it going to be useful for the everyday developer?
OAuth 2.0 and the Road to Hell hueniverse.com
283 points by dho  2 days ago   70 comments top 22
carsongross 2 days ago 2 replies      
Having written client code for multiple OAuth2 implementations, I can tell you: it's a total clusterf$%k, and for exactly the reasons Eran outlines: the OAuth spec is a giant ball of design-by-committee compromise and feels exactly like the disaster that is XML web services and its technologies.

We would be far better off if a single company/dictator (like, shudder, Facebook) came up with a simple, competently designed, one-page authentication mechanism, provided some libraries in the popular languages, and we all just went with that.

rendezvouscp 2 days ago 7 replies      
It saddens me to see OAuth 2.0 in this state. As someone who's made really minor contributions to 2.0 (and thus listed as a contributor in the spec), I have been really looking forward to it being finished and ready for production use (where production use means no more drafts). I stopped following the mailing list last year because most of the threads seemed all too familiar or out of my realm of knowledge to contribute (read: enterprisey).

I run a 1.0a service provider and write clients against it. I'm thinking about wading through the current 2.0 draft, picking out the relevant parts to small startups with an API, and publishing a post about how to implement the sane parts of the 2.0 spec.

ap22213 2 days ago 2 replies      
I've worked on standards committees off and on for many years, and his experience seems typical of the issues that crop up. There are many problems with standards groups and the way that they work.

One major one is that, often, the participants come from different areas with different perspectives and visions of the outcome. Since participants rarely go to work with a firm set of agreed upon requirements or use cases, it leaves each member room to craft their own understandings of the goals. I've seen way too many working groups attempt to create a 'master' spec that takes into account all possibilities. Or, alternatively, clusters of people form from similar problem domains, and powerful clusters can take the work off course.

A second major problem is that there is often a lack of real user participation. Standards work is about as dull as engineering work gets. Worse, it seems to attract certain types of engineers who love building specifications. Because of this, the real users usually flee immediately. This leaves a body of over-educated, overly technical people to argue a lot over sometimes irrelevant details. Those types of people are definitely necessary for the standard to work in the end, but because the real users flee, their influence is usually unchecked.

A third reason is that working groups rarely seem to use iterative and incremental life cycles. There's rarely any working code, often little V&V, and participants and users often can't experiment with their ideas. As we know, what's good in theory, sometimes doesn't work well in practice.

I think there are systematic reasons much standards work fails. The 'design-by-committee' outcomes arise from 1) lack of firm use cases to bound the work, 2) dissimilarity between participants, 3) lack of real user participants, 4) lack of iterative / incremental cycles.

bryanh 1 day ago 0 replies      
We at Zapier have seen it all when it comes to APIs. IMO, the biggest poison in OAuth is optionality: optional grant types, various refresh token options, miscellaneous state and scope options, etc.

What is the point of a standard that cannot be implemented the same way twice? It's insane.

That said, most smaller vendors stick to the sane bits; it's the big guys like Intuit or Microsoft that over-engineer their auth and pull out every fiddly feature in the spec.

Robin_Message 2 days ago 1 reply      
Hang on a minute, from where I'm standing as a client developer OAuth 2 is much better than OAuth 1.

Firstly, reducing the burden of tricky and unnecessary crypto code on the client is useful.

Secondly, some of the article's points don't even make sense, like saying tokens are necessarily unbounded, which isn't true. The issuer can easily include the client_id in the token and check for its revocation when used, as it did in OAuth 1. The same is true for self-encoding: clients don't have to issue self-encoded tokens and can instead issue unique id-style tokens with long expiry times. As for refresh, that's unfortunate but issuers could easily work around it if the OAuth 1 way was preferable.
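The self-encoded-token approach described above can be sketched in a few lines. This is a hedged illustration of the idea, not anything from the OAuth 2 spec itself; the secret, claim names, and revocation set are all hypothetical:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"issuer-signing-key"   # hypothetical issuer-side signing secret
REVOKED_CLIENTS = set()          # hypothetical revocation list, issuer-side

def issue_token(client_id: str, ttl: int = 3600) -> str:
    """Self-encode the client_id and an expiry into the token and sign it,
    so the issuer needs no per-token DB row but can still revoke by client."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"client_id": client_id,
                    "exp": int(time.time()) + ttl}).encode()
    )
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return (payload + b"." + sig).decode()

def validate_token(token: str) -> bool:
    """Check signature, expiry, and per-client revocation on use."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        return False
    return claims["client_id"] not in REVOKED_CLIENTS
```

The point being made in the comment holds here: bounding tokens and checking revocation is entirely up to the issuer, and the spec doesn't prevent it.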

In short, OAuth 2 is simpler to implement for the client in exchange for being slightly harder on the issuer, whilst also being more flexible. Yes, it relies on SSL for security. So does your bank.

krosaen 1 day ago 0 replies      
Might OAuth WRAP make a comeback? Bret Taylor wrote about it years ago as a simpler approach:


(his blog seems to be defunct, hope that's not permanent)

Seems like OAuth WRAP has been officially deprecated in favor of OAuth2.0 but given these issues...

andraz 2 days ago 2 replies      
Yes, this is sad. And it happens all the time.

Exactly the same thing happened to the whole Semantic Web effort at W3C. It basically got overtaken by enterprise and now it is of little interest or use to regular web developers.

pjzedalis 2 days ago 3 replies      
I don't understand any of this. Microsoft develops and publishes 'protocols' (used lightly) and everyone hates them because they are pushing workable code out on everyone else...

A bunch of people in a committee take three years trying to build the security token system to end all security token systems, still have nothing to show for it, and we're sad?

Why are people trying to do this anyway? oAuth is just an idea. Hey here's a really good way to handle things and if you do it this way it has some really great benefits.

Why aren't these things like javascript frameworks where everyone has an idea. I don't think it's practical that every sdk and framework will use one security system that was agreed upon. It's just not going to happen. Everyone has unique requirements.

I think he's just upset that more people have concerns and needs and nobody can compromise to solve all of them. Well yeah. Naturally. They wouldn't be needs if people could just overlook them for someone else's idea on how to do it. They would just be problems people are looking for someone else to solve.

Jach 2 days ago 0 replies      
> With very little effort, pretty much anything can be called OAuth 2.0 compliant.

This was both a relief and slightly horrifying when I was working on an OAuth2 server in Node. It encourages a lazy "implement the parts we care about and that are required, and take shortcuts for unspecified things."

I thought the separation of access tokens and refresh tokens was wrong, because once you're just giving the client an encrypted string to avoid certain DB lookups later, you can put whatever data you want in it (the spec doesn't care), including managing refreshes, revokes, etc. I like the idea of expiring tokens, of course, but it would simplify the client significantly to just replace the currently used token with a new token issued by the server if one is returned. I recall the 'standard' flow is "request with access token, fail, request new access token with refresh token if you have one, maybe succeed, maybe get a new refresh token, if succeed request with new access token". Having the access token manage the data to refresh itself is simpler.

I'd agree it's bad that token security is reduced to cookie security by default; really, the whole rant is spot-on.
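For concreteness, the 'standard' retry-and-refresh flow the comment recalls looks roughly like this on the client side. This is a hedged sketch; `api_call`, `refresh_endpoint`, and the token-dict shape are hypothetical stand-ins, not real library APIs:

```python
def request_with_refresh(api_call, tokens, refresh_endpoint):
    """Try the access token; on expiry, exchange the refresh token for a
    new access token (possibly rotating the refresh token too) and retry.
    tokens: dict with 'access' and optionally 'refresh' keys (hypothetical)."""
    resp = api_call(tokens["access"])
    if resp != "expired":
        return resp
    if "refresh" not in tokens:
        raise PermissionError("access token expired and no refresh token held")
    # The refresh response may include a rotated refresh token; keep the
    # old one only if no replacement was issued.
    new = refresh_endpoint(tokens["refresh"])
    tokens["access"] = new["access"]
    tokens["refresh"] = new.get("refresh", tokens["refresh"])
    return api_call(tokens["access"])
```

Compare that three-step dance with the comment's suggestion: if the server could simply return a replacement token alongside any response, the client would shrink to "use the newest token you've been given."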

georgemcbay 2 days ago 1 reply      
Design by committee fails again.

Maybe next time it'll work!

JamesLeonis 1 day ago 0 replies      
> The enterprise community was looking for a framework they can use with minimal changes to their existing systems, and for some, a new source of revenues through customization.

That right there is what killed OAuth 2.0. From day one, these members didn't have the specification as their highest priority. They were only thinking of how the specification could serve their own ends. This isn't unique to the enterprise world, but that mindset has more than its fair share there. The web community was the group that put the specification as its highest priority. When the specification was perverted, they left.

sgt 2 days ago 0 replies      
Interestingly, I am developing a single sign-on provider as we speak, and I chose OAuth 1.0. I did it mainly because the libraries (jersey-oauth in my case) seemed more mature for 1.0, and because 1.0 is a standard, whilst 2.0 is still a draft at the moment.

I do realize that it's slightly more complicated for the client developer, but all things considered I think documenting my API in the best possible way will outweigh the perceived disadvantages.

nerd_in_rage 1 day ago 0 replies      
I witnessed one guy implement an OAuth 2.0 provider completely wrong (he was accepting user credentials as client credentials, or something similar.) This guy was smart, and just couldn't understand the spec.

Upon reading the spec, it seemed that OAuth2 is really just some rough guidelines. Pick and choose what you need for the particular flow you're implementing.

jaequery 2 days ago 3 replies      
Somewhat shocking. I guess I'll be moving away from OAuth; it feels like a relief and scary at the same time. What are some good alternatives?
olliesaunders 2 days ago 2 replies      
What does WS-* mean?
msie 1 day ago 0 replies      
Heh, why not form another committee with just the web guys and without the enterprise guys and create your own OAuth 2.0? You'll get something done before the enterprise guys and they'll be inclined to use your working protocol.
krmmalik 1 day ago 0 replies      
Just been reading about the debacle with XML and the W3C, and this whole OAuth business sounds like it's heading down the same path. It seems like the browser vendors had the ultimate say when it came to HTML. In terms of OAuth, I wonder where the ultimate power lies, because those would be the factions that need to do something right now to deal with this mess.
abarth 2 days ago 0 replies      
Something similar is happening to HTML5 in the W3C:
kyberias 1 day ago 1 reply      
Again, it's very hard to read the small grey text on a white background.
hippieheadcase 2 days ago 1 reply      
What was the supposed purpose of OAuth 2.0?
PaulHoule 1 day ago 0 replies      
pluginitis strikes again
Those budget 27" IPS displays from Korea are for real techreport.com
282 points by geoffgasior  2 days ago   154 comments top 25
riobard 2 days ago 2 replies      
Hmmm, LG panels...

These panels have been available for a few years on Taobao.com (China's eBay, if that helps) in small quantities, and there was a lot of excitement about making your own “Cinema Display” at 1/3 the price. As a big-screen lover, I was also intrigued. So I did some research.

Here's the background story I heard a year or two ago from an anonymous guy claiming he works in LG's factories in China. I couldn't verify whether it's true (short of seeing LG's contract with Apple, which is impossible), but it makes a lot of sense to me anyway. You'll have to judge for yourselves.

These 27" S-IPS (yes, not e-IPS) panels were indeed manufactured for Apple's iMacs and Cinema/Thunderbolt Displays. Apparently Apple has pretty high and tight standards (which they do, if you've ever used the authentic ones) for these panels. Once in a while a production run doesn't meet Apple's expectations for some reason (e.g. color/brightness/contrast uniformity). So Apple rejects the faulty batch, and LG has to find some creative way to deal with the rejects without losing too much money (these are expensive panels) AND without breaking the contract with Apple.

It turned out Apple forbids LG from reselling rejected 27" panels to any well-known brand in meaningful quantities. The restriction makes a lot of business sense: you don't want a major brand suddenly flooding the market with comparable displays at less than half the price of iMacs and Cinema/Thunderbolt Displays, especially when you've spent a lot of money to secure the supply of such giant panels.

So what does LG do in the end? They first sort the faulty batch into two categories: the better ones that can be salvaged by LG itself, and the worse ones that have to go somewhere else. LG re-cuts the slightly better ones into smaller panels (24" and below) and resells them to its partners as high-grade IPS panels, as this is not forbidden by the contract with Apple (only 27" ones are). And the worse ones? They go to various unknown brands in small quantities (again, not forbidden by the contract).

My bet is that these Korean panels are from the second category.

Xcelerate 2 days ago 4 replies      
These monitors are a great deal. One thing that always bugs me on forums (or deal websites) where they are discussed is that people are quick to point out that these monitors have A- panels -- the rejects from the Apple and Dell supply.

While this is true of a few of the companies, it is not true of all of them. "Korean monitors" isn't one entity: there are a lot of companies selling these things, and a lot of them are A-quality panels. (So the lesson is not to just repeat things you hear without finding a primary source first.)

I believe most rumors point to Apple releasing a 27 or 30 inch super resolution display within the next year. I'll be waiting for these Korean companies to release their own version and then snatch it up for a cheap price.

sachingulaya 2 days ago 2 replies      
As an owner of one, I'll toss in my $0.02.

I purchased the Achieva Shimian 27" off of eBay for $290 a few weeks ago. It arrived within 5 days of purchase from Korea. There are no dead pixels or other defects. I hook it up to my macbook pro using: http://www.amazon.com/Monoprice-DisplayPort-Thunderbolt-Dual...

This monitor only accepts dual-link DVI input. You can buy the HDMI version for $350 if you want.

Other variations include "pixel perfect" displays. Pixel perfect means they opened the box and confirmed there were no dead pixels. 80% of people on hardocp reported no dead pixels.

The stand is definitely a bit wobbly.

For $290--it can't be beat.

rogerbinns 2 days ago 2 replies      
Annoyingly, they are 16:9, which means you lose some vertical resolution versus 16:10. I'm pretty sure I bought the last two 16:10 displays in this area.

For those who care about colour, check out http://www.hughski.com for a device and http://lwn.net/Articles/499231/ for details. It turns out that manufacturers do ship displays with completely wrong colour calibration (yes, I'm looking at you, Lenovo).

citricsquid 2 days ago  replies      
Hmm, I currently have 3 x 24" Samsung monitors and I've been considering the Samsung MD230X6 (6 x 23", 1920 x 1080) but the cost (around $4,000) vs. the $1000 cost of getting 3 x 27" (2560 x 1440) means I'm tempted to try out these "korean" monitors.

My main concern is why they are so cheap. The article mentions that they're displays that companies didn't deem fit to sell. Does that mean these are displays that were due to be destroyed and someone is just selling them on, or are they purchased in bulk at a big discount and then resold? I figure all that matters is whether they work, and it's only $1,000, but I would be much more comfortable purchasing if I knew exactly what their story was.

northisup 2 days ago 5 replies      
We bought a bunch of them at DISQUS; about 20-30% have failed within the first three months of use. Would not buy again.
ori_b 2 days ago 2 replies      
I wish that they were still making 4:3 LCDs. I like having 3 side by side, and I currently have that setup with 3 20" 1600x1200 monitors, but at some point I'll want to upgrade to something with higher pixel density. My concern is that with new wide screen displays, things will span too wide horizontally to be usable.
kristofferR 2 days ago 1 reply      
I've heard some great things about the Yamasaki Catleap, another cheap 27".

I might be ordering one of them soon.

It seems the vast majority of buyers have received perfect screens and been really happy with their purchase; check out the reports in the table here:

mappu 2 days ago 1 reply      
A friend of mine bought one recently, and it arrived yesterday... his comments were along the lines of 'Build quality is not as good as Dell, but passable, and it's a third of the price'. He found a calibration file and seems very happy with the purchase.

The common brands to search for are the Yamakasi Catleap and (to a lesser extent) the Shimian QH270.

kitsune_ 2 days ago 2 replies      
The last thing I would compromise on with my setup is the monitor. After all, I'll be staring at this thing every day for at least two years, if not more.

No TN-Panel. No glare. I also refuse to buy a 16:9 monitor.

rrmm 2 days ago 1 reply      
The guy in the article mentions having trouble figuring out how to change the brightness using the non-OSD controls.

As great as LCD monitors are, the one thing I miss about CRTs is having an analogue brightness control. It made it really convenient to adjust the monitor at night.

(Also, I miss not having blue LEDs and touch-sensitive controls. They really need to force designers to read Don Norman's book.)

jfb 2 days ago 0 replies      
I rock the Cinema Display, but would seriously consider one or two of these as secondary displays. Of course, if the mooted third-party Thunderbolt breakout boxes actually show up (and work), the economics change for me.
kevhsu 2 days ago 0 replies      
I got the Auria EQ276W from Micro Center for $400+tax last week. Fantastic purchase. It includes DisplayPort, HDMI, DVI-D, and VGA ports. Colors are vivid, and I don't believe I have any dead/stuck pixels. Mine has minor backlight bleed, but probably not more than the $700+ Apple and Dell displays.
wesbos 2 days ago 2 replies      
I've been researching these lately as well and I've found the Crossover 27Q LED to be the best build quality and the nicest casing.

Quite tempted to snap one of these up before they get popular and drive the price up..

theatrus2 2 days ago 1 reply      
Sadly, not a good monitor review (not up to the standard of, say, AnandTech). If you're going to do a proper review, you need to check color accuracy, tracking, gamut, etc.
duncans 2 days ago 1 reply      
Jeff Atwood got some, doesn't look like it worked out well:

"two of the 27" Korean $350 s-ips monitors arrived. They are amazing panels, but lack of hardware brightness control may be a showstopper." https://twitter.com/codinghorror/status/224213213649190913

"beautiful S-IPS panels but no hardware LED backlight brightness control times 3 is untenable. Oh well, experiment over." https://twitter.com/codinghorror/status/224217599746117632

Scene_Cast2 2 days ago 0 replies      
Anyone looking for a high-quality review of the panel, check out http://www.tftcentral.co.uk/reviews/dgm_ips-2701wph.htm. The Pro/Con conclusion is at the very bottom of the page.
I cannot recommend tftcentral enough as a monitor review site.
dshep 2 days ago 0 replies      
Was interested until I got to: "glossy anti-glare coating". I think I'd have just said it was glossy.
eliasmacpherson 2 days ago 1 reply      
the HDCP complaint he lists seems like a feature, not a bug.
luigip 1 day ago 0 replies      
Some more discussion of these monitors here: http://www.tftcentral.co.uk/news_archive/26.htm#korean_ips27
serialx 2 days ago 0 replies      
Take a look at this: http://www.danawa.com/product/list.html?defSite=DISPLAY&...

Simply drop the last three zeros from the price tag to convert to dollars. There's even a 220-dollar display!

overworkedasian 2 days ago 0 replies      
There is a lot of good information here on various 27" displays from Korea: http://www.overclock.net/f/44/monitors-and-displays
brilyient 2 days ago 0 replies      
I can't wait for "tablets" to be sold as "portable displays" that we can plug into small, low-power, general-purpose computers. We'd get the resolution of the Apple iPad, but we could use a real keyboard and a better, open-source kernel, and we could do real programming without the Apple-style lockdown nonsense.

The title is a little misleading. The "non-budget" displays are often made in the same place (Asia, not the U.S.), from the same parts, maybe even in the same factories, by the same workers.

adrianwaj 2 days ago 2 replies      
The big question is: do they use LED or CCFL backlighting, and are they the cheaper e-IPS? What is their response time, >10ms?
hastur 2 days ago 7 replies      
OK, so let's bring this hype down to Earth.

For this kind of money, this can only be an e-IPS monitor. (The specs on eBay say S-IPS, but that's certainly a lie.)

For those who don't differentiate between the different variants of IPS, e-IPS is a relatively new thing introduced around 2 years ago (by LG, I believe) and it's a simplified and much cheaper to manufacture version of IPS. Its characteristics are somewhere between TN and IPS.

Apart from the display itself, a monitor's quality is also very much dependent on its electronics. For this kind of money it's obvious they've put the absolute minimum into this thing.

(For a little comparison: I have an old, med-quality 19'' NEC 1990FX monitor and a cheap 23'' HP 2310ti touch monitor. The NEC weighs 9kg, the HP 8kg. How can a smaller monitor be heavier? It's all electronics.)

Same applies to other aspects: you can have no confidence in the monitor's electromagnetic emissions or reliability. And if the power supply fries for some reason in 3 months, don't count on your warranty.

I mean, you didn't believe for a second that a no-name brand that sells in the US purely through eBay will honor any kind of warranty, did you?

Gnome: Staring into the abyss gnome.org
276 points by jonrob  1 day ago   169 comments top 28
kijin 1 day ago  replies      
1. As the author says toward the end of the article, I think the biggest problem with Gnome nowadays is that only a small number of people actually use it on a day-to-day basis. Popular distros like Ubuntu and Mint have shifted away from it. No matter what merits Gnome 3 might have, it was such a flop in its first few releases that it has the Windows Vista stigma attached to it. Of course, there's GTK and several Gnome apps that people do use on a daily basis. But for many people, Gnome itself is decidedly uncool. No wonder they don't want to contribute to it.

2. If Gnome really wants to win back the hearts of potential contributors (i.e. power users), they'd better make programs that appeal to that demographic. People who have the skill and motivation to make significant contributions to a free software project often want a lot of room for configuration, including the option to use the desktop in a traditional manner. Taking away those little checkboxes and toolbar buttons is like slamming the door on power users. You might win a billion non-technical users, but none of them will ever submit a single patch.

3. Gnome is too big for its own good. Why does a desktop environment project need to maintain a complete stack of apps and libraries, from GTK to Gnome Shell to a text editor to a bundle of games to a web browser to an email client to a media player to a full-blown spreadsheet app? Why can't they just tell people to get a third-party browser? They should spin off the rest and focus on GTK, the Shell, and a small number of essential utilities. If Epiphany or Gnumeric died a slow and lonely death, how many people would really care? Heck, if you don't have the manpower to maintain anything else, just give me GTK so I can install xfce or lxde on top of it. It's really just Firefox and LibreOffice and VLC that I want, and I don't need Gnome to run them.

Edit: some rephrasing.

mindcrime 1 day ago 1 reply      
One more anecdote for you... I'd been a content (if not exactly thrilled) user of Gnome for a decade or so. Then, I bought a new laptop, which prompted me to install Fedora 15, which was my first exposure to Gnome3, as my old laptop was running a really old Fedora version which had Gnome2.

So... after 2-3 hours of Gnome 3, I had had more than enough, which prompted me to bite the bullet and switch to KDE. There is nothing good I can say about Gnome 3... trying to use it was painful in just about every way I could imagine. Nothing worked the way I expected, and nothing was intuitive at all.

KDE, on the other hand, has been a pleasant surprise. I'd dabbled with it 10+ years ago, but never made the permanent switch... and given that they had gone through their own "change everything and piss off all the users" phase a while back, I wasn't sure what to expect. But after using it for a couple of days, I couldn't be happier. It took a few minutes to figure out some of the new approaches they've adopted, but by and large, with a little trial and error, some exploration, and intuition, I was back to productive work almost immediately.

I have no hard feelings towards the Gnome team or anything, but they're just trying to go in a direction that I'm not interested in. Best of luck to Gnome, but KDE is a clearly superior choice for me right now, and I'm thrilled to have made the switch.

yason 1 day ago 1 reply      
Gnome 3 shattered the experience and momentum.

While Gnome 2 was gradually approaching ultimate goodness with its essential configurability (not too many knobs, but an explicit set of gconf properties you could tune if you wanted), consistency, ease of use, and ten years of GTK2 providing a platform for applications that look and behave uniformly, it was certainly lacking on the integration side (networking, messaging, etc.), to which Gnome 3 is a response.

However, Gnome 3 broke so many little things that it doesn't matter what the new features do. This is one of the cases where Microsoft has been right: when you're big enough, don't muck with backwards-compatibility.

hcarvalhoalves 1 day ago 3 replies      
I'm afraid everything desktop-Linux, and by extension Gnome, has lost a lot of momentum to developers moving to Apple + web development. Nowadays most focus on the open source front seems to be on lower-level projects (languages, libraries, servers). FOSS focused on end-user applications remains a niche for academia and developer-centric tools; little has changed compared to what was available in the 2000s.
mithaler 1 day ago 0 replies      
The key problem with Gnome 3 is that it wants to be both general-purpose and opinionated.

Gnome 2 was an excellent basis for a desktop because there were so many ways it could be used. It didn't limit you; you were free to trim out things you didn't want, or add things you wanted. Aside from that, it got out of the way.

Gnome 3 aims for the same demographic, but tries to force too many opinionated decisions on its users from the start. Distros that care about branding hate that because it makes it hard for them to differentiate (see Ubuntu). Many users hate that because it's too many things that they can't change without learning a new Javascript platform. It's certainly fair to say that typical Linux users (their target market, whether they like it or not) aren't used to that.

This post is wrong to blame tablets and smartphones for the decline. Linux users aren't abandoning desktops; they're abandoning Gnome 3, because it isn't giving them what they want. It's that simple.

abenga 1 day ago 10 replies      
This is really sad if it's true. I'm probably in the minority, but I think GNOME 3 (even without extensions) is the best Linux desktop at the moment. It does seem strange when you look at it at first, but once you use it for a week or so and use anything different (even GNOME 2 which I'd used for years), you feel stifled in a way. It's hard to put into words, it just feels like it's out of your way.

Anecdote: I work at a small actuarial firm that uses Linux desktops, and when I migrated everyone (ten people) over to Ubuntu 12.04, they all loved GNOME 3.

ebassi 1 day ago 1 reply      
as one of the two people mentioned by name in Benjamin's blog post, I'd like to point out that I didn't "leave GNOME" (to work on other stuff).

I am still involved in the community, I am a director of the foundation's board, and I'm still working on Gnome projects in my spare time - which is actually easier these days since I moved from intel to mozilla.

I'm typing this from GUADEC 2012, in A Coruña; the conference is absolutely delightful, there's a lot of talks about direction and future involvement, and everyone here is really excited about moving Gnome forward, as well as regaining the enthusiasts market.

not everything is bleak and bad.

mike-cardwell 1 day ago 3 replies      
This would have worried me back when Unity was crap a year or so ago. Now Unity is actually good, it doesn't matter if Gnome dies a slow death; we'll still have multiple good alternatives that are still being enhanced.
keithpeter 1 day ago 0 replies      
RHEL 7 will be based on Fedora 18


which means we might see RHEL/CentOS/Scientific Linux/PUIAS users on Gnome 3.6


Personally, I've worked out I spend little time actually using the overall desktop GUI, so I'm not too sensitive to changes in UI logic. Typical end user I suppose.

I suspect the change to systemd will cause rather more fur to fly than the GUI in Enterprise circles.

I hope the author of the original article gets a bit of support and finds a direction for his labours.

CrazedGeek 1 day ago 2 replies      
This is the (dead) G+ post by Linus that he refers to: http://digitizor.com/2011/08/04/linus-torvalds-ditches-gnome...
loftsy 1 day ago 1 reply      
I'm fascinated that all the real innovation going on in desktop environments (gnome3, unity, windows 8) seems to be taking a hammering. This is possibly just a case of the vocal minority and normal resistance to change but it will be interesting to see it all shake out in a year or two.

In my view Gnome should try to emulate the android model. Build the whole stack up to the widget level (they are really good at this) and then publish a couple of apps and an app store.

rnadna 4 hours ago 0 replies      
If the existence of both GNOME and KDE has held back developers who are reluctant to halve their user community or double their coding efforts, then the collapse of GNOME may be a good thing.
PaulHoule 1 day ago 1 reply      
to understand the sickness of Gnome you've got to go way way back to the beginning...

I remember when KDE first came out -- I heard the first press releases and thought they were on drugs, but when I downloaded and built, I was like "wow! this is so close to being a commercial desktop"

Now, in 2012, we have KDE and Gnome and a few off-brand desktops and it's still like "this is so close to being a commercial desktop" -- but there isn't any Wow anymore.

Red Hat didn't like the license of the Qt toolkit, so they had to go out and build their own desktop, which was probably the most disastrous decision in the history of Linux -- it's like Windows Vista without Windows 7.

For a long time all of the major Linux distros have been wasting time and resources trying to make Linux something nobody cares about. There's an obsession, for instance, with office suites that are so bad they make Microsoft Office look like a paragon of reliability and ease-of-use.

On the other hand, there's been a complete disregard for the people who ~really~ use Linux such as sysadmins and developers.

I've recently set up two laptops that run Windows as a host and Ubuntu linux inside Virtualbox. I use "putty" as my *term program and Cygwin/X to run the occasional GUI app I need from Linux. It's a sign of what a disgrace the Linux desktop is that putty has the same ease-of-use and reliability that xterm had 15 years ago, whereas the "terminal" program that Ubuntu tries to push on you is a bloated disaster in which cut-and-paste is as miserable as it was in Win 3.1.

forgottenpaswrd 1 day ago 4 replies      
GNOME is not necessary anymore. It was in the past, not today.

We have Qt that really works, much much better, and with LGPL license.

You can run stellarium, VLC or Marble in Mac or Windows without problems.

If you try to use GIMP or Inkscape on a Mac, it opens X11, and copy and paste does not work (in Inkscape it copies pixmaps instead of vectors!!); what a botched job.

In windows you will have a lot of problems too.

GTK support for OpenGL and OpenCL was terrible, forcing you to hand-code everything at a low level, while in Qt it works as well as it does with Cocoa.

Let GTK die and improve(or fork) Qt.

madmax108 1 day ago 2 replies      
Damn, I knew Gnome was doing badly, but didn't know it was this bad! :|

I personally am a fan of Gnome, and hope that they become relevant again. Unity is too fancy for my liking, and Unity 2D, which seemed like an option, is now deprecated! :|

aaronh 19 hours ago 1 reply      

meh, other than bus factor this seems overblown to me. I use Gnome3 on Fedora and it is great. It takes advantage of Fitts's law and there is just less fuck-around-ability with it. (I thought even the Alt-to-PowerOff controversy was overblown; I suspend far more often than I shut down, and this is a welcome simplification.) If anything I felt Gnome3 had ushered in a renaissance in Gnome. What "new" goals does Gnome have to have other than creating a great desktop?

Please don't drive more developers away with more gratuitous Hacker News "X-is-dying" bitching.

aksx 23 hours ago 0 replies      
I use Gtk on a daily basis while working on elementary OS. I just love their Vala language; it gives me the ease of C# and the speed of C. But the thing is, there is not much documentation present. The IRC channel feels slow. I encountered a bug in Vte about a week ago about transparency and no one is able to help.
This, as a dev, discourages me. I use Vala and Gtk solely because the elementary team uses them. Gnome becoming uncool has started a vicious circle which will lead to its death.
jstalin 13 hours ago 0 replies      
Wow, am I one of the few people who really likes Gnome 3? I use it on my primary home PC on top of Ubuntu 12.04. I love it.
sgarrity 1 day ago 0 replies      
For what it's worth, I've been very pleased with Gnome 3 and the new direction and focus in Gnome interface and design.
UK-AL 1 day ago 0 replies      
A lot of people are mentioning that a lot of distros are moving away from Gnome. In fact they are moving away from GNOME Shell, not Gnome. If Gnome has problems, we all have problems.
MrUnderhill 1 day ago 0 replies      
I've always wondered why the platform Blender uses hasn't turned into a generic UI toolkit or even window manager. Granted, Blender itself is rather overwhelming at first glance, but you only have to spend a few minutes with it to fall in love with the snappiness and adaptivity of the interface.
CD1212 1 day ago 3 replies      
In my opinion Gnome needs a completely new start, from the ground up.

1. When I last used GTK (about 2 years ago) it felt too big, old and bloated. If GTK were simplified and followed Qt's lead into scripting and easier interfaces (e.g. Qt Quick), plus an MIT or LGPL license, this would encourage a new culture of apps.

2. I hated Gnome 3 and Unity for that matter. Gnome 4 needs to take a step back and get out of the way. You don't use the computer just for Gnome, but you use Gnome as a stepping stone. All common apps should be one click away and everything should be as customizable and flexible as possible.

As kljin said, some Gnome apps are redundant and the workforce could do a much better job focusing on the core issues, that could bring more people back to Gnome and hence possibly continue these projects again in the future.

scribblemacher 23 hours ago 0 replies      
I like Gnome 3 in that it made me explore other WM/DE options and think more about what I wanted.

I found Awesome, and though there are some things I don't like about it (or namely, some programs that don't work nicely in a tiling environment), every time I try to use another WM or a DE, I miss the speed and keyboard accessibility of Awesome. It's a blast to use.

I installed KDE for my wife to use. It takes KDE an order of magnitude longer to start than Awesome, for all those services and stuff that it's running--you know, all that stuff I'm probably not even using.

jebblue 13 hours ago 0 replies      
At some point people will realize that smart phones are great for communication but the desktop and mainframes will continue to be how people get real computing work done and games.
raikia 20 hours ago 0 replies      
Drop Gnome 3. Convince Mint developers to have Gnome team join in on Cinnamon. Finally make Cinnamon stable. Profit.
FlyingSnake 1 day ago 0 replies      
Since no one mentioned Cinnamon (LinuxMint) let me add that to the discussion.

I've been using it exclusively for months and I feel that it is doing what Gnome3 wanted to achieve. It is simple, intuitive and rock-solid. Never froze or crashed and diagnostic tools are great.

LinuxMint is the old Ubuntu which you used to recommend to your friends and family.

jhaglund 1 day ago 0 replies      
I wonder how many other people, after reading this, ran:
sudo apt-get install gnome
SlipperySlope 1 day ago 0 replies      
This is the canary in the coal mine with regards to the acceptance of the Microsoft Metro interface on the desktop.
Why Do Startups Do This? asymptomatic.net
272 points by ringmaster  2 days ago   74 comments top 30
dredmorbius 1 day ago 1 reply      
Wholehearted agreement with Owen's post.

On point #2: describe your product in clear, what-it-does language.

Mistakes I see are emphasizing: how it does it (C, Java, OO, Rails, REST, ...), where it does it (PC, mobile, Mac, Cloud, ...), "ecosystems" it integrates with (Social, FB, Oracle, ...), who your investors or team are (VC, founders, investors...) etc. All of which may or may not be particularly relevant, but ... they're not key to me understanding what you do. Tell me these things, but focus on the what first.

Use direct, actionable language, not vague or nebulous terms. It's a "NFS file security permissions auditor", not "Cloud information assets security tool".

Describe a workflow or workflows from the perspective of your users. Not developers. Not architects. Not

This doesn't just apply to startups. I use a lot of Free Software, and many of these projects also fail to describe themselves clearly (though most, especially over time, eventually get it right, if only because other people can come in and rewrite idiotic descriptions). Reading through a list of package descriptions from Debian or Ubuntu, where a pithy, one-line description is your shingle to the world, should give a sense of good and bad descriptions.

Even long-established technologies such as Java suffer from this.

At www.java.com we have "What is Java?": "Java allows you to play online games, chat with people around the world, calculate your mortgage interest, and view images in 3D, just to name a few. It's also integral to the intranet applications and other e-business solutions that are the foundation of corporate computing." Um. OK.
open http://www.java.com/en/download/whatis_java.jsp

At Oracle, we have a Java landing page with ... no description of the technology or its components (which aren't self-evident): http://www.oracle.com/technetwork/java/index.html

One of the best succinct summaries I've seen in recent memory is from jwz's "Java Sucks" page:

there are four completely different things that go by the name "Java": 1. A [programming] language, 2. An enormous class library, 3. A virtual machine, 4. A security model.

Now that is something I can wrap my head around (he also goes on to describe strengths and weaknesses of each component, good essay, read it, it's still disappointingly relevant).

Note though: the best product description comes from a critic. If you fail to clearly define yourself, your critics will.

ed209 2 days ago 3 replies      
Same goes for newsletters / emails that I joined via a splash page or holding page. At the very top of the email tell me:

    1. You're receiving this email because you joined XXXX's beta waiting list on xx June 2012

    2. XXXX is a product that helps you do ....

Basically remind me what you do and how you got my details.

mgurlitz 2 days ago 4 replies      
> The big logo-y thing at the top of your blog page? Yes, the one that currently links to your blog? Right, that one. It shouldn't link to your damned blog! Link it to your product's home page instead.

Every time I have to manually cut the /blog/ out of the location bar I wonder how many users were lost by requiring that little bit of extra effort.

jiggy2011 2 days ago 1 reply      
I hate it when people decide to slap a blog section on their website and it is completely disjointed from the rest of the site.

Usually this seems to happen because they have used some customised system to build their website and then just slapped wordpress or something on to use as the blog.

So, as mentioned in the article, you hit the blog page and "home" now takes you back to the home of the blog, even though the blog looks like the rest of the site.

Even worse when it's a tumblr or something and you have now ended up in a completely separate silo.

I can't imagine this is good for SEO purposes either.

The number of times this sin is committed by companies selling design services boggles the mind.

sopooneo 1 day ago 1 reply      
This all boils down to a rule I have for myself whenever addressing a large audience: start by stating the obvious.

This does two things. First, it gives people an easy mental on-ramp to follow the thread of what you are saying. And second, it forces you to back way up and cover the ground that is so central to your world you would probably forget to say it, even though it is completely unknown to most of your audience.

debacle 2 days ago 0 replies      
The "Tell me what your product does on every blog page" is one that Atwood is great at. The SO byline is very unobtrusive, but I would also assume informative to anyone who doesn't know what SO is.

It was so simple, and it has probably brought SO a ton of first-time viewers.

omgsean 2 days ago 1 reply      
I think some people want to blog without it seeming like one big advertisement. Sometimes I read articles on startup blogs and think "why are they even writing about this" until I get 2/3rds of the way through and realize I'm reading an infomercial.
porterhaney 1 day ago 2 replies      
Same could be said of this blog:

There must be intelligent life down here

Doesn't tell me very much about who the author is or what he typically writes about.

alttab 2 days ago 0 replies      
Many a time I've posted on HN comments something to the effect of, "that's great, but I have no idea what your product is." It's extra lethal if your company or product name doesn't describe anything at all to those not familiar with "FooBarlr".
frankphilips 2 days ago 0 replies      
While I do agree that the product link should be obvious on the blog page, I don't think it should be overly pervasive. A logo on the sidebar is more than enough. The main purpose of the blog is to write engaging content that correlates with your product. Startups should stop trying to use their blogs just to increase SEO rankings and focus instead on creating conversations and building relationships with potential customers and users.
acangiano 1 day ago 0 replies      
I posted this not too long ago: http://technicalblogging.com/5-common-blogging-mistakes-made...

I cover the same two mistakes, plus a few more.

hansy 2 days ago 1 reply      
Thank you. It's irritating when the blog link is blog.companyname.com and I have to manually replace the "blog" part with "www."
tsycho 1 day ago 0 replies      
To the OP: You link to your product, but when I click on the link, the images don't work - http://habariproject.org/en/screenshots
JonLim 2 days ago 0 replies      
Could not agree more.

Using something like Wordpress, it's not difficult to put a little box at the bottom of every post explaining what your product is and where I can learn more.

BryanB55 1 day ago 0 replies      
Good points, I haven't really noticed this all that often.

I'm actually designing a blog for a new startup now and the first elements I created were at the top of the right sidebar with a quick "what we do" summary with a call to action button at the end. Also used the author summary box at the bottom of the article with similar content. I think just a quick "We're xyz company and we do [whatever benefit you offer]" and a "Learn More" button is a quick, clean and concise way of putting that out there.

krogsgard 2 days ago 2 replies      
"If you're using software like WordPress or PHPBB (What that hell were you thinking?)"

What exactly did WordPress have to do with his point?

Using WordPress for your entire product site makes a ton of sense for most companies. You can have your blog and your product info all on the same site. No subdomain blog. No separate SEO. It's simple to link the logo to the product, and have custom sidebars or after-content widget areas with a call to action. Your blog is part of your site, and feeds traffic to your product. Everything integrated. Easy to use. What's his problem again?

benologist 1 day ago 0 replies      
I think there should be a #3 .... startups that devote their blogs to random hn fluff about being a startup rather than whatever they actually do.
jazzychad 1 day ago 0 replies      
Yep, totally agree. I even wrote a very similar rant here several weeks ago: http://blog.jazzychad.net/2012/05/28/startups-fix-your-blog-...
tibbon 2 days ago 0 replies      
Similarly, a huge % of bloggers overall make it impossible to contact them via their blog. I've never understood this unless you're intentionally being evasive.
mgualt 1 day ago 0 replies      
I agree with the advice, and I offer some in return about Habari: hire someone to make your demo video non-awful. The "muzak" and slow pacing alone were enough to send me running away from the whole project. A production that aesthetically displeasing is a sure sign of an unappealing product/environment.
francov88 2 days ago 0 replies      
Both very good points. I've even been guilty of it here and there... the trick is in making sure that the call to action is simple, present and not being deterred by anything else.

Having friends review is great, but sitting an intelligent stranger down and asking them to perform certain actions is what most startups need.

efa 1 day ago 0 replies      
I've run into this so often; I'm glad I'm not the only one. I often have to revert to modifying the URL just to get to their main page (change blog.company.com to company.com) after finding no possible way to link to the home page.
munsonbh 1 day ago 0 replies      
When you post a link to Habari at the bottom of your post, it would be nice if the screenshot IMGs weren't broken on screenshots page. That is also annoying.


kin 1 day ago 0 replies      
Completely agree! The extra effort to go to the actual product's website makes me wonder if the startup even knows this is an issue.
thijser 2 days ago 0 replies      
Great advice! I just changed it for our blog, we had the big logo at the top linking to the main page already, but I realized we didn't have a short description of what we do. Blogger allowed this to be easily added to the sidebar.
incision 1 day ago 0 replies      
I must have read about how Airbnb was "redefining the space" and tackling all sorts of problems at least half a dozen times before I had any idea what it was.
bluetidepro 2 days ago 5 replies      
Anyone have a mirror or different URL? It seems to be down for me...
davidradcliffe 2 days ago 0 replies      
Amen! I've been saying the same thing: https://twitter.com/dwradcliffe/status/209703542192222208
tudorw 2 days ago 1 reply      
I recommend excessive use of the <blink> tag around any product mentions to really hammer things home :)
mindcrime 2 days ago 0 replies      
Good point. We don't currently do this very well on the Fogbeam[1] blog[2], and I'll be making it a point to address that later this evening. Thanks for posting this and bringing this point to the forefront!

[1]: http://www.fogbeam.com

[2]: http://fogbeam.blogspot.com

Open-Source High-quality PDF of SICP github.com
270 points by oinksoft  2 days ago   67 comments top 15
peapicker 1 day ago 0 replies      
I, for one, would very much like to read an article about the preparation of this document and the various tools used in the toolchain, as well as the workflow -- it could serve as a primer for 'how to do ebooks' for the future that I would enjoy greatly.

It looks like Inkscape for the SVG editor and texinfo for the text, but it would be nice to know what special configuration, if any, was done to get the nice fonts, etc.

zokier 2 days ago 3 replies      
The original SICP text seems to be licensed under CC-BY-SA, and this PDF version has a GPL license slapped on it. Is that legal? GPL is more restrictive than the original license, after all.
squidsoup 2 days ago 2 replies      
Looks like there are epub and mobi versions out there as well. Great to see community efforts to keep this excellent book alive.

https://github.com/ieure/sicp (epub)

https://github.com/twcamper/sicp-kindle (mobi)

splicer 2 days ago 0 replies      
I got a kick out of this:
"You are probably reading it in an Info hypertext browser, such as the Info mode of Emacs. You might alternatively be reading it TeX-formatted on your screen or printer, though that would be silly."
cellularmitosis 1 day ago 0 replies      
I split this book up into chapters:


by using pdftk:

  pdftk A=SICP.pdf cat A25-117 output SICP_chapter_1.pdf
pdftk A=SICP.pdf cat A118-286 output SICP_chapter_2.pdf
pdftk A=SICP.pdf cat A287-460 output SICP_chapter_3.pdf
pdftk A=SICP.pdf cat A461-624 output SICP_chapter_4.pdf
pdftk A=SICP.pdf cat A625-777 output SICP_chapter_5.pdf

I then use some booklet-preparation software to turn it into booklets which I could print out at Kinko's, because I didn't want to have to haul around the entire book with me:


baxrob 2 days ago 1 reply      
Just to note, it's even tastier with the videos: http://ocw.mit.edu/courses/electrical-engineering-and-comput...
joelthelion 2 days ago 1 reply      
What would be awesome would be a nice EPUB version.
gbog 1 day ago 1 reply      
I understand this is the Lisp version of SICP, but I've heard the course is now taught with Python. I searched and couldn't find this Python version online, though. Does someone have some insight?
PhearTheCeal 2 days ago 3 replies      
What dialect of Lisp should I use to follow along with the book?
jiggy2011 2 days ago 5 replies      
I actually bought the paperback of this but haven't got around to reading it yet.

How long would it take someone of average programming skill (with no lisp/scheme background) to bang through this at say 4 hours a week? Is it even worth reading the entire thing as opposed to a few chapters?

derekp7 1 day ago 1 reply      
Does anyone know how to make a smaller page sized PDF from this texinfo source? Specifically, I'd like to target the screen size of the popular 7-inch tablets.
barking 2 days ago 2 replies      
OT, I am not familiar with GitHub, and I know it's much more than a download site.
As a download site, however, I don't think it's very user friendly.
You think you have what you want when you see the file icon, but that isn't clickable. 'Raw' doesn't sound very good and neither does 'History'.
Then the 'Downloads' link looks like a link to a more general page rather than the download link for this file in particular.
Through trial and error I found what I was looking for after not too long, but it left me feeling a little stupid and irritated at the same time.

Edit: it reminded me of this http://okcancel.com/comic/4.html

nessus42 2 days ago 0 replies      
O frabjous day! Callooh! Callay!
ziffusion 1 day ago 0 replies      
Am I the only one who can't open this in Acrobat?
rmk 1 day ago 0 replies      
This is great! Thanks!
Sal Khan responds to critic washingtonpost.com
266 points by danso  1 day ago   138 comments top 26
kamens 21 hours ago  replies      
Disclaimer: I'm part of Khan Academy. Not going to chime in w/ my deeper disagreement w/ the original critic and the other article on the frontpage.

I would like to correct a persistent misconception or two.

Persistent misconception: "...we suggest that Khan Academy desperately needs voices of teaching experience. Khan could tap into any number of existing networks..."

Truth: We have four ex-teachers as full-time employees. We have two high school math teachers as consultants. One Harvard Doctoral candidate in Education and one post-doc in neuroscience at Stanford are in residence. One UPenn Professor is also likely to begin a sabbatical with us. We have a 3 person team dedicated to working with and getting feedback from our 50 pilot classrooms and the 15,000 teachers actively using KA in classrooms.

Persistent misconception: "...it certainly requires more than just “two minutes of research on Google,” which is how Khan describes his own pre-lesson routine."

Truth: Go read Sal's AMA response (includes the sentence "When I did organic chemistry, I spent 2 weeks immersing myself in the subject before making the first video") before taking one of these "two minute" snipped quotes at face value: http://www.reddit.com/r/IAmA/comments/ntsco/i_am_salman_khan.... I've seen Sal's face light up when he gets an unwieldy new shipment of textbooks to start studying in preparation for his videos. Does he dive right into some videos? Absolutely. Is claiming that his "pre-lesson routine" can always be dismissed as two minutes of Googling disingenuous and patently false? Absolutely.

jgrahamc 23 hours ago 4 replies      
Having the debate framed as 'Sal Khan is the future of education' and 'No, he isn't, teachers are' is bogus. It's a ridiculous dichotomy. It does a disservice to both Khan and teachers to debate this in that way.

There will be many teachers who will use Khan's videos in their teaching (or to augment it), and I imagine that over time Khan will change the way he does things based on his own education about education.

It's self-evident that 'sitting in front of a machine watching videos' isn't the solution to education. If it were, the multimedia revolution wouldn't have petered out as it did. Children (and adults) need a variety of approaches. Khan's is just one.

jere 20 hours ago 1 reply      
>An effective math teacher will point out that “rise over run” isn't the definition of slope at all but merely a way to calculate it. In fact, slope is a rate that describes how two variables change in relation to one another:

What a dumb thing to argue about. I'm not a historian (or a mathematician), but the term "slope" seems pretty obviously adopted from a physical slope/incline/hill. Why? Because it's the easiest visual analogy for us apes to grok. It doesn't come from an earlier term meaning strictly "a rate that describes how two variables change in relation to one another."

If you're trying to teach someone a complex concept, are you going to use a phrasing that has zero significance to them? Rate? Variable? What? The people learning about slope aren't programmers or engineers. Give me a break.

Why not use a visual analogy that makes perfect sense and is still a valid definition: rise over run on a section of a hill, road, roller coaster, etc. I hope whoever is teaching this is relating it to a real world object. Just talking about a line on paper isn't going to help much, but neither is an overly complex definition.
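For what it's worth, the two formulations being argued over name the same number; as a sketch in standard notation, for any two distinct points on a non-vertical line:

```latex
% "Rate" definition and "rise over run" calculation of slope agree:
% for two points (x_1, y_1) and (x_2, y_2) on a non-vertical line,
m \;=\; \frac{\Delta y}{\Delta x} \;=\; \frac{y_2 - y_1}{x_2 - x_1}
\qquad \text{(rise over run is just this ratio, read off a picture)}
```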

droithomme 23 hours ago 4 replies      
From the article:

> Below is Khan's e-mail to me, which I shared with the author of Monday's post, Karim Kai Ani, a former middle school teacher and math coach who is the founder of a company called Mathalicious. He said Khan is wrong.

So, to summarize, "Khan is wrong, but I won't bother to explain why, he just is, and I have a self-proclaimed expert that says so."

bicknergseng 20 hours ago 1 reply      
From the article that sparked Khan's response:

>Take Khan's explanation of slope, which he defines as “rise over run.” An effective math teacher will point out that “rise over run” isn't the definition of slope at all but merely a way to calculate it. In fact, slope is a rate that describes how two variables change in relation to one another: how a car's distance changes over time (miles per additional hour); how the price of an iPod changes as you buy more memory (dollars per additional gigabyte).

Followed by this in her response to his letter:

>As math was not my subject in school, I don't know who is right but would love to hear from mathematicians out there.

I had to reread all 3 articles multiple times. Then I lamented the sad state of journalism as well as the complete willingness of our society to tolerate the "I'm just not good at math so I won't bother to understand it because I don't think it's worthy" attitude. What angers me about the first article is not that it went by some editor without the editor saying, hey, this seems suspect, but rather the fact that the journalist decided to write about something she had little knowledge of, with a tone that suggests she knows more than Sal or someone else educated on the subject. An unapologetic combination of ignorance and condescension.

Cushman 22 hours ago 1 reply      
The way I've understood it, Khan isn't trying to disrupt the idea of teachers or schools per se, only alter it.

Basically, the traditional schooling model consists of "lectures" in class, where the teacher presents you with information for an hour, and homework, where you work on problems relating to that content alone in your own time.

Khan says, that's cocked up-- we should let students consume the raw informational content on their own time, where they can pause, rewind, and go over it as many times as they need to in order to understand it without disrupting anyone else, and then do "homework" in the classroom, where there's a focused environment that encourages exploration and somebody who can help each student with their individual difficulties.

Which has always struck me as a pretty straightforward, good-common-sense approach to at least try. Why is this concept so opaque to so many people?

zeteo 23 hours ago 3 replies      
Frankly, the response only addresses a small part of the criticism (the definition of slope) and then launches an attack on the critic's motivation. The main point in the original article [1] was that Khan's preparation for his lectures is deficient ("I don't know what I'm going to say half the time") and often amounts to "two minutes of research on Google".

[1] http://www.washingtonpost.com/blogs/answer-sheet/post/khan-a...

mikeleeorg 23 hours ago 0 replies      
What Khan is doing is great. For some students, it is very effective. For other students, not effective at all. Thankfully, dozens of similar offerings have appeared (40 by one count), as well as tools to help teachers create their own video lessons (e.g., Educreations). And that's not even mentioning all the other students for whom video is not the best way to learn these topics.

The real problem is the media. And by that overarching term, I mean the rhetoric that various journalists, bloggers, and others have let themselves use for whatever purpose (e.g., sensationalism, pageviews, linkbait).

It's understandable that there's been a backlash to Khan. He got overhyped. The pendulum swung too much one way. Now it's naturally swinging the other way.

In the end, this is going to turn out better for students. As critics lash out in both directions (supporters and detractors both have gotten pretty vicious in this debate), there are a bunch of for-profit and non-profit efforts that are creating alternatives. Khan has a smart team too. I'm sure they're steadily improving their offerings.

delinka 22 hours ago 2 replies      
My son has been watching Sal give lessons for the last year. Sal makes mistakes sometimes. My son usually notices and asks for clarification. We end up with a nice discussion about the topic at hand and arrive at a correct answer. If it's math related, I've got it covered. If it's not, we do some research.

Classroom teachers make mistakes. Textbooks make mistakes. But the system is set up to disallow questioning of these two authorities. Example: math class is the first class of the day; the teacher follows the textbook and the textbook is wrong; the child questions it and is waved off; is that kid gonna remember at the end of the day that she needed to ask mom about this problem? No.

I much prefer when Sal makes a mistake because it makes for a learning experience. What's nice is when my son comes back some time later and tells me that Sal's video has been corrected.

mikemarotti 23 hours ago 2 replies      
"As math was not my subject in school, I don't know who is right but would love to hear from mathematicians out there."

Go find a professor, you sorry excuse for a "journalist".

spinchange 23 hours ago 6 replies      
Can anyone weigh in on their differing arguments regarding the definition of slope? I am not a maths expert by any means, but am genuinely curious who is correct here. Or is this a matter of one being technically correct versus the other being correct in practice?

(Cross posted from the original thread because I genuinely would like an opinion more informed than my own and the totally unhelpful "conclusion" provided by Valerie Strauss.)

*Edit, I missed that he (Khan) posted a video defending his definition http://www.youtube.com/watch?v=TNaQJjLAhkI

cs702 17 hours ago 1 reply      
This article is emblematic of the hardest challenge Khan Academy faces, IMO, as it tries to change the educational system from the current 'lecture at school, practice at home' approach used by virtually all schools today to the 'learn at home, tinker and interact at school' approach championed by Khan Academy.

The hardest challenge is this: disrupting the educational experience requires buy-in from teachers, administrators, regulators, academics, and existing service providers, but all these parties are nearly always resistant to change and well-entrenched in their positions, making large-scale change very, very, very slow and difficult.

The optimist in me wishes Sal Khan and his team only success as they take on, and attempt to co-opt, the educational establishment. The realist in me thinks they face a long, tough battle.

bherms 20 hours ago 1 reply      
The original critical piece was written by someone with a vested interest in defaming Khan, as he runs a company that is (in a way) a direct competitor. That alone should be enough to dismiss his critique.
frankphilips 22 hours ago 1 reply      
Looks to me like a feeble attempt by Karim Kai Ani of Mathalicious to generate some PR for his for-profit website. What better way to drive traffic than to "build up some controversy" by hating on Khan? Sal's noble efforts have allowed countless people to obtain a first-class education at virtually no cost. Shame on The Washington Post for promoting this hidden agenda. Seems like they also have an agenda of their own.

1) Post a controversial guest article bashing a guy who's actually trying to do something good in the world.

2) Act like the neutral party so they don't have to take any blame. Allow Karim to be the scapegoat.

3) Sit back and enjoy while traffic explodes to their site.

I see you Washington Post. You ain't fooling me!

justin_vanw 15 hours ago 1 reply      
Either the people who watch Khan's videos find them useful -- they can do things they couldn't do before, or understand things better than they did -- or they don't.

Most public school teachers are awful. They have degrees in education, and most university programs in the teacher pipeline are intentionally easier and less demanding than the equivalent 'real' degree in a given subject. 'Math-ed' is a very undemanding degree compared to 'Mathematics'. If high-school teachers were placed under this kind of scrutiny we would be forced to completely re-evaluate how teachers are credentialed and licensed. They aren't put under this or really any scrutiny at all, and probably they never will be. Teacher unions fight as hard as they possibly can to prevent any measurement or evaluation of teacher performance.

Students will continue to try to fill in the gaps in their understanding of these topics, and if there are free resources available to them to do so, even better!

tomkin 23 hours ago 0 replies      
What's missing from this argument is that traditional education dogma has the same plight. Is anyone ready to state, on record, that any educational platform is perfect and free from error? Khan deserves/requires criticism, but let's not forget that these criticisms apply across the board.
saraid216 23 hours ago 2 replies      
I wasn't aware that KA was a non-profit. Where do they get their money? Entirely donation-driven?
coolpin5 1 hour ago 0 replies      
I never understood why Sal Khan does all the videos himself. Shouldn't they get the best Linear Algebra teacher that they can find for the Linear Algebra series, the best Calculus teacher for the Calculus series, the best Marine Biology teacher for Marine Biology, and so on? Maybe even get two sets of teachers with different lecture styles for each subject?

I've had dozens of amazing teachers throughout my education, all of whom were excellent at gripping my attention, had a passion for their subject, and had a knack for explaining it extremely well. If I thought my teachers were this good, can you imagine how good the best in the country would be? And what it could do for education to make those lessons available to the entire world?

It's a shame he does it entirely himself. It's not for lack of funding, that's for sure. Maybe it's an ego thing?

robomartin 20 hours ago 0 replies      
Frankly, I don't know why he even bothered to respond. What KA is doing might not be perfect but it is one serious attempt to move education forward in some form. They deserve nothing less than admiration and support. I am sure that they welcome and encourage constructive criticism.

Aside from that, the only way the "(highly unionized) teachers do it better" argument can hold water is if the (highly unionized) teachers start producing students that actually place in a reasonable range in international comparisons.

nadu 23 hours ago 0 replies      
reso 20 hours ago 0 replies      
The author of the criticism is the founder of a for-profit start-up competing with Khan Academy. Khan Academy itself is a non-profit.

These incentives should frame everything we hear about this exchange.

andyl 21 hours ago 1 reply      
First they ignore you, then they laugh at you, then they fight you, then you win.
theoretick 21 hours ago 0 replies      
I found quite a bit in the original article that seemed to reflect a misunderstanding of the site by someone who hasn't used it. Karim doesn't even mention one of its most valuable areas: the practice area. It isn't just about watching videos, and maybe I'm alone, but I use the practice first and the videos as supplements. Furthermore, the videos often feature explanations presented clearly, not merely as a series of steps to follow (see "why division by zero is undefined").
caycep 17 hours ago 0 replies      
this also is reminiscent of this nytimes op ed: http://www.nytimes.com/2012/07/20/opinion/the-trouble-with-o...

that conveniently forgets to mention all the collaborative/interactive components built into coursera - the community TA's, the forums, chatrooms, etc etc etc.

moral: don't criticize unless you've actually read the book/seen the movie/etc. or maybe beware of hidden agendas.

themonk 21 hours ago 0 replies      
Mathalicious is doing this just to get popular; I never knew they existed before this debate. Google might improve the PageRank of mathalicious.com as well, due to the backlinks.
infinitex 21 hours ago 0 replies      
This is a storm in a tea cup.
1987 Time Capsule Predictions by Sci-Fi Writers About 2012 writersofthefuture.com
239 points by joshuahedlund  1 day ago   131 comments top 43
3pt14159 1 day ago 4 replies      
By far the most accurate:


In 2012 We will see:

1) That economic cycles caused by rises in technological levels will begin to level out--countries that have a falsely inflated economy will be forced to export their technologies to third-world countries where people are willing to work for less money. This will lead to a situation where knowledge, the key to our technologic success, will be spread across the world. We'll see rapid decreases in starvation levels, but will still be plagued with political turmoil.

2) Men's Rights--We will see a reaction to the women's movement. Men will demand to be portrayed by the media as the sensitive, caring creatures that they are. They will also demand equal rights in custody battles where children are seldom awarded to a father because our society chooses to believe a mother is a better care-taker by nature.

3) Introduction of x-ray microscopes in the early 2000's will lead to rapid progress in gene splicing. Look for rapid growth in medicine and mining, and food production. We may also see bacteria being engineered to simulate parts of the immune system (which could cure immune disorders such as AIDS and allergies).

DanielBMarkham 1 day ago 1 reply      
The lesson here is "don't ask sci-fi writers to predict the future."

Seriously, these guys are doing what we all do: making some kind of linear estimate based on past data. Usually these estimates end in some sort of crisis. No, we are not out of oil. It's debatable whether we've even reached peak oil, which I doubt. The U.S. has a shot at becoming a major oil exporter if we don't screw it up. No, hunger and disease are no more widespread and rampant than they were then. If anything, things have probably gotten a little better. No, we are not colonizing the moon or Mars. Our space program is still barely getting started. No, everybody isn't an illiterate slob watching CGI dramas, but that day still seems to be fast approaching, at least for the western world.

The internet really took the species on a hard turn from where we all thought we were going. Even the idea of a hive mind where billions vegetate using computer stimulus wasn't considered. Everybody thought that the individual would stay the same and technology would evolve around them. What's happened is much more interesting: the idea of the unique individual is changing as more of the things that make us unique are being recorded and shared. Technology is not transforming the world; technology is transforming us.

ADD: An interesting title to this article might be "It's the end of the world as we know it, and I feel fine."

tokenadult 1 day ago 4 replies      
My elementary school included a class that put a time capsule together during the 1968-1969 school year, with pupil predictions of the year 2001. In the year 2001, the time capsule was opened up. We were quite concerned about air pollution and, wouldn't you know, running out of petroleum, and both turned out to be less of a problem than we anticipated. I expected more progress in the United States in "race" relations than there actually has been, but it is a fact that the United States is experiencing an ongoing reduction of "race" consciousness that is hard to imagine to people who remember the days of legal segregation by state law.


I was interested to see that another page on the site hosting the submitted article has a description of the late L. Ron Hubbard that perhaps needs some fact-checking.

jwecker 1 day ago 6 replies      

Some promising technologies will still be struggling along in 2037. Others have
disappeared or been replaced completely. No matter how miraculous and marvelous
the advances that have happened though, society will have managed to consider
them mundane and probably inevitable. I don't know what quantum computers and
machine learning will have allowed by 2037- (better weather prediction? deep
mathematical truths?), but I suspect that the things people think about the
most, like how to have meaningful relationships and fulfilling work, will have
only been minimally affected. On the other hand, I predict that some people
thinking about those things now will have found, individually, exactly what
they were looking for.

Each generation will be less mature (at least until it reaches the same age as
the last). Each new generation (there will have been a couple by 2037) will,
despite its immaturity, regressiveness, and destructiveness, manage to yield
forth individuals who inspire and transform, who rise above petty concerns and
a world brimming with distraction and reveal something new about the capacity
of mankind.

In short, the fundamental struggle will continue between technology that exalts
our knowledge and capacity, and human nature that debases and waste them.

Existential threats to mankind will still occasionally surface. The threat of
genocide, tyrants who oppress and reign in terror, the Earth groaning under its
abuse as we attempt to listen better, to act better. But, as always there will be
pockets of peace and prosperity where others can sit and take a moment to write
an entry for 2062.

redthrowaway 1 day ago 2 replies      
"Because we will be in a trough between 20th-century resources and 21st-century needs, in 2012 all storable forms of energy will be expensive. Machines will be designed to use only minimal amounts of it. At the same time, there will be a general expectation that a practical cheap-energy delivery system is just around the corner. Individuals basing their career plans on any aspect of technology will concentrate on that future, leaving contemporary machine applications to the less ambitious or to those who foresee a different future. The most socially approved-of individuals will constitute a narrowly focused aristocracy, and will be at the mercy of dull functionaries and secretive rebels who actually perform the day-to-day maintenance of society. It should be noted that most minimal-energy devices process information and microscopic materials, not consumer goods. The function of "our" society may depend on processing information and biotechnology to subjugate goods-producing societies. These societies may be geographically external, or may be yet another social stratum within central North America. In either case, crowd-management technologies will have to turn away from forms that might in any way impair capital goods production. Social regimentation will then have become so deft that most people will regard any other social milieu as pitiable.  "

Most were quaintly charming, but that one was right on. I'm surprised that so few authors mentioned the information revolution as being a large force in society.

paul 1 day ago 1 reply      
This is wonderful. Their confident pessimism brightens my day :)
cjensen 1 day ago 2 replies      
Fears of War, Population Explosion, American Decline, and Japanese Ascension, with some anti-Reagan sentiment mixed in. Pretty much a lesson in what pop-culture fears looked like in 1987, and every one of them turned out to be a non-issue.

This ought to give one pause and great skepticism when evaluating today's pop-culture fears. Of course, just because an idiot picks (c) on an SAT question doesn't mean it's wrong.

chewxy 1 day ago 3 replies      
Some of these are very salient (Pohl), and some not so.

Brilliant points (Orson Scott Card):

> We must count ourselves lucky if anyone has leisure enough in 2012 to open this time capsule and care what is inside. In 2012 Americans will see the collapse of Imperial America, the Pax Americana, as having ended with our loss of national will and national selflessness in the 1970s. Worldwide economic collapse will have cost America its dominant world role; but it will not result in Russian hegemony; their economy is too dependent on the world economy to maintain an irresistible military force. ...

And this one by Roger Zelazny, pretty spot on until...

> It is good to see that a cashless, checkless society has just about come to pass, that automation has transformed offices and robotics manufacturing in mainly beneficial ways, including telecommuting, that defense spending has finally slowed for a few of the right reasons,

I stopped reading at "defense spending has finally slowed"... no one could have predicted George W. Bush I guess

This one is almost completely opposite:

>Multiple sclerosis and Parkinson's disease will be effectively cured. However, AIDS will not yet have been controlled. It will have become the leading cause of death worldwide with millions of new cases each year.

xiaoma 4 hours ago 0 replies      
My favorite of the bunch, Roger Zelazny, was also the most accurate:


It is good to see that a cashless, checkless society has just about come to pass, that automation has transformed offices and robotics manufacturing in mainly beneficial ways, including telecommuting, that defense spending has finally slowed for a few of the right reasons, that population growth has also slowed and that biotechnology has transformed medicine, agriculture and industry--all of this resulting in an older, slightly conservative, but longer-lived and healthier society possessed of more leisure and a wider range of educational and recreational options in which to enjoy it--and it is very good at last to see this much industry located off-planet, this many permanent space residents and increased exploration of the solar system. I would also like to take this opportunity to plug my new book, to be published in both computerized and printed versions in time for 2012 Christmas sales--but I've not yet decided on its proper title. Grandchildren of Amber sounds at this point a little clumsy, but may have to serve--

* We're mostly a cashless society using cards and electronic payment -- CORRECT

* Automation has transformed offices and robotics manufacturing in mainly beneficial ways -- CORRECT

* Defense spending has finally slowed for a few of the right reasons -- MIXED: True as a ratio, but not in absolute $$$

* population growth has also slowed -- CORRECT

* Biotechnology has transformed medicine, agriculture and industry--all of this resulting in an older, slightly conservative, but longer-lived and healthier society possessed of more leisure and a wider range of educational and recreational options -- CORRECT on nearly all counts

* ...in which to enjoy it--and it is very good at last to see this much industry located off-planet, this many permanent space residents and increased exploration of the solar system. -- WRONG

* I would also like to take this opportunity to plug my new book, to be published in both computerized and printed versions in time for 2012 Christmas sales--but I've not yet decided on its proper title. Grandchildren of Amber sounds at this point a little clumsy, but may have to serve -- ...

Despite all the other excellent predictions, his death came much earlier than he foresaw :(

Dove 1 day ago 2 replies      
I find it curious that more than one of them predicted a decrease in literacy. Indeed, the opposite seems to have happened.

When I was in college, my English professor vowed that he would make us better writers by forcing us to write an essay every single night. And it really worked.

Yet we live in a world where people are writing, constantly writing, in a way my English teacher could barely have dreamed of. High school kids, debating endlessly with a hostile audience every day, are turning into frighteningly persuasive writers all on their own.

ilamont 1 day ago 0 replies      
Frederik Pohl's essay is great. By 1987, he had been writing science fiction for nearly 50 years and I think had become embittered by the course of 20th century history. Read Gateway (which I think came out that same year) to see his vision of 21st century Earth -- humanity lives on an overpolluted planet, with only a tiny portion of the population able to have access to "Full Medical" and live in clean, domed cities.

He's still alive, incidentally. And has turned to blogging: http://www.thewaythefutureblogs.com/

reasonattlm 1 day ago 3 replies      
Gregory Benford's notes are perhaps the most interesting, given his present position on the board of Genescient - a company that mines the genetics of longevity:


Interesting for the pessimism, that is. Every age and sub-age and decade and year, and so on down, has its seeping pessimism - yet here we are, still.

Humans are good at foreseeing, identifying, and solving problems. Yet despite the vast evidence of that trait at work in our history, recent and otherwise, our capacity for progress and success is greatly underrated in every present day, in comparison to our capacity to create problems.

cubicle 1 day ago 1 reply      

  Isaac Asimov        Died 1992
  Gregory Benford     Alive
  Algis Budrys        Died 2008
  Gerald Feinberg     Died 1992
  Sheldon Glashow     Alive
  Frederik Pohl       Alive
  Jerry Pournelle     Alive
  Tim Powers          Alive
  Orson Scott Card    Alive
  Robert Silverberg   Alive
  Jack Williamson     Died 2006
  Gene Wolfe          Alive
  Dave Wolverton      Alive
  Roger Zelazny       Died 1995

A lot more of them have survived to be embarrassed by their predictions than I had thought.

zavulon 1 day ago 0 replies      
> I would also like to take this opportunity to plug my new book, to be published in both computerized and printed versions in time for 2012 Christmas sales--but I've not yet decided on its proper title. Grandchildren of Amber sounds at this point a little clumsy, but may have to serve.

Oh how I wish this was the case...

damian2000 1 day ago 4 replies      
This one stood out for me:

"Japan will be the central economic power in the world, owning or controlling a significant part of European and American industries. This "economic dictatorship" will be beneficial to Japan's client states, since Japan benefits by keeping its customers healthy and wealthy. Indeed, a peaceful and prosperous world community will owe its existence to this Pax Japanica." --SHELDON GLASHOW

Replace Japan with China and he's spot on. The idea has also been given the name "Chimerica" by some economists [http://en.wikipedia.org/wiki/Chimerica].

waterlesscloud 1 day ago 2 replies      
Doom! Gloom! Disaster Just Around The Corner!

How depressing. Not just the worlds they paint, but that some of the brightest minds were (and are) trapped into such a negative way of looking at the future.

joezydeco 1 day ago 0 replies      
When I was a kid I bought a paperback copy of The People's Almanac: The Book of Predictions[1]. The book is mostly wacky, but over the decades I would thumb through it from time to time and catch something that someone got right, although mostly by accident. The psychics that were interviewed were way off. The scientists? They did a lot better. It's a fun read if you can find a copy.

[1] (http://books.google.com/books/about/The_People_s_Almanac_Pre...)

Foy 1 day ago 0 replies      
TL;DR - By 2012 we'll have moon bases and missions to Mars if AIDS doesn't wipe us out first.

They really had high hopes for space exploration. Shame that never happened. :(

EDIT: Oh this was a good one (thankfully not at all accurate):

"TIM POWERS: Probate and copyright law will be entirely restructured by 2012 because people will be frozen at death, and there will be electronic means of consulting them. Many attorneys will specialize in advocacy for the dead."

mratzloff 1 day ago 0 replies      
What's always interesting about these kinds of predictions is not the predictions themselves, but the window into the hopes and fears of people at the time. It's all here: Japan as an economic superpower, space travel, AIDS, poverty, hunger. Now compare to predictions that sci-fi writers today make about 2037 and you'll have a similar window into our own hopes and fears--many of which will be just as accurate, at least in the next 25 years.
brimanning 1 day ago 0 replies      
"The American economy will have experienced a gentle yet relentless decline. Our children will not live such comfortable lives as we do. The spread between the rich and the poor will have grown, and crime will have become so prevalent as to threaten the social fabric. The rich and the poor will form 2 armed camps. Most automobiles and heavy machinery will be manufactured in Japanese-owned plants located in America. Yet, agriculture and higher education will be our most successful exports. There will be no fast trains connecting American cities, but a network of levitated superconducting trains will be under construction in Western Europe and in Japan." - Sheldon Glashow

This was the most stunning part of the predictions as it's far more negative than what truly is the case, yet points to many truths and is what many could say about the next 25 years.

toomuchcoffee 1 day ago 1 reply      
Most Americans are barely literate, think in images rather than symbols, and think the future is something that will happen to somebody else…

Right on the money.

kposehn 1 day ago 0 replies      
I find it rather gratifying - even comforting - that by and large their predictions missed the mark by a wide margin.

Predictions are the product of your current experience - your time until now. Many brilliant people try and predict the future, but in the end the future is up to us.

ekianjo 1 day ago 0 replies      
Some lessons to learn from this: most people are wrong, and totally wrong, about the near future (provided 25 years is considered "near").

That should say a lot about whether to listen to what's being said currently about what will happen 25 years from now.

nl 1 day ago 0 replies      
Lesson: Don't be pessimistic.

The disaster scenarios you imagine are inevitable are really just the worst case, and the things that actually become problems aren't things you can predict.

beefman 1 day ago 0 replies      
I reviewed these and counted 49 distinct and falsifiable predictions, of which I judged 11 correct (about 20%). Optimistic predictions fared better (30%) than pessimistic ones (10%). Zelazny did best with 4/8, followed by Benford and Glashow. Full accounting here: http://lumma.org/temp/1987-2012_Predictions.txt
allaun1 1 day ago 0 replies      
I think Sheldon Glashow was the nearest to accurate. Most of his predictions were almost 100%.

Written on the Eastern Air Shuttle between Boston and N.Y.

What will life be like in the year 2012? There will have been no nuclear war, and the threat of such a war will have been removed by the mutual nuclear disarmament of the major powers. SDI, Reagan's ill advised Star Wars program will have come to nothing.

Japan will be the central economic power in the world, owning or controlling a significant part of European and American industries. This "economic dictatorship" will be beneficial to Japan's client states, since Japan benefits by keeping its customers healthy and wealthy. Indeed, a peaceful and prosperous world community will owe its existence to this Pax Japanica.

Many diseases will be curable: diabetes and gout, for example, will be treated by 'genetic engineering' techniques. Multiple sclerosis and Parkinson's disease will be effectively cured. However, AIDS will not yet have been controlled. It will have become the leading cause of death worldwide with millions of new cases each year.

The American economy will have experienced a gentle yet relentless decline. Our children will not live such comfortable lives as we do. The spread between the rich and the poor will have grown, and crime will have become so prevalent as to threaten the social fabric. The rich and the poor will form 2 armed camps. Most automobiles and heavy machinery will be manufactured in Japanese-owned plants located in America. Yet, agriculture and higher education will be our most successful exports. There will be no fast trains connecting American cities, but a network of levitated superconducting trains will be under construction in Western Europe and in Japan.

derleth 1 day ago 1 reply      
It seems that the major pastime among people who read time capsule predictions is to count the most idiotic things as "hits" and hail the people with the most bizarre ideas as prophets.

In that spirit, here's a few "hits" for everyone here:

"A new world order will emerge from famine, disease, and social dislocation: the re-tribalization of Africa, the destruction of the illusion of Islamic unity, the struggle between aristocracy and proletariat in Latin America--without the financial support of the industrialized nations, the old order will be gone." -- Orson Scott Card

"America and the U.S.S.R. preserve an uneasy accord, each testing the other's will within well-defined limits. [snip] Vestiges of reading, writing, and spelling remain in the curricula of the public schools. Those who can read a few hundred common words are counted literate. The schools train their students for employment--how to report to computers and follow instructions. [snip] There is little sex outside marriage, which normally includes a legal contract. A single instance of infidelity is amply sufficient to terminate a marriage, with damages to the aggrieved party [snip] The population of the planet is below six billion." -- Gene Wolfe

"Most Americans are barely literate, think in images rather than symbols [snip] Berkeley, California will have a theme park devoted to its high period--the 1960s. [snip] There will have been major "diebacks" in overcrowded Third World countries, all across southern Asia and through Africa. This will be a major effect keeping population from reaching 10 billion." -- Gregory Benford

sampsonjs 1 day ago 0 replies      
Worth a read: "Pitfalls of Prophecy: Why Science Fiction So Often Fails to Predict the Future", http://www.locusmag.com/2009/Westfahl_Predictions.html

Why do they think that the fact that they write science fiction makes their predictions worth listening to, anyway? Well, besides the genre inferiority complex, which requires its hacks to pretend they're prophets so they can feel important.
dennisgorelik 1 day ago 0 replies      
All these writers clearly had no clue about the future.
Even "Back to the Future" was much closer in spite of being a comedy.
calinet6 1 day ago 0 replies      
"Berkeley, California will have a theme park devoted to its high period--the 1960s."

This pretty accurately describes Telegraph Ave.

perfunctory 1 day ago 1 reply      
> Bases on the moon, an expedition to Mars…all done.

How badly we underperformed.

ionwake 1 day ago 0 replies      

If we had a time-phone, now in 1987, we would beg you to forgive us. We have burdened you with impossible debts, wasted and polluted the planet that should have been your rich heritage, left you instead a dreadful legacy of ignorance, want, and war.

Yet, in spite of that, we have a proud faith in you. Faith that you have saved yourselves, that you are giving birth to no more children than you can love and nurture, that you have cleansed and healed your injured planet, ended hunger, conquered crime, learned to live in peace.

Looking toward a better future for you than we can see for ourselves, we trust that you will use your computers and all your new electronic media to inform and liberate, not to dominate and oppress, trust that you will employ the arts of genetic engineering to advance the human species and make your children better than yourselves. We know that you will be inventing new sciences that would dazzle us, opening brave new frontiers, climbing on toward the stars.

We live again through you.


-- Jack Williamson

dhughes 22 hours ago 0 replies      
I grew up in the 80s and the one main thing I can recall is the ever present fear of nuclear war.

If I had made a prediction back then I would say there would certainly be some sort of nuclear event at some point in the next twenty or thirty years.

bguthrie 1 day ago 0 replies      
A wonderful link. My votes are: Wolverton for prescience, Silverberg for wisdom. Your AUs per year may vary.
gwern 23 hours ago 0 replies      
As much as I love Gene Wolfe, his predictions are just embarrassing.
unimpressive 1 day ago 0 replies      
What's interesting to me is how some made assumptions that didn't turn out to be true but got much correct anyway.

Algis Budrys stands out in particular.

perfunctory 1 day ago 1 reply      
> Most Americans are barely literate, think in images rather than symbols

Any comments on this?

Aloha 1 day ago 0 replies      
Goes to show just how much easier it is to write about a future than it is to predict the future.

Though on the whole, if you grab a statement out of each (almost), you have the world of today.

jonhendry 22 hours ago 0 replies      
Huh, no predictions that Tom Cruise would still not be exhibiting any OT powers.
fluxon 1 day ago 0 replies      
"Doom, doom, doom-doom, doom, doom ... doomie-doomie-doom, doomie-doomie-doomie-doom ... doom, doom. The end."
-- The Doom Song, GIR, Invader Zim
dkhenry 1 day ago 1 reply      
Is this the same L. Ron Hubbard of Scientology fame ?
brianchu 1 day ago 0 replies      
Gregory Benford: "Berkeley, California will have a theme park devoted to its high period--the 1960s."

He's right. Berkeley has People's Park. As for the theme, well...

TechNewb 1 day ago 0 replies      
Orson Scott Card wins.
Meteor Raises $11.2M from Andreessen Horowitz meteor.com
238 points by debergalis  3 days ago   107 comments top 30
jasonkester 3 days ago  replies      
No matter what else happens in the world, the core team will be able to focus entirely on Meteor for several years, without taking on consulting work or trying to create some other application on top of Meteor to sell

Does that raise any red flags for anybody?

Do development frameworks built for their own sake ever really work in the wild?

I think of successful ways to build web apps, and the names that spring to mind are Rails, Django, PHP, etc., which evolved in the hands of developers who were using them to build stuff. In some cases (Rails and Django especially), they were the side product of a single application.

I think of overarchitected nightmares such as Fusebox and some of the magic frameworks that would be Meteor's competition, and they tend to share the quality of having been built as the Ultimate Solution to Everybody's problem (with the conspicuous exception of the developers themselves, who aren't actually dogfooding it for anything).

To be clear, I'm impressed by Meteor and I'm looking forward to seeing where it goes. But it makes me a bit uncomfortable to see a declaration like "we're definitely not going to try building anything with it!" tacked onto their funding announcement.

enos_feedler 2 days ago 1 reply      
I will take Meteor seriously when it is acknowledged and supported by the existing node.js community and the leadership. There is a large group of module authors who are cranking out beautiful, useful modules for both the browser and server at 1000x the rate of a normal human being. I am not going to list names here but check github or npm repository, etc. It is this team that makes node.js special, not simply 'javascript on the server'. If Meteor can gain support from this _established_ community I will start taking it more seriously.

Why not use the 11M in funding to hire one of these node.js leaders full-time? Outside of Meteor, what contributions has the Meteor team made to node.js and the ecosystem of modules? The founding team has no presence in the development community and yet their technology stack is built on top of it. I need to see that people I trust in the node.js community are driving and influencing the direction Meteor is headed. In fact, it's almost a warning sign if Meteor can't staff any of the node.js leadership full-time, as they certainly have the cash to do so. I will be watching who they hire next very closely.

The lack of node.js leadership in their organization is already evident in the product decisions they are making which wouldn't stand a chance if they had a true representative of the node.js community on their team. A glaring example is the decision to build their own package management solution instead of adopting npm or working with npm to drive it forward if its missing useful features. Meteor is leveraging the output of the node.js ecosystem yet not recognizing its existence in their own product. They seem to be segregating themselves.

blhack 2 days ago 5 replies      
Some naivety being thrown around in this thread:

Andreessen Horowitz doesn't need to see a direct return from Meteor. If they think that the existence of Meteor is a good thing for their other investments, then the indirect return they see from those projects is a good thing.

rdl 2 days ago 0 replies      
$11m (implying a valuation of over $20mm) is a lot of money. (Although in a world of $7 rent, $150k developers, etc., it isn't as much as you'd think).

However, Meteor looks like it has some chance of being a huge platform, maybe the next big one. Even if it just dominates a niche, that is more than enough to justify the investment. A bigger Series A reduces risk for the company, and a lot of the other interesting hard-tech companies out there raise that amount of funding early (e.g. Bromium).

Red Hat does a pretty good job of demonstrating how open source companies can make a lot of money.

100k 2 days ago 1 reply      
At Throne of JS last weekend, Meteor stole the show in my opinion. The other frameworks (Backbone, Ember, Angular, etc) were about how to build rich JavaScript web apps on top of current backend technology, whereas Meteor is envisioning something new.

I'm not sure I want to write JavaScript everywhere but what they are able to demonstrate was super cool.

radarsat1 3 days ago 2 replies      
I love the idea of Meteor and want to use it on some projects; the only thing I don't like is how language-centric it is on the back-end. node.js is okay, I'm not completely averse to coding server stuff in JavaScript, but I'd prefer having a choice of languages; for instance, most of my server-side code is in Python. I wonder how friendly its architecture would be to supporting multiple language back-ends eventually.
sharjeel 3 days ago 5 replies      
Congrats. Could someone please explain the business model of such open source based startups in general?
warech 3 days ago 0 replies      
First Github, then Meteor. As Peter Levine pointed out in his blog (http://peter.a16z.com/2012/07/25/meteor-magic/) Andreessen Horowitz is making strides towards "help[ing] developers build the next generation of applications."

Are there any other VC firms that have had such a strong foothold on the foundation of application development?

huhtenberg 2 days ago 3 replies      
Perhaps a naive question, but how does AH envision getting its $11M+ back and in what time frame?
equark 2 days ago 1 reply      
So far it seems like Meteor is focused on data syncing and live UI elements. That's neat when done well, but the main pain point for client-side-heavy apps is the mismatch between the server and client and between the traditional URL structure of web pages and the MVC structure of apps.

I'm looking for:

* Complete parity between server and client-side rendering for content. This is required for first-page performance, caching, and SEO.

* URLs as the foundational organizing principle of the app. The mismatch between clicks, back buttons and external links makes code hard to organize and apps behave strangely without serious work.

* Database agnostic. Relational datastores remain incredibly productive and proven for the vast majority of apps.

JTxt 3 days ago 2 replies      
I've been leaning towards DerbyJS because pages are also rendered in HTML (SEO, no-JS) and it uses npm instead of making its own repository... http://blog.derbyjs.com/2012/04/14/our-take-on-derby-vs-mete...

But Meteor has the advantage now, looking forward to what they do with it. Congrats to them!

lukeholder 3 days ago 4 replies      
Wow, congrats. Great work.

Anyone know if they are close to having proper auth in the client for DB work yet?

gellis 3 days ago 0 replies      
If they can make Meteor as accessible as PHP, they have a great chance of success. It's definitely heading in the right direction.
tlogan 2 days ago 1 reply      
Interesting idea.

I have questions about the following:

   > Database Everywhere. Use the same transparent API to
   > access your database from the client or the server.

Is this a good thing? I have always been under the impression that separating data management and application logic is a good idea - basically a must.
Meaning, let's think about more complex ways to look at the database (temporal, stream, event processing, etc.). How would that then be possible?
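For what it's worth, the "same transparent API" claim can be sketched in a few lines: a shared query interface implemented by both a server store and a client-side cache, which is roughly the shape of the approach Meteor describes. The `MiniStore` class and the task fields below are invented for illustration, not Meteor's actual code.

```javascript
// Illustrative sketch (not Meteor's real implementation): one find()
// selector API backed by two different stores, so application code is
// written once against a single interface.
class MiniStore {
  constructor(docs = []) {
    this.docs = docs;
  }
  // Match documents whose fields equal every key/value in the selector.
  find(selector = {}) {
    return this.docs.filter((doc) =>
      Object.entries(selector).every(([k, v]) => doc[k] === v)
    );
  }
}

// The server holds the full dataset; the client holds a replicated subset.
const serverTasks = new MiniStore([
  { id: 1, owner: "alice", done: false },
  { id: 2, owner: "bob", done: true },
  { id: 3, owner: "alice", done: true },
]);
const clientTasks = new MiniStore(serverTasks.find({ owner: "alice" }));

// Identical query code runs against either store:
console.log(serverTasks.find({ done: true }).length); // 2
console.log(clientTasks.find({ done: true }).length); // 1
```

The concern in the comment then becomes concrete: anything the client-side store cannot express (temporal queries, stream processing, event processing) forces the two supposedly identical APIs to diverge.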

drumdance 2 days ago 0 replies      
From my cursory review of Meteor a few months ago, I got the sense that it could be a game-changer like Rails. However, by raising this money they're going a very different route than DHH and company.

It will be interesting to see how this plays out over the next few years.

tzury 2 days ago 0 replies      
After their recent shocking Github investment and now Meteor, a16z are, IMHO, the most impressive VC firm in the US.

They have a very long-term vision, and they put their money where their vision is. That's the best thing founders can wish for when raising funds: may my investor look as far ahead as I do, and even further.


tlear 2 days ago 0 replies      
Congratulations! Talk at Throneofjs for Meteor was excellent, especially liked the

  meteor remove insecure

that was a great touch

bbayer 2 days ago 0 replies      
I really wonder about the idea of putting $11M into a JavaScript framework.
wissler 2 days ago 0 replies      
"$11.2 million is a lot of money. What it gives us is certainty. No matter what else happens in the world, the core team will be able to focus entirely on Meteor for several years"

Assuming you're not targeted with any software patent suits.

smoody 2 days ago 0 replies      
What is their mobile native app story (if one has been announced)? -- I'm not sure how I'd integrate the meteor backend with a native client (can't use a phonegap-like sdk in my case).
jhspaybar 2 days ago 1 reply      
So, as I write Node.js and Socket.io based applications, lots of my code (on the server at least) focuses almost completely on protecting myself from bad data/malicious users. I know it works this way for almost any application, but how does Meteor handle this? I saw in the screencast someone opening their Chrome console and touching the database directly. This seems like an absolute nightmare to protect! Currently, I only need to protect the individual web address that gets, puts, posts, etc. It's a single query with clear attack vectors that can be guarded against. How in the world do you protect a server and data using this framework?
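The `meteor remove insecure` trick mentioned elsewhere in the thread hints at the answer: once the development-mode shortcut is removed, client writes are denied unless a server-side rule explicitly allows them. Here is a standalone Node sketch of that gatekeeping idea; the `Collection` shim and rule shape below are illustrative only, not Meteor's actual API wiring.

```javascript
// Sketch of deny-by-default gatekeeping: every write a client requests
// is checked against server-side rules before it touches the data.
class Collection {
  constructor() {
    this.docs = [];
    this.rules = { insert: () => false }; // deny everything by default
  }
  allow(rules) {
    Object.assign(this.rules, rules);
  }
  // Called on behalf of a (possibly malicious) client.
  insertFromClient(userId, doc) {
    if (!this.rules.insert(userId, doc)) {
      throw new Error("insert denied");
    }
    this.docs.push(doc);
    return doc;
  }
}

const messages = new Collection();
// Only logged-in users may insert, and only short messages they own.
messages.allow({
  insert: (userId, doc) =>
    userId != null && doc.owner === userId && doc.text.length <= 140,
});

messages.insertFromClient("alice", { owner: "alice", text: "hi" }); // ok
try {
  messages.insertFromClient(null, { owner: "mallory", text: "spam" });
} catch (e) {
  console.log("blocked:", e.message); // blocked: insert denied
}
```

The attack surface is the same as with a REST endpoint: one chokepoint per write, with validation at that chokepoint; the console access in the screencast goes through it rather than around it.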
brador 3 days ago 0 replies      
Anyone know the planned business model on this one?
stuffihavemade 2 days ago 0 replies      
Why the NIH package system instead of npm? That's the main showstopper for me w.r.t investing time into meteor.
flexie 3 days ago 3 replies      
"we've arranged $11.2 million in funding for Meteor's continued development."

That's a nice arrangement. Does anyone know about other open-source web frameworks that have managed to get so much funding?

dreamdu5t 2 days ago 0 replies      
Let's come back to this when Meteor is actually deployed in a real production environment handling 200 requests a second. Until then it seems like a lot of hot air.
phmagic 2 days ago 0 replies      
I'm surprised that people quickly adopt Meteor yet chastise PHP for enabling bad app developers. Wouldn't Meteor do the same thing?
tibbetts 3 days ago 0 replies      
I expect to see some of this money used to purchase actual meteors: http://www.ebay.com/itm/CANYON-DIABLO-IRON-METEORITE-140-gr-...
zanst 2 days ago 0 replies      
Well deserved! Congrats to the team.
What's with this anti-directory structure movement? osnews.com
224 points by wim  2 days ago   260 comments top 39
crazygringo 1 day ago 4 replies      
This all boils down to several salient points:

1) Nobody, not even 90-year-old computer newbies, has trouble understanding hierarchical folders. There could not be a more natural concept of organization. It corresponds to a box inside a storage box inside a closet inside a room inside a house inside a neighborhood, etc. It's just every level is called a folder. Saying that "x" group of people "can't understand" that is just insulting to them, frankly.

2) People who use a flat directory to save 1000's of invoices on a computer which is only used for that, who do not understand folders -- that's fine. This doesn't prove folders are non-intuitive. They just don't need to understand folders, because the job doesn't require folders. The moment their job does, someone can explain it to them, and they will get it, the same way they get that paperclips go in the box on the shelf in the closet.

3) The original Mac OS (say, up to System 6) did a great job of making folders understandable. They were physical icons, physical window locations, they were easy to use. The Open/Save dialogs could be a bit confusing, and still are -- there's definitely room for improvement there.

4) Modern OS's do a terrible job at making folders understandable, because there are drive directories, often hidden, and then multiple user folders, and their Documents directories, and then things outside their Documents directories (like Desktop, Downloads, etc.), and fake folders that show the content of multiple other folders, etc.

5) So people are rightly claiming that folders are a mess and confusing. Yes they are, on modern OS's. But the problem is not with the concept of folders, it's with their back-assward modern implementations. So don't throw the baby out with the bathwater and claim that folders are bad. Instead, the solution is:

6) Modern OS's and apps: stop trying to organize our damned files for us! Stop auto-creating "Downloads" and "My Pictures" and "My Skype Photos" and "My Virtual Machines" directories. Just stop it! Instead, give each user their own home directory, have it be empty on a new computer, have every application open/save things in it by default (including downloads), and let the user organize things gradually as they see fit. And don't let anyone but a power user ever get outside of this directory.

(And ideally, stop allowing users to put documents on their desktop -- it just confuses things and nobody has ever come up with an intuitive way to integrate that with the concept of user folders (my desktop is inside my user folder, what?). Documents on a desktop is an outgrown metaphor that just nobody seems to have the courage to jettison.)

w0utert 2 days ago  replies      
The original article wasn't very good, but this response is even worse. It appears the author lacks the creativity to think beyond the idea that folders are the only paradigm for ordering files that could possibly work.

In practice, if you forget about system files and such, I'd estimate 9 out of 10 people have all their documents in a single flat folder, which they only access through their word processor or whatever they use to open them. They have all their pictures and video in a tool that organizes them into albums and such without them having to deal with folders. They have all their music in a program that keeps them in a library somewhere, which they never manipulate on the file system level. And so on. Or (also very common) they just dump everything they touch on their computer on the desktop. This is not because making folders is 'too complex', but because users can't be bothered to come up with hierarchies to order where their files are 'stored on the file system' or whatever; they just want to make their changes persistent and be able to quickly find their files later.

For the vast majority of people, folders are an anachronism. It's not that they are 'hard' or 'complex', but they are simply not essential, and actually quite limiting for file management. The whole idea that the artefacts you create or consume are best structured as a tree really doesn't make a whole lot of sense. Especially not if you want to have all your data available on multiple machines which may have wildly different file systems, such as desktops and mobile devices.

This doesn't mean we should all have big piles of files with no ways of structuring them, but it doesn't mean folders are the epitome of file management either. A database-like file system with powerful search options could definitely be much better. From an end-user perspective, organizing files by the applications that can open them also seems to be a pretty good idea to me. Attaching metadata and tags to files so you don't have to superimpose them using a tree-like structure would be a huge improvement.

I'm not saying that what Apple is trying with iOS and now OS X is so great we should all hail it as the future of file management, but personally I think they are moving in the right direction. Ideally, end-users should be able to operate their devices and get to their files without even knowing the device has something as abstract as a 'file system'. Just sit down, get the file you want using whatever criterion makes the most sense for finding it, manipulate it, and have the same file available on every other machine. I think this is the vision Apple has, but it will take lots of time to get there.

Also, people mailing files to themselves to get them on their iOS devices literally has nothing to do with how the iOS file system works. Folders or no folders would not make any difference.

cstross 2 days ago  replies      
From the article: I have honestly never seen a single person have any issues with directories, nested or no, and as old as the concept might be, the people I interact with seem to be able to handle it just fine.


The author's clearly been interacting with different people from the ones I know. 80-somethings who didn't grow up with computers frequently get hopelessly confused by directories. The philosophy professor who's been using a Mac since the mid-80s is a bit harder to explain. And then there was the time I got called in in the early 90s to fix the office PC used by a succession of secretaries -- running MS-DOS -- and discovered they'd saved something over 2000 Word Perfect files in the root directory because none of the temps the company had employed over a 12 month period had ever heard of directories.

In my experience many casual users (for values of something like 10%-50% of computer users these days) simply do not "get" hierarchical storage at all; they find it as baffling as predicate calculus. Hence the desire of some software vendors -- who are trying to provide machines that anyone can use without training -- to move away from it, at least on the user's side.

kalleboo 2 days ago 4 replies      
When I see people struggle with folders, it's these two things:

- System, Application and Settings storage folders that the typical user really shouldn't have to see in the first place

- Predefined directory structures like "My Documents" that the user didn't create themselves. Especially when apps crud things up even more with subdirectories with files the user didn't create themselves and can't edit themselves (on my Mac I have "EyeTV Archive", "Final Cut Pro Documents", "iChats", etc... these should be hidden in Library folders until the user exports them). The Desktop is among these. Storing files on the Desktop shouldn't be possible.

If when new users got a computer, the only storage visible was a completely blank home folder, I think a lot of this confusion would disappear. It's not nesting of folders per-se that's the problem, it's that there's a ton of shit there from the beginning that the user has no idea what it is, where they are right now, and where their document will go. When the user created everything themselves and applications don't save documents go into special folders by default, I think a lot of that confusion is gone.

In this world, users who don't understand folders won't create them, and so they won't be confronted with the complexity. The "app silo" model could be emulated by app open dialogs only showing files created by the app itself by default.

Personally, I hate the app silo model. I like having a folder per project with all the related files in it. If I'm working on a report, I want to be able to quickly get to the text chapters, the graphs, my data sources, etc without jumping between apps so much.

gbog 2 days ago 1 reply      
I think the other point of view, the anti-directory one, is well reflected in this review of OSX: http://informationarchitects.net/blog/mountain-lions-new-fil...

The most notable saying is that "As soon as we have more than a handful of notions, or (beware!) more than one hierarchical level of notions, it gets hard for most brains to build a mental model of that information architecture."

Here is my ranty answer:


My god. Who the HELL are those guys to be so dismissive of the human brain?

I have a kid, he is learning ten words a day, and this little boy is not a genius: he is a normal human being in formation. He is also playing a lot with my old Legos, and he is communicating better and better in two very different languages. I can tell you, dear "Information Architects", that he can already handle more than one hierarchical level!

I have worked in normal companies before. By that I mean companies where people have meetings, get bored by many slideshows every week, and use Excel spreadsheets daily. In these kinds of companies, geniuses are not the norm. And all of these people, all of these common "brains", could easily handle "more than one hierarchical level".

So, dear "Information architects", please keep your stinky morgue to yourself.

The human brain is the most wonderful thing that can be observed in the world. Its capacities surpass anything we (human brains) can model with our other tools. A kid of 3 years is much better at everything that matters than a computer. We, normal human beings, won't let you grow a new generation of lobotomized humans for whom it is "hard" to build a "mental model" with "more than one hierarchical level".

Post-scriptum: After a mandatory proof-reading, I sit here and wonder: maybe my legitimate anger against your aristocratic hauteur blinded me to a better explanation. The "brain limitation" you so kindly attribute to "most brains" is just your own problem, maybe. Then you are not as cynical as it seems. But then I repeat: of all my colleagues, old and young, clever and stupid, ubergeek or almost illiterate, every one could handle a fking tree structure for their files. Thanks for considering them (a bit).

wickedchicken 2 days ago 1 reply      
A directory structure is too restrictive, much like Java and C++ inheritance. Cluster and search based file access is way more flexible and fluid, but we haven't had a good UI or overlay onto traditional filesystems yet. Think of Go's interface model, but for files.

Imagine you have a file that is a vendor-provided html template. Does it go in vendor/ or templates/? vendor/templates? What if you want to find out all the templates in use by the system? Document that somewhere and expect a newhire to 'just know' where you store the fragments of your templates?

Sometimes there really isn't a parent-child relationship between data; modeling it as if there always is seems very 1980s.

Perhaps Rob Pike can sum it up better than I can:

"My late friend Alain Fournier once told me that he considered the lowest form of academic work to be taxonomy. And you know what? Type hierarchies are just taxonomy. You need to decide what piece goes in what box, every type's parent, whether A inherits from B or B from A. Is a sortable array an array that sorts or a sorter represented by an array? If you believe that types address all design issues you must make that decision.

I believe that's a preposterous way to think about programming. What matters isn't the ancestor relations between things but what they can do for you."
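The vendor-vs-templates dilemma above is easy to make concrete: in a tree, a file must pick exactly one parent, while with attached metadata it can belong to every facet that applies. A small sketch of the tag-query idea (the file records and the `filesWithTag` helper are hypothetical, not any real tool's API):

```javascript
// In a tree, vendor/templates/header.html is forced to pick a single
// parent. With tags as metadata, the same file belongs to every facet
// that applies, and "all templates in use" is a query, not a convention
// a new hire has to be taught.
const files = [
  { name: "header.html", tags: ["vendor", "template", "html"] },
  { name: "invoice.html", tags: ["template", "html"] },
  { name: "jquery.js", tags: ["vendor", "javascript"] },
];

const filesWithTag = (tag) =>
  files.filter((f) => f.tags.includes(tag)).map((f) => f.name);

console.log(filesWithTag("template")); // [ 'header.html', 'invoice.html' ]
console.log(filesWithTag("vendor"));   // [ 'header.html', 'jquery.js' ]
```

Note that neither answer requires deciding in advance whether "vendor" or "template" is the parent: that taxonomy decision simply never has to be made.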

andybak 2 days ago 2 replies      
There is a flaw with directories and hierarchical storage and it's not their conceptual complexity.

Gmail made a move away from folders and it's one that I've embraced.

1. It's easy to spend too long manually organizing things into folders. It's the kind of relaxing, busy-work we can fall into to avoid the hard stuff we're supposed to be doing.

2. There are many arbitrary ways to rearrange hierarchies and no clear limit for how deep they should be. Many of us with slight OCD-ish tendencies can fall down the rabbit-hole here.

3. It's manual work that the computer should partially be doing for us.

There are ways to mitigate these problems. Multiple views of the same structure, tags instead of folders, and easily accessible, instantaneous search can teach us to be less dogmatic about our directories.

I very rarely need to use tags/folders with my email now. Search works for me 90% of the time. It's only special tasks such as doing my tax returns or tracking a particularly complex bunch of emails where I might use tags. I think my file system is probably an order of magnitude more complicated, and there is less automatic metadata with files (my file system doesn't know what project a file is related to unless I tell it, whereas you can tell a lot about an email just using the from/to/cc fields.)

At this point I was hoping to end on some kind of conclusion but I can't think of one.

robomartin 2 days ago 2 replies      
There's a huge difference here between Mom, Dad and Uncle Fester using a computer and professional or business users.

The first set of users can be either lazy or oblivious to the idea of organizing their data --directories or databases, it doesn't matter. It's a pile-o-stuff and they really don't think far beyond that. For this class of users making it super-simple is a good idea. You sort of have to protect them from themselves.

I know doctors who have absolutely no clue as to where their stuff is stored and have zero interest in investing fifteen minutes to learn the basics of directory structures and file management. Zero.

The second set of users, the pro's and business users --to generalize-- are a different story.

Take the case of a company that designs physical products. Each project is likely to live inside a directory structure segregating and organizing areas of work such as: mechanical design, schematic, pcb layout, bill of materials, design calculations, documentation, embedded firmware, FPGA code, cost calculations, marketing materials, packaging design, manufacturing, specifications, etc. In turn, each of these categories will rightly have its own subdirectory structures when and where it makes sense.

The above per-product directory structure is also likely to be completely replicated as product releases and revisions require. Not everything can be handled by Git-type version control systems. In fact, in product design there are very good arguments for complete design duplication during iterations or to mark release (as shipped) configurations. Different subject.

This second use case cannot be served well with the iOS app-centric sand-boxed model. A product directory structure with thousands of files can have a number of applications access these files. There will not be a one-to-one correlation between applications and a lot of the files in the design.

Similar use cases can be found in other businesses where the end-product might not be a physical product design. Research projects, financial reports, publications and other work product is likely to require a number of different file types that may or may not come together to form a single deliverable.

Again, the iOS sand-boxed model fails to support this use case because it forces a per-file-type or per-application separation of files and does not permit or provide the ability to organize disparate file types into projects according to context.

Put another way: If you are Lockheed, you don't want the F-16 and F-117 mechanical design files mashed together into one folder simply because the same CAD system is used to open them. You want them to live within their corresponding project stores and within a sensible directory structure that organizes work according to relevant criteria. For example, the wing mechanical design directory might also contain a set of directories with aerodynamics data, whereas the mechanical design directory for the seat has no need for such data.

I see that the Windows approach seems to work well for the first type of user. If applications use it correctly (some don't) everything gets dumped into "My Documents" and other predefined folders. Users would occasionally add sub-folders of their own.

The second set of users generally has the presence of mind and knowledge to "roll their own". Using my own patterns as an example, I don't think I own a single computer (Mac or PC) that does not have a separate "Data" drive where projects are stored within their own directory structures and according to their own needs.

There's another twist to this, which is a far less common use case: I happen to run more than one business. There's zero justification for my Photoshop files from business #1 to be stored in the same location as those of business #2 on the same computer. Each business has its own root directory from which to organize the corresponding files.

webjunkie 2 days ago 2 replies      
I think at some point Apple will sell us "folders within folders" as a gorgeous update that took so long to really get it right.
gbog 2 days ago 3 replies      
That's a refreshing view. I fully agree. I hate it when a music player does not allow me to browse my clean and deep directory structure. I detest the fact that apparently no picture manager lets me see my 10 years of photos in the hierarchy of my choice, which I implemented patiently as a directory structure.

And to the many comments right here arguing that people have a hard time dealing with directories, I have one question: did you ever work at a normal company (I mean, the kind of company where people in suits and skirt suits produce PowerPoints and Excel sheets)?

I have, and I can tell you that the most stubborn HR assistant will have his or her files almost neatly catalogued in a hierarchy, and most of the time they will be able to find the "2001 report on office expenses" directly from the hierarchy.

Granted, most will not use a clever and consistent naming convention for files that would allow them to be sorted properly (e.g. all files prefixed by the year). And that is a problem. A problem that the hierarchy solves properly, by the way.

So, I would bet the Apple/Google "no file" movement is a dead end. Worse, it is a trap. Check who will benefit from this move: Users? No. Advertisers? Maybe. DRM corporations? Certainly...

awakeasleep 2 days ago 1 reply      
I had to stop reading at "I've never met a user who has trouble navigating nested directories."

My experience tells me he is in the 'tech guy' bubble. As someone who has volunteered helping city kids use computers, run a 'business' fixing people's computers, worked tech support, and whose dad teaches a free computer class at a library, I say with some confidence that the silent majority of people don't understand folders at all.

You have to be a tech savvy, middle to upper middle class person with relatively high motivation before you stand a chance of understanding folder structure. Exceptions abound, of course, but then again most 9-5 working, intelligent people struggle to organize their data in folders even though they fit my demographic of people with a chance to understand.

postfuturist 1 day ago 0 replies      
The article is a bit of a rant, but it resonates deeply with me. The single level of nesting is obnoxious. The Spotify app allows a flat list of playlists--just the one level of nesting. Now that I've got a couple hundred albums in there, it has become basically useless, and I just have to search to find everything, which is _not_ what I want to do.

Hierarchical structures are how we see the world. Tagging and search is not sufficient. File systems give us the illusion of a "place" that a file lives. A single place. Like my socks are in a drawer in my closet, they are always there. A given music file is always in /home/steve/Music/<artist>/<album>/track.foo. That's where it lives. I can find it, even if I have 10,000 albums. My tax documents are in /home/steve/Documents/personal/taxes/2011/ . They exist there. I can auto-backup /home/steve/Documents and I know that those documents are safely backed up. I won't lose those things.

LinXitoW 2 days ago 2 replies      
I don't think changing the way we structure our files is that bad an idea, although the iOS version of it isn't really great. Folders are basically a way of grouping files of a similar topic/concern together. Files can belong to many topics/concerns, but they can only belong to one folder. I'd really love it if we evolved the folder concept to a concept of tags, where the organization is very fluid and dynamic.

Example(example folder structures):

* I could put an anime movie file under /Media/Video/Movies/Anime or /Media/Video/Anime/Movies

* Same for an anime tv show: /Media/Video/shows/Anime or /Media/Video/Anime/shows

If i could tag the anime movie as "Movie" and as "Anime", i could just pick and choose which way i want to view it:

* All movies?

* All anime movies?

* All anime?

goblin89 2 days ago 0 replies      
> The article I'm about to link to, by Oliver Reichenstein, is pretty terrible

Funnily, I liked the linked article more (better written, also easier to read). I partially agree, and would partially argue, with both.

I think hierarchy-less approach, properly designed and implemented, would work OK for most users. Folders just look like the most basic and generic way for organizing stuff, which isn't necessarily the best, and probably deserves optimization for particular use cases.

However, take for example cases where the computer is used for production--say, DTP or video/photo editing. If you take away the freedom to organize files hierarchically, you impose certain restrictions on the workflow. The "genericness" of folders can be an advantage for more complex use cases.

lubujackson 1 day ago 0 replies      
One of the many reasons to hate Apple. I don't understand why supposed geeks can get behind a company that willfully puts draconian protections to keep you from using a computer like a computer. No USB port? No useable FILE SYSTEM? Emailing files is a pathetic hack for a poorly designed device. And "simplicity" is not a saving grace, there is NO REASON to not allow this sort of functionality except to force everyone to use horrible iTunes. People can defend Apple all they want, but no one can give me a valid user-centric reason for those decisions.
yaix 2 days ago 0 replies      
> Vendor lock-in

Well, duh! Obviously that's what it is about when "apps" don't want to tell you how they store their data. MS Office has been doing that very successfully for more than 20 years now. And removing the ability to actually locate the data makes it even easier to tie the user to the app and platform.

jwl 2 days ago 1 reply      
Maybe it started with music playlists. My digital music collection is largely in the same folder structure as it was 15 years ago, but every music player wants to make a library based on metatags, which in theory makes sense, but then just adds the problem of giving everything suitable metatags. Building your own file structure based on your own needs just seems easier and simpler.
berryg 2 days ago 1 reply      
Some people can handle directories just fine. But a lot of people simply do not grasp the concept. For many, many years I have been trying to explain hard disks, folders, and files to family members. They simply don't understand it: opening a folder and double-clicking a file so that a program starts and reads it is beyond them.

For these ordinary users of computer appliances the iOS way of handling files is simply a blessing. You use an application to do something, to write a text, to listen to music, to communicate. Applications that can interact with each other, will interact with each other. Simple.

For an ordinary computer appliance user it is not necessary to be confronted with a file system. Just as a lot of other technical details are hidden from ordinary users.

rnadna 2 days ago 0 replies      
I wonder if others have scaled back on the "meaningfulness" of file names. I name a lot of my files (and directories) 01, 02, 03, etc (with suffixes according to meaning) and then I have a local README file that lists the contents of each of these.

I find this scheme handy for scientific work, in which the files are often multiple attempts to solve a problem. It saves me from writing file names like "solution", "solution_method2", "solution_method2_with_bug_fix", etc. The README format gives me tons of space to write comments (and cross-reference other work), while the filename, incrementing from version to version, serves as a sort of diary stamp.

This works for directories too. I tend to go only 2 directories deep on a given project. The top level is for the task, e.g. a calculation or a figure for a paper I'm writing, and the second is for a sequence of approaches to that task.

With this scheme, I focus on README files and not names in a directory tree. Colleagues who have tried this have found it weird at first, but then tend to prefer it to the "informative name" scheme they grew up with.

If databases were more convenient, I could imagine doing all my work with "flattened" filenames in a single directory. I think that's what Apple is moving toward, but they are thinking of application-specific work, so the application deals with the database. I prefer the README/filesystem structure because it lets me use tools like grep.

ThomPete 1 day ago 0 replies      
Intuition is learned.

Something is intuitive not because it's universally understood but because we have learned the meaning of it from a holistic point of view. This requires lots and lots of experience and, for that matter, trial and error.

Metaphors are only meaningful in retrospect.

Don't count on the physical-looking button to be intuitive just because it's a metaphor from real life. Once you tell someone what a specific element means, they will most probably understand it, but not because of the metaphor itself.

There are no Babel fish in UX.

Designing products and services is like speaking French. Not everyone understands it. Comprenez-vous? The noob might pick up a word here and there, but they aren't, by any metric, comfortable participating in the conversation.

This all leads to the following conclusion:

Intuitive interaction is for experts, not for noobs
Understanding something intuitively really means that you understand it holistically. If you understand it holistically, you can fill in the gaps. This doesn't mean you shouldn't make your design intuitive or improve on it--not at all. Just understand that you are doing it for the natives, not for the noobs.


it 1 day ago 0 replies      
It's not so much that folders are counter-intuitive. The problem with them is that they impose a single, arbitrary structure on files that could be organized in many different ways. For example, you could have folders like AllMyImages/ or AllMyCatImages/ or CatImagesJuly2012/ but there's no way to anticipate what will be the most useful directory structure for all future situations. It would be more flexible and useful to have indexes over the files that let you dynamically organize your content according to attributes such as contains-cats, date, is-image, author, etc.
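The dynamic, attribute-based organization this comment describes can be sketched in a few lines of Python (the file names and tags here are made up for illustration):

```python
# Hypothetical files, each carrying a set of attribute tags. Folders such as
# AllMyCatImages/ become queries over the tags rather than fixed paths.
files = {
    "whiskers.jpg": {"is-image", "contains-cats", "july-2012"},
    "mittens.png": {"is-image", "contains-cats"},
    "vacation.jpg": {"is-image", "july-2012"},
    "tax-return.pdf": {"is-document", "year-2011"},
}

def query(*tags):
    """Return every file carrying all of the given tags, sorted by name."""
    wanted = set(tags)
    return sorted(name for name, t in files.items() if wanted <= t)

print(query("is-image"))                    # all images
print(query("is-image", "contains-cats"))   # only the cat pictures
print(query("contains-cats", "july-2012"))  # cat pictures from July 2012
```

The same files answer all three queries, so no single directory layout has to be chosen up front.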
rogerchucker 2 days ago 0 replies      
Great topic, and one that has been bothering me. Can somebody answer the following question: when I have a PDF open in app-1 on my iOS device and there is an export/share button in that app which allows me to open that PDF in app-2, does that sharing end up creating two copies of the PDF on my device? If yes, then isn't this app-as-a-silo model a little inefficient?
epo 2 days ago 0 replies      
The article linked to is a witless rant. The article he is reacting to is actually pretty good.

As is commonly the case with people who don't understand what they are talking about, the author is confusing policy (I want to organise my stuff) and mechanism (use folders and sub-folders). What is undeniable is that we need a coherent way to organise our stuff. What is wrong-headed is assuming that hierarchical directories are the best, or the only, way to do so.

andyjohnson0 2 days ago 0 replies      
To me, this is about choice.

If a computer has a file system that is accessible to the user and supports files and hierarchical directories, then users can choose to use it or not. If they want to use directory hierarchies then they can. Or if they want to store all their files in one place and rely on applications to present filtered views or search (not necessarily even in terms of files) then they can.

If the file system is not accessible or doesn't support hierarchical directories then you have no choice. This is not an option that interests me.

Someone 2 days ago 1 reply      
One opinion, repeated a zillion times, does not make an argument; it makes a rant.

I see only a few arguments in this text:

- the author does not think nested hierarchies are difficult. If he added "for nerds", I would agree with him.

- the author equates having limited nesting with the 'data silos' situation on iOS and (from what I read in reviews) to a slightly lesser extent on Mac OS X Mountain Lion.

- claiming that the mouse is hard to use because it provides "indirect manipulation". That may seem so, but the human brain is exceptionally good at transferring motor skills between modalities. For example, anybody who can write can write with their feet, nose, ear, or whatever, and the handwriting will (except for the quality of fine motor skills) be recognizable as theirs (http://www.ebaumsworld.com/jokes/read/211551/)

- equating having limited nesting with vendor lock-in. Proprietary file formats are fine for doing that. I do not see why you would need to do anything more.

muxxa 2 days ago 1 reply      
The giveaway phrase here is:

"... instead of having your own structure, tailor-made for you because you created it in the first place ..."

Non-geeks don't have the time, interest or inclination to catalogue and curate these sorts of structures.

The list that the author gives of everyday items that are easy to use ("cupboards, Tupperware, boxes, closets, pockets, wallets") all have the common property that they are not recursive, and that you can easily figure out their contents at a glance, something that is impossible to garner by looking at an opaque list of directories.

While there are some good points here about the problems of files being siloed in apps on iOS, the directory structure is a UX disaster.

rogerchucker 2 days ago 1 reply      
I think computer scientists need data points before making proclamations like "most people don't like folders".
gizmo686 1 day ago 0 replies      
I think Android has the approach to filesystems right. Each app has its own dedicated folder for internal use, hidden from the user (actually, the user cannot access it at all unless the specific app provides a method, because of sandboxing, but that is a different issue). There is also a shared file structure where apps can read and write any data the user might want to move between apps and devices. By convention, the files there are organized either by type (e.g. images) or by app, but when the user wants to move them or open them with a different program, that option remains available.
jpalomaki 2 days ago 0 replies      
If OS vendor would like to switch to using metadata or tags to organize documents, allowing just two level directory structure could be the first step.

Documents in root would end up having no tags at all and the folder name would be used as the tag for those stored in folders. Obviously you could also do this with complex directory structures, but then the tags would become quite long.

Metadata/tag-based systems don't necessarily exclude complex folder structures, as we have seen, but having both can make things complicated.
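The two-level scheme suggested above is easy to state as code: treat each folder component of a (hypothetical) document path as a tag, so files in the root carry no tags at all. A minimal sketch:

```python
from pathlib import PurePosixPath

def tags_for(path):
    """Every folder component of the path becomes a tag for the file."""
    return list(PurePosixPath(path).parts[:-1])

print(tags_for("Invoices/2012/electricity.pdf"))  # ['Invoices', '2012']
print(tags_for("notes.txt"))                      # [] -- root file, no tags
```

With deeper trees the tag list simply grows longer, which is the "quite long tags" problem mentioned above.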

damian2000 2 days ago 1 reply      
I think that isolating data and applications into their own directory on a PC makes total sense.

But I read something, sort of related, about organizing your email inbox into folders. A study found that people who organize their inbox into multiple sub-folders get no benefit at all compared to those who keep just one big inbox; when they want to find something, they simply sort by 'from', 'date', or 'subject', or do a text search.

kraemate 2 days ago 0 replies      
I don't want applications acting as file managers and hiding where my files really are. Files plus directories is a very intuitive, powerful, low-abstraction concept--directories and files correspond directly to inodes, so the file system's view of files and the user's view are the same.
Tagging and the like can easily be accomplished with extended attributes (as in the BeOS file system), but apparently no one wants to use extended attributes.
ehutch79 2 days ago 0 replies      
I would like to see the anti-directory people put forth a basic CMS that works this way: everything is tagged, no folders--or however they say they want it to work.

Then give it to an enterprise. Hell, PAY them to use it. See what the results are.

Since you're all UI/UX experts, 'they're using it wrong' will be an unacceptable response to anything that happens.

jack-r-abbit 1 day ago 0 replies      
So... Apple doesn't want people to write code on their OS anymore? I can't think of many languages that would not suffer from a lack of folders. I don't care about all the other crap being discussed. There will always be people that have a hard time with XYZ.
abenga 2 days ago 0 replies      
I'm all for any easier file organization system as long as it's portable between systems, allows a large number of files to be locatable and usable immediately after being plugged into a system (i.e. without a lengthy indexing process), and doesn't make it difficult to share files between users in a network (e.g. a Samba set-up in an office). I don't know if there can be any such system that doesn't become as "complex" as the current folders-in-folders way of doing things. Most of the systems I've seen assume that I only access the files from this device and I'd never need to copy them to external storage or share them with another user not necessarily using the same kind of system as I do.

Maybe a hybrid system, like the one being done in GNOME, where you have the traditional underlying file system present, and a separate indexing program (GNOME documents) that enables searching by content, context, date modified, etc would be best.

powertower 2 days ago 0 replies      
> I have honestly never seen a single person have any issues with directories, nested or no...

This is completely off. I've seen people fail to "get" the concept of folders and files even after months of demonstrations... They simply forget, don't understand, can't use them. Especially in the context of drives, devices, memory cards, etc.

And those people are large in numbers, the non-computer crowd. Probably at least 30% of the general population.

b1daly 2 days ago 0 replies      
Apple's attempts at making "seamless" user experiences can be frustrating, especially in edge cases. For example, in GarageBand, instruments are stored in special files buried a ways down a directory tree in Application Support. Those files do get corrupted, but access to them is so abstracted that the source of the trouble is not apparent, since the directory is not intended to be accessed by the user. It's an attempt to hide complexity from the user, but it makes it hard to troubleshoot, or to perform actions outside of the expected.

The area that does boggle my mind in terms of a flat structure are complex multi-media authoring environments. Where different files and types are used to assemble a larger work.

My main principle as an audio engineer is that I have to know exactly where each file in the project lives in the FS. If I'm not sure, the scenarios in which projects, or parts of projects get lost happen more frequently.

Not to mention that a given project uses many files of different types and can have tens or hundreds of thousands of them. Automatic file management seems like a recipe for disaster, especially as it won't work perfectly.

I don't get it.

lovskogen 2 days ago 1 reply      
A lot of the comments here seem to point out that the users are dumb, or that they "just have to learn". Really? Aren't we the ones who should design solutions that are easy to understand and easy to use?
blt 2 days ago 0 replies      
I come across a great counter-example to directory structures every day. I make 32-bit, 64-bit, debug, and release builds of my product. How do I organize them in a tree? Which distinction is "higher-level"?
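One way out of that dilemma is to not pick a hierarchy at all: key each build by both attributes at once, so neither distinction has to be "higher-level". A minimal sketch (the paths and names are invented):

```python
from itertools import product

ARCHES = ("x86", "x86_64")
CONFIGS = ("debug", "release")

# One entry per (architecture, configuration) pair -- a matrix, not a tree.
builds = {(arch, cfg): "build/{}-{}/product.bin".format(arch, cfg)
          for arch, cfg in product(ARCHES, CONFIGS)}

# Either axis is equally easy to slice:
debug_builds = sorted(p for (a, c), p in builds.items() if c == "debug")
x86_builds = sorted(p for (a, c), p in builds.items() if a == "x86")
print(debug_builds)
print(x86_builds)
```

This is essentially the tag view again: "debug" and "x86" are attributes of a build, not parents of one another.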
rynes 2 days ago 1 reply      
I read somewhere that Apple uses hard links to directories for Time Machine.
Show HN: Music for Geeks and Nerds musicforgeeksandnerds.com
208 points by kroger  2 days ago   74 comments top 32
gravitronic 2 days ago 4 replies      
Clicked it expecting another "list of mixtapes by deadmau5", ended up very surprised, and bought a copy! Very cool!

I know Ableton Live has python scripting support built in. I wonder how hard it'd be to integrate all this into composer tools in the DAW

nano_o 1 day ago 0 replies      
Here are two related books that are freely available (and that I haven't read):

The Haskell School of Music: From Signals to Symphonies, by Paul Hudak (pdf available at http://www.cs.yale.edu/homes/hudak/Papers/HSoM.pdf).

Music: a Mathematical Offering, by David J. Benson (pdf available at http://homepages.abdn.ac.uk/mth192/pages/html/music.pdf)

zoba 2 days ago 2 replies      
I purchased this with a credit card and got an error that starts off: "The request signature we calculated does not match the signature you provided. Check your key and signing method."

I waited a couple minutes and clicked the download link again and it worked. Wanted to let someone know, since the site doesn't appear to have any contact information.

prezjordan 1 day ago 2 replies      
Oof, I think you just brought my open-source[0] project to its end! This is quite nice.

[0]: http://github.com/prezjordan/Melopy

AhtiK 2 days ago 1 reply      
Wonderful! Is the codebase[1] using pyknon[2] or also csound [3]?

I'm asking this because http://musicforgeeksandnerds.com/resources.html lists csound as one of the resources.

Any ideas how pyknon API relates to csound API?

I was working on some programmatic sound generation, and csound's built-in Python interpreter seemed to be one of the most advanced free solutions for generating music from Python. Csound is free for academic and research use; a commercial license requires contacting MIT. But yes, csound itself is not Python...

I didn't even find anything close to csound in terms of features, available instruments, and community.

[1] https://s3.amazonaws.com/musicforgeeksandnerds.com/code.zip

[2] https://github.com/kroger/pyknon

[3] http://www.csounds.com/journal/issue14/realtimeCsoundPython....

EDIT: I just noticed that pyknon generates MIDI, not an audio file. Pyknon is for building MIDI scores and is not meant for sound synthesis.
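To make the MIDI-vs-audio distinction concrete: a MIDI score is symbolic data, note numbers and durations, rather than sampled sound. This stdlib-only sketch (not pyknon's actual API) shows the kind of mapping such a score generator performs:

```python
# Semitone offsets of the natural notes within an octave.
PITCHES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def midi_number(name, octave):
    """Map a note name and octave to a MIDI note number; middle C (C4) is 60."""
    return 12 * (octave + 1) + PITCHES[name]

# A C major arpeggio as (note, octave) pairs -- symbols, not sound.
melody = [("C", 4), ("E", 4), ("G", 4), ("C", 5)]
print([midi_number(n, o) for n, o in melody])  # [60, 64, 67, 72]
```

Turning those numbers into audio is the synthesis step that pyknon leaves to something like csound.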

Gormo 1 day ago 1 reply      
This is extremely interesting in concept; I know essentially nothing about music theory and have been interested in learning more about it, and a logical/mathematical approach sounds perfect.

But is the book suitable for someone seeking to acquire a foundational knowledge of music theory, or does it require some level of pre-existing understanding?

I ask because most of the material on the page refers to learning "more" about music, and the sample material does seem to assume some background knowledge in musical notation, etc.

If this isn't suitable as an entry-level primer on music theory, can anyone recommend some other works to read first?

lukev 1 day ago 1 reply      
Perfect! I've often wished for such a book.

Note - it's also available on Amazon priced at $9.99, though that version is DRM-encumbered and (probably?) doesn't include the sound samples.

danso 1 day ago 0 replies      
FYI, since it isn't mentioned on the OP, the Amazon version only costs $9.99. The downside is that it's Kindle only, though with no restriction on number of Kindle devices.
_exec 2 days ago 1 reply      
Perfect, just what the doctor ordered. May I suggest you create a forum / subreddit for the readers to discuss the book?
danso 1 day ago 0 replies      
I have to say, just skimming through on the Kindle version (on iPad), the book looks beautiful and well laid-out. Can't wait to dig in.
kafkaesque 1 day ago 1 reply      
Hopefully, Kroger sees this and has a bit of time to reply.

I excelled in music. Is there anything in particular in programming that you teach differently so musicians can understand it better? If so, do you have any ebooks, PDFs or other resources of this? Just wondering because I've only recently taken up programming and I'm looking to pull information from various resources. Thank you!

Yhippa 1 day ago 1 reply      
I so want to buy this book but I don't know if it's right for me yet even after reading the sample.

Is there anything practical I can expect to use as a result of reading it and going through the exercises?

mahmud 1 day ago 0 replies      

I'm currently taking Yale's free online course, Listening to Music. It should be a great complement.


jroll 2 days ago 1 reply      
Out of curiosity, are the examples intended for python 3? I noticed in the sample you're passing around 1/4, 1/2, etc. which doesn't do a whole lot in 2.x. :)
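The point being: under Python 2, `1/4` is integer division and silently evaluates to 0, so duration literals like that only behave as intended under Python 3 (or under 2.x with `from __future__ import division`). A quick illustration:

```python
from fractions import Fraction

print(1 / 4)           # 0.25 on Python 3; 0 on plain Python 2
print(1 // 4)          # 0 -- explicit floor division on either version
print(Fraction(1, 4))  # 1/4 -- exact, handy for note durations
```

Using `Fraction` sidesteps the ambiguity entirely and keeps durations exact.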
richardjs 18 hours ago 1 reply      
The site says a paperback edition is coming... is there a way we can sign up to be notified when it's available?
corin_ 1 day ago 1 reply      
Will grab it and read when I next have some space on my reading list, but as a former professional musician and occasional amateur coder, I'm pretty much the exact opposite of the target audience, so will be interesting to see how it reads for me.
juanre 1 day ago 0 replies      
This is great, Pedro. Just bought a copy. I've tried ---and failed--- many times to learn music; I hope this will be the one. Now if only you could find a way to include a couple of properly tuned ears in the package...
cardamomo 1 day ago 2 replies      
I love this as a pedagogical tool. Composers and hackers have a lot more in common than many folks realize, so to teach the basics of composing in a familiar language makes a lot of sense.

That said, where do we go from here? Many of the challenges in writing contemporary music are in fact notational challenges. We have a system of music notation that developed largely alongside the musical styles of the baroque and early classical eras, which tends to emphasize discrete pitches and a "divisional" model of time. (That is, the only allowable note lengths are those that can be expressed roughly as multiples of powers of 2.)

This book seems like a great way to get your toes wet, but what is the geek or nerd to do when their compositional ideas begin to butt up against what is possible within western notation and, indeed, pyknon?
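The "divisional" model of time described above is easy to state precisely: conventional note values are 1/2^n of a whole note, and a dot extends a value by half again. A small sketch of that grid, and of a duration that falls outside it:

```python
from fractions import Fraction

# Whole, half, quarter, eighth notes...
plain = [Fraction(1, 2**n) for n in range(4)]
# ...and their dotted variants (half again as long).
dotted = [d * Fraction(3, 2) for d in plain]

print([str(d) for d in plain])   # ['1', '1/2', '1/4', '1/8']
print([str(d) for d in dotted])  # ['3/2', '3/4', '3/8', '3/16']

# A quintuplet division like 1/5 of a whole note has no place on this grid:
print(Fraction(1, 5) in plain + dotted)  # False
```

Notating that 1/5 is exactly where western notation (and tools built on its assumptions) starts to strain.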

gboyd 1 day ago 0 replies      
Pure Data is awesome! It's a "patcher" language, which makes it pretty approachable for musicians, who are already used to the "plugging in wires" paradigm.

Also with (http://libpd.cc/) you can embed your Pd code almost anywhere!

I'm using Pd/libpd for an audio-focused mobile project --- so far it's a great architecture, perfect separation of concerns, keeps the audio guts cross-platform. Plus everything is permissively licensed.

Supercollider seems to have a lot of traction with the improvisational community, folks who do "live coding", i.e. hacking at curly-bracket syntax in a performance setting. But it's GPL-licensed and, afaik, not easily embeddable on mobile devices.

P.S. Custom python scripts against Ableton work great if you are using a hardware controller that supports the Framework classes. You can access pretty much the entire Max for Live object model, via python. It's definitely an unsupported back door though.

Here's a tutorial: http://remotescripts.blogspot.com/2010_03_01_archive.html

ninjin 1 day ago 0 replies      
This seems like a great read! I have wished for years to dive into music theory and this would suit me great. I would prefer a physical copy though, any chance of seeing an e-mail list where you can get back to people that want to wait for the paper version? I signed up for notifications for volume 2 but I didn't see anything similar regarding the paper version.
bloat 1 day ago 0 replies      
Looks great! Now we need an edition using Overtone...


RockofStrength 1 day ago 0 replies      
Musicnovatory.com presents quite a few interesting music theory concepts, though it is quite opaque and off-putting in parts. For example, they present a generative binary theory of rhythm, a 'tetrad' theory of chords (by fifths and 'metamorphoses'), and a whole array of binary open/close 'entities' (rhythm, harmony, and 'melolines' (the chord tones of the melody)).
dmansen 1 day ago 0 replies      
Great stuff. Bought myself a copy after reading that you're strongly influenced by SICP - good enough for me :)

Edit: Forgot to mention, I spent a weekend writing a bunch of similar music manipulation tools in clojure if anybody is interested in checking it out: https://github.com/dmansen/composition-assistant

gbog 1 day ago 0 replies      
Bet someone could generate the Art of the Fugue from a few functions...
khafra 1 day ago 0 replies      
Speaking of which, who's going to be at Nerdapalooza next weekend?
efields 2 days ago 1 reply      
This could be perfect for me. I studied guitar in high school and played trombone for 8 years, but I've all but abandoned both and haven't looked at sheet music for years. I'll admit that I never got the structures of chords and why they sounded like what they sounded like and what types of sounds should logically proceed another.

@kroger Will this help?

dmitrig01 2 days ago 1 reply      
Kudos for Steve Reich on the very front!
tprice7 1 day ago 0 replies      
For those curious, here are a few of the author's musical compositions: http://pedrokroger.net/compositions/
twfarland 2 days ago 1 reply      
What a beautiful idea. Will a physical copy be available at some time?
marianoguerra 2 days ago 0 replies      
exactly what I was looking for, just bought it, thanks!
criveros 1 day ago 1 reply      
So, if I want to just learn the guitar and be able to play classics like Wonderwall, will this book help me?
.mail app dotmailapp.com
207 points by tomazstolfa  2 days ago   119 comments top 35
astral303 2 days ago  replies      
Best of luck w/ the implementation. However, reading the concept, I am unimpressed.

In Mail.app, I can already mark e-mails with different-colored flags, which allows me to come up with my own "get back soon" or "get back later" type of a system.

The assertion that e-mail hasn't changed since the 1970s is false. Modern e-mail clients manage it much better. Compare Mail.app, Outlook, and GMail to PINE--the difference is staggering. The true innovator in the e-mail space has been GMail, which brought things like autosave, embraced the fact that it is not truly productive to file every single e-mail into a purposeful folder, and embraced tagging over folders. GMail also really pushed the conversational view and presented it in a single-screen way that was innovative (look how long it took other e-mail clients to catch up with that... and no, "group by thread" did not compare).

I'm also very wary of anyone advertising "clean typography." People start adding spacing and now I only end up previewing 2/3rds of what I could preview in one screenful.

kyro 2 days ago 2 replies      
First thing I thought of when I read the heading Actionsteps was being able to tailor actions to each specific email. For instance, you receive an email from Twitter saying someone's following you, and you're given the options to "follow", "@reply", "ignore", etc., and similarly for Facebook. Receive a bill from AT&T and your next steps are "pay bill", "file as important", "put on to-do."

You're essentially containing all of the work necessary to resolve the subject of the email within the client itself. You could allow for app plugins/extensions so that others could develop hooks into various services.

Other than that, it looks clean and simple. I'm eager to try it out when it's launched.

jvm 2 days ago 4 replies      
Whoo boy, does that site ever not work without JavaScript... Sort of ridiculous, since it's a static page advertising a desktop app, neither of which should require JavaScript at all.

EDIT: Oh God, and hot pink highlighting does NOT complement their color scheme. And for all its javascript fanciness, it doesn't handle resizing at all. Sorry to be so negative, this site just pushes all my buttons.

davidcollantes 2 days ago 2 replies      
From: https://vanschneider.squarespace.com/mail-the-first-summary --
"Currently I'm developing a prototype and then hopefully move on to Kickstarter. But there's no plan yet if it's going to be a WebApp, MacOS or Windows app."

In other words, vaporware (so far).

antr 2 days ago 3 replies      
I've just installed OS X Mountain Lion... and gone back to Apple's Mail (no more Sparrow). I'm too tired of chasing yet another email/to-do client that might get acqui-hired and left with no future support.

Having said that, the idea behind the .mail app is elegant, well thought out, and a step forward. Kudos.

jmduke 2 days ago 1 reply      
It's good to see some actual followthrough on the initial concept.

I honestly like the proposed Attachments and Notifications features (I'm also not a mail client aficionado, so I don't know if these are particularly groundbreaking), but the Actionsteps thing sounds clumsy to me. If my problem is that I don't have enough time to parse, read, and respond to all of my email, I don't understand how adding another step will alleviate that.

calinet6 2 days ago 0 replies      
I'd like to be able to read my e-mail address while typing it, please...

    // presumably the input's text color matches the background; make it legible
    $("input#email").css("color", "#333");

Otherwise, fantastic. Thanks. Ideally you just buy Sparrow and add the 2 features it was missing. We can only hope.

3JPLW 2 days ago 2 replies      
What is it? It took me quite a while (and a few web links) to discover that it's a Mac email client.

Interesting how many new email clients are emerging this summer. I'm taking note due to the likely abandonment of Sparrow. Also due to come out "this summer" is Mail Pilot: http://mail-pilot.com/

chimeracoder 2 days ago 0 replies      
> "When the first email was sent in the early 1970's there was no big difference to the email we know today." - And this is the problem.

Actually, this is the greatest advantage of email: its extreme portability across platforms and clients.

We can reinvent the way we interact with it, but we don't need to reinvent the technology itself. This is an immensely important distinction.

5h 2 days ago 0 replies      
With the GMail web interface I already have all of these features.

Actionsteps are stars in GMail; I use the yellow and red bangs and the red star... just press s to rotate.

Attachments have never been an issue, but maybe that's just me; searching for "from:a@b has:att<down><tab>" is quite quick enough.

And using filters and labels gives me the notification functionality, e.g. messages from Facebook get a label, and I can see "facebook (3)" or whatever on the left, but it will work for any type of email I wish.

"Meh" is my main reaction to this; unless it has something new and is as transparent about synchronisation across devices as GMail, I won't be interested.

joshaidan 2 days ago 1 reply      
"Actionsteps solves the flagging problem, where every email you know that you need to respond to is of equal importance."

Wouldn't it be interesting if the sender of an email could specify what Actionsteps are required for a given email? E.g. the sender specifies that an email should be: read by the receiver, replied to by the receiver, forwarded to a specific department, acted on as a specific task, etc.

jenius 2 days ago 0 replies      
Perfect timing on this, right after the demise of Sparrow. I was starting to really get down about the idea of having no really nice desktop mail client for OS X.

Really looking forward to checking out this app, and I really hope Tobias is either secretly an incredible OS X programmer or is going to get one on board to help, because it would be a shame to see a beautifully designed interface like this brought down by poor implementation.

89a 2 days ago 1 reply      
Someone wants to work at Google/Facebook then.

But on a more serious note, it doesn't look like a Cocoa app to me, so I'm not that keen on it. Too much custom UI; it looks more at home on Windows 8.

almost 2 days ago 0 replies      
Looks interesting, there's definitely stuff that can be done to make email better. It'll be interesting to see how this works out.

If you're interested in this and you're a GMail user you might also be interested in Active Inbox[1] which adds some useful features on top of Gmail. It lets you mark emails GTD style as "Action" (requires an action), "Waiting" (waiting on someone else) or "SomeDay" (there's an action to take, but you might not do it now). It also has an easy way to sort emails by project and some other useful features.

Disclaimer: I met Andy of Active Inbox recently and ended up doing a few days work on the product. But I was a happy (and paying) user of Active Inbox before that!

[1]: https://www.activeinboxhq.com/

jameswyse 2 days ago 0 replies      
For anyone who is confused, this is the implementation of a concept which was posted 2 weeks ago.

It's looking really good, can't wait to try it!

Link to concept post: http://www.vanschneider.com/work/mail/
And original HN post: http://news.ycombinator.com/item?id=4223869

mehulkar 2 days ago 0 replies      
As long as you don't sell your product to Google and stop development as soon as I get used to it and start liking it.
axx 2 days ago 0 replies      
Pretty nice timing:
- Sparrow announced that they'd been acquired by Google on the 20th
- on the 23rd, http://dotmailapp.com/ was created (as far as I can tell from the DNS record)
bigdubs 2 days ago 0 replies      
A bone to pick with the name: a leading "." in a filename usually causes the Finder (and ls) to omit the file from listings.

Maybe not a great look for an email app.
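The convention runs deeper than the Finder -- even Python's glob module skips dot-files by default, just like a bare `ls`. A quick sketch with hypothetical filenames:

```python
import glob
import os
import tempfile

# In a scratch directory, create one regular file and one dotfile.
d = tempfile.mkdtemp()
open(os.path.join(d, "notes.txt"), "w").close()
open(os.path.join(d, ".Mail"), "w").close()

# Like `ls`, glob's "*" pattern does not match names starting with a dot.
visible = [os.path.basename(p) for p in glob.glob(os.path.join(d, "*"))]
print(visible)  # -> ['notes.txt']  (".Mail" is hidden)
```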

adhipg 2 days ago 0 replies      
Isn't there too much emphasis on 'click' on the page? The interface gives the feeling that it will not benefit power users.

I usually never leave my keyboard when reading/writing emails. Keyboard shortcuts on GMail (and Sparrow) have been the most important feature for me to get my mail done faster.

raikia 2 days ago 0 replies      
I'm confused. Is this only for Mac (it looks like it from the screenshots)? I find it a terrible business move for a start-up to release its product only for a platform that holds much less than 10% of the world's personal-computing market share.

It looks nice, but I think you've shot yourself in the foot by trying to release a Mac app first. Windows doesn't have any good email application that is still actively developed. Pretty much the only choices are Outlook and Thunderbird (the latter of which, it was just announced, will see no further feature development). People are looking for the next email client to flock to...

dsirijus 2 days ago 0 replies      
I didn't think the concept was flexible enough when it originally ran here on the HN front page.
modarts 2 days ago 0 replies      
Issue in Chrome [v 20.0.1132.47 m]

Focusing on the email textbox and then tabbing out causes the box to shift down (pretty sure that's not intentional).

shocks 2 days ago 0 replies      
Any plans for cross-platform support or is this yet another Apple only affair?
Iaks 2 days ago 1 reply      
Just a heads up, if the creator is reading here: your page loads no content with JavaScript blocked. A minimal plea for unblocking might gain you extra traffic -- I know I didn't care enough to add you to my whitelist, since I had NOTHING to go on.
winter_blue 2 days ago 1 reply      
It would be great if, instead of having to build a new app from the ground up every time we want to add a few features, we could simply add those features to an existing application.

This would be a lot easier if that application was open source, but having a modular architecture and malleable/extensible design would be more important.

Does anyone have thoughts on applications that are designed so that features can be added easily?

state 2 days ago 0 replies      
What I find most interesting about this is how successfully this PR campaign has been executed. I think some of the criticism of the app is warranted: we have nothing concrete to look at, and no real information about how the app will be developed. It seems likely to be a case of someone believing too much in the power of 'design' while overlooking the importance of 'implementation'. If I could somehow be convinced that these two things were conceived of holistically by the creator, I'd be much more interested.

I do, however, really applaud the precision with which this project has appealed to people's desires.

kwamenum86 2 days ago 0 replies      
Smells like an acquisition in a year or so (and I mean that as a compliment). Congrats in advance.
glennos 2 days ago 0 replies      
This looks nice, but doesn't strike me as a revolution.

What I would love to see is a mail client that does away with the antiquated inbox/sent paradigm, replacing it with active/archived. If I send an email, it would appear in my active list until the expected outcome is achieved (e.g. a reply); at that point I archive it. Sent and received mail would sit in the same view (as on Facebook, etc.); why they don't already in clients is beyond me. I shouldn't have to manage two views to ensure I've acted on received mail and others have acted on my sent mail.

ChronoGawd 2 days ago 2 replies      
Seems very similar to Mail Pilot (www.mail-pilot.com), although Mail Pilot will apparently still be a monthly paid service, while .Mail apparently won't be (at least I couldn't find whether it is).

Which is a big deal for me. With .Mail, it seems you pay once for the app, as with Sparrow (which it looks similar to), and don't have to pay monthly for it.

I plan on buying it if it comes out as shown and does what it promises.

Anyone know when it is coming out? And whether it will have a monthly cost?

ricardobeat 2 days ago 0 replies      
Now is the perfect time to re-create Sparrow.
stefanve 2 days ago 0 replies      
Would love to have an Ubuntu version of this. Will pay up to $30. Or any other mail client.
tastive 2 days ago 0 replies      
Assuming one of the creators reads this: "A clean and Actionsteps."

I would like better attachment management than I've got now, so I'll probably take a peek. Thanks.

brianbreslin 2 days ago 0 replies      
Am I the only one who worries about the branding of this?
dysoco 2 days ago 0 replies      
".Mail is coming to your dock" Well... I suppose it won't come out for Linux.
joamag 2 days ago 0 replies      
Looking awesome
The Making of Warcraft 1: Origin of the series & creation of multi-unit select codeofhonor.com
206 points by pwnyx  2 days ago   59 comments top 13
knowtheory 2 days ago 1 reply      
> Before I started in the game industry I had worked extensively with several low-end “Computer Assisted Design” (CAD) programs like MacDraw and MacDraft to design wine-cellars for my dad's wine cellar business, so it seemed natural to use the “click & drag” rectangle-selection metaphor to round up a group of units to command.

> I believe that Warcraft was the first game to use this user-interface metaphor. When I first implemented the feature it was possible to select and control large numbers of units at a time; there was no upper limit on the number of units that could be selected.

I wonder if this wasn't a possible case of patent infringement given the current definitions folks are using to sue.

twelvechairs 2 days ago 3 replies      
> It's surprising now to think what might have happened had Blizzard not controlled the intellectual property rights for the Warcraft universe -- it's highly unlikely Blizzard would be such a dominant player in the game industry today.

This seems a very important observation. It's interesting how most of the major pre-computer fantasy universes with big fan-bases (Battletech, Dungeons and Dragons, Warhammer, Star Wars etc.) never managed to really transition seriously into computer games (a few hits but lots of misses, and certainly no dominant series like Warcraft and StarCraft). I think the licensors of these worlds have been guilty of being far too greedy over the years as a general rule, and hurt their own pockets through doing so...

Also - great article. I loved playing Warcraft 1, and despite the fact that the 'RTS race' soon exploded and left it in the rear-vision mirror, it was still incredibly important in defining the genre... (as a side note, most of the other Blizzard games around this time were great too - must have been a great team).

MikeCapone 2 days ago 3 replies      
I spent my youth playing Dune 2 (and then Warcraft, etc).

This brings back so many memories. I had to find some Dune 2 gameplay on YouTube... Hoping some people here will enjoy it too:


Being a fan of StarCraft 2, I can't believe how painful it was to have to select each unit individually... Glad the mechanics of RTSes have evolved.

cpeterso 2 days ago 1 reply      
I read that World of Warcraft was born during playtesting of prototypes for Warcraft III, the first 3D entry in the Warcraft series. People were having so much fun just running around with their 3D heroes that the concept grew. This is a great example of letting the product lead you.
MBCook 2 days ago 1 reply      
> While Warcraft was a DOS “Protected Mode” game, the modem driver could be called from both Protected Mode and Real Mode due to quirks in the DOS operating system and the 80386 chip-architecture it ran on [...]

Could someone provide more information on this? I'm guessing it had to do with the fact that the serial interrupt could occur while some DOS system call was executing 16-bit code (or perhaps BIOS code)?

everlost 2 days ago 2 replies      
> Stu is quite memorable as a voice actor in the role of Human Peon, where his rendition of a downtrodden brute-laborer was comedic genius.

Nice to finally know the person behind the still-funny-in-my-head Warcraft phrases -- "Yes me lord", "Who me", "Uhuhuh"

diminoten 2 days ago 0 replies      
This story is a great illustration of how seemingly unimportant decisions get carried down through a company for years into the future.
MikeCapone 2 days ago 2 replies      
I can't wait for part 2. I wish there were a book written about the early days of Blizzard or Westwood Studios and great companies like them, along the lines of Masters of Doom about id Software, or Revolution in the Valley about Apple.
MikeCapone 2 days ago 1 reply      
This makes me wonder; would it be possible to get software patents on various game mechanics? What if the first RTS producer had patented a lot of it? Is that so different from smartphone interface patents?
justjimmy 2 days ago 0 replies      
Thanks so much for the article. I grew up on W2 and C&C, and after 15 years it's always nice to learn how it all came to be, how it worked in the background, and all the little details.

"Work, work."

kleiba 2 days ago 0 replies      
For some reason I love reading about the development of those old game classics, be it Warcraft, Prince of Persia, or Paradroid...
PuercoPop 2 days ago 3 replies      
What about Dune II? It was way ahead of its time. If I remember correctly it had unlimited unit select.


incision 2 days ago 1 reply      
He admits to the Warhammer influence.

Mass fanboy seppuku in 5, 4, 3...

Norvig vs. Chomsky and the Fight for the Future of AI tor.com
204 points by fogus  3 days ago   146 comments top 27
knowtheory 3 days ago 3 replies      
It's a little bit frustrating to read a rehash of an argument that was cutting-edge maybe back in the late 90s, especially one that is so poorly written and framed as a battle between two intellectuals.

Chomsky is past his heyday. He has been seminal in his field, but he's no longer doing research which pushes at the boundaries of our understanding of language, how to model it, or what the fundamental nature of language-understanding systems is. (As one might infer, I come from a non-Chomskyan school of linguistics.)

Given that we have actual data and research about large-scale systems that do interesting things (including the massive artificial neural network that Google built last month, see: http://www.wired.com/wiredscience/2012/06/google-x-neural-ne... ), reporting as substance-free and obfuscating as this is a real frustration, when we could be talking about more interesting things, such as what a solid operational definition of meaning is, how exactly heuristic/rule-based systems actually differ from statistical mechanisms, and whether or not all heuristic systems can (or should) be modeled with statistical systems.

The framing of this article is particularly galling because there are so many non-Chomskyan linguists out in the world who operate fruitfully in the statistical domain. Propping Chomsky up as somehow representative of all linguists is pretty specious and a bit irritating.

Jun8 2 days ago 9 replies      
OK, let me start with two facts, one objective, one personal: (i) Noam Chomsky is a genius with many contributions to linguistics and computer science, and (ii) I think his overall influence has been damaging to linguistics.

Here's a summary of Chomsky's career in layman's terms: As everyone knows, Chomsky first came to prominence with his critique of Skinner (who, as everyone also knows, was a total psycho). He pretty much created linguistics as we know it (at least in the US; there were some numbskulls in Europe who still doubted the new order), starting from the main thesis of linguistic universals, which holds that all humans possess the same language faculty, i.e. the wide range of linguistic differences between, say, English and Mandarin are just on the surface. This was a welcome relief from the Sapir-Whorf mumbo-jumbo which held that Eskimos had hundreds of words for snow and that language constrained how we think. Chomsky has also been very active in politics (he's actually much better known to the general world for his political books), pointing out the evils especially of the American brand of capitalism (is there any other kind?) and its corrosive influence on the world, e.g. Iraq, Afghanistan, etc. He also points out errors in certain approaches in economics, e.g. see http://en.wikiquote.org/wiki/Noam_Chomsky#Capitalism, without holding a degree in the field, but everybody does that.

Chomsky's greatly damaging influence on linguistics is due to the fact that his speculative and simplistic (at least originally) views on how the brain processes and learns language have stifled research in promising fields by decades. The main problem I have with him is that the cause of the shortcomings of his theory seems to be not lack of knowledge (very little was known about cognition in the 60s), which of course handicaps all pioneers of science, but politics (I detest politically motivated scientific theories). AFAIK, his universalist views were motivated by his political beliefs.

Luckily, starting in the 90s, Chomsky's chokehold on linguistics has slipped somewhat. Researchers such as Leda Cosmides have ventured into research on linguistic relativity (http://en.wikipedia.org/wiki/Linguistic_relativity). Skinner's theories are making a comeback in academic circles (http://www.theatlantic.com/magazine/archive/2012/06/the-perf...).

So, what does all this mean for the current debate? I think it's time to retire the "old guard"! Let us acknowledge their breakthroughs and their contributions, but also their limitations, and move on.

phaedrus 3 days ago 7 replies      
I spent about ten years working on Markov-based chat programs. I gave up on them when I realized that no matter how sophisticated your statistical model, it will never be more than a statistical analysis of text unless it includes some rich rule-based model of mental processes and mental objects. It may be that such a model must itself be fuzzy and probabilistic, but it must exist. Therefore I come down firmly on the side of Chomsky in this debate: we should pursue theories of intelligence, and statistical models without any theory do not advance our scientific understanding of AI, however practical their application may be at present. This is not to say statistical methods do not work -- of course they work. What I am saying is that this is not a path that leads to true understanding of intelligence, any more than spectral analysis of the EMF emissions of a running computer would lead to a theory of computation.
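For readers who haven't built one: a minimal sketch of the kind of word-level Markov model described above -- pure statistics over adjacent words, with no model of meaning anywhere (toy corpus, illustrative only):

```python
import random
from collections import defaultdict

def train(text):
    """Build a first-order word-level Markov model: word -> list of observed successors."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def babble(model, start, n=8, seed=0):
    """Generate up to n words by repeatedly sampling a successor; no semantics involved."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the log"
m = train(corpus)
print(babble(m, "the"))
```

The output is locally plausible and globally meaningless -- exactly the limitation the commenter ran into.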
robg 3 days ago 5 replies      
This is one of those rare moments in intellectual life where, having been in the room and now seeing the debate develop, it becomes clear that the resulting hype isn't (wasn't) loud enough.

This distinction marks the real turning point in AI from abstract, grand claims with highly restrictive evidence toward engineering that simply works. Who cares about the ontology when we can recreate? It's like saying airplanes don't properly explain flight because they don't replicate how birds do it. Who cares? We can fly (and translate and soon reason) artificially.

It's clear that Chomsky and Universal Grammar have held back the entire field of AI (and at MIT). There isn't one algorithm in the human mind to decode all of our mental capabilities. That's mistaking subjectivity for objective lessons. Trying to recreate that phantom has led to rule tables in AI, constraints on how the mind must operate. Instead, by allowing those fuzzy boundaries to accumulate with evidence, statistical approaches win in the long term of our lives and in this debate.

Kuhn knew what happens to dinosaurs.

debacle 3 days ago  replies      
They have two different definitions of "artificial intelligence," which is where the schism seems to be arising from.

Chomsky takes the academic approach - artificial intelligence is the simulation of humanlike (or even possibly mammalian) intelligence.

Norvig is taking the engineering approach - artificial intelligence needs only to pass the Turing test.

They're both right, both approaches have value, and they both are bound by our limited technology at the moment.

In the end, though, Norvig will lose out. Sure, he'll make the finish line first - an AI capable of 'passing' the Turing test, but in order to have real intelligence you need an analytical engine (or brain, if you will) that can prioritize data without fiddling with bits. In the Norvig solution, someone will always have to be fiddling with the bits.

Chomsky's approach, on the other hand, will result in a 'true' artificial intelligence, the way neurologists understand it. It's just going to take a lot longer to get there.

rm999 2 days ago 0 replies      
I've been in machine learning/AI for ten years now - from undergraduate research, to graduate school, to industry - and I find debate like this fascinating. My take on it is that our understanding of what we will be able to do in the future is very unclear, and what we will want to do is very open-ended. So the debate is worth having, but it won't really resolve anything.

Statistical models may (in my opinion probably will) end up being an "AI" dead-end, eventually falling into other fields such as algorithms, like game trees and logic-based agents did. That's not to say the current statistical approach is a bad idea; on the contrary, I think these techniques are useful and simple enough that they will become fairly ubiquitous in CS.

On the Chomsky side of the argument, AI researchers have consistently been frustrated in the past 50 years, to the point that studying AI today makes you sound like a joke. But their goal is a noble one. Anyone can understand how great it would be to have a human-level intelligence on a chip - this would fundamentally change the World. The fact that we haven't dented this problem doesn't mean the problem isn't worth solving, it just means our understanding of what it takes to build this kind of AI is in its infancy.

I almost feel like Norvig and Chomsky are arguing in parallel. They are both right, but their arguments are valid on different time scales. Today, the Norvig approach will easily win out; Chomsky has nothing and is largely irrelevant. But Chomsky is, IMO, correctly predicting what will need to happen to move beyond an eventual roadblock in a much grander AI.

azakai 2 days ago 0 replies      
First thing, please read the actual article by Norvig, it is excellent,


Second: I found it astounding that the article never mentions Skinner. Surely this article is trying to do to Chomsky what Chomsky did to Skinner in 1959 ("A Review of B. F. Skinner's Verbal Behavior", http://www.chomsky.info/articles/1967----.htm ).

Chomsky basically marked the beginning of the modern era of cognitive psychology with that essay, displacing the previous paradigm of behaviorism. Norvig's article is similar in form in some ways to that article, and similar in its goals (to argue for a new paradigm over an older one). As I was reading it, I was sure Norvig had that context in mind. So I was surprised to read

> So how could Chomsky say that observations of language cannot be the subject-matter of linguistics? It seems to come from his viewpoint as a Platonist and a Rationalist and perhaps a bit of a Mystic

Well, no, Chomsky explained very well why he opposed observations being the subject matter of linguistics in his 1959 essay. Skinner's behaviorism looked only at observations and experience, and did away entirely with internal mental states. That might seem bizarre to us today, and the reason is in large part the shift heralded by Chomsky's article from behavioral psychology to cognitive psychology. In the latter, the goal is to understand the internal processes that are involved in psychology (or specifically language).

Statistical language models are not behaviorism. But they do share a lot with it, they are based primarily on raw empirical observations as opposed to deep models, so it is natural for Chomsky to oppose them on similar grounds (and not due to Platonism or Rationalism, although I suppose you can speculate that those motivated his 1959 essay too).

Side note, we can speculate that if Skinner had today's computers and statistical modelling methods, the shift from behaviorism to cognitivism might never have happened, seeing as the statistical approach is so successful.

orbitingpluto 3 days ago 1 reply      
I know a card counter. I showed him how to condition probabilities to determine how to best play. He went for the full Monte Carlo method and he lets his simulation run for a week before he starts using it "just to make sure". It's frustrating because he doesn't get that his results are statistically significant after about 30 seconds of runtime. He still makes money doing it. The results are tangible, but he's still just mucking about.

'Quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the "old one." I, at any rate, am convinced that He does not throw dice.' --Einstein

Statistical methods can work but they are unsatisfying to the scientifically curious. You're not really a scientist if you create something that works and you don't really know why. (Not to say that the method doesn't have value. Sometimes you have to play with your Lego before you grow up.)

VikingCoder 2 days ago 2 replies      
I picture Chomsky as Kepler, trying to build orbits out of Platonic solids.

Until Kepler had access to Brahe's data, he was not going to be able to come up with his theories of planetary motions.

Worse than that, the laws of planetary motion present a simplistic view of the universe: what happens when a bunch of small objects orbit a very massive object. I think they wouldn't help you out at all in trying to understand planets moving in a binary star system.

There is no analytic solution to the N-body problem. We can only simulate the motions of a group of massive bodies by iteratively applying the laws of gravitation that we have deduced. Knowing the mathematical properties of how objects behave in a gravitational field and actually understanding HOW GRAVITY WORKS are two enormously different things. Newton was frustrated with the theory of gravity because it was, like Norvig's models, just a model -- with no explanation of why. But the model allows you to make falsifiable predictions and understand how the universe will behave. Looking for the Higgs boson is awesome -- but there is potentially no equivalent in the linguistic world.

Chomsky asks us to ignore F = G * m1 * m2 / r^2, because there's no WHY attached to it.

PS - this understanding of the history of science is brought to you by Carl Sagan's Cosmos TV series. I have no deeper insight than that.
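The "iteratively applying the laws of gravitation" point in miniature -- a sketch of one explicit-Euler step of Newtonian gravity (real simulators use better integrators such as leapfrog; the Earth-Sun numbers below are rough):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(bodies, dt):
    """One explicit-Euler step: bodies is a list of dicts with m, x, y, vx, vy."""
    # Accumulate pairwise accelerations from F = G * m1 * m2 / r^2.
    acc = [[0.0, 0.0] for _ in bodies]
    for i, bi in enumerate(bodies):
        for j, bj in enumerate(bodies):
            if i == j:
                continue
            dx, dy = bj["x"] - bi["x"], bj["y"] - bi["y"]
            r2 = dx * dx + dy * dy
            r = r2 ** 0.5
            a = G * bj["m"] / r2  # magnitude of i's acceleration toward j
            acc[i][0] += a * dx / r
            acc[i][1] += a * dy / r
    # Update velocities, then positions.
    for b, (ax, ay) in zip(bodies, acc):
        b["vx"] += ax * dt
        b["vy"] += ay * dt
        b["x"] += b["vx"] * dt
        b["y"] += b["vy"] * dt

# Rough Earth-Sun setup (SI units); iterate hour-long steps for one simulated day.
sun = {"m": 1.989e30, "x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0}
earth = {"m": 5.97e24, "x": 1.496e11, "y": 0.0, "vx": 0.0, "vy": 29780.0}
for _ in range(24):
    step([sun, earth], 3600.0)
```

After a simulated day the Earth has swung along its orbit while staying at essentially the same radius: prediction without any 'why'.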

brudgers 3 days ago 1 reply      
Intellectually, there seems to be something as wrong with avoiding anthropomorphism when discussing human endeavors (such as language) as there is with anthropomorphic explanations of erosion or chemical reactions. Skinnerian approaches to language may leave people unsatisfied because there is no story, just clinical observation.

Norvig's approach (as characterized in the article) takes the "Artificial" in "Artificial Intelligence" to include the mechanism by which an intelligence makes decisions. Chomsky's aesthetic of linguistics applied to AI would treat "Artificial" as a description of the platform in which an intelligence is embodied (i.e. non-biological) while requiring the platform to operate linguistically on the same principles as a "natural intelligence."

Norvig's approach (as characterized in the article) is essentially a better Eliza (or Ford's faster horse).

If one takes the Turing Test as scientifically meaningful rather than as an engineering standard, then one falls into one camp or the other, and the Norvig-Chomsky debate is over a pseudo-problem. "Artificial Intelligence" is in that sense metaphysical jargon.

mootothemax 3 days ago 5 replies      
Isn't this basically an argument over John Searle's Chinese Room thought experiment?

It supposes that there is a program that gives a computer the ability to carry on an intelligent conversation in written Chinese. If the program is given to someone who speaks only English to execute the instructions of the program by hand, then in theory, the English speaker would also be able to carry on a conversation in written Chinese. However, the English speaker would not be able to understand the conversation. Similarly, Searle concludes, a computer executing the program would not understand the conversation either.


SlipperySlope 2 days ago 0 replies      
I am an entrepreneur/researcher working to create artificial intelligence. My approach follows Turing's suggestion that one should create a child mind and proceed to educate it. I employ Construction Grammar in my English dialog system - not a statistical parser/generator. Operating on a smartphone, I use available statistical speech-recognition engines to transform speech to text, but from that point onwards the server-side processing in Construction Grammar is symbolic, thus engineered from first principles. Likewise, for English generation, my discourse planner emits structured RDF that the bi-directional Construction Grammar generator transforms into a text utterance. That symbolic text is then input to a statistical text-to-speech engine available on the smartphone, to speak to the user.

As an example of the power of symbolic approaches, my parser has a complete symbolic analysis of English auxiliary verb constructions, producing unique, meaning-rich, RDF-compatible semantics for:

I am learning about computers.

We are learning about computers.

We will be learning about computers.

I could be learning about computers.

I have been learning about computers.

I better learn about computers.

I had better learn about computers.

I dare learn about computers.

I did learn about computers.

I do learn about computers.

He does learn about computers.

I had learned about computers.

He has learned about computers.

I have learned about computers.

He is learning about computers.

I need learning about computers.

I ought to learn about computers.

I ought to be learning about computers.

I used to learn about computers.

I was learning about computers.

We were learning about computers.

Because of the so-far limited success of my work, I am inclined to agree with Chomsky's AI argument, despite using a modern grammar opposed to his linguistic principles.

An artificial intelligence will, I think, use both statistical and symbolic (e.g. procedural) techniques, with the most useful intelligent behavior -- e.g. an AI designing, writing, and testing software -- being symbolic.
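The sentence list above shows the distinctions such an analysis must make. As a toy, hypothetical illustration (nothing like the poster's actual Construction Grammar system, which produces RDF-compatible semantics rather than flat feature dicts), auxiliary patterns can be mapped to tense/aspect features:

```python
# Toy mapping from auxiliary-verb patterns to tense/aspect features.
# Purely illustrative; longer patterns are listed first so they win.
AUX_PATTERNS = [
    (('have', 'been'), {'tense': 'present', 'aspect': 'perfect-progressive'}),
    (('will', 'be'),   {'tense': 'future',  'aspect': 'progressive'}),
    (('had',),         {'tense': 'past',    'aspect': 'perfect'}),
    (('has',),         {'tense': 'present', 'aspect': 'perfect'}),
    (('have',),        {'tense': 'present', 'aspect': 'perfect'}),
    (('am',),          {'tense': 'present', 'aspect': 'progressive'}),
    (('is',),          {'tense': 'present', 'aspect': 'progressive'}),
    (('are',),         {'tense': 'present', 'aspect': 'progressive'}),
    (('was',),         {'tense': 'past',    'aspect': 'progressive'}),
    (('were',),        {'tense': 'past',    'aspect': 'progressive'}),
]

def analyze(sentence):
    """Return tense/aspect features for the first auxiliary pattern found."""
    words = sentence.lower().rstrip('.').split()
    for pattern, feats in AUX_PATTERNS:
        n = len(pattern)
        if any(tuple(words[i:i + n]) == pattern for i in range(len(words) - n + 1)):
            return feats
    return {'tense': 'present', 'aspect': 'simple'}
```

For example, `analyze("I have been learning about computers.")` yields the perfect-progressive reading, while "We were learning about computers." comes out past progressive.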

PaulHoule 3 days ago 1 reply      
Well, in the big picture, Chomsky created an activity which keeps linguists very busy. His approach, however, has contributed very little to language engineering.
mcguire 2 days ago 0 replies      
Historically, AI has been divided into two related but different approaches. "Strong" AI is interested in understanding and creating Minds; figuring out what intelligence is, how it works, how we do it, and how it could be done in general. "Weak" AI is interested in doing things that couldn't be done before; things that we do not have good algorithms for, or don't have any algorithms at all.

Those two are not opposed. Any advance on either side helps the other. In this argument, Norvig is representing an extreme version of weak AI since he seems to be arguing that it's possible that statistical methods are all there is. (I suspect that he isn't actually making that argument, though, but that strong AI's models are currently too simplistic to capture what statistical approaches can do.) Chomsky, on the other hand, seems to be caricaturing strong AI by saying that anything that doesn't directly shed light on the Grand Theory is worthless.

aidenn0 2 days ago 0 replies      
It's a question about engineering vs science. Before Kepler, people actually could predict the motion of the stars and planets through the sky; perhaps not as elegantly or accurately as after Kepler, but to a certain degree, so what?

The AI case is clearly a point where the theories from linguistics are insufficient for engineering purposes. Watson could not have been built on Chomskian linguistics. Maybe the statistical models will advance the theory of linguistics, maybe not. Either way they give us useful tools now, which is better than elegant tools later.

no_more_death 2 days ago 0 replies      
One myth I want to debunk:

Copernicus's theory did NOT do away with epicycles. Search on Google for "copernicus epicycle" and the first article demonstrates my point. The one who did away with epicycles was Kepler. Copernicus believed orbits had to be perfectly circular; Kepler recognized that the data fit better into an elliptical model.

It's not 100% clear whether the author believed the "myth," but hopefully I can set some people straight in this forum.

stcredzero 1 day ago 0 replies      
> If the solar system's structure were open for debate today, AI algorithms could successfully predict the planets' motion without ever discovering Kepler's laws, and Google could just store all the recorded positions of the stars and planets in a giant database

I'm sorry, but this bit is half wrong and simply numerically illiterate. We can store all of the recorded positions of the planets and other bodies in the solar system, but we need models to predict their future positions. This is an important distinction, since we might use such models to save the human race one day.

mbq 2 days ago 0 replies      
The main problem with Chomsky's approach is that it is quite likely that the mechanics of human intelligence are simply incomprehensible to a human intelligence, not because of some crazy construction tricks but because of plain old brute size and complexity.
Judging from much simpler (and thus more deeply investigated) biological systems, like some bacterial metabolisms, we can see that there is no grand design there, only a trivial primitive core and numerous layers of more or less subtle modifiers of modifiers. IMO there is no reason the same can't hold for the brain, and thus the "transition to sentience" is far more continuous than we would like to expect.
6ren 2 days ago 0 replies      
It's true that Engineering at times leads Science. But, from a scientific view, what's the point of a model if you can't understand it? After all, we already know how to create intelligence without understanding it.

While it's conceivable that intelligence is too complex for a human to ever understand (e.g. if not amenable to hierarchical decomposition), that would be very sad news for science.

aangjie 2 days ago 0 replies      
Just for the record, I consider this a simple model. And it's from Norvig: http://norvig.com/spell-correct.html
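That spell corrector really is small. Here is a condensed sketch of the same idea, with a toy word-frequency table standing in for Norvig's big.txt corpus:

```python
import re
from collections import Counter

# Toy corpus; the original trains word frequencies on a large text file.
WORDS = Counter(re.findall(r'\w+', "the quick brown fox the lazy dog the".lower()))

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = 'abcdefghijklmnopqrstuvwxyz'
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def known(words):
    return {w for w in words if w in WORDS}

def correct(word):
    # Prefer the word itself, then known words one edit away,
    # ranked by corpus frequency.
    candidates = known([word]) or known(edits1(word)) or [word]
    return max(candidates, key=lambda w: WORDS[w])
```

With this corpus, `correct('teh')` returns `'the'`: maximum-likelihood correction from a simple error model and word frequencies, in a couple dozen lines.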
sireat 2 days ago 0 replies      
There must be some analogies made to the much smaller field of chess computer programs.

From the 1950s to about 1980 or so, it was thought that the best computer chess program would approximate the way a human thinks about the game. Botvinnik in particular was adamant that such an approach was the right one.

However, most of the progress was made through brute force. Modern chess programs select moves in a way that is far removed from how a good chess player selects moves, yet they can now produce games that seem very "uncomputer-like" and "human".

frobbin 2 days ago 0 replies      
AI research, including speech recognition and machine vision, is currently an ENGINEERING discipline trying to make artifacts that do interesting things. Success is an artifact that works.

Several basic science disciplines are trying to understand how brains work. There is mostly tremendous amounts of experimental facts, difficult to put together, and some theory and modelling to go with it.

Norvig would be confused if he thinks that engineering AI systems automatically counts as building models useful for understanding the brain. If there is an application to understanding brains, it is a welcome accident. It happens that there are signals in the basal ganglia that look like the temporal-difference error signal from reinforcement learning, so maybe RL research can help understand some brain circuitry in that case.

But in general the engineers are trying to get stuff to work, and they are deluded if they think they are simultaneously making progress in understanding how brains work.


For example: why does speech recognition use hidden markov models and N-gram language models? Because they're the best model of how brains understand speech? No! Not at all. HMMs and N-gram models are above all computationally tractable. Easy to implement, not too slow to run.

We have algorithms (such as Baum-Welch and N-gram smoothing techniques) to get them to work well in engineering applications. Nothing more. Might they help us understand brains? Maybe, but not at all necessarily so.
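To make the tractability point concrete, here is a minimal sketch of the kind of N-gram model being described, with add-one (Laplace) smoothing standing in for the fancier techniques real recognizers use (Katz backoff, Kneser-Ney, etc.):

```python
from collections import Counter

# Toy bigram language model over a tiny corpus.
corpus = "the cat sat on the mat the cat ate".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)  # vocabulary size

def p_bigram(w1, w2):
    # P(w2 | w1) with add-one smoothing, so unseen pairs get nonzero mass.
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)
```

Nothing here resembles a brain; it is just counting and division, which is exactly why it runs fast. A seen pair like `('the', 'cat')` scores higher than an unseen one like `('the', 'ate')`, and that ranking is all a decoder needs.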

ilaksh 2 days ago 0 replies      
I know that everyone has been careful not to mention Chomsky's political beliefs, but I suspect this is actually partly about them. His beliefs seem more in line with reality, or at least more egalitarian, than Norvig's must be, since Norvig has recently been running one of the hegemony's greatest tools. I see a parallel between the derisive dismissal of Chomsky's academic views as simplistic and the kind of dismissal commonly given to a Chomskyish geopolitical viewpoint. This disagreement looks like a surrogate for their very different geopolitical worldviews.

I doubt that Chomsky is really so hard line about his old approaches to AI as we are led to believe, although he is probably farther behind the times than Norvig.

I actually think that even Norvig is just applying recent contemporary AI to AI problems, but still is part of an old or establishment guard himself as far as AI goes. I think that the real cutting edge AI research is called AGI (artificial general intelligence) research.

The generation/category of AI research or machine learning that Norvig is tied into is much newer and steps beyond the earlier traditional AI that Chomsky might have been involved with, but the AGI researchers are a step beyond Norvig's clique. And the AGI researchers are, by the way, very optimistic about the Singularity or at least the likelihood of human-like and probably super-human artificial general intelligence in the short or medium term.

I mean the Norvigish machine-learning stuff isn't completely disconnected from the AGI stuff and completely behind and I assume it will result in extremely capable AIs relatively soon, but the AGI approaches will probably prove to be more powerful and more humanlike since they are closer to human models.

Take a look at what Brain Corporation is doing, or Numenta, or the OpenCog project. That stuff is beyond Norvig and friends' approaches.

ecolak 1 day ago 0 replies      
When Einstein heard about quantum mechanics and the idea that everything is a probability, he said: "God does not play dice." He meant that even though quantum mechanics gives us many answers about the world of the tiny, it doesn't truly explain it. I believe a similar analogy can be made to this case.
yters 2 days ago 0 replies      
Norvig is only trivially right. Sure, with enough stats you can infer a lot of the structure of all the information we humans have created, and thus replicate that structure, as Google is doing with its suggest service. However, this does not explain how humans created the structure in the first place. Such a form of AI will forever be playing catch-up to humans.
fat_clown 2 days ago 1 reply      
It is an interesting debate, though I think it's being cast in the wrong light.

According to the article, it almost sounds like Chomsky believes a statistical approach to AI is a disservice to the field. The point he's missing is that research in statistical based AI is just that - statistics research.

Chomsky and Norvig deal in two different fields, which happen to have similar applications. Norvig does research in statistical and machine based learning. Success in this field comes from a new model that can make more accurate predictions, or a proof that it is impossible to make valid predictions about X with only Y as input. Applications of this field include technologies which rival AI systems as envisioned by Chomsky, but the essential point is that this field focuses on statistics research, not AI research.

Chomsky is wrong in dismissing this as a disservice. I do agree with his main point, that AI research and knowledge is not necessarily furthered by statistics research, but that is simply because they are different beasts entirely.

Maybe one day, when the biology has caught up with us and we have a solid understanding of the brain, will we be able to create a highly intelligent computer. Until then, statistics research is most likely to yield fruitful results.

psb 2 days ago 0 replies      
Where is Eliezer Yudkowsky when we need him?
Get A Job: The Craigslist Experiment thoughtcatalog.com
197 points by kawera  1 day ago   125 comments top 34
wcchandler 1 day ago 4 replies      
I attempted something similar during my last year of college. I made a posting at my closest large city, detailing my ideal job. I also laid out what seemed to be good requirements that I would have satisfied for the position.

I was astounded by the results, and quickly had to remove the ad. The messages were heart-wrenching to read. One man was unemployed after being discharged from the military. Another had been doing senior-level work for 10 years. Another didn't have an ounce of experience in the field. And finally, a couple were recent graduates -- the only ones I was interested in. I left mine up for a couple of days and realized the deceitfulness of this experiment. I genuinely felt remorse, despite my animalistic urge to size up the competition. Then I reflected on the few times I'd read a job post, quickly turned to my wife ("This one sounds perfect! I really hope I get it!"), and grinned from ear to ear for the next half hour, only to be hit with sadness for a day or two. Then I wondered: if it was this easy for me, who is to say others weren't doing the same?

I changed my rules for applying to job posts after that. They had to actually state their recruiting firm in the post. Bonus points for disclosing their client. They also had to post an email or website -- some way to validate the poster. Again, bonus points for phone numbers. I would also search various strings in the post to see if they're listed anywhere else. I became a lot more cautious in my search, which was a good thing as I wasn't slapping my name, email, address and phone number all over to who knows where.

After my experiment I reached a conclusion-- I had broad competition and shouldn't try to gauge myself on others. This made me much calmer in the interviews. I felt reassured knowing it wasn't "what I knew", and instead "who I was." I decided to be more of myself and not give canned answers that I thought they wanted to hear. My skillset was expendable and I needed to realize that. I lost my sense of entitlement. I used to think my $100,000 piece of paper meant something.

Then I realized it did.

tokenadult 1 day ago 2 replies      
An interesting experiment (for a company that actually has a job and is not leading people on) would be to distinguish a job posting that says

"Hiring for this position will be based in large part on a work-sample test during a half day at our office"

from one that says

"Previous experience . . . preferred, but will train the right candidate."

We can tell that the United States economy is in recession (as is the case in many other countries with Hacker News participants) because we keep seeing new stories submitted every day or so about the hiring procedures of companies, with multiple comments. In a long FAQ post I've posted recently


here on Hacker News, based on many helpful comments from other participants, I summarize a LOT of research on company hiring procedures. If you want to hire someone good in the United States, make a work-sample test part of your hiring procedure. Work-sample tests are much better than biographical reviews of resumes for finding good workers. If you want to get a good job in a well managed company, develop the skills to get past a realistic work-sample test for the position you seek. Many more details appear in the FAQ,


which is quite long but well worth a read if you are looking for a job or if you are a business manager trying to hire someone who will do a good job.

P.S. I still hear of young people who are gaining full-time, full-benefits jobs in today's economy. In the usual case, they are getting those jobs by showing what they can actually do as part of the hiring process, degree or no degree.

After edit: I recently read the VERY interesting book The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life


(I wonder if this book has been discussed here on HN yet?) and that reminded me of one of the main reasons that hiring by screening resumes is demonstrably less effective than hiring by giving work sample tests: many people lie on their resumes. The author of the interesting submitted blog post was of course posting a fake job ad, and several comments here on HN point out that many job ads may just be fronts for recruiters rather than postings by actual employers. Participants here gave examples (NUMEROUS examples) of job ads getting responses that don't appear at all to fit the job, but an employer also has to worry about "false positives," applicants who look like they fit the job but who are inflating their educational credentials or multiplying their years of actual work experience. The only way to know what an applicant can do is to test. Perhaps announcing up front that the hiring process includes actual testing of job-related skills MAY screen out some of the poseurs from even sending in their fake resumes (although the many shotgun applicants who don't even read the job ad closely will still be sending resumes all over the place only to waste the time of anyone who receives the resumes).

One way the author of the blog post could have demonstrated statistical acumen is by labeling his data presentations "Self-Reported Experience" rather than "True Experience" and "Self-Reported Credentials" rather than "Education." He has NO idea what the actual educational credentials or work experience (or other aspects of biography) of any of his applicants really are. He would have been aware of this point if he had taken a good statistics course in college, but alas good statistics classes are very rare in the United States.



patdennis 1 day ago 3 replies      
We're familiar with the art of the job search: day after day, scanning the classifieds, Monster, Indeed, Craigslist, etc. for open positions; forever touching up résumés to appeal to specific job requirements; writing endless cover letters that never seem to sound quite right; applying to dozens, maybe hundreds of jobs per week; staring vacuously at the familiar monitor glow at 3 a.m.

This sounds horrible. I've never gotten a job that wasn't through personal or professional connections, so for most of them even handing over a resume was essentially a formality.

I work in a specialized field, which I think accounts for why this is possible. But I do only have a humanities education, and most of my day to day skills have been developed since leaving school (without a degree, mind you). It's just a matter of finding a niche.

stevejabs 1 day ago 2 replies      
From personal experience, don't apply to any positions that don't reveal the company name in the post. You'll rarely get anywhere, and when you do it will likely not even pan out to being who you want to work for or the job that was described.

What I normally do is this:

Find a job that you find unique or intriguing. If it has a company name attached I immediately jump to LinkedIn to find who I may be working for and if it's the small company, I just look up the CEO.

At this point, I start following the CEO on Twitter (if they have one) and I find out what they are interested in and post about. I will now usually start engaging that person every now and then to make them put a name to a face. This also allows you to get an idea of the personality of the person you may be working for.

After a while, I'll put the question out there, "Hey I saw that you posted Foo job at your company. Has this position been filled? I'm extremely interested."

If they don't have Twitter, I'll engage them straight through LinkedIn. At this point I'll be straight upfront and honest. Just tell them that you are interested in finding out more information about the company before you formally apply for the job. When the time comes, try and get your resume straight to their personal email via this conversation.

Open up a line of dialog with whoever posted the job. If it's an HR department, it may be tough, but not impossible. It's worth it in the long run to build up connections (even if they're virtual) with people. Blindly applying to positions is just going to leave you in the dark.

EDIT: Grammar

tseabrooks 1 day ago 2 replies      
This is interesting and well written. Find yourself a grad student in Math and you've got your job. I want more articles of a "pop math" variety exploring the numbers of everyday life. Not only are they interesting and engaging, these types of stories provide the groundwork for helping "us" (entrepreneurs) know what industries we should be attacking.
patio11 1 day ago 3 replies      
Cheat sheet for humanities master's degree holders: microecon 101 suggests that anyone offering to pay a substantial premium to the market clearing price for X will be offered lots of X.
hkmurakami 1 day ago 0 replies      
"However, for a more specialized position, such as Full-Time English Instructor or Editorial Assistant or Professional Lobsterman, I'm sure there are far fewer résumés submitted."

I suspect that this is not even the case, as many/most people doing the resume machinegun routine blindly apply to everything in sight.

justjimmy 1 day ago 3 replies      
I recently started looking for an iOS artist to work with and I posted a job ad on Craiglist and Kijiji, for the first time.

I was overwhelmed with the response - I mean, I knew there were a lot of people using Craigslist, but I had no idea of the scope. And the spam. I even had people submitting resumes that had nothing to do with what I was looking for. Needless to say, I was quite surprised by the amount of response I got.

It was definitely insightful to sit on the other 'end', even for a day, to try and make sense of all these resumes, to read through the greeting emails / copy+paste jobs. It was only a glimpse but it was an interesting experience.

Now I'm not advocating people to start creating fake job ads to see what it's like…

Edit: Anyone else notice the author used a coffee ring to make his pie chart? Improvisation -- the mark of a true warrior!

hkmurakami 1 day ago 2 replies      
Thought experiment: does applying a "price" to applying to a position make sense? Legitimate & qualified applicants get lost in the noise because unqualified candidates send hundreds of resumes on sites like Monster and Craigslist.

While this would open up possible fraud with disingenuous job postings with no intent of being filled, would something like charging $1 to apply to a position improve the situation? After all, this is the approach that Universities take when accepting applicants.

l3amm 1 day ago 0 replies      
The much sadder reality is what actually happens in a company that posts that ad:

1) Optimistically post the job ad.

2) Receive hundreds of responses within hours of posting it.

3) Begin going through those hundreds of responses, reading cover letters and resumes. At first you're pumped; then you realize very few people here are qualified for the position or even read the post.

4) After ~20 resumes, you close your email for the day. If you made the mistake of using your personal email address, I'm sorry.

5) Despair. Pick and choose random emails (maybe filter by email, are there any harvard.edu's in there?) and immediately call the first person who seems remotely OK. Iterate step 5 until you find 'the one.'

I've worked with/observed dozens of employers in this process, a tiny fraction of resumes actually get read, and frankly it's impossible to stay focused through the experience. A "fun" experiment is to print out 40 resumes in a pile, write a job description for those resumes and then try to find the most qualified one in that bunch. Free-text association with often poorly-articulated job requisitions is a nearly impossible proposition. Add in the step where you have to download and track those applicants, and it's damn near impossible to use Craigslist to find the best candidate.

This was the original inspiration for our company, Foundry Hiring (www.foundryhiring.com). We're trying to build tools to make this process as painless as possible. In this instance, we give you a free link to post in your CL ad with an application form in front of it, so that emails don't hit your inbox; the applicants are stored directly in our database. We then give you tracking tools on top of it to make the process better. We're still in beta, but I would love user feedback.

Foy 1 day ago 2 replies      
I've heard that the employment prospects with a humanities degree were... not optimal.

But to think that so many people with bachelor's or master's degrees are out there fighting tooth and nail over meager admin assistant jobs that barely pay anything.

I'm shocked. o.o

Also, have you ever thought that maybe having a Master's degree makes you seem over-qualified for a secretary (sorry) position? When I think of a Master's in English I think of jobs like Editor, or Journalist.

xarien 1 day ago 0 replies      
Keep writing and make sure you have a link on your site that says: "Like my blog? Hire me as a copywriter!"

Good copy is hard to come by and if you can get good at it, you'll definitely find yourself in demand...

el_cuadrado 1 day ago 1 reply      
For big companies, the degree is just a requirement. If they ask for a minimum of bachelor, then they likely do not care if you have a Ph.D.

For smaller companies, experience is everything.

And Masters in English is not an advantage, unless you apply for a blogger position or something.

pawelwentpawel 1 day ago 0 replies      
After reading the post and admiring the creative pie chart, I started thinking about a more time-consuming experiment of a similar type. I believe most of the people who applied for this job weren't "snipers" but somewhat desperate types machine-gunning every single input appearing in their over-caffeinated visual cortices. If called back, they probably wouldn't even remember the names of the companies they applied to. I wonder what would happen if the fake job ad contained a fake website with a fake test (in some related domain, or even basic grammar/maths) for each applicant to solve before even uploading a resume. I guess that would give a fuller picture not only of the "quality" of the competition but also of the number of machine guns in the field (ad views -> website views -> tests solved).

And a bit of an unrelated question - how illegal is posting fake job ads for such purposes?

simonbarker87 1 day ago 0 replies      
I know that unemployment is high, but how does a shift from, say, 6% (an arbitrary number; I'm not American, so I don't know the figures) to 9% have such dramatic effects? If you spin it around and say that employment has dropped from 94% to 91%, the stats sound pretty good -- wrong direction of change, granted.

Jumping from 6% to 9% means half again as many people are now looking for work.

I must be missing something somewhere so if someone can explain this I would very much appreciate it.
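The asymmetry the comment is puzzling over is that the pain is felt relative to the pool of job seekers, not the pool of the employed. Using the arbitrary figures above:

```python
# Unemployment moving from 6% to 9%: a small relative change for the
# employed, but a ~50% jump in the number of people competing for openings.
before, after = 0.06, 0.09

growth_in_seekers = (after - before) / before    # ~0.5: 50% more job seekers
drop_in_employed = (0.94 - 0.91) / 0.94          # ~0.032: only ~3% fewer employed

print(round(growth_in_seekers, 3), round(drop_in_employed, 3))
```

So the 94%-vs-91% framing is correct but misleading: each job opening now faces roughly one and a half times as many applicants.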

dennisgorelik 1 day ago 1 reply      
1) "Administrative Assistant" postings attract lots of job seekers. For some reason unemployed candidates like to apply for these jobs.

2) Scammers like to post "Administrative Assistant" postings too. I'd say that at least half of "Administrative Assistant" postings are scams (scammers are trying to recruit people for money laundering or pyramid schemes).

3) Job postings looking for software developers are a totally different story. There are plenty of openings chasing a limited pool of available developers.

4) See how jobs/resumes ratio varies for different skills:


abiekatz 1 day ago 0 replies      
This is part of why temp agencies and "talent resource" companies can exist. Trying to filter the ton of resumes and hire the right candidate off of craigslist.com can be quite the pain.

I do wonder though what jobs bright people without hard skills should pursue. This guide is interesting: http://www.slideshare.net/choehn/recessionproof-graduate-172...

I think people in that position should try to learn a valuable skill. Of course easier said than done but there are a number of skills that should lead to a middle class life that can be learned relatively quickly.

Programming could be a good choice. See: http://techcrunch.com/2012/05/10/dev-boot-camp-is-a-ruby-suc... DevBootcamp seemed to produce quality developers in 3 months.

Though also PR, online advertising, some focus in marketing, web design, sales...whatever.

galfarragem 1 day ago 0 replies      
I'll share my experience of how I get jobs. I never won a job by answering job offers. All the jobs I've had (6 or 7, it doesn't matter) I got by knocking on doors.
I never "lost" any interview that I got this way.

Is this a rule to follow, or was it just luck? I don't know. I know that it works for me. OK, I'm in the architectural field; in other industries and corporations this approach would probably be impossible. Age is also an important factor: knocking on doors in your 20s is not the same as in your 40s. Even in your 30s it is already more difficult.

It's my experience. I hope it can help somebody.

This way, the people who would conduct the interview already know you. You are already interviewing, without any appointment! If they ask you for a more formal interview, you're in.
Early mornings -- always choose early mornings. Afternoons are terrible for this approach: everybody just wants to go home, and nobody will have the patience to listen to you.

A smile on your face and confidence, of course. This can make the difference. Luck also plays a big role: whether the decision-makers are in the office, whether they are free, whether there is any (even a slight) need for people, etc.

austenallred 1 day ago 0 replies      
To be fair, that was in a big city for an extremely generic job. Anyone who is employable anywhere is eligible for that job. I used to hire writers from CraigsList; there were a lot of resumes, but few were well done, and even fewer had decent writing samples.

If you're really looking for a job, you have to look for something specific.

dbecker 1 day ago 1 reply      
I wonder what the ratio of job postings to job applicants is.

If there are 600 applications per listing, and each applicant sends 50 applications, then there are 12 times as many applicants as listings.

Do the 600 and 50 numbers seem about right?

ilaksh 1 day ago 1 reply      
There are not enough jobs. The 'economic' model is broken, and the 'economy' is being strangled.

But it's worse than that. The very concept of a job doesn't even really qualify as civilized.

liamondrop 1 day ago 0 replies      
I've gotten several gigs through Craigslist. Both fulltime and one-off / freelance. It rarely has taken me more than a couple days to get an interview/callback.

My single most important tip for doing this successfully is to get your app in fast. Like within 30 minutes of the job going on the site max. I'm certain that almost no one looks at more than the first 50-100 submissions. It helps to put relevant categories/searches into some kind of rss reader that updates frequently and allows you to see multiple feeds simultaneously so you can see at a glance as soon as something you're interested in pops up.

Second to speed is writing a good cover letter. The letter should be concise, in the active voice, probably with bullet points for easy scanning, and should talk about what you have done and what you will do for them that will make their life easier. Having most of the letter drafted in advance will help you get it in fast, but you will likely want to at least touch on a few key points from their job posting.
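The feed-watching setup described above can be sketched with just the standard library. This is an assumption-laden toy (the feed shape below is a generic RSS-style structure, and the fetch/poll loop is omitted), but it shows the "alert me on new postings" core:

```python
import xml.etree.ElementTree as ET

def new_items(feed_xml, seen_links):
    """Extract (title, link) pairs not seen before; mutates seen_links."""
    root = ET.fromstring(feed_xml)
    found = []
    for item in root.iter():
        if item.tag.endswith('item'):  # tolerate namespaced RSS/RDF tags
            title = link = None
            for child in item:
                if child.tag.endswith('title'):
                    title = child.text
                elif child.tag.endswith('link'):
                    link = child.text
            if link and link not in seen_links:
                seen_links.add(link)
                found.append((title, link))
    return found

# In a real watcher you would fetch the feed URL with urllib on a timer
# (every few minutes), call new_items(), and notify on anything returned.
```

Each poll returns only postings you haven't seen, which is what lets you apply within that first half hour.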

jdoody 1 day ago 2 replies      
Interesting article. Time to make your own job.
alfredp 1 day ago 0 replies      
I posted an ad in the newspaper with a similar kind of experiment in mind. A dead marketing dude was daring me to do something crazy. What's the worst that can happen? (About midway through this: http://www.thegaryhalbertletter.com/Newsletters/aslz_winners...) You can read this first to see what he actually dared me to do.

The short version of what happened: my cellphone was ringing off the hook and I had to just turn it off for a few weeks. Totally insane response.

Paul_S 1 day ago 0 replies      
I see how the author gets useful information for himself, but is this ethical? He essentially wasted the time of people who applied in good faith. A bit cruel.

He solicited personal information from those people - I'm sure there are rules about storing and processing personal information.

pjnewton 1 day ago 1 reply      
Great timing on this for me. I just posted a job (a real one) to Craigslist and received a handful of responses.

Making the assumption that I would get a bunch of resumes, I decided to add one little hurdle: I asked each applicant to copy & paste their resume into the email body. I did this to see who was actually paying attention and who was just spamming every posting hoping someone would bite. Sadly, this hurdle could not be overcome by the majority of the people I heard from... The search continues.

rizzom5000 1 day ago 3 replies      
Why would the author believe 600+ applicants for a semi-skilled entry-level job posting in NYC is an unusual number? How does that number compare to a similar job posting in London, Tokyo or Shanghai? This experiment is silly, contrived and unscientific. The data is without context, and essentially meaningless.

Also, while I wouldn't exactly call it unethical to do something like this - it is dishonest, and something about it feels slimy to me.

leeny 21 hours ago 0 replies      
I wonder how many of the applicants had well-written cover letters and resumes (good sentence structure, interesting content, devoid of massive typos and grammatical errors). I would bet that it's no more than 20%, and all of a sudden, things aren't so dismal anymore.
iba 1 day ago 0 replies      
This experiment demonstrates the futility of job hunting cold-calling style. In my experience the best way to get noticed is through networking, especially through friends in the field. If I were hiring, I would give more consideration to someone who was recommended even by a friend of a friend. Our job hunting time is much better spent networking than spamming every craigslist job posting with our resumes. I wish someone would do a counter-experiment pitting networking against craigslist spamming.
keithnoizu 1 day ago 0 replies      
It's kind of depressing to think of someone with a master's degree hoping to land a job that pays less than a fifth of what I make with an associate's degree...
gsibble 1 day ago 0 replies      
I am sooooo happy that I'm an engineer right now.
donnfelker 1 day ago 0 replies      
This post, in itself, just may get you a job. Nice work.
natmaster 1 day ago 0 replies      
I'm genuinely curious why the author - who is obviously very intelligent otherwise - does not simply use their intelligence to learn an actual marketable skill. Especially if they are looking at applying for such boring jobs - surely some sort of scientific analysis like they did in this article would be more interesting to do as a job.
systematical 1 day ago 0 replies      
pssht. I've been doing this for years
How big is the entire universe? scienceblogs.com
181 points by sajid  3 days ago   77 comments top 17
goodside 3 days ago 2 replies      
The short answer is that nobody has a plausible upper bound on the size of the entire Universe, and there's no firm consensus on whether it's literally infinite. If it is finite, there is no "wall" or "center", since it's not a big sphere -- a finite Universe just means that if you go straight long enough, you end up back where you started. It's analogous to how, if you stood on the ground of a featureless planet, the land you could inhabit would be finite, but it would have no borders and no point where you could stick a flag in the ground and say "this is the center of the planet's land".

Most of the time when people give hard numbers for the size of the Universe, they either mean A) the portion of the Universe we can see, B) the portion of the Universe we could ever see in principle, or C) the portion of the Universe which could, in principle, be causally influenced by early particle interactions that could have causally influenced us. If you suspect someone might be talking out of their ass about cosmology, ask them how big the Universe is -- if they give you a clean multiple of 14.6 billion light years, they're an idiot.

Technically inclined readers will find this more enlightening than the article above: http://arxiv.org/pdf/astro-ph/0310808.pdf

tokenadult 3 days ago 0 replies      
There was a talk about this issue at the Midwest Science of Origins Conference on 31 March 2012 by Marco Peloso of the University of Minnesota. Peloso reviewed the evidence available for the condition of the Universe just after the Big Bang. He also mentioned that current observations are consistent with a "flat" geometry of the entire universe, but pointed out other lines of evidence, not mentioned in the blog post submitted here, consistent with a finite (although very, very large) size for the universe.

Finite size is consistent with current observations and theories, and is an issue that the submitted article doesn't have a lot of space to address. Per a Wikipedia article, consistent with what I heard in the lecture earlier this year: "The universe appears to have no net electric charge, and therefore gravity appears to be the dominant interaction on cosmological length scales. The universe also appears to have neither net momentum nor angular momentum. The absence of net charge and momentum would follow from accepted physical laws (Gauss's law and the non-divergence of the stress-energy-momentum pseudotensor, respectively), if the universe were finite." Wikipedia cites the Landau and Lifshitz physics textbook for this statement: Landau, Lev; Lifshitz, E.M. (1975). The Classical Theory of Fields (Course of Theoretical Physics, Vol. 2) (revised 4th English ed.). New York: Pergamon Press. pp. 358–397. ISBN 978-0-08-018176-9.

Some multiverse theories such as those mentioned in another comment posted before this one can be overlaid on a simpler theory of a single finite "observable" (in principle) universe with the properties we know from human observation. Testing theories like those, or like the "level I multiverse" theory mentioned in the other comment, still needs more work.

The current Udacity course in physics started off its first unit with the students reproducing the effort of Eratosthenes of Cyrene to measure the circumference of the earth more than 2,200 years ago. On reasonable assumptions known to the ancient Greeks, it was possible to get a surprisingly accurate estimate of the earth's circumference, with a major source of error being simply measuring the distance between one city and another a few days' journey away.
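Eratosthenes' estimate is easy to reproduce on paper. Here is a minimal sketch in Python using the classic textbook figures (a 7.2-degree shadow angle at Alexandria and roughly 5,000 stadia to Syene) -- these numbers are assumed for illustration, not taken from the lecture described above:

```python
# Eratosthenes' method: the Sun's rays are (nearly) parallel, so the
# difference in noon shadow angle between two cities equals the angle
# those cities subtend at the Earth's centre.
shadow_angle_deg = 7.2    # assumed shadow angle measured at Alexandria
distance_stadia = 5000    # assumed overland distance Alexandria -> Syene

# 7.2 degrees is 1/50 of a full circle, so the circumference
# is 50 times the distance between the cities.
fraction_of_circle = shadow_angle_deg / 360.0
circumference_stadia = distance_stadia / fraction_of_circle

print(circumference_stadia)    # 250000.0 stadia
```

The dominant error term is exactly the one the comment mentions: the overland distance, which the Greeks could only pace out or estimate from travel time.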

boredguy8 2 days ago 0 replies      
This might be a bit of a nit, but it happens enough: the quote that begins that article is misattributed. It's from Daniel J. Boorstin. http://en.wikiquote.org/wiki/Stephen_Hawking#Misattributed
lifeisstillgood 3 days ago 1 reply      
Well, that blew me away. I would happily have gone for 13 billion light years across.

This one little blog (that was a few hours work covering decades of hard work by hundreds of people) has reminded me of the vast importance of education, scientific inquiry and just plain old reading.

One recent suggestion here in the UK is to make maths study compulsory till age 18. If I can go till age 40 and not know how we know how big the universe is, and I am supposedly in the top 5% of educated people, then yes yes yes.

Is there any politician I can vote for who will double the science budget?

xefer 3 days ago 7 replies      
Max Tegmark had an interesting article in Scientific American a while back discussing the implications of an infinite universe:


Perhaps the most disturbing implication is that, by definition, at some distance there would have to be a duplicate of yourself.

thangalin 2 days ago 1 reply      
The article used one of my drawings without my permission.


nessus42 2 days ago 1 reply      
Another factoid that causes the mind to reel is that a flat universe, which is of infinite size, also has zero total energy. This means that it is conceivable that someday we might figure out how to manufacture entire new universes.

Who says that there's no such thing as a free lunch!
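One way to make the "flat" condition in nessus42's comment concrete: a flat universe is one whose average density equals the critical density rho_c = 3H^2 / (8*pi*G), and the zero-total-energy argument applies to a universe at exactly that density. A back-of-the-envelope sketch, where the Hubble constant is a round illustrative value rather than a measured result from the article:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
H0_km_s_Mpc = 70.0     # assumed Hubble constant, km/s per megaparsec
MPC_IN_M = 3.0857e22   # metres per megaparsec

# Convert H0 to SI units (1/s), then apply rho_c = 3 H^2 / (8 pi G).
H0 = H0_km_s_Mpc * 1000.0 / MPC_IN_M
rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)

# About 9e-27 kg/m^3: a handful of hydrogen atoms per cubic metre.
print(rho_c)
```

The striking part is how empty "critical density" actually is -- a universe balanced at zero total energy averages only a few protons' worth of mass per cubic metre.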

tocomment 3 days ago 2 replies      
But if the big bang happened 13 billion years ago there shouldn't be any matter or energy outside of a 13 billion light year radius, right?
jsmcgd 3 days ago 1 reply      
The visible curve of the Earth is due to atmospheric distortion? What? Surely not. Surely the curve you see is the horizon of your view?
kennon 3 days ago 2 replies      
Very cool, but I'm still a little confused. The earth's surface more or less just extends out in two dimensions, but doesn't space extend out in three? How does all of this work when you have to take into account the z-axis as well as x and y? How would a three-dimensional object "close" in around itself?
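One standard answer to kennon's question is the 3-torus: a toy model (not necessarily the actual topology of our Universe) in which each of the three coordinates simply wraps around, so the space has finite volume yet no wall anywhere. A minimal sketch:

```python
SIZE = 10.0  # assumed edge length of the toroidal box, arbitrary units

def step(position, velocity, dt=1.0):
    """Move a point one time step and wrap each axis independently:
    travelling 'straight' forever never hits a boundary, you just
    come back around, exactly as on the surface of the Earth but in 3D."""
    return tuple((p + v * dt) % SIZE for p, v in zip(position, velocity))

pos = (0.0, 0.0, 0.0)
for _ in range(7):
    pos = step(pos, (3.0, 4.0, 5.0))

# Total displacement (21, 28, 35) wraps to (21 % 10, 28 % 10, 35 % 10).
print(pos)   # (1.0, 8.0, 5.0)
```

The z-axis is treated no differently from x and y; "closing" is a statement about how the space connects to itself, not about a 3D object sitting inside some bigger room.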
ars 2 days ago 0 replies      
The first half of this is from the excellent book:

Relativity: The Special and General Theory by Albert Einstein

It's written for the interested layman; the article is from part 3 of the book, where Einstein explains this.

nrmehta 2 days ago 0 replies      
For the layperson (like me), the below is an intriguing and related TED talk from Brian Greene. A big conclusion is that we now believe the universe is expanding at an accelerating rate, based upon observations of the edge of the observable universe. But Greene posits that thousands of years in the future, this accelerating edge will be too far away for us to observe, and the universe will look to future mankind to be more static and small. I'm not a physicist, so I can't criticize this claim, but I found it intriguing.


javert 2 days ago 2 replies      
Why presume that the universe is a surface, instead of a giant volume? After all, the normal way of looking at it is that we're in space, not on it...

This is an especially appalling oversight in an article intended for the intelligent layman.

ctchocula 2 days ago 0 replies      
Fields Medal winner Terence Tao gives a public lecture on this material:


acdanger 2 days ago 0 replies      
"The universe is a big place, perhaps the biggest." -- Kilgore Trout
hansbo 3 days ago 3 replies      
Interesting. I had no idea that the curvature of the universe was an indication of its size. Does this imply that a flat universe is also infinite?
kapkapkap 3 days ago 1 reply      
First glider discovered in a cellular automata on an aperiodic tiling plus.google.com
175 points by tim_hutton  1 day ago   25 comments top 7
ForrestN 1 day ago 5 replies      
This looks fascinating; any chance someone can easily explain the significance of the terms and the image? Seems daunting to look everything up and try to understand as a layman.

Edit: Awesome, thanks everyone! <3 HN

SilasX 1 day ago 1 reply      
How exactly did they go about proving that it continues to glide forever, despite the lack of periodicity? Is there some finite combination of subsequent tile groups in aperiodic tilings, such that you just have to show that all of them involve a transition to another part of the glider cycle?
pax 1 day ago 0 replies      
haha, I just found a Google Easter egg while reading this thread :)

> Google "Conway's Game of Life"

PS. Also, somebody posted it on HN 3 weeks ago, but got no love: http://news.ycombinator.com/item?id=4224926
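For anyone who, like ForrestN, wants the terms unpacked: a "glider" is a small pattern that reproduces itself displaced after a fixed number of generations. The linked result is on an aperiodic tiling, but the idea is easiest to see on the ordinary square grid. A minimal sketch of Conway's Game of Life showing the classic glider shifting one cell diagonally every four generations:

```python
from itertools import product

def step(live):
    """One Life generation: a set of live (x, y) cells -> the next set.
    A cell is alive next step if it has exactly 3 live neighbours,
    or has 2 live neighbours and is currently alive."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the same shape reappears shifted by (1, 1).
print(state == {(x + 1, y + 1) for (x, y) in glider})   # True
```

The hard part of the aperiodic-tiling result is that no such clean periodic argument is available there, which is what makes SilasX's question above a good one.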

pohl 1 day ago 0 replies      
Awesome! For years I have had a desire to see "life" on a Penrose tiling. Are there more examples out there?
jongraehl 10 hours ago 0 replies      
This explanation of the work the automata rule needs to do was interesting: http://mrob.com/pub/math/quad-grid-glider.html from g+
nraynaud 1 day ago 1 reply      
Isn't this pattern the 2D projection of a periodic 5D pattern?

What would it mean for the glider's path in 5D (well, 6D with time)?

blaines 1 day ago 2 replies      
I have to sign in to read this? =/
Samsung: Apple wouldn't have sold a single iPhone without stealing our tech bgr.com
164 points by zacharye  3 days ago   120 comments top 15
hetman 2 days ago 5 replies      
What I think is fundamentally wrong with the way the patent system is applied today is that in many instances it serves as nothing more than an artificial monopoly only benefitting the patent holder.

The idea of the patent system in itself is actually quite elegant. Government grants a temporary monopoly in exchange for the public disclosure of an invention. Everyone wins! The person issued the patent can more easily profit from their invention because they can work in the open. Society likewise benefits, no more secret guilds locking up knowledge, now everyone can see how it's done (and apply it once the monopoly expires).

As a social contract this really makes sense. The problem is that a lot of the patents being thrown around today only really satisfy half of that contract. A temporary monopoly is indeed granted, but the disclosure received in exchange is worthless information. I say it is worthless because no one actually needs to read that disclosure to figure out how to do it themselves with basically no effort.

Of the large companies, it seems to me that Apple is one of the worse offenders at exploiting the patent system in this way. Don't get me wrong, what they are doing is perfectly legal, but that doesn't make it OK. It does seem like Apple is following the legal rules and Samsung is thrashing around attempting to break them (perhaps feeling they have little alternative). But that doesn't mean that our legal system is perfect and cases like this will hopefully help us understand how it can be fixed to prevent this kind of exploitation in the future.

SODaniel 3 days ago  replies      
I have to say that I am happy to see a major company saying what everyone is thinking. It is truly absurd that Apple has the audacity to claim some 'we invented it' right to any type of Cellphone tech or Tablet hardware.

All of Apple's success is derived from the UI/UX side and the fact that iTunes paved the way for the 'app' concept.

Trying to bully the competition with purchased patents and lawyers will only turn people off their products, and once the 'Steve Jobs effect' wears off, they are nothing but another company selling cellphones and tablets.

And then, it might be time to look elsewhere for your 401k investment.

soup10 2 days ago 1 reply      
This case has nothing to do with patents and everything to do with Jobs/Apple throwing a tantrum. There are so many overly broad and trivial patents that companies have no choice but to ignore them. Standard procedure is to acquire your own stash of broad and trivial patents so that other companies won't bother getting involved in a legal battle with you.

This time though, Jobs thought Samsung and Google crossed the line in the scope and extent to which android devices copied the iPhone and iPad. So he started this patent war to spite them. I think everyone involved knows that nothing will come of this except legal fees.

iamben 2 days ago 1 reply      
I'm so bored of all this. It's like watching my parents going through their divorce - pretty much everything ended up with petty point scoring, and at the end of the day, the kids are the ones that got hurt.

edit: And their legal representatives made a rather nice chunk of change.

bryanlarsen 2 days ago 0 replies      
Linked article is basically just an edited copy of linked WSJ article: http://blogs.wsj.com/law/2012/07/24/the-apple-samsung-trial-...
Steko 2 days ago 1 reply      
From Apple's brief:

In February 2010, Google told Samsung that Samsung's “P1” and “P3” tablets (Galaxy Tab and Galaxy Tab 10.1) were “too similar” to the iPad and demanded “distinguishable design vis-à-vis the iPad for the P3.”

In 2011, Samsung's own Product Design Group noted that it is “regrettable” that the Galaxy S “looks similar” to older iPhone models.

As part of a formal, Samsung-sponsored evaluation, famous designers warned Samsung that the Galaxy S “looked like it copied the iPhone too much,” and that “innovation is needed.” The designers explained that the appearance of the Galaxy S “[c]losely resembles the iPhone shape so as to have no distinguishable elements,” and “[a]ll you have to do is cover up the Samsung logo and it's difficult to find anything different from the iPhone.”

runjake 2 days ago 1 reply      
Steal != legally licensed (from Qualcomm, et al)

Look, from the 20,000 foot view, I think Apple's (ab)use of the patent system is pretty lame. But, this is just grandstanding on Samsung's part. If they have a legitimate legal complaint, I'm sure they'll bring it up in court. And then we'll see.

ChuckMcM 2 days ago 2 replies      
While watching this battle, I always wonder what sort of covenants exist on Samsung's supply agreements with Apple. I was wondering after reading the article why Samsung doesn't just stop selling them parts, or raise the price of the parts 100% or something like that. Seems like they should be able to 'earn back' all the money they are spending defending themselves and cause Apple double pain. But they don't.

And the other thing I wonder is if any of Apple's behavior is hurting them in the supply chain. If you make components that Apple uses and also make products in another part of the company for resale, do you put language in that Apple can't sue you? Or that you don't have to honor purchase orders if they do? Something?

hkmurakami 2 days ago 3 replies      
I wonder if Apple is stepping into some really muddy territory with their legal brouhaha.

While Apple's patents center around their HW and SW design, from what I understand, Samsung's and Motorola's patents include some fundamental WiFi and communication patents that could be a huge counterpunch against Apple.

makmanalp 2 days ago 0 replies      
Does anyone have links to the mentioned internal documents from Apple / Samsung?
i0exception 2 days ago 2 replies      
Apple is fighting Steve Jobs' battle. If it weren't for Jobs and his overly bloated ego, we wouldn't have had these stupid patent wars.
kirillzubovsky 2 days ago 1 reply      
"Apple has admitted in internal documents that its strength is not in developing new technologies first, but in successfully commercializing them" - and that is why they WIN! Who cares whose technology came first. Samsung couldn't execute and Apple could. If we had it the Samsung way, Nokia 7200 flip phone would still be the standard of mobile technology. shivers
salem 2 days ago 0 replies      
This is not news. The cell phone industry has a long history of petty non-novel patents, such as for vibrating _and_ ringing at the same time. A phone with a screen, a grid of icons and round corners sounds equally petty.
jinushaun 2 days ago 0 replies      
Read the article. Read the highlighted statements. 404 Apple stealing from Samsung not found. This whole patent battle is idiotic.
kenster07 2 days ago 0 replies      
It's about time.
Government: we can freeze Mega assets even if case is dismissed arstechnica.com
163 points by Cadsby  17 hours ago   102 comments top 15
maeon3 15 hours ago 3 replies      
Wow. The justice system refuses to apply American law to a foreign company operating in a foreign nation, so what you do is seize his person, wait until he has an address in prison, and presto! It's legal to apply American law to him.

Imagine if Iran had pulled a stunt like this because some female CEO of a major corporation had violated Sharia law by exposing skin: we can't imprison her while she lives in a foreign land, so let's bring her here, and then we can apply justice.

Am I missing something that makes this an ethical thing to do?

_delirium 16 hours ago 1 reply      
Unfortunately this has been true for a while. For example, having drug-trafficking charges against you dropped doesn't automatically release all impounded property. There is some right to due process, but its precise extent is unclear: there has to be some kind of hearing at some point, basically, in which the seizure can be challenged. But it's actually possible for the government to seize the assets permanently after such a hearing, without a conviction.

Some information here: http://www.law.cornell.edu/background/forfeiture/

rickmb 16 hours ago 0 replies      
Just a thought: Megaupload is a Hong Kong corporation. There's no way the White House didn't have a diplomatic chat with the Chinese before demolishing a Hong Kong business, is there?
zallarak 16 hours ago  replies      
It seems to me that America is slowly becoming less free, considering factors such as Obama's domestic assassination program, federal wiretapping and more. As an American, I still acknowledge we live among the freer societies of the world. But I can't help wondering why the government is increasingly imposing itself the way it is; history has always shown that repression of almost any form backfires in the long run.
mcantelon 16 hours ago 3 replies      
Rogue state. Imagine if China had done this to a US company.
dllthomas 16 hours ago 0 replies      
Hmm. I'm not sure I like it, but the title and comments here are a little misleading. The government has indicted MegaUpload, MegaUpload is arguing that they can't be served notice of those indictments, and the government is saying, "Well, until we can, we're not unfreezing the assets." MegaUpload's position doesn't sound ridiculous - "we're not a US company, why should we be subject to US law?" - except that they were doing business in the US (is that where the seized assets were? It sounds like they at least host some stuff out of Virginia).
hastur 4 hours ago 0 replies      
The lesson from this and other cases is simple:




(That includes non-US data centers belonging to US companies, like Amazon AWS servers in Ireland.)

Otherwise you're exposing your company and your users to the arbitrary, predatory practices of US "law enforcement" system, which happily does the bidding of US content industries.

anigbrowl 16 hours ago 0 replies      
One wonders what prevents the government from simply providing notice to Megaupload's counsel, which does after all have power of attorney and is thus presumably empowered to accept notice as well. I haven't studied the ins and outs of criminal procedure, though.
lifeguard 16 hours ago 1 reply      
Sounds like something a king would do.
dakrisht 15 hours ago 0 replies      
So glad to see my tax dollars working for things like this. As another user here said, "this has never been about justice, the law" or what's right - it's all about lobbying, politics and $$$$.

The sham organizations that are the RIAA/MPAA are so heavily invested in the government that they'll try anything and everything to create a controlled Internet/sharing system of "content" - which in and of itself is the vaguest definition of all time. All this SOPA/PIPA crap will never pass; we will always find a way to a free Internet.

Total waste of time, a government making up bullshit laws along the way for something that will not only have zero effect on content sharing, but will actually increase it after angering tons of people.

It's truly ridiculous when resources, energy and taxpayer dollars could be spent on more important issues that our country needs help with. All these politicians are a f-ing disgrace.

boyter 16 hours ago 1 reply      
At this point I don't care if Megaupload is guilty or not. There have been enough cock-ups by the prosecution to throw this thing out of court. I sincerely hope the US government pays for this incident and that, as a result, a precedent is set for following proper due process.
SeanDav 15 hours ago 1 reply      
This has never been about the law or justice, it is all about politics, lobbying and money.
stevedekorte 14 hours ago 1 reply      
Had they used bitcoin, would the government have had any power to do this?
alttab 16 hours ago 1 reply      
Correction: you think if you do it won't cost those responsible dearly. My bet is you don't fuck with the Internet.
vtry 14 hours ago 0 replies      
Welcome to the police state.
None are so hopelessly enslaved as those who falsely believe they are free
       cached 28 July 2012 15:11:01 GMT