hacker news with inline top comments    27 Oct 2012
2
AWS outage summary amazon.com
34 points by tch  2 hours ago   10 comments top 8
1
helper 0 minutes ago 0 replies      
> We are already in the process of making a few changes to reduce the interdependency between ELB and EBS to avoid correlated failure in future events and allow ELB recovery even when there are EBS issues within an Availability Zone.

This is music to my ears. We switched away from ELBs because of this dependency. Hopefully this statement means Amazon is working on completely removing any use of EBS from ELBs.

We came to the conclusion a year and a half ago that EBS has had too many cascading failures to be trustworthy for our production systems. We now run everything on ephemeral drives and use Cassandra distributed across multiple AZs and multiple regions for data persistence.

I highly recommend getting as many servers as you can off EBS.

2
teraflop 46 minutes ago 1 reply      
> Multi Availability Zone (Multi-AZ), where two database instances are synchronously operated in two different Availability Zones.

> The second group of Multi-AZ instances did not failover automatically because the master database instances were disconnected from their standby for a brief time interval immediately before these master database instances' volumes became stuck. Normally these events are simultaneous. Between the period of time the masters were disconnected from their standbys and the point where volumes became stuck, the masters continued to process transactions without being able to replicate to their standbys.

Can someone explain this? I thought the entire point of synchronous replication was that the master doesn't acknowledge that a transaction is committed until the data reaches the slave. That's how it's described in the RDS FAQ: http://aws.amazon.com/rds/faqs/#36
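
(A quick sketch may help frame the question: under synchronous replication, the master is only supposed to report a transaction as committed after the standby acknowledges it. The classes and method names below are hypothetical and purely illustrative; they say nothing about how RDS actually implements Multi-AZ.)

    # Hypothetical sketch of synchronous-commit semantics (not RDS internals).
    class Standby(object):
        def __init__(self):
            self.log = []

        def replicate(self, record):
            self.log.append(record)
            return True  # acknowledgement returned to the master

    class Master(object):
        def __init__(self, standby):
            self.standby = standby
            self.log = []

        def commit(self, record):
            self.log.append(record)
            # Synchronous replication: block until the standby acknowledges.
            # If the standby never acknowledges, the client is never told
            # that the transaction committed.
            if not self.standby.replicate(record):
                raise RuntimeError("standby did not acknowledge; abort")
            return "committed"

The window Amazon describes (masters continuing to process transactions while unable to replicate to their standbys) is exactly what this model would seem to rule out, which is presumably the source of the question.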

3
lukev 1 hour ago 0 replies      
I am always astonished by how many layers these bugs actually have. It's easy to start out blaming AWS, but if anyone can realistically say they could have anticipated this type of issue at a system level, they're deluding themselves.
4
mrkurt 4 minutes ago 0 replies      
AWS sure does put out amazing post mortems. If only they'd make their status page more useful ...
5
Trufa 6 minutes ago 0 replies      
I really love it when companies take the time to explain to their customers what happened, especially in such detail.

It's clearly a very complicated setup, and this type of post makes me trust them more. Don't get me wrong, an outage is an outage, but knowing that they are in control and take the time to explain shows respect and the correct attitude towards a mistake.

Good for them!

6
papercruncher 9 minutes ago 0 replies      
I know there are lots of smart people working there but just look at the sheer amount of AWS offerings. Amazon certainly gets credit for quickly putting out new features and services but it makes me wonder if their pace has resulted in way too many moving parts with an intractable number of dependencies.
7
ndcrandall 42 minutes ago 1 reply      
Every time there is a service outage it makes me feel better about using them in the future. Every outage is actually making the platform more reliable, since some issues will only manifest in production. I believe they have a great team that's very knowledgeable.
8
filvdg 59 minutes ago 0 replies      
Everything is a Freaking DNS problem :)
3
About today's App Engine outage googleappengine.blogspot.com
36 points by azylman  2 hours ago   15 comments top 6
1
bithive123 1 hour ago 1 reply      
This sounds very similar to something that can happen at small scales too. It happened to me when I naively used a preforking apache and mod_proxy_balancer on a machine with 2GB RAM. We had a surge in traffic and the load balancer passed the "paging threshold" (it started swapping) and at that point the increased latency caused requests to pile up as users would get impatient and hit reload, leaving processes tied up waiting to time out.

In my case I was able to get things working again by disabling all non-essential apache modules and lowering the timeout. I even had to take the load balancer completely down for a few minutes to get the users to back off long enough to ramp up smoothly again. Then I switched to nginx and haven't looked back.

Obviously I am not comparing my apache instance to App Engine, just the broad strokes of the self-perpetuating failure mode. But, reading between the lines this post basically admits to oversubscription on App Engine. Talking about load as having an unexpected impact on reliability (especially during a "global restart") is a nice way of saying that they got more traffic than they could handle.
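
(The pile-up dynamic described above is easy to reproduce with a toy model: once latency crosses users' patience, reloads add load faster than the box can drain it. All the numbers below are invented for illustration; this is not a model of Apache, mod_proxy_balancer, or App Engine.)

    # Toy simulation of the self-reinforcing overload described above.
    ARRIVAL_RATE = 70     # new requests per second (a traffic surge)
    CAPACITY = 60         # requests the box can finish per second, unloaded
    PATIENCE = 5.0        # seconds before an impatient user hits reload
    SWAP_THRESHOLD = 200  # in-flight requests at which the box starts paging

    in_flight = 0
    for second in range(40):
        # Paging roughly halves throughput in this made-up model.
        capacity = CAPACITY if in_flight < SWAP_THRESHOLD else CAPACITY // 2
        latency = (in_flight + 1.0) / capacity       # crude queueing estimate
        retries = ARRIVAL_RATE if latency > PATIENCE else 0
        arrivals = ARRIVAL_RATE + retries
        completed = min(in_flight + arrivals, capacity)
        in_flight = in_flight + arrivals - completed
        print("t=%2ds  in_flight=%6d  latency~%.1fs" % (second, in_flight, latency))

Once in-flight requests cross the paging threshold, latency jumps past users' patience, retries double the arrival rate, and the queue only grows; that is the "get the users to back off long enough to ramp up smoothly" problem in miniature.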

2
mwsherman 18 minutes ago 0 replies      
A big, hairy problem that a YC company should take on: modeling complexity and predicting emergent phenomena like this. (Ditto Amazon's outage.)

It wouldn't be just for data centers, but that's a good place to start.

3
untog 2 hours ago 2 replies      
Amazon: this is the kind of explanation blog post we want from you. Please be inspired by it.
4
loceng 2 hours ago 2 replies      
Cascading failures seem to be a recurring theme amongst hosting providers..
5
philip1209 2 hours ago 2 replies      
A 10% credit for SLA violations seems quite generous - credit for 3 days after about 6 hours of downtime
6
cloudwizard 44 minutes ago 0 replies      
It makes more sense for GAE to potentially have cascading failures, since they fail over for you. AWS does not, so it is less vulnerable.
4
SAT: Getting the lowest score possible colinfahey.com
35 points by solipsist  2 hours ago   21 comments top 10
1
tokenadult 14 minutes ago 0 replies      
A lot of the comments here are related to the idea of whether or not the SAT can be regarded as being much like an IQ test. It can, and psychologists routinely think of the SAT that way. Despite a number of statements to the contrary in the various comments here, taking SAT scores as an informative correlate (proxy) of what psychologists call "general intelligence" is a procedure often found in the professional literature of psychology, with the warrant of studies specifically on that issue. Note that it is standard usage among psychologists to treat "general intelligence" as a term that basically equates with "scoring well on IQ tests and good proxies of IQ tests," which is the point of some of the comments here.

http://www.iapsych.com/iqmr/koening2008.pdf

"Frey and Detterman (2004) showed that the SAT was correlated with measures of general intelligence .82 (.87 when corrected for nonlinearity)"

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3144549/

"Indeed, research suggests that SAT scores load highly on the first principal factor of a factor analysis of cognitive measures; a finding that strongly suggests that the SAT is g loaded (Frey & Detterman, 2004)."

http://www.nytimes.com/roomfordebate/2011/12/04/why-should-s...

"Furthermore, the SAT is largely a measure of general intelligence. Scores on the SAT correlate very highly with scores on standardized tests of intelligence, and like IQ scores, are stable across time and not easily increased through training, coaching or practice."

http://faculty.psy.ohio-state.edu/peters/lab/pubs/publicatio...

"Numeracy's effects can be examined when controlling for other proxies of general intelligence (e.g., SAT scores; Stanovich & West, 2008)."

As I have heard the issue discussed in the local "journal club" I participate in with professors and graduate students of psychology who focus on human behavioral genetics (including the genetics of IQ), one thing that makes the SAT a very good proxy of general intelligence is that its item content is disclosed (in released previous tests that can be used as practice tests), so that almost the only difference between one test-taker and another in performance on the SAT is generally and consistently getting all of the various items correct, which certainly takes cognitive strengths.

Psychologist Keith R. Stanovich makes the interesting point that there are very strong correlations with IQ scores and SAT scores with some of what everyone regards as "smart" behavior (and which psychologists by convention call "general intelligence") while there are still other kinds of tests that plainly have indisputable right answers that high-IQ people are able to muff. Thus Stanovich distinguishes "intelligence" (essentially, IQ) from "rationality" (making correct decisions that overcome human cognitive biases) as distinct aspects of human cognition. He has a whole book on the subject, What Intelligence Tests Miss, that is quite thought-provoking and informative.

http://www.amazon.com/What-Intelligence-Tests-Miss-Psycholog...

(Disclosure: I enjoy this kind of research discussion partly because I am acquainted with one large group of high-IQ young people

http://cty.jhu.edu/set/

and am interested in how such young people develop over the course of life.)

2
swordswinger12 44 minutes ago 0 replies      
I honestly don't know whether to be impressed or frightened by the level of obsessive attention to detail on display here.
3
nateberkopec 25 minutes ago 0 replies      
A lot of the SAT-apologist comments here missed one of the most amazing parts of the post:

> "The correlation between [...] combined verbal and math scores and freshman GPA is .52;"

.52! And it's pulled straight from the College Board's Terms and Conditions! And later on, it goes on to explain that high school GPA's correlation is just .54! The graph he produced to visualize the scatter involved with a .52 correlation is both hilarious and horrifying.
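
(For anyone who has not stared at a .52 correlation before, it is easy to generate one synthetically and see how loose it is. The snippet below uses made-up data, not College Board numbers; it only illustrates what a correlation of that size implies.)

    # Synthetic illustration of a ~0.52 correlation (not real SAT/GPA data).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    r = 0.52
    sat = rng.standard_normal(n)                                # stand-in for SAT, standardized
    gpa = r * sat + np.sqrt(1 - r**2) * rng.standard_normal(n)  # stand-in for GPA, standardized

    print(np.corrcoef(sat, gpa)[0, 1])  # ~0.52 by construction
    print(r ** 2)                       # ~0.27: share of GPA variance "explained"

A scatter plot of those two columns looks much more like a cloud than a line, which is presumably what makes the graph in the post so striking.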

4
Steko 34 minutes ago 0 replies      
Spoilers (it's very long): he got one question correct, sadly.

I feel for him given the preparation and detail he put into it. I missed a few SAT questions my senior year but helpfully, The College Board decided to make the test easier the following year and magically recalibrate my scores to perfect. It was too late for colleges to care and the expected thousand girlfriends never materialized but it did make me feel warm and fuzzy inside.

5
mehulkar 47 minutes ago 2 replies      
This reminds me of playing Hearts on a PC. You can try really hard to get a good score, or you can get the worst score possible and win the round[1]. I didn't read the whole article, but it was pretty clear that to get the lowest possible score you have to know the correct response to every question. (A little less black & white for the essay section, but you get the gist.)

Colleges should have a lottery admission available to people who can get a perfect score on the SAT/ACT. Students would inadvertently study harder and learn proportionally more than if they were to study hard enough to get a perfect score.

[1] http://en.wikipedia.org/wiki/Hearts#Shooting_the_moon

6
maeon3 1 hour ago 3 replies      
The SAT is an important measure of a human's intelligence and performance in the real world; that's why the best companies always filter candidates and select the ones with the highest SAT scores.

It's a good thing we have tests like these and pay large sums of money to the people who maintain it. Otherwise interviewing might be totally screwed up and completely fail at its intended purpose in this country.

7
protomyth 12 minutes ago 0 replies      
My high school required students to take the ASVAB in addition to the ACT (SAT wasn't offered, so I had to go to a testing center for it).

I know a guy who honestly tried on the math section. He got the single point for signing his name, but missed all the questions. The first question is "2+2".

8
beatgammit 1 hour ago 3 replies      
I admit, I didn't read the whole thing, it was super long.

I personally think the ACT (and by association the SAT) is a better measure of intelligence than a GPA, and I would love it if companies used that more often as a filtering metric. Obviously, grades in general aren't the best metric, but they're simple and generally pretty reliable.

If I were hiring somebody, I'd look at a combination of test scores and personal achievements. If I were hiring a programmer, I'd filter by test scores, then look at examples of projects the applicant works on/is associated with. Grades aren't as important in my opinion.

If that's the premise of the post, I totally agree. If not, I'm not willing to read a forever long post that forces me to relive the horror of the testing I went through before I got into college.

9
dangoldin 1 hour ago 0 replies      
I've thought of doing this but never had the courage unfortunately. Maybe when I retire I'll have the time to go back and try this out.
10
pbiggar 21 minutes ago 0 replies      
tl;dr: he got one wrong, discussed in section 14.2.4.
5
Dropbox Bug Can Permanently Lose Your Files konklone.com
147 points by joshuacc  8 hours ago   109 comments top 26
1
jpadvo 7 hours ago 11 replies      
It's always important to remember the difference between a syncing service and a backup service. A syncing service sometimes feels like a backup, because you can use it to recover files if a local device is destroyed or lost. HOWEVER, any service capable of syncing files is equally capable of destroying them.

It's important to have an automated one-way backup system that you can manually restore from. Something like Tarsnap [1] looks like a really good possibility (I haven't used it myself, but it seems solid)

[1] http://www.tarsnap.com/gettingstarted.html

2
matt_holden 5 hours ago 3 replies      
(Hi all, I'm the PM for the Dropbox desktop client team.)

I just wanted to let you all know that we take any claims like this really seriously. There aren't any known bugs on the Dropbox side that would cause this, and unfortunately there are potential causes such as hardware errors, filesystem corruption, and other OS issues (including those like http://www.phoronix.com/scan.php?page=news_item&px=MTIxN... which another poster pointed out) that can corrupt data or create zero byte files.

Nevertheless we will continue to look into this just to be sure, and we also work hard to find ways for Dropbox to shield users even when the OS, disk or other components fail (our undelete/file revisions and Packrat are among these).

3
apike 6 hours ago 0 replies      
I have all my data, media, and documents in Dropbox (80GB) and while I have 11255 zero-byte files, none of them are likely Dropbox's fault. Most of them are empty logfiles, .svn and .git noise in old projects, and the like.
4
uptown 7 hours ago 1 reply      
Dropbox for syncing.

Crashplan for backing up.

That combo hasn't failed me yet.

5
goostavos 7 hours ago 7 replies      
How many of you are affected by this? I'd be curious to know how many keep the only copy of their files on Dropbox, or any cloud service for that matter. I've never trusted any service enough to be solely responsible for my important files.

I use Dropbox primarily as a tool to synch content between my desktop/laptop/phone, but any significant change I make to those files gets saved locally 100% of the time.

I am not a very trusting man.

6
acabal 5 hours ago 2 replies      
Never trust anyone but yourself with valuable files like family photos. I don't care if they promise 110% uptime and reliability. The files you really care about are, in very practical terms, your own responsibility regardless of how much money you pay to other people. You can sue them till you're blue but it won't get your files back.

To that extent I keep my important files backed up not just in Dropbox, but also to Crashplan, and to a spinning-rust hard drive especially kept for backups that I protect in my home. That's three points of failure I can recover from if something goes wrong; and if all three fail at once, then I probably have worse things to worry about, like the zombie apocalypse.

7
tnorthcutt 7 hours ago 1 reply      
Wow.

15 Files affected here. I haven't checked to see if any of them are unrecoverable (none of them are vital), but this does seem like a very bad bug.

8
bitcartel 7 hours ago 1 reply      
Although Dropbox isn't a backup service, it offers file versioning, so you would think you could recover from such a situation. Google Drive also offers file versioning, but the devil is in the detail - they do delete older versions:

https://support.google.com/drive/bin/answer.py?hl=en&ans...

For RAID like protection with Dropbox and other providers, you can roll your own BRIC (Redundant Bunch of Independent Clouds). I did this using Tahoe-LAFS to stripe data across storage providers. Requires a bit of set up, has some caveats, but does work. If you use with duplicity you have versioning on top of a distributed, encrypted, redundant store.

http://news.ycombinator.com/item?id=4689238

9
gcv 5 hours ago 0 replies      
No problems on my system, but I make a point of backing up everything sensitive to an external hard drive occasionally, and using Arq to do hourly backups to S3.
10
c16 7 hours ago 1 reply      
Yep, I'm affected by this bug too.. Thankfully nothing important, but makes me wonder if I should leave my truecrypt folder on DropBox.
11
mech4bg 6 hours ago 1 reply      
It surprises me that someone would rely entirely on Dropbox as their backup tool AND primary file location, especially when it happily manipulates the files locally.

I backup to an external HDD and to the cloud and still have the originals (as well as having extra copies again of my music and photos synced across my computers) - the more redundancy you have the better.

It sucks that so many people need bad stuff to happen to them to do something about it - I'm so thankful that storage became so cheap before anything really catastrophic happened to me. I've lost data in the past but it was back when so many things were offline, nowadays it's CRITICAL to have a good backup plan.

12
intellegacy 6 hours ago 1 reply      
Happened to my mother and her friend a week ago, only they lost all their files. Thankfully the files weren't permanently unrecoverable, just deleted.

The trust my mother had in dropbox is now gone, and probably will remain so for the next couple years.

13
duncans 6 hours ago 0 replies      
If you're on windows the following Powershell may help diagnose if you've been affected:

    gci $env:USERPROFILE\Dropbox -r | where { $_.Length -eq 0 }
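
(For anyone not on Windows, a rough Python equivalent of the one-liner above; the Dropbox path is an assumption, and as noted elsewhere in the thread, a zero-byte file is not necessarily Dropbox's doing.)

    # Rough cross-platform equivalent: list zero-byte files under Dropbox.
    import os

    dropbox = os.path.expanduser("~/Dropbox")  # assumed location; adjust as needed

    for root, _dirs, files in os.walk(dropbox):
        for name in files:
            path = os.path.join(root, name)
            try:
                if os.path.getsize(path) == 0:
                    print(path)
            except OSError:
                pass  # broken symlinks, permission errors, etc.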

14
wissler 7 hours ago 1 reply      
At least one of my files was affected and not recoverable (hopefully the zero-size test is reliable).
15
mikegirouard 6 hours ago 0 replies      
> 2 other files (precious family photos) were also affected, but it happened recently enough to be recovered manually by Dropbox engineers.

It's awful that it had to come to that, but it's reassuring that they will be willing to work with you on that level.

16
whichdan 6 hours ago 1 reply      
How viable would it be to keep all of your data in a git repository? Let's say I'm backing up 25gb of music, could I have Dropbox sync everything but the .git folder, and just do a git revert if shit goes down? Will it end up using twice as much space?
17
M4v3R 6 hours ago 0 replies      
Isn't this related to Ext4 bug that was found recently? [1]

[1] http://www.phoronix.com/scan.php?page=news_item&px=MTIxN...

18
piyush_soni 6 hours ago 4 replies      
Dropbox has lost my files as well. I had kept some very important private files there which I just wanted to keep there without the need of updating them/syncing them. After uploading, I removed the local copy of that private top level folder, as I didn't want to keep a local copy in my computer. With the next sync, this STUPID SH*T removed the folder from their server as well, without any warning of any sort! Any system which deletes the files without asking, should just not be used or depended on.
19
larme 7 hours ago 1 reply      
For all the people saying you need a local backup: in this case you not only need to keep a local backup, it also has to be a versioned one.
20
iceron 7 hours ago 0 replies      
FYI: I'm pretty sure Windows users can navigate to their Dropbox folder and type 0 bytes in the search bar to see if they were affected.
21
joejohnson 7 hours ago 1 reply      
This guy didn't backup his computer before upgrading the OS? That was his first mistake.
22
Splines 7 hours ago 0 replies      
My content is fine, I have about 5.5 GB of content, and usually push content using the Windows client.
23
eli 7 hours ago 0 replies      
Huh. Is there an easy way to confirm files affected versus files that were intentionally set to zero bytes?
24
alttab 7 hours ago 0 replies      
Well, ... it is DROPbox...?
25
ssebro 5 hours ago 0 replies      
I've had a similar thing happen to me, but I was able use a python script to pull the (hidden, old) files from a cache on my machine. http://www.dropboxwiki.com/TipsAndTricks/RestoreFilesAndDire...
26
bloaf 6 hours ago 0 replies      
Remember, cloud storage doesn't count as a backup if its the only place you've stored your files.
6
Uber Co-Founder Garrett Camp Launches BlackJet, The "Uber For Private Jets" techcrunch.com
32 points by kloncks  3 hours ago   11 comments top 5
1
jcampbell1 2 hours ago 1 reply      
NetJets already exists, but it is different. It is more like owning without the headache and they also offer fractional ownership. It is damn near impossible to beat NetJets on pricing because they can finance the aircraft via Berkshire Hathaway's AAA.

This seems like a different market. I am not quite sure I understand it. It looks like it is flying private rather than first class on standard routes. The advantage to charters is that you don't have to go through security, and you can fly from nowhere to nowhere direct. I don't understand this business model. It seems like they are flying standard routes.

New York to LA on a private jet is worse than commercial because you are stuck in a 6 foot diameter pipe for 4.5 hours, rather than a first class seat on a normal airplane.

2
jacques_chester 2 hours ago 1 reply      
This strikes me as a good idea, on paper.

If I may play the traditional role of first-comment-is-negative however:

1. NetJets are a fractional ownership company who are probably going to remain more popular at this level of society because they're a full service outfit;

2. Charter companies already run their planes efficiently, so they may be in a position to match on price on some routes;

3. I suspect that for a lot of really really rich people, owning a plane is not a purely financial decision. It's a status signal.

All that being said ... it might just work by creating a new market segment -- people who can afford business or first class but can't quite justify private charters.

I think people on HN sometimes underestimate the importance of the middleman in economic life. You don't have to be B2C or B2B with a direct product or service. Often the most profitable position is as the switchboard.

3
wpietri 1 hour ago 2 replies      
Uber for private jets? No idea how they're going to land the plane out front of my house, but I'll definitely download the app and give it a try.
4
sgold1 2 minutes ago 0 replies      
I thought the point of a private jet was so that you did not have to share it with anyone else.
5
gadders 56 minutes ago 0 replies      
Slightly offtopic:

What I found out the other day is that Jesse Jaymes [1], who sang the song "College Girls are Easy", founded Marquis Jets, a similar-ish service.

He's also married to the billionaire founder of Spanx. Extraordinary career trajectory.

[1] http://en.wikipedia.org/wiki/Jesse_Itzler

7
Attack of the week: Cross-VM timing attacks cryptographyengineering.com
10 points by stalled  1 hour ago   discuss
8
10,000,000,000,000,000 bytes archived archive.org
91 points by cleverjake  9 hours ago   33 comments top 11
1
ChuckMcM 8 hours ago 1 reply      
The internet archive is a wonderful thing. I recovered much of my web site when my server burned in a fire, it was cheaper than the $2400 to try to pull it off the melted hard drives. It has also provided fodder for a ton of lawsuits, of the patent/IP/he-said vs she-said varieties.

Given the latter use, and subsequent 'retro-takedowns' that have occurred on the archive, I wonder if there is a market for 'a copy of the archive right now' which would be hard to retro-actively modify? And I wonder what the legal theory would be around having a tape archive of something that was 'clear' at the time you took it, but then later 'redacted'. Could you use your copy of the unredacted information?

2
pjscott 3 hours ago 0 replies      
That number has a lot of zeros at the end, but really, it's just barely getting started. This may be the greatest electronic civilization archival project in human history, but it is also the smallest and most impoverished. It has a lot of growing left to do.
3
Zenst 8 hours ago 1 reply      
I do wonder what the best form of compression would be and, given they're web pages, whether some form of custom compression optimised for HTML would be useful.

Then again, with that volume of data, what would CERN do for storage/access for a data pool that size and still keep it usable?

Reason being, if you wanted to back that lot up and ship copies for research purposes, then with today's technology the humble memory stick, even the biggest, would fail to even handle the file index at this scale. Scary amount of data. But certainly a data set many would like to play with and try things out on, being the geeks we are.

4
taf2 8 hours ago 3 replies      
Google does something interesting when you use google.com to convert that number to, say, gigabytes, by doing a search such as: 10,000,000,000,000,000 bytes to gigabytes

the result is 9.31323e6

Notice the e... because it's in the same font your eye might miss it like mine and then you'd say to yourself... 10 gigabytes is so small who cares... but if you do the same search again but this time to petabytes you'll realize it was an 'e' in the gigabyte number...

So Google says 8.88178 petabytes; that's a lot.
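
(The conversion is easy to double-check, assuming the binary 1024-based units Google's converter appears to use:)

    # 10^16 bytes expressed in binary (1024-based) units.
    n = 10 ** 16
    print(n / 1024.0 ** 3)  # ~9.31e6 gigabytes: the easy-to-misread "9.31323e6"
    print(n / 1024.0 ** 5)  # ~8.88 petabytes, matching "8.88178 petabytes"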

5
tripzilch 7 hours ago 3 replies      
> On Thursday, 25 October, hundreds of Internet Archive supporters, volunteers, and staff celebrated addition of the 10,000,000,000,000,000th byte to the Archive's massive collections.

So, I bet everyone is dying to know, what was the 10,000,000,000,000,000th byte then?

6
charonn0 4 hours ago 0 replies      
You know it's a great party if the music is performed live by the guy who wrote the book in your field.
7
barbs 4 hours ago 0 replies      
Did anyone else read the title as "10 bajillion bytes archived" or was that just me?
8
z92 4 hours ago 0 replies      
10,000,000 GB archived, sounds more cool.
9
tarice 7 hours ago 0 replies      
Interesting note at the bottom for those who may have missed it (Donald Knuth on the organ is very distracting):

>The only thing missing was electricity; the building lost all power just as the presentation was to begin. Thanks to the creativity of the Archive's engineers and a couple of ridiculously long extension cords that reached a nearby house, the show went on.

10
arasmussen 8 hours ago 2 replies      
"Ten Petabytes (10,000,000,000,000,000 bytes) of cultural material saved!"

Not quite 10 petabytes: (10 * 1024^5) > 10^16

edit: this is wrong, silly me.

11
3rd3 5 hours ago 0 replies      
That is already about five times the identifiable storage capacity of the human brain!
12
China Blocks Web Access to The New York Times After Article nytimes.com
188 points by jimmyjim  14 hours ago   104 comments top 16
1
wilfra 11 hours ago 8 replies      
The truly sad part of this is most Chinese people wouldn't really mind the site being blocked because of this, nor even be all that surprised to learn what the article said. They don't get offended and angered by their government hiding things from them or abusing their power in the same way people in Western countries do, nor do they have a strong desire to learn the truth. They just accept this as the way things are.

Yes, there are exceptions, but those who feel different are in the minority.

2
blrgeek 41 minutes ago 0 replies      
One of the differences between India and China seems to be that in India at least, there's a free 4th estate, and there is no way a Government would be able to block articles like this.

As a matter of fact, through the Right To Information Act, there's an activist who is currently raking up dirt on a whole bunch of politicians serially.

Makes me thankful of the freedoms we enjoy and take for granted!

3
untog 13 hours ago 5 replies      
Well, at least we know the article is telling the truth now.
4
jcromartie 13 hours ago 4 replies      
It happens in pockets of the US, too: Jerry Falwell's conservative Liberty University did a very similar thing.

http://www.washingtonpost.com/blogs/campus-overload/post/fal...

EDIT: microcosm was a poor word choice

5
kaptain 12 hours ago 2 replies      
Can someone post a mirror or the content of the article? I'm in China.
6
codyZ 9 hours ago 1 reply      
I'm not sure what's worse: people thinking that Chinese nationals do not care, that they are unaware, or perhaps both. Most of the people I know in China who are at all remotely informed about anything know not to get their news from regular news channels, particularly anyone skilled enough to set up a Weibo account. Within minutes, most news gets out anyways via Weibo (Chinese Twitter)...

In fact it was two of my Chinese friends who told me about the article this morning....

7
Claudus 13 hours ago 2 replies      
So, combining these two statements, it seems that they blocked both sites 30 minutes before the article was posted in Chinese?

If that's true, it's disappointing the Times didn't do a simultaneous release in anticipation of the block.

"HONG KONG " The Chinese government swiftly blocked access Friday morning to the English-language and Chinese-language Web sites of The New York Times"

"By 7 a.m. Friday in China, access to both the English- and Chinese-language Web sites of The Times was blocked (...). The Times had posted the article in English at 4:34 p.m. on Thursday in New York (4:34 a.m. Friday in Beijing), and finished posting the article in Chinese three hours later after the translation of final edits to the English-language version."

8
zschallz 11 hours ago 0 replies      
Looks like its no longer blocked in China... http://www.greatfirewallofchina.org/index.php?siteurl=www.ny...
9
bluekite2000 12 hours ago 0 replies      
Now I'm anxiously waiting for one written about Vietnam. I lived there for a few years and the two countries closely resemble each other.
10
arbuge 1 hour ago 0 replies      
So it has been, so it always will be. Absolute power corrupts absolutely.
11
thomasfl 7 hours ago 0 replies      
China blocked Norwegian Broadcasting Corporation's sites two years ago when the Nobel commitee awarded Liu Xiaobo the peace prize.
12
bennyfreshness 11 hours ago 0 replies      
Honestly, I didn't really see much wrongdoing on the part of the leadership, namely Wen Jiabao, as described in the article. It's mostly relatives taking advantage of political connections. It's a broken system, where the state is too closely intertwined with business. Hopefully the rumors are true and the new ruling coalition will make some progress in liberalizing the economy, modeling it after Singapore's.
13
jonathanyc 1 hour ago 0 replies      
Loving the racist comments on HN these days.
14
ethana 9 hours ago 0 replies      
The Times should not only be blocked in China. Seriously.
15
duxup 12 hours ago 0 replies      
I was reading that article last night thinking... this is gonna get blocked.
16
udonmai 13 hours ago 0 replies      
I just want to say ... ''
14
U.S. Arrests Paul Ceglia for Multi-Billion Dollar Scheme to Defraud Facebook betabeat.com
126 points by iProject  11 hours ago   64 comments top 12
1
grellas 9 hours ago 3 replies      
A few thoughts:

1. When Ceglia first filed his claims, I called it a "lawsuit full of holes [that was] built up by sensationalist reporting into a supposed major threat to Facebook and to Mr. Zuckerberg" and concluded that, "in the courts, this thing is going nowhere." (http://news.ycombinator.com/item?id=1537158) My sense of this had nothing to do with any fraudulent tampering with evidence as alleged now in the criminal prosecution but instead with the whole smell of the thing: a flaky 2-page contract with basically incoherent terms used as a basis for a lawsuit brought by a backwater lawyer who drafted a complaint that would have been an embarrassment to a first-year law student. It looked like a joke on the face of it, notwithstanding that a small-town judge had initially entered a TRO based on the filing.

2. Then, in April, 2011, this case looked like it had taken a major turn: Ceglia had dumped his original lawyer and retained the prestigious firm of DLA Piper; he also produced a mountain of emails "documenting" that he and Mr. Zuckerberg had, in effect, entered into a legal partnership giving him a major ownership piece in the FB venture; he also (via the lawyers) put together a compelling story in his complaint making it appear that FB and Mr. Zuckerberg were in deep legal trouble concerning his claims. (Here is my comment at the time: http://news.ycombinator.com/item?id=2438063) Once again, there was a sensationalist wave of reporting across the web rejoicing about how Mark Zuckerberg was about to get his comeuppance.

3. Since that time, through the work of very able lawyers for Facebook, the Ceglia case has been progressively torn to shreds in the federal court to which it had been removed and, as the case has disintegrated, it has drawn progressively less interest (the DLA firm withdrew at the first sign of serious trouble). Indeed, without the dramatic turn of a federal criminal indictment, I doubt that it would done more than draw a few yawns as it eventually headed to the judicial graveyard where most flaky cases ultimately find their rest.

4. The lesson here is how prejudice and crowd-think can dramatically affect and distort perceptions. When someone takes on the role of villain (as Mr. Zuckerberg has in some circles), there are those who so desperately want to see him torn down that they will suspend their better judgment just to see it happen, whether he was right or wrong in what he had done. This is not to excuse him in things he may have done wrong in other contexts, but he had done nothing wrong here and it is just amazing to me how many people were willing to take it as a given that he had even with little or no evidence to back it up.

5. The other (major) lesson here is that there are serious limits to playing fast and loose with the courts. It is true that there is much abusive litigation but there is obviously a line that cannot be crossed without inviting horrific consequences. It doesn't happen often enough that abusive litigants get what they deserve but, when it does occasionally happen, it is very nice to see. At least it sets an outer bound on what people can do to abuse one another in the courts.

2
nostromo 10 hours ago 4 replies      
I was struck when we incorporated our company and raised financing how much we still use ink and paper to keep track of things like corporate ownership. When signing so many documents I wanted to seal them with paraffin wax and send them to our lawyers via carrier pigeon.

In the case of Facebook, a multi-billion dollar company, it seems amazing that someone can come out of the woodwork and with a little effort in document tampering cause such a hullabaloo. It seems like too important an issue to be in part determined by "spacing, columns, and margins of page one of the Alleged Contract."

Of course, I don't know of a better system, just that the current system seems archaic. (You probably couldn't create a centralized "contract bureau" in the federal government, because many contracts are private, until someone sues.)

3
kfury 10 hours ago 1 reply      
I'm disappointed by all the folks calling mail fraud a 'loophole'. Mail and wire fraud are the ways people prosecute domestic 419 (Nigerian prince) scams and other confidence games.

This case shouldn't be given a free pass just because the party intended to be deceived is the judicial system rather than the mail recipient.

4
leoh 11 hours ago 8 replies      
Did this line strike anyone else as pathetic?

In today's press release, USPIS Inspector-in-Charge Randall C. Till added: “When Mr. Ceglia allegedly decided to take advantage of Mark Zuckerberg and Facebook, he underestimated the resolve of the Postal Inspection Service to bring him to justice for illegal use of the U.S. Mail.”

Ceglia is probably in the wrong and unethical. But the self-righteousness and pomposity of the USPS? Really?

5
defen 9 hours ago 2 replies      
Perhaps a stupid question, but if this guy had hand-delivered all the documents to the court, he'd be totally in the clear with regard to criminal activity? Why isn't he being charged with providing fabricated evidence to a federal court?
6
unreal37 9 hours ago 0 replies      
I wonder what will happen to his lawyers. And I wonder what the judge thinks.

Ceglia has had a couple of lawyers (Argentieri and Boland) who have been filing discovery motions, and really aggressively going after Facebook and Zuckerberg. If you read only a couple of their filings, it seems really personal for them. They are absolutely offended at Facebook's lawyers behavior, really aggressive in the wording of their filings, and going so far as to call out FB lawyers by name and complain about them.

And now it turns out the US Govt found the real contract from email archives and it doesn't match at all.

I'd like to see the lawyers punished somehow (reprimanded) for pushing this obvious fraud through the court system so hard and for so long.

And if I were the judge, wasting 2 years of his life on a sham case like this, I'd be furious. They were still submitting filings even as of yesterday!

7
redthrowaway 8 hours ago 3 replies      
Can anyone give a reasonable argument for why mail fraud carries a maximum sentence of 20 years? That seems ludicrous in the extreme. Why does mail fraud incur a greater penalty than rape, kidnapping, and most forms of murder?
8
fleitz 11 hours ago 1 reply      
Mail fraud... looks like some AG spent too much time watching "The Firm"
9
taylorbuley 5 hours ago 0 replies      
I had dug up his arrest record ~day one -- Not pretty. Interestingly, I had surprised a previous lawyer, who apparently did not do the same before taking on the case.

http://www.forbes.com/sites/velocity/2010/07/28/facebooks-ma...

10
seivan 8 hours ago 0 replies      
Is it me, or does it seem like his combined stupidity and incompetence (two different things) is what caused him to not realise the shit he pulled off leaves a trace?

It takes a non-developer scumbag thief "wannatrepreneur" to not have the God damn decency to read up on how email works before trying a stunt like this.

11
timpeterson 9 hours ago 0 replies      
But Zuckerberg did meet with this Ceglia guy, right?
12
kennethcwilbur 9 hours ago 1 reply      
If only tl;dr were in Latin, lawyers would understand it
15
Github Notifications API github.com
50 points by joeyespo  7 hours ago   4 comments top 4
1
witten 59 minutes ago 0 replies      
I found it interesting that this is ostensibly a notifications API, and nowhere is there any way to actually get notifications except by polling. I definitely understand why a push notification API would be a bad idea, but is polling with Last-Modified really the state of the art in notification APIs? What if you want to find out almost instantly when an event occurs, and not just at the granularity of your poll interval? At the API level, what do people use for this? Comet-style long polling? WebSockets? pubsubhubbub?
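
(For what it's worth, a minimal sketch of the polling pattern being described, against the notifications endpoint. The token is a placeholder, and the header and field names are as documented for the v3 API at the time of writing; treat the details as assumptions and check the current docs rather than this sketch.)

    # Polling GitHub notifications with Last-Modified / If-Modified-Since.
    import time
    import requests

    TOKEN = "..."  # placeholder for an OAuth token with the notifications scope
    URL = "https://api.github.com/notifications"

    last_modified = None
    while True:
        headers = {"Authorization": "token %s" % TOKEN}
        if last_modified:
            headers["If-Modified-Since"] = last_modified
        resp = requests.get(URL, headers=headers)

        if resp.status_code == 200:  # something changed since the last poll
            last_modified = resp.headers.get("Last-Modified", last_modified)
            for item in resp.json():
                print(item["subject"]["title"])
        # A 304 means nothing new; either way, wait as long as the API asks.
        time.sleep(int(resp.headers.get("X-Poll-Interval", 60)))

So even with the Last-Modified optimization, "find out almost instantly" still reduces to choosing a short poll interval, which is the commenter's point.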
2
bryanh 5 hours ago 0 replies      
Shameless plug!

We just added support for this on Zapier: http://zapier.com/zapbook/github/ We'll add some links to specific templates, but some good examples of ways to use it might be:

    * Send SMS for specific mentions (we support filters)
    * Power your own RSS feed (for Google Reader?)
    * Dump to chat programs (Hipchat, Campfire, etc...)
    * POST a JSON payload to some URL
    * and many more...

If you already have a Zapier account with a GitHub auth, you might need to re-authenticate to catch the notifications scope.

3
agazso 4 hours ago 0 replies      
Aside from the shameless plug, what will you use it for?
4
ewolf 4 hours ago 0 replies      
Finally! I've been waiting for this :) In case any of you likes to get notified about new issues, issue comments etc. on your homepage (like I do), you might want to have a look at http://updatified.com/ ;)
16
Why Are There So Many C++ Testing Frameworks? googletesting.blogspot.com
4 points by cleverjake  51 minutes ago   discuss
17
Ubuntu Core On The Nexus 7 jonobacon.org
60 points by vectorbunny  8 hours ago   18 comments top 7
1
listic 3 hours ago 2 replies      
Is Nexus 7 the first hardware of choice for Ubuntu developers?

I would love for them to start with the Microsoft Surface (with a keyboard standard on some models, it would kinda be the logical choice) but I understand that the Nexus is a) earlier b) cheaper c) you have to start with something.

Right?

2
6ren 1 hour ago 0 replies      
This is the path to desktop disruption, but note: the Nexus 7 lacks HDMI out.

How does performance compare - what's an equivalent netbook/laptop/desktop? BTW: the Nexus 10 is looking good, Cortex-A15 http://www.phonesreview.co.uk/2012/10/26/impressive-google-n...

3
dpearson 6 hours ago 3 replies      
I'm a little surprised that their solution for installation is to replace Android. I would think dual-booting would be easier to get developers to commit their own devices for testing.

Is dual-booting not possible/really difficult on the Nexus 7?

4
kwijibob 3 hours ago 0 replies      
I would love for this to be a success. I'm a long-term Linux and Ubuntu user. However, I do use Gnome 3.x and not Unity.

My doubts are that not even the Ubuntu team will be able to reach the level of desktop polish that Google have achieved with Jelly Bean.

I would prefer Ubuntu to be better as it is a more open stack.

5
soapdog 2 hours ago 0 replies      
Ubuntu runs on the HP TouchPad.
6
mtgx 6 hours ago 3 replies      
I think their efforts would be better spent on the upcoming Nexus 10. Larger screen to fit Ubuntu, and a much better CPU and GPU, too.
7
somesaba 4 hours ago 0 replies      
this is my dream come true!
18
Steam for Linux Beta steamcommunity.com
202 points by futureperson  15 hours ago   58 comments top 8
1
presidentender 13 hours ago 3 replies      
I installed Windows specifically to get Steam (and Visual Studio). If Steam works on Linux, I have very little need of Microsoft products outside the office.
2
rowsdower 14 hours ago 1 reply      
I'm confused. What was the point of posting this link? This is just the Steam group for the beta. It doesn't include any new information (or a way to get in the beta) that I can see.
3
pja 14 hours ago 0 replies      
Looks like they might be going to announce something at the upcoming Ubuntu conference: http://cdr.thebronasium.com/sub/17746
4
aristidb 14 hours ago 1 reply      
Can somebody explain what precisely this page means?
5
iddqd 4 hours ago 0 replies      
The actual beta sign up page was just posted to the group.

http://www.valvesoftware.com/linuxsurvey.php

6
jamesmiller5 13 hours ago 5 replies      
I'm quite surprised the domain is "steamcommunity.com" and not "community.steam.com"; it made me hesitant to enter login details.
7
jiggy2011 14 hours ago 1 reply      
Ok, so I joined the group, but I can't get into group chat from my Linux box.
8
rtcoms 10 hours ago 2 replies      
Wouldn't Kickstarter be perfect for companies creating games for the Linux platform?

This way they will know which games people are really interested in, and I also think that people will gladly support those Kickstarters. Overall, much less risk on the investment.

19
3.6 Million Tax Payers Exposed in South Carolina Cyberattack securityweek.com
13 points by techinsidr  3 hours ago   1 comment top
1
ktavera 34 minutes ago 0 replies      
I live in SC and in the last few months complained to the SC DOR numerous times that their online "ePay" system was horribly outdated and a huge security hole. I accidentally stumbled on a possible SQL injection exploit while I was making a tax payment last month and reported it to them... No response.

This is just a prime example of the incompetence of state government IT departments. They likely paid millions of dollars in 1995 for this system to be developed (written in classic ASP and throws very detailed errors to users all the time) and just never thought to update it since? This system probably processes hundreds of millions of dollars in tax payments every year. You would think this would be the one system the state government would want to keep up to date and secure.

Also they had my credit card on file so it was one of the ones compromised that was very likely only "encrypted" with standard DES encryption.

20
Beware the Alan Turing fetish jgc.org
97 points by KC8ZKF  12 hours ago   55 comments top 11
1
SiVal 8 hours ago 1 reply      
Turing was extraordinary, but was he better than Einstein and von Neumann and Claude Shannon and Richard Feynman combined? Because in roughly the same era that Turing would have been hatching England's Silicon Valley, Einstein, von Neumann, Shannon, Feynman and others like them were in New Jersey, along with all the other smart guys at Princeton, combined with the booming US economy, all the defense spending, with AT&T, joined by Bell Labs, lasers, the transistor...

...And still Silicon Valley ended up in the orchards of California, not the New York suburbs of New Jersey.

Then there's that smelly hippy Steve Jobs, nobody's idea of a brilliant information theorist, who revolutionizes industry after industry. Without Wozniak, he would have done what? Founded Jamba Juice? Had an art gallery in Carmel? Who knows?

Sure, maybe a butterfly in Brazil can produce a hurricane in Florida, but speculating that a specific butterfly, even an unusually large one, might have produced a hurricane eventually is absurd.

2
w1ntermute 10 hours ago 1 reply      
> This brilliant, charming, odd, driven workaholic could have turned the old industrial heartlands of Lancashire into a British Silicon Valley and perhaps America's brightest and best would have flooded east across the Atlantic.

This is just another example of the British yearning for bygone days in which their country was the center of the world.

3
damienkatz 6 hours ago 1 reply      
Turing is amazing, but I've often felt the Turing Test had no real scientific or mathematical basis, and if proposed by a far less famous and influential person, would be largely (and rightly) ignored.
4
jimhefferon 10 hours ago 3 replies      
Often in the minds of people who are not involved in an area, that area becomes epitomized by a person. For science it is Albert Einstein. In the US Civil Rights movement it is Martin Luther King. In computers it is becoming Turing.

Turing is pretty good, I think.

6
scorpioxy 9 hours ago 0 replies      
I know of Alan Turing after reading the book by Hodges and a theoretical computation course at college. Definitely a brilliant man but I agree with what is said in this article.

Many, many people contributed to the computing world we have today whether it is theoretical or practical and we will never know what might have happened with Turing alive. Why not talk about von Neumann as the father of the modern computer? Simply because even though one man might have had a big influence on a specific field, many more after him would have had a big impact and nobody can predict the different outcomes. Standing on the shoulders of giants and so on..

7
msutherl 7 hours ago 0 replies      
A large contributing factor to the current trend of Turing fetishism is the fact that so many people in a range of fields read "Computing Machinery and Intelligence" in University as an introduction to the problem of artificial intelligence. It was the first paper assigned in at least 3 of my classes.

For many people, this is the first time that they consider such issues and so Turing is solidified in their mind as: 'old mathematician guy who invented computers and the idea of artificial intelligence'.

Given the authoritative atmosphere of university and an absence of any of the surrounding history, it's easy to imagine that people would get attached to a singular figure. And indeed they have.

8
surfingdino 5 hours ago 0 replies      
While we praise Alan Turing, we should not forget the efforts of Rejewski http://en.wikipedia.org/wiki/Marian_Rejewski Różycki http://en.wikipedia.org/wiki/Jerzy_Różycki and Zygalski http://en.wikipedia.org/wiki/Henryk_Zygalski
9
throwaway312 32 minutes ago 0 replies      
Notice that neither JGC nor any of the responders here have mentioned Alonzo Church, the inventor of Lambda Calculus, and a man whose contributions to computer science were arguably every bit as fundamental as Turing's.

My prediction is that if Turing ever gains the sort of widespread recognition that JGC wants and that Turing probably deserves, Church will become the Monsieur Curie of their intellectual marriage who for reasons of political correctness will best be left unacknowledged.

10
akavel 10 hours ago 0 replies      
[2011]...
11
batgaijin 11 hours ago 5 replies      
If I was considering computer science in the UK during the 50's, I would have immediately left after Turing's death.

That man could have led a revolution, could have started one of the most influential and powerful corporations on this planet. He had that potential. To say otherwise is disingenuous.

21
Developer Auction Racks Up $78 Million More Bids, Expands To LA techcrunch.com
62 points by allangrant  9 hours ago   21 comments top 9
1
daeken 9 hours ago 2 replies      
> Not that DevelopersAuction has been sticking too closely to its original parameters anyway. Ostensibly the company solicits developers who are graduates of Stanford or MIT and/or have worked for companies like Google, Facebook or Apple. But at least one Chicago based developer was accepted even though he didn't match those criteria.

I was accepted to Developer Auction (not looking for work, but I always like to know what my options are) despite having no college experience (let alone a degree), indicating I lived in the middle of nowhere in CT and was only interested in remote work, had never worked for one of the 'big startups', and stating I was only looking for security roles. No idea how I was accepted.

I received some interest -- e.g. "Are you sure you don't want to move to SF?" -- but no offers. I'm assuming it was primarily due to the remote requirement given that.

2
dangero 8 hours ago 1 reply      
I'm trying to understand 78 Million in bids. Let's say 10 companies make a bid of 150K to the same single developer, then that is 1.5 Million in bids right?
3
siavosh 8 hours ago 0 replies      
The article is a bit misleading. Offers generally mean the interviews are done and the company has found a match. But on the homepage it says all offers are non-binding and made before any interviews. It seems they can be used as just a way for the recruiter/company to get someone's attention.
4
hippich 1 hour ago 0 replies      
I was accepted despite my out-of-states degree and no experience working in big startups. And unlike daeken, I was never featured in any publications, nor do I have an impressive resume :) I am wondering what criteria I did fit...

Anyway, got zero offers. Oh well. :)

5
pc86 9 hours ago 2 replies      
I still don't understand why they're only limited to allowing developers from a few select employers and Stanford/MIT grads.
6
timdorr 9 hours ago 1 reply      
I put myself up on there even though I'm based in Atlanta. (I'm a friend of Allan Grant, btw) Any thoughts about telecommuting positions or general remote workers? Are you avoiding those kinds of positions currently?
7
xoail 9 hours ago 1 reply      
I wonder what "expands to LA" really means. I got 2 offers from startups that are based in LA already.
8
frere 2 hours ago 0 replies      
Hmm, I didn't know Feirstein (usell.com, money4gold) was affiliated with this... red flags galore.

http://www.corporationwiki.com/Florida/Fort-Lauderdale/doug-...
http://www.sec.gov/Archives/edgar/data/1271075/0001116502090...

9
RMNH 9 hours ago 0 replies      
Nice job)
22
Yahoo to ignore IE10's "Do Not Track" ypolicyblog.com
67 points by jakeludington  4 hours ago   79 comments top 15
1
ben0x539 3 hours ago 6 replies      
I think it is perfectly reasonable to assume that users intend to not be tracked by the very large number of third parties they are involuntarily exposed to on the web.

Yahoo are resorting to this whole buzzword-laden meaningless rhetoric around ~user experience~ and ~value proposition~. That just reinforces the impression that the only reason anyone was prepared to go along with DNT was that they assumed that 99% of users weren't going to be in a position to express their ~user intent~ to not be tracked. Since, you know, most people have better things to do than to learn how to teach their computer about obvious preferences like "please don't spy on me".

Microsoft is simply making the benefits of the DNT scheme more accessible to its users. It's pretty telling that Yahoo is already backpedaling from respecting the users' intent, faced with the possibility that more than an insignificant fraction of users might actually be enabled to benefit from DNT by this decision.

(Edit: Personally I think rather than squabbling about DNT, browser vendors should be taking much more aggressive, technical steps to make tracking users harder, instead of having a default configuration that stops just short of transmitting the user's SSN via request header. Disabling features like user agent and referer headers for and quickly discarding cookies from untrusted (by individual user "intent", not based on SSL certs or anything) hosts would be a start.)

2
powertower 3 hours ago 2 replies      
> Recently, Microsoft unilaterally decided to turn on DNT in Internet Explorer 10 by default, rather than at users' direction.

> It basically means that the DNT signal from IE10 doesn't express user intent.

Blatantly false. Not only are you presented with the option to turn off DNT on first use (that takes up the entire screen), but I'd imagine users would choose to have advertisers track them about 1-10% of the time if made to choose. So a default On setting does represent the consumer to a degree that you can't ignore.

3
pbiggar 1 hour ago 2 replies      
In all the comments here, I can't find anybody who thinks that Yahoo is doing the right thing here. Well, I do. I think what Yahoo is doing is the right thing for them, for their users, and for the web.

If the web is going to be ad supported, then it's going to have to be targeted advertising or it's going to be both shit and annoying. Remember "punch the monkey", or ads that took over the entire screen? Now, through tracking, we are able to get really really good ads - things you might even be interested to see and buy.

If DNT was supported by everybody and on by default, that's the end of online advertising in its current form. So we can choose from the following options: ignore DNT, ignore DNT for IE10, or go back to non-targeted advertising.

Let's assume the last of those, which leads us to the following options: revert to shit ads, make users pay for content directly, or pack up your content-producing company and go home. None of these are best for the users or the web.

The DNT founders know this - that's why it was default null in the spec and in Firefox. IE10 is doing this deliberately even though they know it can't work, and there are choices here: they are trying to improve the world but are incredibly wonderfully naive, they want to undermine Google, or they want to undermine DNT. I'd love to believe its the first, but no-one has ever claimed that about MS.

4
jtchang 4 hours ago 2 replies      
Took me a second to figure out this whole DNT business.

So basically it is just an HTTP header your browser sends to the server that tells it not to track. Seems kind of like the wrong way to do it. If I was some nefarious website, wouldn't I just straight up ignore it? There isn't any incentive for me to not track a user. In fact, aren't a lot of companies built around advertising based on the fact that you CAN track users?

More info here: http://donottrack.us/
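
To make that concrete, here's a rough sketch (plain Python WSGI; the cookie name and response body are made up) of a server that chooses to honor the header - and that choice is the whole problem, since nothing but goodwill makes it do so:

    def application(environ, start_response):
        # Browsers that opt out send the header "DNT: 1",
        # which WSGI exposes as environ['HTTP_DNT'].
        do_not_track = environ.get('HTTP_DNT') == '1'
        headers = [('Content-Type', 'text/plain')]
        if not do_not_track:
            # Purely voluntary: a "nefarious" site can simply skip this check.
            headers.append(('Set-Cookie', 'track_id=abc123'))
        start_response('200 OK', headers)
        return [b'Hello\n']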

5
nanoanderson 3 hours ago 1 reply      
Maybe Yahoo should respect IE10's DNT defaults, but display huge modal screens that tell the user "Your browser vendor is inhibiting our value proposition. Please allow us to track your behavior for maximum value extraction."
6
theevocater 3 hours ago 2 replies      
I know people celebrated Microsoft's decision to do this in IE10, but this is what many of us were saying would happen. Support for the Do Not Track flag was always tenuous, so flagrantly ignoring the spec (which indicates that default-on is wrong) was always going to cause companies to ignore the flag completely.

Regardless, this whole thing is silliness in the extreme. I wonder if this means yahoo is going to start allowing requests with the evil bit set as well :).

(http://en.wikipedia.org/wiki/Evil_bit)

EDIT: also, didn't IE revert this change?

http://blogs.technet.com/b/microsoft_on_the_issues/archive/2...

Well ... Sort of

> DNT fits naturally into this process. Customers will receive prominent notice that their selection of Express Settings turns DNT “on.” In addition, by using the Customize approach, users will be able to independently turn “on” and “off” a number of settings, including the setting for the DNT signal.

7
wonderyak 3 hours ago 1 reply      
> In our view, this degrades the experience for the majority of users and makes it hard to deliver on our value proposition to them.

I know Yahoo! has to maintain their business which depends on things like ads and content delivery; but to say it with such sterile marketing jargon just makes me nauseous.

How about you guys do what everyone else has had to do since the beginning: create something awesome and let people use it with a minimal barrier to entry. Right now, Yahoo! is like a giant skyscraper tenanted only by iPhone case kiosks.

8
ajays 2 hours ago 0 replies      
The only reason Microsoft is making DNT the default is that it will directly impact Google's bottom line (and Microsoft loses money in its online division, so they won't hurt as much). Since when did Microsoft really start caring about the users?
9
donohoe 3 hours ago 1 reply      
I left this as a comment on their blog which I assume will never be approved:

  "We fundamentally believe that the online 
experience is better when it is personalized"

Um, doing so is not impeded by DNT, as that does not relate to ads. It's a bit of a white lie to imply that it is, the way you've worded that first paragraph.

  "It basically means that the DNT signal 
from IE10 doesn't express user intent."

Actually, I think it does - do you think your average person on the street wants targeted ads? Seriously - who is writing this?

  "In principle, we support “Do Not Track” (DNT)"

In principle China, Syria, Iran etc support Human Rights...

  "Ultimately, we believe that DNT must map to 
user intent " not to the intent of one
browser creator, plug-in writer, or third-
party software service."

Again - seriously - what reality are you a part of?

I had hoped for a Yahoo turn-around of sorts, I really did. You've lost me.

10
Steko 2 hours ago 1 reply      
Am I mistaken, or did the FTC not just fine Google $22.5 million over the exact same behavior, affecting a considerably smaller share of web users?

If I'm Microsoft, I am making a very public appeal to the FTC over this on Monday morning. $22.5 million is a lot more money to Y! than to Google. And as wary as consumers are of Microsoft long term, they've got to degrade the Google/Firefox brand to start gaining any traction. Why not go full bore on tracking/creepiness?

11
fitztrev 3 hours ago 0 replies      
Issue aside, I'm curious about this site itself. The first thing I noticed is that they're running a pretty old version of WordPress. They're on 3.0.3 (December 2010) when the latest version is 3.4.2. For security purposes, I'm surprised they don't stay on top of that.

Also I was really confused if this was an official Yahoo site. No real mention anywhere on it. After some quick digging, it appears to be. But I'm surprised it's not hosted under the Yahoo.com domain somewhere.

12
tvladeck 4 hours ago 4 replies      
The key thing to understand is that if IE10 did not have DNT enabled, the default setting would be _just as arbitrary_ and would therefore still not "map to user intent", in their words. There has to be a default in one direction or the other.

That, and many users will use IE10 knowing that it ships with DNT pre-enabled. To ignore this is totally immoral and unethical. This is totally shameful.

13
teuobk 3 hours ago 0 replies      
This is surprising, given that Microsoft is the provider of ads on Yahoo Search (and perhaps other properties).
14
leeoniya 4 hours ago 2 replies      
To be honest, I actually don't care if Yahoo tracks me on Yahoo properties - in fact, I expect them to. What I DO NOT want is for them to track me across the entire internet through injected javascript, iframes and dedicated tracking domains that serve same-origin analytics scripts from hundreds of sites - that is unethical.

Currently using adblock plus, noscript and ghostery on my FF setup with specific additional controls in ABE for Twitter and FB domains.

15
mikegirouard 1 hour ago 0 replies      
No comments on the post? Hmm... I'm curious what they are getting but haven't approved.
23
Libtorrent experience - the poor state of async disk IO rasterbar.com
75 points by willvarfar  11 hours ago   20 comments top 9
1
evmar 9 hours ago 1 reply      
(Copy'n'paste of reddit comment:)

While it's true that the Windows API seems to be the best thought through, I was surprised to learn that the implementation may randomly fall back to synchronous IO in unpredictable ways, which (depending on the app, but likely for something that's attempting to juggle a lot of work like a bittorrent implementation) means you need a thread pool anyway.

http://neugierig.org/software/blog/2011/12/nonblocking-disk-...

2
arielweisberg 9 hours ago 1 reply      
My experience with Linux and buffered IO (ext4) from multiple threads has been very positive. The only beef I have is that you can't prevent the data you write/read from polluting the cache without resorting to madvise, which isn't available from Java. I don't usually care about the contents of the page cache, so it isn't a showstopper.

You can do hundreds of thousands of random reads a second from a single thread submitting tasks to a thread pool on an in-memory data set. You can do tens of thousands of reads on a larger-than-memory data set with an SSD, and I was able to get the advertised number of 4k IOPS out of the SSD (Crucial m4) with an Intel i5 desktop CPU.

I frequently have to multiplex data into a single file as it becomes available (to keep the IO sequential for the disk and filesystem); I always use a thread per file, and I got up to 250 megabytes/sec on a 4-disk RAID-0. I don't currently have a use case for more sequential write throughput than that, so I haven't tried attaching more disks, and SSDs weren't as fast or common at the time.

My reading of buffered IO in Linux is that it translates to a combination of page cache interactions and async IO under the hood so we are technically always using async IO.
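
To illustrate the pattern being described - a pool of worker threads doing positioned reads against one descriptor - here's a minimal sketch (Python for brevity; the parent was working from Java, and the block size and worker count are arbitrary):

    import os
    from concurrent.futures import ThreadPoolExecutor

    BLOCK_SIZE = 4096

    def read_block(fd, offset):
        # Positioned read: no shared file-position state, so many
        # threads can safely share one file descriptor.
        return os.pread(fd, BLOCK_SIZE, offset)

    def random_reads(path, offsets, workers=8):
        fd = os.open(path, os.O_RDONLY)
        try:
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(lambda off: read_block(fd, off), offsets))
        finally:
            os.close(fd)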

3
wizard_2 9 hours ago 1 reply      
I believe even NodeJS's libuv came to the same unfortunate conclusion for non-Windows hosts.

https://github.com/joyent/libuv
http://nikhilm.github.com/uvbook/filesystem.html

4
j_s 8 hours ago 0 replies      
Alan McGovern chose a compromise for MonoTorrent, using async io but processing all the results in a single thread.

The Evolution of MonoTorrent - FOSDEM 2010

http://www.youtube.com/watch?v=TbhKpeqIy8o&t=10m10s

Simplified Threading API

http://monotorrent.blogspot.com/2008/10/monotorrent-050-good...

5
mattgreenrocks 9 hours ago 1 reply      
Nice write-up. I suspect the poor implementation of async I/O hints at how rarely it is actually used in practice. Signal handling definitely feels like the wrong design here, especially for a library author.

I'm also not surprised that Windows fared better here; with IOCP they had a chance to redo async I/O completely.

6
dclusin 9 hours ago 0 replies      
Another good read about increasing disk IO throughput and reducing latency can be found on the Mechanical Sympathy blog, in the post about the single writer principle: http://mechanical-sympathy.blogspot.com/2011/09/single-write...

We ran into issues where lots of threads attempting disk writes were causing latency problems. We were able to get around this by having a single thread dedicated to disk IO.
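
A minimal sketch of that single-writer arrangement (Python; the queue-plus-sentinel shutdown is just one way to do it): producers hand buffers to a queue, and exactly one thread ever touches the file, so the writes stay sequential.

    import queue
    import threading

    def start_single_writer(path):
        q = queue.Queue()

        def writer():
            with open(path, 'ab') as f:
                while True:
                    buf = q.get()
                    if buf is None:      # sentinel: shut down
                        break
                    f.write(buf)         # only this thread writes, so IO stays sequential

        t = threading.Thread(target=writer, daemon=True)
        t.start()
        return q, t

    # Producers call q.put(b'...'); to stop, q.put(None) and t.join().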

7
freyrs3 6 hours ago 0 replies      
libeio makes some strides in this direction.

http://software.schmorp.de/pkg/libeio.html

8
gwern 4 hours ago 1 reply      
> The aio branch has several performance improvements apart from allowing multiple disk operations outstanding at any given time. For instance:

This sounds like a bad idea. If the improvements aren't tied to async, why weigh them down with the async albatross instead of merging them into the mainline?

9
VMG 9 hours ago 0 replies      
the line breaks make it very difficult to read for me - here's a copy of the text: https://gist.github.com/3960408
25
Jessica Livingston: What Goes Wrong foundersatwork.com
457 points by nswanberg  1 day ago   80 comments top 26
1
nadam 18 hours ago 4 replies      
Two of the problems can be easily avoided: 'cofounder disputes' and 'investors' are not problems in the case of single-founder bootstrapped startups. :)

One thing I would add to the topic of 'determination': Are we speaking about determination to make a startup successful or determination to try out as many ideas in our life as possible, learn as much as possible and try to make at least one startup successful in our life?

I mean first we have to analyze what we optimize for:

If we optimize for the success of a given startup then it is obvious that the optimal strategy is to never give up on the startup.

If we optimize for the success of a person in his lifetime then it is different. In this case we have to examine all kinds of opportunity costs. Could it be a better strategy to abandon a startup very quickly when it seems that people do not want the product, so that we can start many more startups in our life, to increase the chance of at least one becoming successful?

2
ericdykstra 1 day ago 2 replies      
Here's a link to the video version of Jessica Livingston's talk from Startup School: http://startupschool.org/2012/livingston/

It really is great! I encourage everyone on HN to read or watch it if they have not already. As a not-yet founder, I found it has a lot of interesting advice that I don't think is documented anywhere else as concisely and practically as it is here.

3
btilly 1 day ago 1 reply      
The link assumes that you know who is speaking. But it doesn't give that critical piece of information.

Founders at Work was written by Jessica Livingston, who is a cofounder of ycombinator. She's married to Paul Graham. But do not think that she's in there just because of the personal connection. Her book is truly excellent. And in previous articles I've seen Paul say that the #1 thing that they want in a founder is determination, and the person that they rely on to spot it during the interview is Jessica.

4
cs702 1 day ago 1 reply      
This is excellent -- and very much in the spirit of Charlie Munger's often-repeated saying: "All I want to know is where I'm going to die, so I'll never go there."[1]

--

[1] http://www.pbs.org/wsw/news/fortunearticle_20031026_03.html

5
pmarca 1 day ago 0 replies      
This is extraordinarily accurate, based on my experience.
6
tarr11 1 day ago 1 reply      
Worth the read just for this:

The pizza place was very confused by this, but they send the pizza guy without a pizza, Kyle answers the door, and the pizza guy says, "The site is down."

7
mck- 21 hours ago 0 replies      
The Mixergy interview with PG (Feb 2010)[1] mentions Jessica working on a second edition of Founders at Work.

PG: "You know, that is her deepest wish. If she is watching this, she'll be laughing so much at this point because that's what she would like the most too to be able to spend more time on the new version of Founders at Work. There's a new, she's working on a new edition, with a bunch of new interviews."

Any updates on this?

[1]: http://mixergy.com/y-combinator-paul-graham/

8
loumf 14 hours ago 0 replies      
If you are interested in a take on this topic that is based on data, check out Noam Wasserman at BoS 2009 (his talk at BoS 2012 was very similar)

http://businessofsoftware.org/2009/05/professor-noam-wasserm...

He has been collecting data on start-ups and then looking at survival lengths and outcomes. He wrote a book on the topic

http://www.amazon.com/Founders-Dilemmas-Anticipating-Foundat...

9
mynegation 1 day ago 4 replies      
Very interesting. Statistically speaking, women are better than men at reading non-verbal information. I wonder if this is a part of YC's success.
10
goronbjorn 1 day ago 0 replies      
This is another great piece of writing by Jessica Livingston (I was at Startup School for the talk as well).

Is she ever going to pursue writing a sequel to Founders at Work?

11
dools 20 hours ago 3 replies      
Arguing anything other than differences in levels of persistent hard work and skill in your particular field has a large mountain of evidence to overcome.

The effects of those two are very large, the effects of everything else comparatively small per decades of startup and longitudinal entrepreneurial studies.

Nonsense about hustle is exactly that: nonsense. The weight of evidence suggests that, if anything, hustling and creativity have a net negative effect on the long-term health of a startup.

But there's money to be made keeping up the lie.

Lastly, beware of pseudo-pop-science that opens with only a few people's stories. People manage to succeed as founders all over the world; these stories are not remarkable and tell us nothing.

In general the whole "determination" thing has little to no value in any serious consideration of startup success: it's about on the same level of credibility as diet fads.

12
LiveTheDream 1 day ago 1 reply      
Are people really having such trouble with the context of this article? The author's name is in the headline (maybe it was modded in later, to be fair), but also there's an "author" link[0] in plain sight. PG's essay's don't have a "who am I" introduction, and if you didn't know who he was then you'd simply click on the obvious "bio" link.

[0] http://www.foundersatwork.com/author.html

13
faramarz 23 hours ago 1 reply      
Awesome read!

Jessica mentions the Codecademy team launched 2 days before Demo Day and managed to sign up 200k users. If I remember correctly, they launched on HN through a Show HN thread.. and so on..

What I really want to know is, how many of those initial 200k users stuck around? I was one of them and I have only signed in maybe twice since their launch.

So what does that mean? They leveraged the curious users to get VC interest? Did they really engage me, us, the 200k? Is that a false positive?

I guess if the net result is a positive one today, none of this really matters.

14
biscarch 1 day ago 0 replies      
Having started riding this roller coaster, I particularly enjoyed a view of the pitfalls to be aware of in the future.

Also, since I just survived a dual-founder breakup (company intact), it was encouraging to know that this was probably a bigger bullet to have dodged. (For those curious, post-breakup I reached out to an old friend with whom I've shared some tenuous situations and we have applied to YC for the next batch)

Edit: I forgot about the pizza comment! When she asked how to contact someone in Lake Tahoe, I audibly said pizza (in my empty apartment). When the solution was pizza, I had a celebratory moment.

15
brianmcdonough 1 day ago 0 replies      
Jessica's speech follows one of the themes she established in "Founders" - that overcoming emotional responses is a key to success in startups (and life). Her skill in communicating complex ideas is subtle, but more impressive because it lacks the usual dose of ego and/or one-upmanship. The sole intent is to help people who can listen well enough to use the information to help themselves.
16
001sky 1 day ago 0 replies      
In order to make something people want, being brilliant and determined is not enough. You have to be able to talk to your users and adjust your idea accordingly. Ordinarily you have to change your idea quite a lot even if you start out with a reasonably good one.

-- This is a great point. Even outside of startups.

17
alid 1 day ago 0 replies      
I love this! It's one of the best pieces of startup advice I've read in a long while; I've sent it to my startup friends. New fave quote: "Determination is really two separate things: resilience and drive. Resilience keeps you from being pushed backwards. Drive moves you forwards".
18
uhwuggawuh 5 hours ago 0 replies      
Are those Pokemon in the first figure? If so, I have severely underestimated the coolness of next-generation Pokemon.
19
ww520 21 hours ago 0 replies      
Wow. Great advice. I especially like the resilience part. Those rejection stories really hit home, as I have gone through a similar experience recently. When reading them, the line "when life deals you a lemon, make lemonade" kept flashing through my mind.
20
Rajiv_N 1 day ago 1 reply      
I know this is a minor issue. But when I publish something I want people to inform me of problems. Please note that I don't mean to be disrespectful and just want to help. So here goes:

3rd sentence: "There's a talk I've always want to give at the beginning of each batch...". I think this should be either "I've always wanted to..." or "I always want to..." right?

21
michaelnovati 1 day ago 3 replies      
I saw this talk at Startup School. Honestly, as someone working in industry who tried doing a startup during school, there's a huge thing missing.

CMD+F for "luck" = 0 results.

Luck is a huge factor and sometimes you just need to move on to either something new, or working for a company to fill in the gaps, and trying again soon.

22
andrewhillman 1 day ago 0 replies      
This is one of the best articles I have read in a while. Obviously you see a lot over 7 years/ 500 startups.
23
bcooperbyte 23 hours ago 0 replies      
Loved it. Very informative. Being an entrepreneur is a tough road, but with preparation, belief, and determination things will eventually take their course.
24
nanodeath 1 day ago 1 reply      
A byline would be helpful, here...
25
seacond 23 hours ago 0 replies      
"Investors tend to have a herd mentality."
26
ghshephard 1 day ago 4 replies      
I don't mean to be impolite, but do you honestly not know that "Founders" is one of the seminal works (if not THE seminal work) on the culture of startups and their founders?

I guarantee you cannot name three books that have done a better job capturing this topic, because they don't exist.

Claiming that Livingston's relationship with Y Combinator/Graham is the reason why the book is so wonderful is like claiming that David Pogue's relationship with the NYT is why he's such a popular tech reviewer, or that Manohla Dargis's is why she's such an amazing movie reviewer.

It misses the point of both their contribution and talent, and is, frankly, quite rude.

27
dl.google.com now served by Go groups.google.com
171 points by mseepgood  18 hours ago   114 comments top 11
1
MatthewPhillips 15 hours ago 3 replies      
> Why rewrite in Go? It all started back in April of this year, when I was running "apt-get update" at home and noticed that downloads from dl.google.com were stalling for no apparent reason. I asked about this on our internal Google+ and was put in touch with the right people. It turned out that the existing C++ dl.google.com service was no longer actively maintained (short of being kept alive) and that it relied on some assumptions about Google's internal architecture which were no longer true.

This doesn't instill confidence in Go. That it is a success story with abandoned projects is not what we need to hear; we need to hear that Google+ runs on Go. Or at least something the size of Google Reader. When a startup is evaluating what language to build their stack on, dl.google.com is not what they're aiming for.

2
16s 14 hours ago 5 replies      
Languages such as Go would interest me more if they were not controlled by one company. I've had bad experiences with corporate languages in the past. This is one reason I'll stick with C++. It has an ISO standard and many different compilers from different companies. I feel more in control and less likely to be shafted by Google, Oracle, Microsoft or whoever owns the language du jour.
3
knodi 11 hours ago 1 reply      
I have been working with Go for about 6 months now and I'm in love with it. I find it simple, easy and fun to write in.
4
stephen 13 hours ago 1 reply      
Huh; did the full switch over just happen?

Out of the blue 1-2 days ago, a DTD hosted on dl.google.com (http://dl.google.com/gwt/DTD/xhtml.ent) started semi-randomly timing out and hanging Eclipse.

(This was due to the Eclipse UI asking "Is this your XML file?" for every XML file in the project, for every plugin, and one of the plugins, IvyDE, ended up not having fetch-external-dtds turned off. What a silly default.)

5
haarts 17 hours ago 9 replies      
I've been following Go rather closely lately and I noticed that these little announcements come regularly.
This is clever marketing (mind share!) and one of the reasons why Go is going to win out over D et al. When was the last time you read something about D? (Perhaps I frequent the wrong watering holes.)
6
djhworld 15 hours ago 2 replies      
Cool.

As a sidenote, is there any way of viewing Google Groups posts without logging into Google? I'm at work at the moment and my Google account keeps getting logged out as cookies are deleted frequently.

7
signa11 11 hours ago 0 replies      
As another example of stuff in Go, there is a window manager called "wingo" that is still pretty alpha, but kinda-sorta works: https://github.com/BurntSushi/wingo
8
jff 12 hours ago 0 replies      
"Uh oh, another story about success in Go, better post some FUD!" Nobody's yet come in to say how using Go on a 10-year-obsolete computer (32-bit PC) may have problems with garbage collection, pretty unusual for a Go post.
9
sanatgersappa 16 hours ago 0 replies      
Always nice to see some real-world application of a promising technology.
10
signa11 15 hours ago 2 replies      
<rant>
This is weird: I posted the same story approx 7 hours ago and it received 2 votes; now the same story, posted 5 hours later, is on the front page: http://news.ycombinator.com/item?id=4700943
11
playingcolours 16 hours ago 2 replies      
Can Go be a good language for implementing a NoSQL database?
28
BufferBox (YC S12) launches deal with transit system financialpost.com
49 points by mmccauley  10 hours ago   14 comments top 5
1
untog 9 hours ago 0 replies      
Very clever move. Ideally located, (presumably) with security cameras etc. already there, and provided by a transit authority - and transit authorities are almost permanently cash-strapped.

In short: bring it to New York, please. But expect them to get dented late at night.

2
solox3 4 hours ago 1 reply      
I've worked in close proximity to these guys, but the math behind their business still boggles me.

There are about 10 slots in their largest box, and they are charging $4 per slot. The average package will occupy the slot for a day before it is picked up. This gives a max revenue of $40/day/box, or $1200/month/box. Slash that figure in half because it is impossible to have all slots occupied simultaneously all the time, and you get $600/month/box.

If they can turn this into a profitable business, they are truly visionaries that see what others cannot.
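
(Spelling that back-of-the-envelope model out, so the occupancy assumption is explicit - all numbers are the parent's guesses, not BufferBox's actual pricing:)

    SLOTS_PER_BOX = 10
    PRICE_PER_SLOT = 4.00    # dollars per package
    DAYS_PER_MONTH = 30

    def monthly_revenue(occupancy):
        # occupancy = average fraction of slots holding a paid package on a given day
        return SLOTS_PER_BOX * PRICE_PER_SLOT * DAYS_PER_MONTH * occupancy

    print(monthly_revenue(1.0))   # 1200.0 - every slot turned over daily
    print(monthly_revenue(0.5))   # 600.0  - the halved estimate above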

3
brianbreslin 10 hours ago 1 reply      
These are similar to Amazon lockers in Seattle, right? Interesting business model (would love to hear more about it somewhere)
4
webmonkeyuk 8 hours ago 1 reply      
I wouldn't have thought this would ever have got off the ground, given the potential security issues. An underground station would be the perfect place to plant explosives. In London they don't even have rubbish (trash) bins in the stations.

Perhaps only pre-approved retailers can deliver to it. That system's still open to abuse though.

5
andrewcross 9 hours ago 0 replies      
While it's mainly designed for packages, I use it for all my mail now. Would have a hard time going back to not having it.
29
Cucu: a compiler you can understand zserge.com
57 points by LiveTheDream  11 hours ago   9 comments top 7
1
p4bl0 10 hours ago 0 replies      
It is a very good idea to write a small compiler for teaching purposes. However, I think it would have been better to have clean code where the reader can understand what happens at each line. Here the massive usage of global variables makes it harder and unintuitive. Also, there are a lot of functions which return, for instance, 0 or 1 to signal their success or failure, but then at the call sites their return values are always ignored. Also, the way the -_expr functions work in part 2 is really, really weird and unintuitive (but this is due to the all-in-global-variables design). I could go on with other issues, but all this is to point out the big caveat: one should not take the code of the article and try to grow the compiler to extend it, because doing so will lead to horrible spaghetti code. And that really is too bad, because it would have been an even better exercise for beginners to extend such a small compiler.
2
paupino_masano 1 hour ago 0 replies      
Interesting concept, though I'm not sure quite what they're trying to achieve. Are they trying to teach lexical analysis? Their `typename` method seems awfully verbose for my liking - in my opinion, definitely harder than the LALR(1) syntax that CUP promotes.

Personally, I learnt how to write compilers using a mix of FLEX (the Fast Lexical Analyzer - not by Adobe!) and CUP. From that you create a LALR(1) grammar which then compiles down to a tree (if you choose), which you traverse depth-first, generating code/assembly/whatever. I would say that this is actually EASIER to write and to understand than trying to do it from scratch - you concentrate purely on the grammar and the tree generation (which leads to code generation or interpretation - e.g. BASIC, code analysis etc).

This MAY seem complex as there is a bit of "black box" going on behind the scenes (how does it lexically turn characters into symbols? How does it turn those symbols into a grammar which eventually builds a working product?), however once you understand the grammar language (lexical analysis is easy) you find that compilers aren't that difficult. It's a matter of turning code into symbols to which you then apply a grammar. From there, the rest (tree/code generation/interpretation) is easy.

Learning how to write a compiler definitely made me a better programmer - especially in terms of OO languages and also understanding how languages are built (i.e. learning new languages isn't an issue when you know how the internals are likely to work). I highly recommend it for anyone looking to improve their skill set. Unfortunately the course I took at university (which WAS available free on the internet) teaching this concept has now died. It's quite sad that it has disappeared, as that paper was hands down the most useful paper I ever took. With these skills I've since written compilers for companies converting legacy code to modern code (and to native, etc.) - it's much more versatile than just generating assembly/machine code!

Anyway, I digress: I admire what they are trying to do, however I would recommend that others learn FLEX (or JFLEX, CSFLEX etc) and CUP (CSCUP etc) instead of trying to do all the heavy lifting themselves. If they want to write a lexical analyzer or a grammar parser, then that is a different journey...

3
hvs 6 hours ago 0 replies      
Doing something like this before tackling the Dragon book [1] is a great idea. Formal theory will make a lot more sense once you've run into some of the practical problems yourself.

[1] http://en.wikipedia.org/wiki/Compilers:_Principles,_Techniqu...

4
scorpioxy 10 hours ago 0 replies      
Very cool. I think more articles like this, about how a compiler is not magic, should be written.

It still amazes me how many programmers I meet think that a compiler is something that they could never write. I say a production compiler might be too much work for just one person, but in its basic building blocks it's actually relatively easy to write.

I say this after trying to write one just to overcome the mystery, even though I probably won't need to ever write my own professionally.

You can find my attempt at:
http://www.codedemigod.com/jack-compiler/
https://github.com/alaasalman/jackcompiler
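
To back up the "relatively easy in its basic building blocks" point, here is a toy end-to-end sketch (Python; arithmetic expressions only, emitting instructions for an imaginary stack machine - nothing to do with Cucu's actual code, just the tokenize/parse/emit shape on one screen):

    import re

    TOKEN = re.compile(r'\s*(?:(\d+)|(.))')

    def tokenize(src):
        for number, op in TOKEN.findall(src.strip()):
            yield ('NUM', int(number)) if number else ('OP', op)

    class Parser:
        # Grammar: expr := term (('+'|'-') term)*
        #          term := factor (('*'|'/') factor)*
        #          factor := NUM | '(' expr ')'
        def __init__(self, src):
            self.tokens = list(tokenize(src)) + [('EOF', None)]
            self.pos = 0
            self.code = []                 # emitted stack-machine instructions

        def peek(self):
            return self.tokens[self.pos]

        def advance(self):
            tok = self.tokens[self.pos]
            self.pos += 1
            return tok

        def expr(self):
            self.term()
            while self.peek() in (('OP', '+'), ('OP', '-')):
                _, op = self.advance()
                self.term()
                self.code.append('ADD' if op == '+' else 'SUB')

        def term(self):
            self.factor()
            while self.peek() in (('OP', '*'), ('OP', '/')):
                _, op = self.advance()
                self.factor()
                self.code.append('MUL' if op == '*' else 'DIV')

        def factor(self):
            kind, value = self.advance()
            if kind == 'NUM':
                self.code.append('PUSH %d' % value)
            else:                          # assume '('; a real compiler would report errors
                self.expr()
                self.advance()             # consume ')'

    p = Parser('1 + 2 * (3 - 4)')
    p.expr()
    print(p.code)   # ['PUSH 1', 'PUSH 2', 'PUSH 3', 'PUSH 4', 'SUB', 'MUL', 'ADD']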

5
dlo 3 hours ago 1 reply      
Great work!

Might I offer a suggestion? I would argue that most of the work that goes into a compiler is not the front-end functionality but rather the "middle-" and back-end functionality, i.e. making the generated code fast. To show those outside the compilers community what it's actually like to work on one, it would be very insightful to go over the basic optimizations, such as global data-flow analysis (reaching definitions, live variables, partial redundancy elimination, constant propagation, etc.), loop unrolling, and so on.
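
To give a flavour of what one of those "middle-end" passes does, here's a toy constant-propagation-and-folding sketch over a tuple-based expression tree (Python; the AST shape is invented for the example, not taken from Cucu):

    # Expressions: ('num', n), ('var', name), or ('+', left, right) / ('*', left, right)
    def fold(expr, env):
        # env maps variable names to known constant values (constant propagation)
        kind = expr[0]
        if kind == 'num':
            return expr
        if kind == 'var':
            return ('num', env[expr[1]]) if expr[1] in env else expr
        op, left, right = expr
        left, right = fold(left, env), fold(right, env)
        if left[0] == 'num' and right[0] == 'num':
            value = left[1] + right[1] if op == '+' else left[1] * right[1]
            return ('num', value)          # constant folding
        return (op, left, right)

    tree = ('+', ('var', 'x'), ('*', ('num', 2), ('num', 3)))
    print(fold(tree, {'x': 4}))   # ('num', 10)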

6
ecoffey 10 hours ago 0 replies      
This is a really great article and really similar to the compilers class I got to take in college.

Removing left-recursion from a grammar by hand! Woo!

7
borplk 10 hours ago 1 reply      
The site seems to be down
30
API Quirks: a new blog series documenting weird web API behavior zapier.com
21 points by mikeknoop  6 hours ago   6 comments top 3
1
h2s 5 hours ago 3 replies      
I love this. I have a suggestion too. Reddit's REST API is pretty quirky because it's so tightly coupled with their own front end code. Here is a quote from the code that handles posting a comment (https://github.com/reddit/reddit/blob/master/r2/r2/controlle...):

     # remove any null listings that may be present
jquery("#noresults").hide()

To be clear: that's Python you're reading. It's pretending to interact with the DOM. These calls get bundled up and sent back with the response for the frontend code to slavishly execute. The result is quite a schizophrenic API. On the one hand, you can get very nicely formatted JSON about data like so http://www.reddit.com/r/skateboarding.json. On the other hand, it's needlessly difficult to write code that can tell if a comment posted successfully or not when the response data looks like this:

   "jquery":[
[0, 1, "call", ["#form-t3_iqhw5ili"]],
[1, 2, "attr", "find"],
[2, 3, "call", [".status"]],
[3, 4, "attr", "hide"],
[4, 5, "call", []],
[5, 6, "attr", "html"],
[6, 7, "call", [""]],
[7, 8, "attr", "end"],
[8, 9, "call", []],
[1, 10, "attr", "find"],
[10, 11, "call", ["textarea"]],
[11, 12, "attr", "attr"],
[12, 13, "call", ["rows", 3]],
[13, 14, "attr", "html"],
[14, 15, "call", [""]],
[15, 16, "attr", "val"],
[16, 17, "call", [""]],
[0, 18, "attr", "insert_things"],
[18, 19, "call", [
[{
"data":{
"content":"<div class=\"thing id-t1_c25u9i3 even odd comment \" onclick=\"click_thing(this)\"><p class=\"parent\"><a name=\"c25u9i3\" ></a></p><div class=\"midcol likes\" ><div class=\"arrow upmod\" onclick=\"$(this).vote('f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0', null, event)\" ></div><div class=\"arrow down\" onclick=\"$(this).vote('f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0', null, event)\" ></div></div><div class=\"entry likes\"><div class=\"collapsed\" style='display:none'><a href=\"#\" class=\"expand\" onclick=\"return showcomment(this)\">[+]</a><a href=\"http://www.reddit.com/user/KerrickLong\" class=\"author gray submitter id-t2_4appr\" >KerrickLong</a><span class=\"userattrs\">&#32;[<a class=\"submitter\" title=\"submitter\" href=\"/r/MostlyHarmless/comments/iqhw5/changelog_mostly_harmless_v041_released/\">S</a>]</span>&#32;<span class=\"score dislikes\">-1 points</span><span class=\"score unvoted\">0 points</span><span class=\"score likes\">1 point</span>&#32;<time title=\"Fri Jul 15 16:57:16 2011 GMT\" datetime=\"2011-07-15T16:57:16.209277+00:00\">91 milliseconds</time>&#32;ago &nbsp;<a href=\"#\" class=\"expand\" onclick=\"return showcomment(this)\">(0 children)</a></div><div class=\"noncollapsed\" ><p class=\"tagline\"><a href=\"#\" class=\"expand\" onclick=\"return hidecomment(this)\">[&ndash;]</a><a href=\"http://www.reddit.com/user/KerrickLong\" class=\"author submitter id-t2_4appr\" >KerrickLong</a><span class=\"userattrs\">&#32;[<a class=\"submitter\" title=\"submitter\" href=\"/r/MostlyHarmless/comments/iqhw5/changelog_mostly_harmless_v041_released/\">S</a>]</span>&#32;<span class=\"score dislikes\">-1 points</span><span class=\"score unvoted\">0 points</span><span class=\"score likes\">1 point</span>&#32;<time title=\"Fri Jul 15 16:57:16 2011 GMT\" datetime=\"2011-07-15T16:57:16.209277+00:00\">91 milliseconds</time>&#32;ago</p><form action=\"#\" class=\"usertext\" onsubmit=\"return post_form(this, 'editusertext')\" id=\"form-t1_c25u9i38va\"><input type=\"hidden\" name=\"thing_id\" value=\"t1_c25u9i3\"/><div class=\"usertext-body\"><div class=\"md\"><p>Hey!</p><p><strong><a href=\"http://www.google.com/\">search!</a></strong></p></div>\n</div><div class=\"usertext-edit\" style=\"display: none\"><div><textarea rows=\"1\" cols=\"1\" name=\"text\" >test&#32;comment</textarea></div><div class=\"bottom-area\"><span class=\"help-toggle toggle\" style=\"display: none\"><a class=\"option active \" href=\"#\" tabindex=\"100\" onclick=\"return toggle(this, helpon, helpoff)\" >formatting help</a><a class=\"option \" href=\"#\">hide help</a></span><span class=\"error TOO_LONG field-text\" style=\"display:none\"></span><span class=\"error RATELIMIT field-ratelimit\" style=\"display:none\"></span><span class=\"error NO_TEXT field-text\" style=\"display:none\"></span><span class=\"error TOO_OLD field-parent\" style=\"display:none\"></span><span class=\"error DELETED_COMMENT field-parent\" style=\"display:none\"></span><span class=\"error DELETED_LINK field-parent\" style=\"display:none\"></span><span class=\"error USER_BLOCKED field-parent\" style=\"display:none\"></span><div class=\"usertext-buttons\"><button type=\"submit\" onclick=\"\" class=\"save\" style='display:none'>save</button><button type=\"button\" onclick=\"cancel_usertext(this)\" class=\"cancel\" style='display:none'>cancel</button><span class=\"status\"></span></div></div><table class=\"markhelp md\" style=\"display: none\"><tr style=\"background-color: #ffff99; text-align: center\"><td><em>you type:</em></td><td><em>you 
see:</em></td></tr><tr><td>*italics*</td><td><em>italics</em></td></tr><tr><td>**bold**</td><td><b>bold</b></td></tr><tr><td>[reddit!](http://reddit.com)</td><td><a href=\"http://reddit.com\">reddit!</a></td></tr><tr><td>* item 1<br/>* item 2<br/>* item 3</td><td><ul><li>item 1</li><li>item 2</li><li>item 3</li></ul></td></tr><tr><td>&gt; quoted text</td><td><blockquote>quoted text</blockquote></td></tr><tr><td>Lines starting with four spaces<br/>are treated like code:<br/><br/><span class=\"spaces\">&nbsp;&nbsp;&nbsp;&nbsp;</span>if 1 * 2 &lt 3:<br/><span class=\"spaces\">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>print \"hello, world!\"<br/></td><td>Lines starting with four spaces<br/>are treated like code:<br/><pre>if 1 * 2 &lt 3:<br/>&nbsp;&nbsp;&nbsp;&nbsp;print \"hello, world!\"</pre></td></tr><tr><td>~~strikethrough~~</td><td><strike>strikethrough</strike></td></tr><tr><td>super^script</td><td>super<sup>script</sup></td></tr></table></div></form><ul class=\"flat-list buttons\"><li class=\"first\"><a href=\"http://www.reddit.com/r/MostlyHarmless/comments/iqhw5/changelog_mostly_harmless_v041_released/c25u9i3\" class=\"bylink\" rel=\"nofollow\" >permalink</a></li><li><a class=\"edit-usertext\" href=\"javascript:void(0)\" onclick=\"return edit_usertext(this)\">edit</a></li><li><form class=\"toggle del-button\" action=\"#\" method=\"get\"><input type=\"hidden\" name=\"executed\" value=\"deleted\"/><span class=\"option active\"><a href=\"#\" onclick=\"return toggle(this)\">delete</a></span><span class=\"option error\">are you sure? &#32;<a href=\"javascript:void(0)\" class=\"yes\" onclick='change_state(this, \"del\", hide_thing)'>yes</a>&#32;/&#32;<a href=\"javascript:void(0)\" class=\"no\" onclick=\"return toggle(this)\">no</a></span></form></li><li><form action=\"/post/remove\" method=\"post\" class=\"state-button remove-button\"><input type=\"hidden\" name=\"executed\" value=\"removed\" /><span><a href=\"javascript:void(0)\" onclick=\"return change_state(this, 'remove');\">remove</a></span></form></li><li class=\"toggle\"><form method=\"post\" action=\"/post/distinguish\"><input type=\"hidden\" value=\"distinguishing...\" name=\"executed\"/><a href=\"javascript:void(0)\" onclick=\"return toggle_distinguish_span(this)\">distinguish</a><span class=\"option error\">distinguish this? &#32;<a href=\"javascript:void(0)\" onclick=\"return set_distinguish(this, 'yes')\">yes</a>&#32;/&#32;<a href=\"javascript:void(0)\" onclick=\"return set_distinguish(this, 'no')\">no</a>&#32; /&#32;<a class=\"nonbutton\" href=\"/help/moderation#Distinguishing\">help</a>&#32;</span></form></li><li><a class=\"\" href=\"javascript:void(0)\" onclick=\"return reply(this)\">reply</a></li></ul></div></div><div class=\"child\" ></div><div class=\"clearleft\"><!--IE6sux--></div></div><div class=\"clearleft\"><!--IE6sux--></div>",
"contentHTML":"<div class=\"md\"><p>Hey!</p><p><strong><a href=\"http://www.google.com/\">search!</a></strong></p></div>",
"contentText":"Hey!\n\n**[search!](http://www.google.com/)**",
"id":"t1_c25u9i3",
"link":"t3_iqhw5",
"parent":"t3_iqhw5",
"replies":""
},
"kind":"t1"
}],
false
]
],
[0, 20, "call", ["#noresults"]],
[20, 21, "attr", "hide"],
[21, 22, "call",[]]
]

Edit: Didn't mean for this to come off as a pop at them. I know how this sort of thing happens and I know that feel.
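
For what it's worth, the least-bad approach I've seen is to scan the "jquery" op list for the few calls you care about instead of trying to interpret it faithfully. A rough sketch (Python, keyed entirely off the sample above - the argument positions are just what that sample shows, so treat it as illustration, not a reliable client):

    import json

    def comment_posted_ok(response_body):
        # Each op is [src_id, dst_id, verb, payload]. In the sample above a
        # successful post shows up as an "attr" op selecting "insert_things",
        # followed by a "call" op carrying the new comment's data.
        ops = json.loads(response_body).get('jquery', [])
        return any(verb == 'attr' and payload == 'insert_things'
                   for _, _, verb, payload in ops)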

2
drivebyacct2 4 hours ago 0 replies      
Is there something similar but just for shit APIs? I'm spending as much time writing an API for it as I am implementing a library for TVDB, because the API is so... just stupid. By the time I'm done with this it will cache from TVDB and be able to serve all of the content back in a SANE REST API, rather than the random-URL, extract-string, request-URL, get-a-zip-with-3-conflicting-XML-files-inside dance. Ugh.
3
eldavido 4 hours ago 0 replies      
Write a post about MailChimp's PHP API...I dare you.