hacker news with inline top comments    6 May 2011
Apple not providing LGPL webkit source code for latest iOS 4.3.x gnumonks.org
90 points by gnufs  1 hour ago   18 comments top 6
Xuzz 46 minutes ago 0 replies      
For iOS 4.1, which came out in September 2010, absolutely no GPL code for it (or later versions, like 4.2) was posted until March 2011. That's not 8 weeks: that's about 6 months.

When comex (http://twitter.com/comex) and saurik (http://saurik.com/) asked for it (via emails to opensource@apple.com and copyright@apple.com) around last November, I don't think they got any response from Apple until this year. Then, Apple let them know that it would be up "within a week". I think the iOS 4.1 and 4.2 code actually went up about three weeks after they received that email.

saurik has even more examples of them not releasing the [L]GPL'd code near the top of this post (http://www.saurik.com/id/4): "Frankly, I wouldn't be surprised at all if Apple ends up on the bad end of a GPL-related lawsuit."

(In my opinion, the fact Apple has posted any code for iOS 4.3 at this point is a big step in the right direction: they're not perfect yet, but at least they've got 8/10 of the projects up.)

Macha 45 minutes ago 0 replies      
Apple, like many other companies, does not understand that it has to release the source simultaneously with the program using it.

Despite the article's claim, Apple has not released the source in a timely manner for previous versions of iOS, instead waiting for it to be pointed out or for version N+1 or N+2 to be released first.

vaporstun 40 minutes ago 2 replies      
Anyone else find this to be a bit over-dramatic?

They have released every other version and just haven't released the 4.3.x one yet. There is no indication that they refuse to release it ever; the site still says "Coming Soon", and it has been < 2 months since 4.3 was released.

Yes, I understand that under the GPL they're supposed to release it simultaneously with the launch, which they failed to do, but is this really front page news?

Maci 6 minutes ago 0 replies      
While in all likelihood it's a legal and bureaucratic issue causing delay, I can see how this is considered bad form.

However, I've made an attempt at understanding the source release obligations under the GPL, and all I get from it is: when you release to the public, you've got to release the source. But at no point have I found an "it has to be released immediately."


The only clause I can see Apple potentially hiding behind is Section 3(b) of the GPL, i.e. as long as they keep the door open for written requests, all is well.


Can someone please clarify for me how the well-intended spirit of the license works versus the real-world legalities and requirements?

cppsnob 12 minutes ago 0 replies      
I'm still waiting for that "open" FaceTime specification.
rewqwefqwerf 36 minutes ago 2 replies      
You just need the copyright owners to sue now.
The copyright holders can also ask the SFLC to do it for them; I'm sure they would love to.
How fast is the Internet at Google? Mind blowing. thenextweb.com
88 points by dannyr  2 hours ago   49 comments top 23
danieldk 1 hour ago 3 replies      
Now, let's put these numbers into perspective. According to Ookla's Net Index, the average speed for a country in Europe is around 12 Mb/second.

That's partly because most people are happy with such speeds (or at least the speed/price ratio). At home I have a subscription with 120MBit/s downstream. Speed tests usually give 130MBit/s during daytime and 140MBit/s at night.

Given that, 500MBit/s downstream is the very minimum one would expect from one of the leading Internet companies, probably located close to one of speedtest.net's nodes.

51Cards 2 hours ago 1 reply      
What's funniest is that if you look closely at the Speedtest.Net image, Google only gets a 3 1/2 star rating. No one is ever happy I guess.
kristofferR 18 minutes ago 0 replies      
Is this really "mind-blowing"? I can upgrade my personal home fiber to 400/400Mbits for 6000NOK (1100USD) per month here in Oslo. Expensive yeah, but not too unreasonable.
ChiperSoft 9 minutes ago 0 replies      
This is impressive, but it loses a little luster when you realize their proximity to the testing server.

SpeedTest's SF server is hosted by MonkeyBrains.net. A quick traceroute shows their servers are just one hop away from Cogent's SF backbone. I suspect whoever ran this test has fewer than four hops between them and the testing server, and that the slowest link in the path is the gigabit Ethernet port at their desk.

spiffworks 38 minutes ago 1 reply      
And now for a bit of perspective, here in India I am working with a 512kbps connection, which is the only one I can get without any 'fair use' caps. The latency for the fastest DNS is about 350ms, and on weekends, my bandwidth can be as low as 200kbps.
ANH 1 hour ago 0 replies      
Sigh. I invite you to semi-rural America: http://www.speedtest.net/result/1284417179.png

I live 20 minutes from one of the largest airports in the country.

Maro 1 hour ago 1 reply      
This reminds me of the good old days, the 90s, when I was a teenager and finally upgraded to a ~50KB/s cable modem from my puny 33600 baud modem. That was a game changer, I even ran an FTP server for a while =) It's been getting faster ever since then, but I never again had that feeling of "OMG it's so fast let's download something for the hell of it".
lutorm 1 hour ago 4 replies      
Is the average US home internet connection really 10Mbps?? I find that hard to believe.
retube 1 hour ago 2 replies      
I think the fastest connection I ever had was back in the day when I worked at CERN. They host(ed?) a European internet hub, so rates were pretty damn quick.
darrenkopp 1 hour ago 0 replies      
Obviously this screenshot was taken before they went over their 250 GB quota and began to be throttled mercilessly.
buckwild 14 minutes ago 0 replies      
You'd be surprised how much of a productivity bottleneck "slow" internet can be.
u3tech 22 minutes ago 0 replies      
Good but nothing special. I live in Lithuania and I download files at 140MBit/s. I'm using my internet provider's cheapest plan, and if I wanted to pay more I could get faster internet.

p.s. I pay 17 USD per month

andrewvc 2 hours ago 0 replies      
I usually can't get those kinds of speeds even from my servers in datacenters.

Speedtest uses servers in the same city, and game downloads probably do as well (CDNs) but the average case is likely significantly slower.

StavrosK 1 hour ago 1 reply      
Am I the only one who thought they meant latency? I like my speeds well enough, but I could use some reduced latency. I guess there isn't much one can do about this, though.
ma2rten 1 hour ago 1 reply      
Wow, I did not realize my internet connection at work is actually that good... I ran a speed test a few months ago and it was in the same range (see http://speedtest.net/result/1147935298.png). Back then I thought "not bad", but I'm surprised that a post about something like that would make the homepage of HN...
code_duck 1 hour ago 0 replies      
Any of us could have that if we felt OK paying for an OC3 or OC12 connection at our home or office, right? I'm sure plenty of companies and universities have such service.
maratd 1 hour ago 2 replies      
That doesn't even make sense. For you to get to those sorts of speeds, you need jumbo frame support. While you can get that taken care of on a LAN/WAN or for Internet2, you can't magically make the Internet support it just by sprinkling Google fairy dust everywhere.


rajasharan 47 minutes ago 0 replies      
I don't get the point of this post. If the screenshot said MB/s, then that would be something.
My apartment building has corporate internet freely available to residents, and I always get 500-700Mbps.
mynameishere 50 minutes ago 0 replies      
I'm more impressed that speedtest's server can push data upstream that fast, unless I'm mistaken about how the test works.
juiceandjuice 1 hour ago 0 replies      
Internet at Stanford is comparable.
nt 1 hour ago 0 replies      
I get similar speeds at work ( financial company in NYC ). Can't download any games though :-(
wazoox 2 hours ago 1 reply      
I suppose they're making some use of their gigantic caches of the whole web as a proxy, too?
DarkSideofOZ 44 minutes ago 0 replies      
That's pretty crazy, I get 6Mbit on a good day.
This Could be Big: Decentralized Web Standard Under Development by W3C readwriteweb.com
41 points by kerben  1 hour ago   9 comments top 4
EGreg 27 minutes ago 1 reply      
Freenet and other services already do this. You can use your browser to browse Freenet if you get the Freenet program. They recommend using Chrome in Incognito mode for maximum privacy.

And it's impervious to DNS takedowns, and you can even set up a darknet. It's used a lot in China; Perfect Dark is also used. They operate on distributed hash tables. The problem is that without a central server, the only way you can connect to the hive is by hoping one of the last known hosts is still up. It also needs to use heuristics for routing.
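The bootstrap problem described above can be made concrete with a tiny sketch (the names `bootstrap` and `probe` are hypothetical, not from Freenet's actual code): a peer simply walks its cached host list from the last session and hopes something still answers.

```python
def bootstrap(known_hosts, probe):
    """Try each cached peer in order; return the first one that still
    answers, or None if the whole cache has gone stale."""
    for host in known_hosts:
        if probe(host):
            return host
    return None

# Hypothetical cache from the last session; only one peer survived.
cache = ["10.0.0.5:7312", "10.0.0.9:7312", "10.0.0.2:7312"]
alive = {"10.0.0.9:7312"}
entry = bootstrap(cache, lambda h: h in alive)
print(entry)  # -> 10.0.0.9:7312
```

If every cached host is gone, the node is simply cut off until it learns fresh addresses out of band, which is exactly the weakness the comment points at.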

swombat 58 minutes ago 1 reply      
As the comments point out, this is not a generic decentralised web standard to get around ICE and the like, but just a specification for p2p audio/video/etc communications for online video conferencing and so on. Not as big as I hoped.
phlux 52 minutes ago 1 reply      
I wonder how easy man-in-the-middle attacks via node spoofing would be.

You masquerade as a node by re-hosting their content and you capture any other client that accesses your proxy of that information.

codemechanic 50 minutes ago 1 reply      
Tonido (http://www.tonido.com) is a pioneer in this space. They invented the model well before Opera Unite. The cool thing is that Tonido also provides a decentralized OpenID to end users.

Firefox should buy Tonido. If it happens, it will change the industry and the way people share information. I may be a little ahead of the curve here, but if you think it through, it makes sense.

Stolen Camera Finder stolencamerafinder.com
151 points by obtino  5 hours ago   56 comments top 20
cousin_it 4 hours ago 2 replies      
So if I find a photo I like, I can find all other photos taken by the same camera? Is there potential for stalking here?
charlief 3 hours ago 3 replies      
Good idea, but works most effectively when:

(1) Various encode/decode steps along the way to publishing the photo online don't corrupt EXIF data

(2) Thief isn't sophisticated enough to wipe/disable EXIF data. Many cameras shoot in a proprietary, higher-bit format and give you a fairly obvious wizard option on a desktop tool to include/exclude the EXIF data.

(3) Thief will use the camera, not sell it immediately into a second-hand market.

(4) Even if your camera is supported, it has to be configured to record EXIF data by both you and the thief. Some proprietary formats are fairly raw and don't always include EXIF-derivable data by default.

This will get some adoption (what other option do users have?), but it will be interesting to see how many uploads convert to a lost camera being recovered or a thief being apprehended. If users had the ability to leave a testimonial when there is some kind of closure, you could derive a metric of success.

yellowbkpk 3 hours ago 2 replies      
Would it be possible to process the images somehow and find the noise profile for every image and match it with existing images?

When I found a directory full of images and couldn't remember which camera took them, I noticed that there were a few fuzzy pixels of green and red if I zoomed all the way in that were present in all photos taken by that camera. I took a photo of a white wall in a dark room (to force high ISO) with a couple of my cameras and found the one. Of course I found out about the EXIF serial number and other unique data later on, but it could still be useful on sites that store the original image but strip EXIF.
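The technique yellowbkpk describes is essentially sensor-noise fingerprinting: average the noise residuals of many photos from one camera, then correlate a query image's residual against each fingerprint. A toy sketch with synthetic images (all names and noise levels here are made up for illustration; a real system would use a proper denoising filter instead of a 3x3 box blur):

```python
import numpy as np

rng = np.random.default_rng(42)
H = W = 64

def box_blur(img):
    # 3x3 mean filter via shifted copies (wrap-around edges are fine
    # for this toy).
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

def residual(img):
    # Noise residual: the high-frequency part left after denoising.
    return img - box_blur(img)

def corr(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Each camera gets a fixed per-pixel noise pattern (its "fingerprint").
patterns = {name: rng.normal(0.0, 5.0, (H, W)) for name in ("camA", "camB")}

def shoot(cam):
    scene = rng.normal(128.0, 10.0, (H, W))    # random scene content
    shot_noise = rng.normal(0.0, 1.0, (H, W))  # per-exposure noise
    return scene + patterns[cam] + shot_noise

# Estimate each camera's fingerprint by averaging residuals of many shots.
fingerprints = {cam: np.mean([residual(shoot(cam)) for _ in range(20)], axis=0)
                for cam in patterns}

def identify(img):
    r = residual(img)
    return max(fingerprints, key=lambda cam: corr(r, fingerprints[cam]))

print(identify(shoot("camA")))  # expected: camA
```

Because the fixed pattern survives averaging while scene content cancels out, this works even on sites that strip EXIF, as the comment suggests, provided the original pixels aren't recompressed too aggressively.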

jasonkester 4 hours ago 1 reply      
Tried it with a photo taken from a camera I had stolen in Peru:

The 'SAMSUNG TECHWIN CO., LTD. Samsung SL201' does not write serial information in the exif. See the supported cameras page for a list of models that do.

rednum 4 hours ago 1 reply      
I think it would help to add an "I've found a camera/SD card/other device with photos" feature. Just anecdotal evidence, but a friend of a friend found an iPod with some photos a few years ago and couldn't locate the owner. Surely it doesn't happen very often, but if this site gained enough popularity, it could be really helpful.
humblepie 4 hours ago 0 replies      
I had my Canon DSLR body cleaned at the service centre here in Brampton, ON. When I got it back I noticed it felt different: the shutter sound is more thumpy, etc. I checked the serial number to see if it was really mine, and it was. It's all fine, but then months later, just by coincidence, I saw a photo on Flickr with my e-mail address in the metatags. Some of my photo buddies warned me that Canon is notorious for swapping parts when your camera is in for service.
seles 25 minutes ago 0 replies      
I doubt this will ever successfully result in a stolen camera being recovered. But it is a cool new idea that certainly has other obvious applications, such as finding other photos by the same camera.

Would it be better rebranded to a different purpose?

corin_ 3 hours ago 1 reply      
What's the database of photos it can search against like? I just tried looking up a photo; the site found the serial number in it but couldn't find any matching photos online. I know they exist; even the exact same photo I was testing with is available on various websites.
bxr 4 hours ago 1 reply      
Seems like a neat idea for a search engine, but I tried with photos from 6 different cameras and none of them stored the serial number in exif. I wonder how many models this is actually useful for.
meinhimmel 3 hours ago 0 replies      
Another neat idea: Allow the user to select their city, the make and model of the camera, and the date it was stolen. Then you can scrape Craigslist from the surrounding area and show possible matches.
subway 3 hours ago 0 replies      
Obviously I'm an edge case, but I'm not using a graphical file manager, so I can't use the drag and drop method of providing a file.

Have you considered allowing users to specify a file by URL, or via the browser's standard file input?

stevejalim 3 hours ago 0 replies      
It's a shame some smartphone cameras (eg, my old Nexus One) don't tag with full EXIF data, else you'd then have a much larger potential userbase.
kwestin 1 hour ago 0 replies      
The project is a great proof of concept, but the chances this will get someone's stolen camera back are pretty slim. We have a similar project, but it searches for the data using existing search engine data. Only about 25% of cameras will embed the serial number, and then, when uploaded, only a few sites will retain the EXIF data or provide it through metadata. A few that keep the EXIF data or provide it in metadata include:


Some of these sites strip out some tags. Some manufacturers have custom EXIF tags like Nikon which may store the serial in a "Serial Number" tag or a tag called "0x00D".

wicknicks 39 minutes ago 0 replies      
A lot of cameras don't include the Serial Number in the EXIF header. What happens then?
antidaily 1 hour ago 0 replies      
I haven't gotten this to work once. Cool idea though.
hallowtech 3 hours ago 0 replies      
No love for RW2 I guess =(
Also, add an upload button, I don't want to drag&drop if my browser is full screen!
PanMan 4 hours ago 1 reply      
Great idea. But instead of a serial input, it should ask for a photo or Flickr account or so. I don't know my cam's serial, and I can't look it up easily if it's stolen.
MasterScrat 3 hours ago 1 reply      
On Chrome, drag-n-dropping from other windows doesn't seem to work (on Windows 7).
wazoox 2 hours ago 0 replies      
Apparently this doesn't work in Firefox. Too bad.
maqr 4 hours ago 2 replies      
Terrible idea. EXIF data is not reliable. You can make it say anything you want.
What Does Having the #6 App in a Mac App Store Category Get You? $15.42 hanchorllc.com
55 points by Osiris  2 hours ago   25 comments top 10
modernerd 1 hour ago 1 reply      
Even in the iOS App Store, there's a world of difference between sixth place and first place in terms of revenue. (I've charted in the US in both positions for the productivity category.)

It took Pixelmator 20 days to take a million dollars on the Mac App Store[1], so his suspicion that 'the stats aren't that great' might be true for most people, but it's not true for everyone.

This isn't surprising, though, because the Mac App Store will be subject to the same formula for profit as the iOS App Store:

  1. Build something unique with mass appeal.
2. Polish it until it shines.
3. Market it to a big list of loyal fans.
4. If you don't have a big list, build one or piggyback someone else's.
5. Update your app, improve it, and do one thing every day to spread the word.

Consistently, the apps doing well appear to get steps 1-to-5 right, which often results in Apple featuring them. Earning $15/day isn't reason to give up, though. It's a great starting point; there's plenty of room to grow from there.

[1]: http://www.pixelmator.com/weblog/

YooLi 55 minutes ago 0 replies      
1. This is MAC App Store. Not sure why people are mentioning Android.

2. This was only over 1 (ONE) day.

3. This was in the Dev Tools category.

pagliara 1 hour ago 2 replies      
You can't expect to make much money on the Mac App Store with an app priced at $2 in a niche category. You'll never get the volume of downloads necessary to be profitable, unlike on iOS, whose market is many times larger than the Mac's.

My Mac App Store experience has been rather different. I have several applications in the store ranging from $5-$20, and daily revenue is usually in the $100-$200 range. The Mac App Store has brought much greater exposure and increased sales.

To generate income on the Mac App Store, you really need to be aiming for products that can be sold in the $20 range if your product is in one of the smaller categories.

evilduck 34 minutes ago 1 reply      
The article cites sales of Panic's Coda and BBEdit.

Panic has been around for a long time and has an established brand among Apple users. If I wanted a Panic product, I'd go directly to Panic.com where I've purchased from them before, and I'd prefer to give them their full price instead of 70%. Random Hanchor LLC product of questionable usefulness? I'll let Apple shield me. Hence, lower App Store sales of reputable products, higher sales of unknowns.

Coda and BBEdit are also relatively old products. Citing their slow sales as evidence that the Mac App Store is ineffective would be the same as citing slow sales of Adobe CS3 on the App Store if it were for sale. The primary purchasers for those products owned them well before the App Store even existed.

jcnnghm 1 hour ago 1 reply      
I bought the app a couple weeks ago. It can be downloaded from github and compiled, but I wanted to support the developer. Small world I suppose.
sosuke 1 hour ago 1 reply      
So how much did he spend on advertising? Why is he measuring all of his advertising against a single day of sales? Shouldn't it really be judged over the week prior and the week after to get a good idea of the impact it had?
dusing 56 minutes ago 0 replies      
We have a $.99 app for a sports team. For one 3-day period we were #3 in the Sports category in the US. It yielded 703 sales during that period, and a sustained period of 3x downloads after the ranking.
kefs 1 hour ago 1 reply      
I guess this is the opposite end of the spectrum..


scottcha 1 hour ago 0 replies      
I work on one of the top 100 free apps in the Mac App Store (it's been as high as the top 20s). Our volume is surprisingly low considering we are ranked so high in the overall category. I guess it's telling about the overall volume in the store.
MatthewPhillips 1 hour ago 0 replies      
Let this be a warning to the mobile platform app devs: this is your future. Get all the gold you can, while you can.
The Parser that Cracked the MediaWiki Code dirkriehle.com
18 points by rams  1 hour ago   6 comments top 4
pornel 0 minutes ago 0 replies      
AST of an example page is the interesting bit:


Semiapies 28 minutes ago 0 replies      
I hadn't realized that there were any parsing issues around MediaWiki's markup. 5000 lines of PHP? Eek.
brianjolney 10 minutes ago 1 reply      
Link died. Any mirrors?
Open Education Resources kqed.org
61 points by nprincigalli  3 hours ago   7 comments top 5
buckwild 13 minutes ago 0 replies      
I can't believe they didn't mention Khan Academy.
eftpotrm 1 hour ago 0 replies      
I have to admit an interest in this as I work for them, but the Open University (http://www.open.ac.uk/) do have quite a lot of material online....




jasonmcalacanis 1 hour ago 0 replies      
The progress in free education is astounding.
barry-cotter 3 hours ago 1 reply      
Pap. If you know OpenCourseWare exists, there are only two things worth learning on that page: that CK-12 and Flat World Knowledge make freely downloadable textbooks.

Not worth your time.

huherto 1 hour ago 0 replies      
I am testing learnboost.com to manage a class. It works ok. But I have found a couple of wrinkles. I hope it is just a matter of time.
Thomas Würthinger is a JVM hero guidewiredevelopment.wordpress.com
13 points by carsongross  40 minutes ago   2 comments top
nddrylliog 8 minutes ago 1 reply      
How about "A fully hotswappable JVM - never restart again"?

Also, VMs in the Smalltalk family have been doing this for a long time.

Russian tycoon buys Warner Music for $3.3bn theregister.co.uk
19 points by swombat  1 hour ago   6 comments top 4
simonsarris 34 minutes ago 0 replies      
By "Russian tycoon" they mean "The US-based business of Russian-born, US-educated, US-residing-since-1978 tycoon"

His company has also been a significant stakeholder in Warner Music since 2004.

kgermino 35 minutes ago 1 reply      
>>...in a $3.3bn deal. The company's $1.9bn debt is also transferred to its new owner, valuing Warner at $1.3bn.

Wouldn't [Value of Warner] = [Price Paid] + [Debts]? Or am I missing something?
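One plausible reading (an assumption on my part, not stated in the article): the headline $3.3bn is the enterprise value, i.e. equity plus assumed debt, so the figure quoted for "valuing Warner" is the equity piece left after subtracting the debt rather than something added on top:

```python
deal_value = 3.3  # $bn, headline figure (enterprise value, on this reading)
debt = 1.9        # $bn, transferred to the buyer
equity = deal_value - debt
print(f"implied equity price: ${equity:.1f}bn")  # -> implied equity price: $1.4bn
```

That lands near the article's $1.3bn once rounding of the underlying figures is accounted for; on this reading nothing is missing, because the $3.3bn already includes the debt.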

sp332 33 minutes ago 0 replies      
The article says he made his money in property and subsidized industries. I don't really know, but I'm afraid this might mean he's even more likely to pursue stupid property laws and continue antagonizing Warner's customers. Does anyone have a better idea of what might change at WM as a result of this deal?
m3mnoch 5 minutes ago 0 replies      
this makes me wonder if music is about to be unlocked. all it really takes is for one of the majors to realize that music is a service and not a product and completely disrupt the music industry. similar to the monetization strategy of social games -- spend money chasing the people who will pay by giving them reasons to buy rather than throwing good money after bad by litigating against people who won't pay in trying to recoup some silly notion of a 'lost sale.'

then again... this is the self-destructive music industry we're talking about here.

Writing Maintainable JavaScript msdn.com
12 points by rmurphey3  54 minutes ago   4 comments top 3
wccrawford 27 minutes ago 0 replies      
These apply to all languages, not just Javascript. It's pretty basic techniques for keeping your code nice:


Naming Conventions

Start using these techniques now, refactor old code over time

mberning 20 minutes ago 0 replies      
Write as little code as possible. The easiest code to maintain is the code you never had to write.
autalpha 43 minutes ago 1 reply      
I refuse to install Silverlight :P
Machine Learning: A Love Story hilarymason.com
15 points by ColinWright  1 hour ago   discuss
Show HN: My embeddable C/C++ webserver github.com
47 points by udp  4 hours ago   26 comments top 4
tptacek 3 hours ago 1 reply      
+12? +8?

  char * NameDecoded = (char *) malloc(strlen(Name) + 12);
char * ValueDecoded = (char *) malloc(strlen(Value) + 12);

if(!URLDecode(Name, NameDecoded, strlen(Name) + 8) ||
!URLDecode(Value, ValueDecoded, strlen(Value) + 8))

I'm pretty sure you also don't want to accept and honor any arbitrary 64 bit value for Content-length.

And I'm also pretty sure Content-length can't be negative and doesn't belong in a signed integer.
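A quick way to see why the mismatched magic numbers (+12 for the allocation, +8 for the bound) are suspect: percent-decoding never expands its input (each %XX escape collapses three bytes into one), so strlen(input) + 1 bytes already suffice for the output. A sketch using Python's stdlib decoder:

```python
from urllib.parse import unquote  # stdlib percent-decoder

# Decoded output is never longer than the encoded input,
# including malformed escapes, which pass through unchanged.
for s in ["a%20b", "%41%42%43", "no-escapes", "%", "%%20"]:
    decoded = unquote(s)
    assert len(decoded) <= len(s)
    print(f"{s!r} -> {decoded!r}")
```

(In the C code, the fix would be to pick one consistent, justified size, e.g. strlen(Name) + 1, for both the malloc and the bound passed to URLDecode, rather than two different unexplained paddings.)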

udp 4 hours ago 4 replies      
This webserver is the most complete part of a high-level, cross-platform networking library I've been working on. The rest of the examples: https://github.com/udp/lacewing/tree/master/examples

ajax.cc is probably the most interesting - it's an example of long-poll AJAX with jQuery and Lacewing::Webserver as a backend. Lacewing uses epoll/kqueue/IOCP rather than a traditional select/poll approach, so it should scale quite well.

I've been developing Lacewing pretty much in private for quite a while, now - it's quite daunting when I start thinking about having to write all the documentation and such, but this morning I just thought "screw it, it's going on github, undocumented or not". It might as well be public while I'm getting it all together...

sigil 1 hour ago 1 reply      
In src/webserver/Webserver.Incoming.cc you have an ad-hoc HTTP parser that internally buffers partial request data and does some questionable things like recursing on each request header line.

Consider using a finite state machine parser for HTTP like this one [1] or this one [2], which will reduce your overhead, and help you push the buffering and eof problems back up to the main event loop code.

[1] https://github.com/ry/http-parser

[2] https://github.com/mongrel/mongrel/tree/master/ext/http11

k_shehadeh 2 hours ago 1 reply      
Encouraging to see more development in this space. Thanks. I'm curious, though: how do you think this compares with the mongoose (http://code.google.com/p/mongoose/) project? Other than the fact that it's C++-based, what would you say are the relative advantages/disadvantages?
Waterbear - a visual language for Javacript waterbearlang.com
49 points by toni  4 hours ago   17 comments top 11
nprincigalli 2 hours ago 0 replies      
Might want to check MIT's OpenBlocks ( http://education.mit.edu/openblocks ), which is related to Scratch (listed as inspiration by Waterbear) and includes code from StarLogo (slcodeblocks) (also listed as inspiration) and is being refactored "to make the code more amenable to inclusion to other projects". One of the recent uses of OpenBlocks was Google's AppInventor.

Kudos to the Waterbear author! Loved to see it done with JS, HTML5 and CSS3. One less thing in my list of "things that I want to see in the world but may have to build it myself". Watching the repo already! :)

snorkel 49 minutes ago 0 replies      
Cool editor; the UI works in FF 4, but Run does nothing, not even for the demo script.
ericz 1 hour ago 0 replies      
Try adding

     -webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-o-user-select: none;
user-select: none;

So text in the blocks can't be selected

agazso 3 hours ago 1 reply      
A working demo would be useful to see how to use this after all.
Sudarshan 2 hours ago 0 replies      
Awesome, this is just like http://scratch.mit.edu, and Google has a similar product called App Inventor.


Hope it is now easily accessible as a web app... Cool UI.

sfvisser 1 hour ago 0 replies      
Reminds me of Eros (http://conal.net/papers/Eros/) and tangible functional programming (http://haskell.org/haskellwiki/TV). Both by Conal Elliott.

edit: Video here: http://www.youtube.com/watch?v=faJ8N0giqzw

robinduckett 2 hours ago 0 replies      

    Uncaught SyntaxError: Unexpected token {

nicetryguy 1 hour ago 0 replies      
Does not work on Android Chrome. Can't even scroll.
iambot 3 hours ago 2 replies      
Wow, that looks brilliant (from what I can see). Do you think it's at all useful to people who actually know JavaScript? Or is it more of a tool for those who don't/are learning?
tluyben2 2 hours ago 1 reply      
Nice work! GPL I hope?
Klonoar 2 hours ago 0 replies      
Ah, neat, Scratch in the browser.

About time. ;)

I'm very drunk and will edit this later. Props!

Wall Street Journal Leak site expects you to own the copyright newscientist.com
35 points by VierScar  3 hours ago   15 comments top 8
sudonim 40 minutes ago 0 replies      
My guess is that some idealistic bright young thing at WSJ said "We should let people submit stuff to us anonymously" and envisioned it as a simple file upload with a submit button. And because laws exist and lawyers at big companies (and small ones) tell you whether to use one square of toilet paper or two, they said "you must add this copyright thing or it's not gonna fly". And the idealistic bright young thing thought "That's not quite as cool as an anonymous document dump, but it's a document dump... I guess".

Then the internet (the market?) told the WSJ, we do not agree to your lawyers' terms and conditions and another good idea was defeated. Big corp: you can't remove the core values from a product and expect the idiots will flock to it because you are Big corp and should be taken seriously.

Duff 3 hours ago 2 replies      
Doesn't that pretty much invalidate the notion that the site is in fact a "leak" site? If I own the copyright, I'm not leaking, I'm releasing information!

Maybe this is just a fancy new way for government officials to release phony "leaked" stories without attribution?

jordanb 1 hour ago 1 reply      
This makes me wonder if there's a loophole in their source protection pledge.

If you were to check the box asserting copyright, and if the whistleblowee were to file a John Doe suit against you for copyright infringement and then subpoena your identity from the WSJ, would they comply with the subpoena?

dsl 2 hours ago 1 reply      
Pro-tip: If you mail the documents to them you don't have to click an accept checkbox.
mrcharles 40 minutes ago 0 replies      
Totalitarian honeypot?
nicetryguy 11 minutes ago 0 replies      
sounds like Wile E. Coyote is running ACMEleaks, waiting for the roadrunner

conglomerates and altruism don't tend to mix

ltamake 2 hours ago 1 reply      
Ha ha ha, oh wow. So much for "WikiLeaks competitor".
jasonmcalacanis 1 hour ago 1 reply      
That's just CYA.
Google, Facebook: "do not track" bill a threat to California economy arstechnica.com
21 points by evo_9  2 hours ago   21 comments top 3
neutronicus 45 minutes ago 1 reply      
I'm of two minds about this. I want my browsing habits to be a closed book to everyone, period, and I doubt I have the necessary technical savvy to achieve that goal. I'd therefore love to dump the responsibility on the government, and damn the consequences to the California economy.

I am afraid, though, that letting the government get a regulatory foot in the internet's door will send me out of the frying pan and into the fire, privacy-wise.

yanw 1 hour ago 1 reply      
They are absolutely right. And last time I checked, the internet was global, not micromanaged at the state level.
kovar 18 minutes ago 1 reply      
I found the most interesting part of this article the following quote: "... and would make them more vulnerable to security threats." Allowing users to opt out of tracking increases their security threat? I'd like to see more detail on this.
Getting Users For Your New Startup pud.com
5 points by pud  15 minutes ago   discuss
Mozilla tells DHS: we won't help you censor the Internet boingboing.net
152 points by miraj  9 hours ago   23 comments top 7
exit 8 hours ago 5 replies      
I'd like to see a movement which clearly places the internet above the sovereignty of any nation.
jarin 4 hours ago 0 replies      
Hats off to Mozilla for discovering that dealing with DHS is exactly like dealing with Righthaven and the RIAA.
bgruber 3 hours ago 0 replies      
This is exactly why I stopped reading boingboing; there's a tendency to ascribe meaning to actions that just isn't there. Mozilla said no such thing. What they said was more like "we won't just do whatever a government agency tells us to unless legally compelled to do so." I'm pleased Mozilla did this, but their stance is not the one it's being portrayed as here.
eloisius 5 hours ago 0 replies      
Supposing they comply with the subsequent court order, what's to prevent the 10 variations that will pop up to replace it? This would surely only proliferate add-ons with the exact same functionality.
maeon3 7 hours ago 1 reply      
Close that car hood citizen, there are secrets in there, don't make me taze you.
pmh 1 hour ago 0 replies      
Previous discussion on the original blog post: http://news.ycombinator.com/item?id=2518075
ltamake 2 hours ago 1 reply      
Mozilla just went up in my book.
What sort of entrepreneur are you anyway? swombat.com
20 points by grellas  2 hours ago   5 comments top 4
michaelochurch 1 hour ago 0 replies      
Good vs. evil is a completely different debate from the practical balancing of concerns an entrepreneur faces on a day-to-day basis. The OP mixes these two radically different orders of decision making. Good and evil are not just about misprioritization of values, and most entrepreneurs are good people balancing decisions of a more practical nature, such as when to hire and when not, that don't really have moral "right answers".

Bad people have no problem with good values. Irreconcilable opposites are made from the same handful of values representing goodness.

As for evil vs. values, evil generally comes in two forms: fanatical and selfish evil, which may both exist in one person. Invariably, both exist in evil movements: Hitler being an example of fanatical evil and Eichmann being an example of selfish evil. OP's statement is only true (sometimes) of the latter. Fanatical evil is characterized by the insistence on one value, which may be good (economic equality, as in communism) or bad ("racial purity", as in Nazism) at its root but is, in either case, given such an absurdly high priority over other values (humanism, reason) as to have catastrophic results.

Selfish evil, on the other hand, is characterized by narcissism, psychopathy, and a complete lack of values, but also a willingness to adopt the appearance of whatever values are in fashion. A selfishly evil person cares only about self-advancement.

The fanatical kind is what we associate with villains like Hitler and bin Laden, and it's really destructive when such people get into power. In business, it's far more common to encounter the selfish kind of evil: the office psychopath. This is because, although they're less damaging to the world, office psychopaths are just a lot more common. You can think of selfishly evil people as tornadoes, which can cause a lot of harm on a local scale but rarely globally, and fanatically evil people as hurricanes, which have a broader effect.

On the small scale, selfish evil is more dangerous because these people are more common and, unlike the fanatically evil who usually end up institutionalized, there are a lot of them in high positions of power and it is impossible to spot them unless one directly sees them doing bad things, because they act like anyone else. On the large scale, fanatical evil is more dangerous to the world because it's better at PR. It can dress itself up as "virtue" and prey on weak minds.

lichichen 22 minutes ago 0 replies      
I feel this article had great potential, but it somehow strayed and missed a lot of important points. Also, the title should be something along the lines of “Understanding Entrepreneur Values”. And Swombat, I would love to hear your thoughts on the concepts of “how to stay true” and “questioning values”, and on digging deeper into the basic concepts we have built around the idea of “Entrepreneur”.

That being said, here are my thoughts.

1) I don't feel the notion of “Good vs. Evil” is remotely relevant. These are labels we put on socially accepted behavior. Instead, the argument should revolve around the notion of how entrepreneurs respond to incentives, for instance in the case of “Should I sell my company for x million and leave my employees in the cold”, or when dealing with pressure: “My company is not doing so well; shall I cut back on my CRM program, or fire employees?”

These are more related to the notion of “entrepreneur values”: serving the customers and having good employees are both high-value tasks, and choosing between them may come down to what is “valued” higher by the founder.

A more psychology related piece on being a founder:

2) I would also like to take a minute to explore some of the entrepreneurial values listed by the OP and dissect the reasons behind the actions. If we are asking why some entrepreneurs put “treat customers right” above the “bottom line”, or vice versa, it really isn't much about being a “good” or a “bad” entrepreneur. In “Confessions of an Advertising Man”, David Ogilvy argues that the reason companies do not put a lot more dollars into their marketing programs is that consumer sentiment and marketing results are often unmeasurable. As a business student, I can tell you that we are trained to be scientists, in the sense that we are taught scientific methods based on quality research that someone somewhere has quantified. Something that is unquantifiable is a scary thing, because

A) We cannot prove its success
B) We are not sure if we are doing it right
C) Shareholders would have a hard time grasping the results

As humans, we like to be right in the most efficient manner. Therefore anything unquantifiable, like investing in raising consumer value, is a scary thing. Hence why a lot of businesses care more about the bottom line than the customer experience, although this trend is changing =]. Now you might be thinking: can't we peg a value against returning shoppers, and aren't there a lot of recent studies supporting the notion that returning customers have high value? Yes and yes! Again, companies are slowly moving in that direction, yet almost all of them (and their shareholders) still measure success by the bottom line. Overall, it's not a notion of good entrepreneur vs. evil entrepreneur, but external factors and personality traits that affect action.

mdinstuhl 45 minutes ago 0 replies      
Thank you for writing this. I am relatively new to the whole entrepreneur / startup scene and I have already met some of the most RADICALLY different people I could imagine.

I have left startup companies based entirely upon the values of the co-founders. Their goals (REAL goals, not the goals listed in their mission statement) were simply too different from my own.

Again, thanks for a great post!

tejaswiy 2 hours ago 1 reply      
Completely offtopic, but every time I open one of your articles, my first point of focus goes to the photo on the side. Was this intentional or accidental?

It's really cool though; normally I don't pay attention to who wrote an article, and just skim through to see if it's interesting. Here, the initial focus on the photo reminds me that you're a very good blogger, based on other articles you've written in the past, and it grabs my complete attention right away.

Tech experts, including Google, to discuss future of the USPS washingtonpost.com
6 points by localtalent  39 minutes ago   discuss
Intel reinvents the transistor as a 3D Tri-Gate geek.com
45 points by bzupnick  4 hours ago   15 comments top 5
roschdal 2 hours ago 4 replies      
I found the information about the size of the 3d transistors interesting. Does this mean that Intel is currently able to develop and manufacture nanotechnology devices (1 to 100 nanometres)?

For example, in the video the red blood cell was much larger than the 3d transistors. If Intel is able to develop transistors on the same scale as small theoretical nanotechnology devices, why aren't we getting access to a lot of nanotechnology applications yet?

For example these nanotechnology applications:

VB6_Foreverr 3 hours ago 1 reply      
I can relate to the thickness of a hair, but I can't relate to the size of a red blood cell, so...
If Mr Bohr is an average-sized 1.8 metres tall and is only a little smaller than the diameter of the hair after the first shrinkage, that means that after the second shrinkage, where he's 100 nm tall, the hair is about 1000 times wider than he is. I calculate it would take him a good 20 minutes to walk past the end of the hair, and in his world this chip is about the size of a pool table.
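For what it's worth, the 20-minute estimate roughly checks out. A quick back-of-the-envelope calculation (the ~80 µm hair diameter and the 5 km/h walking speed are my assumptions, not the commenter's):

```python
# Sanity check: how long would a 100 nm tall person take to walk
# past the width of a human hair?
scale = 1.8 / 100e-9          # 1.8 m person shrunk to 100 nm -> ~1.8e7x
hair_diameter = 80e-6         # assumed human hair diameter, ~80 micrometres
apparent_width = hair_diameter * scale      # hair width in his frame, metres
walking_speed = 5000 / 3600   # 5 km/h in metres per second
minutes = apparent_width / walking_speed / 60
print(round(minutes))         # -> roughly 17, close to "a good 20 minutes"
```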
montagg 3 hours ago 2 replies      
Even Apple's hastily thrown together tour of their antenna facility was sexier than this. Wow.

I've never seen someone so awkward in front of a green screen before.

philjackson 4 hours ago 1 reply      
You quoted funny as if I shouldn't find it funny, but when that shrink ray blew up...
hackermom 2 hours ago 0 replies      
This video is a serious contender to Microsoft's Windows 7 House Party ad and their Songsmith ad.

add.: why does everyone hanging around here have a stick or two up their wazoo? Not a soul will fail to notice how truly ridiculous this video is in the same lovely way the Songsmith ad is, but why is one not allowed to mention this in jest without getting slammed over it? Breathtaking... :)

Exception Safety, Garbage Collection etc. (a.k.a. Why Does Java Suck So Bad?) slideshare.net
37 points by eplawless  2 hours ago   32 comments top 7
revetkn 1 hour ago 4 replies      
...am I reading correctly that his argument for using C++/D (!) is that it's hard to remember to say this:

    try {
        // ...use the resource...
    } finally {
        // ...release the resource...
    }
instead of this:


orangecat 1 hour ago 0 replies      
Python's "with" statement handles this scenario: http://effbot.org/zone/python-with-statement.htm
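For readers who haven't seen it, a minimal sketch of the pattern (the `Resource` class here is hypothetical): a context manager's `__exit__` runs deterministically when the block exits, whether or not an exception was raised, much like the `finally` blocks under discussion.

```python
class Resource:
    """Hypothetical resource whose cleanup must run deterministically."""
    def __init__(self, name):
        self.name = name
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs on normal exit *and* when an exception propagates,
        # exactly like a finally block.
        self.closed = True
        return False  # do not swallow exceptions

# Two resources, no nested try/finally needed:
with Resource("x") as x, Resource("y") as y:
    pass  # use x and y here

print(x.closed, y.closed)  # -> True True
```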
JulianMorrison 2 hours ago 0 replies      
Also, Go gets this right with "defer", and with mostly deterministic error handling (panicking is not the normal way to signal an error, returning a status is).
jriddycuz 31 minutes ago 0 replies      
The problem with this whole argument is that the author assumes that deterministic memory performance is completely necessary. It's certainly nice, but there are so many times when it just doesn't matter.

While I agree that Java sucks because it makes certain very common things extremely verbose, worrying about garbage collection isn't all that important except in systems-level programming (which isn't really done in Java) and large GUI apps that need tons of memory and still need responsiveness. But many people wouldn't even think to use Java in those cases anyway, so I'm not really sure what this guy's point is.

grimlck 2 hours ago 4 replies      
Interesting how Java is the only language included in the title, but the slides have the opinion that C#, Ruby and Python all suck as well.

Seems like a cheap way to get upvotes.

latch 1 hour ago 0 replies      
if PHP gets something right before your language does, you should reassess your life goals


JulianMorrison 2 hours ago 2 replies      
I think you can avoid nesting finally thus:

    X x = null;
    Y y = null;
    try {
        x = foo();
        y = bar();
    } finally {
        if (x != null) x.dispose();
        if (y != null) y.dispose();
    }

Stanford explores NYC engineering campus stanforddaily.com
27 points by SnowyEgret  3 hours ago   5 comments top 2
pvodsevhcm 46 minutes ago 1 reply      
This is a repost. The headline is appropriate for the stanford daily, but it becomes misleading when it gets copied verbatim to a site with a much wider audience. NYC is soliciting bids, and Stanford is one of the universities that applied. Stanford is not independently looking to build an NYC campus, as the headline implies.
asr 2 hours ago 2 replies      
This is definitely not a done deal--NYC has solicited proposals from many institutions, and it's not certain they will do the project at all. The NYT has a nice overview here: http://www.nytimes.com/2011/04/27/nyregion/bloombergs-big-pu...
Pioneers of Soviet Computing sigcis.org
27 points by gnufs  3 hours ago   2 comments top 2
mynegation 1 hour ago 0 replies      
Some of the people mentioned in the book were my lecturers at Moscow State University. All great people, but sometimes courses like 'Computer architecture' left me with mixed feelings. They used all these old architectures like BESM-6 as examples. It was great to know how it all started, but I felt like I was not getting enough knowledge about actual modern architectures, and I went through a couple of electives to make up for it.

Note to the translator and readers: please do not use Russian patronymic names in the translation. It is common for Russians to address each other with name and patronymic (a sign of respect). But patronymics confuse non-Russians. For example the book refers to Sergei Lebedev (or Sergei Alexeevich Lebedev if we use patronymic) as 'Sergei Alexeevich' many times over and over, and to non-Russians 'Alexeevich' sounds like a family name.

danohuiginn 59 minutes ago 0 replies      
If you're interested in this area, I highly recommend Francis Spufford's book "Red Plenty". It's a fictionalised account of some of this history (Lebedev is a character, for example). Its main focus is the interaction between Soviet computer science, economics and political idealism.

Here's one review: http://www.telegraph.co.uk/culture/books/bookreviews/7956346...

An API for European Union legislation epdb.eu
40 points by mazsa  5 hours ago   5 comments top 4
patrickk 3 hours ago 0 replies      
Slightly tangential, but Eurostat also maintains some useful data on EU members. It could be a useful data point if you are going after a European market with your killer startup idea. It is two years out of date, however, and stuff will have changed since then, especially in the likes of the PIIGS countries.

Purchasing power:


Broadband penetration rates:


mazsa 4 hours ago 0 replies      
"As a start we are giving away 50 API keys for the ones interested. The limitations on the number of API keys is to make sure we have enough resources to serve our users. Should the demand exceed this we will have to look into scaling our solution."
ortatherox 4 hours ago 1 reply      
I wonder if this could lead to a http://www.theyworkforyou.com/ for Europe
rprospero 49 minutes ago 0 replies      
Disappointingly, the API appears to be read-only.
The #1 Killer of Meetings, And What You Can Do About It hbr.org
16 points by ColinWright  3 hours ago   18 comments top 11
wccrawford 1 hour ago 2 replies      
Let's boil that down a bit:

Meetings should be for discussions, not education.

I think he's correct that PowerPoint encourages the latter and discourages the former.

However, I think he's wrong about having people present about things outside their area. That only encourages them to do research into things they have no business in.

Instead, only call meetings when decisions need to be made, and only let people prepare the facts needed to make those decisions, not presentations.

Lost_BiomedE 17 minutes ago 0 replies      
When presenting and wanting discussion, I have always found that the linear format fails hard. Are there any presentation tools that allow you to present slides in a flow-chart/logic-tree format? Linearly, if my assumed best track isn't the result of the discussion, the rest of the slides are worthless. I want a functional choose-your-own-adventure presentation.

I think a PowerPoint medium can work well in discussion if it allows deviations.

maurycy 12 minutes ago 0 replies      
Interestingly, the method overcomes storytelling. The same set of facts can be worded very differently, and the wording is usually heavily influenced by the presenter's feelings.
daniel-cussen 2 hours ago 0 replies      
TL;DR it's powerpoint.
MatthewB 31 minutes ago 0 replies      
The first thing I thought of when I read this was the episode of 30 Rock where Jack talks about "Meetings Magazine." Too funny.
zwieback 1 hour ago 0 replies      
We're having fewer and fewer group meetings with PowerPoint (a good thing) but more and more teleconferences where someone shares their desktop. For some reason that encourages people to share more spreadsheets, code, drawings, etc. but the flipside is that half the audience is surfing or doing their email.
petervandijck 2 hours ago 1 reply      
I thought killing meetings was a good thing.
autalpha 46 minutes ago 0 replies      
I find meetings are only effective if you are there to make a decision together. Even then, I would keep the meeting to exactly (and arbitrarily) 21 minutes! Any longer and you'd have to bring in alcohol :)

At the meeting, turn up the heat and make everyone stand up. In front of a group of sweaty and impatient people, let's see how long anyone can tolerate you should you ever go off topic.

niels_olson 48 minutes ago 2 replies      
Every time I read HBR, I'm less impressed. Am I missing something?
fishtoaster 1 hour ago 0 replies      
His thesis seems to be that powerpoint is a poor tool for fostering discussion.

To be honest, this seems kinda self-evident; powerpoint is a tool for disseminating information, for facilitating a largely one-way transmission from speaker to attendees. It's useful when you need that, but expecting it to help when you want a round-ish table discussion with major input from multiple people seems silly.

rokhayakebe 1 hour ago 0 replies      
Stop meeting altogether.
The author of Nginx on why V8 is not suitable for web servers google.com
238 points by hanszeir  16 hours ago   62 comments top 24
ErikCorry 10 hours ago 2 replies      
Speaking only for myself as ever:

Handling out of memory errors is a hard problem. At the point when you can't allocate more memory you are busted in most languages. For example in C a slightly deeper function call depth can cause the stack to expand at any point. If that point coincides with you running out of memory then you are busted. No program can keep running with a broken stack.

If V8 were capable of recovering from out-of-memory errors then you would still have to go through all of node and all the native libraries that it uses and check that they can handle any allocation failing.

And if V8 handled out-of-memory errors with exceptions then you have two choices. Either make the exceptions uncatchable, in which case the JS program running in the server has no way to recover and is probably in an inconsistent state. Or make the exceptions catchable, in which case there is no guarantee that any memory will ever be freed up and you are back to square one.

I think it's possible to make V8 more resistant to out of memory situations. I don't think it's possible to make it bulletproof, and I don't think there are many server apps that have this property. Do people run Java servers in such a way that they are often getting out of memory exceptions, recovering, and then continuing to serve? I don't think so.

In practice the way most servers work is that you give them plenty of memory and you write your server app in such a way that it does not use unlimited memory.

If there are non-out-of-memory errors that V8 is failing to recover from then these are bugs and should be reported. I can't think of any examples off-hand.

As far as the other comments go they seem to assume that you will want to use a V8 context per connection. Node uses one V8 context for all connections, so the comments don't apply. Context creation has been speeded up a lot since the article was written, but this is only for the browser. Node doesn't need it.

daeken 15 hours ago 2 replies      
While an interesting article in its own right, it has nothing to do with node.js, except that this happens to be a criticism of the way V8 works, which node.js is built on. However, they may not be using things in the same way he is, or there could be mitigating circumstances which make it possible for node.js to pull this off. (I don't know anything about node.js internals, I just know this article isn't about it, as the title indicates.)

Edit: This was written when the title was "The author of Nginx on node.js and why V8 is not suitable for web servers". It's since been changed.

baguasquirrel 16 hours ago 11 replies      
It's scary what you can do with machine translation these days...
EGreg 14 hours ago 0 replies      
Sysoev brings up a couple of good points that affect Node.js.

1) The fact memory allocation might crash the process is a serious problem. Is this still the case?

2) The fact that garbage collection is still stop-the-world may be a problem for server availability. Are we able to call the generational collector from node, or no?

Things that don't involve node:

The ability to create multiple objects in different contexts. Sysoev is thinking of the "scripting" model, like PHP, which is not how Node does things. Node just runs one process to handle thousands of requests, not thousands of processes or threads. There is no need.

wrs 12 hours ago 0 replies      
At Microsoft, I watched (from afar) the incredibly painful multi-year process of making a complex language runtime intended for one environment (.NET for IIS) satisfy the requirements of a very different environment (SQL Server). When fundamental design assumptions like "memory allocation failures can kill the process" have to change, it's a big deal.

Seems like process isolation a la fastcgi is the practical way to go, unless the V8 team itself wants V8 to be embeddable in a "reliable" way (meaning, it recovers from its own errors without corrupting the process it's embedded in).

cagenut 14 hours ago 1 reply      
These only sound like problems if you need to keep it single-process. If you break the JavaScript interpreter out into FastCGI workers, just like people do with PHP on Nginx, they become mostly moot points, right? At that point it's limited to 500 req/sec/core, but frankly these days that means 4000 req/s, which, hell, I'll take.

Granted, that's assuming you can actually run a FastCGI/V8 setup; I've never looked. I wonder how hard a mod_v8 for Apache prefork would be.

delinka 3 hours ago 1 reply      
I keep seeing these replies on how well Google Translate performed. I didn't realize while reading that it was machine-translated, but my immediate thought was "this person is not a native English speaker or needs much work his attempt at Yoda-speak does."
JulianMorrison 10 hours ago 1 reply      
IIRC, on Linux at least the memory allocation thing is moot. If your process runs out of memory, it dies anyway. You can't catch "malloc says no" because malloc never says no. It either says yes or blows your process's brains out.
ballard 14 hours ago 1 reply      
Node.js is often used because it is cool, new, and easier for front-end developers to use to build a functioning backend.

For more traditional uses that want something less hackish, Erlang, for example, crashes only a single process, not the entire VM. Functional languages in general, like Haskell and Erlang, are interesting for backend core services.

stewars 14 hours ago 0 replies      
Node does not create a new V8 context for each request so the 2ms => max 500 requests per second scaling problem does not exist with Node.
IgorSysoev 7 hours ago 0 replies      
The title is "Why V8 is not suitable for EMBEDDING in servers".
olegp 11 hours ago 0 replies      
A lot of the points raised are an issue only for long running processes. Akshell (http://www.akshell.com), for example, has synchronous I/O libraries and uses an Apache MPM like model, so for us the limits are less of a concern.
shadowsun7 15 hours ago 1 reply      
A thought: why not have Node developers fork V8 and modify it to make it ready for server deployment? Has the core team and/or Dahl thought about this?
glesperance 15 hours ago 0 replies      
The state of today's base of Node (V8) has nothing to do with the concept/paradigm it's trying to put forward -- that is, event-driven web apps. Hence, one must not discard Node as a bad solution only because V8 is not optimal in all situations.

Everything is always a work in progress.

rhasson 15 hours ago 0 replies      
It's a great write-up, and the context issue seems interesting; however, it's from Feb 2010 according to the last paragraph, and a lot has changed in V8 since.
ma2rten 9 hours ago 0 replies      
That may well be the best Google Translate translation I have ever read.
sassafras 15 hours ago 0 replies      
As I'm sure others will point out, this article has less to do with node.js than it does with embedding V8 in Nginx. Although since node.js runs on V8, I suppose you could interpret it that way.
gtdminh 9 hours ago 1 reply      
in Russian: Почему Google V8 пока не подходит для встраивания в серверы
in English: Why Google V8 is not yet suitable for integration into servers.

There is a word, "integration", that the author of the post forgot to put in.

VB6_Foreverr 8 hours ago 0 replies      
Maybe it's just me but every time I see V8 i think it's VB
bengtan 14 hours ago 0 replies      
Hmmm... I read this title as 'The author of Nginx on why (Nginx) V8 is not suitable for web servers' and I'm thinking ... huh?

Does someone want to change the title to be clearer?

Klinky 15 hours ago 0 replies      
It sounds like the concern is more that V8 is/was buggy and had some situations where performance would be compromised. I don't think it's a slam saying it could never be used.
mythz 15 hours ago 0 replies      
When Igor speaks, I listen.
mraleph 9 hours ago 0 replies      
The right title for this two-year-old article is "Why V8 is not suitable for nginx".

He has his own set of requirements which by no means applies to _all_ webservers.

jorangreef 10 hours ago 1 reply      
Likewise, Nginx is not suitable as a reverse proxy for streaming web servers, since Nginx proxies HTTP/1.0, not HTTP/1.1.
Visual Website Optimizer featured in India's national newspaper timesofindia.com
79 points by sushi  9 hours ago   22 comments top 9
FraaJad 4 hours ago 1 reply      

Times of India is the world's largest-circulating English newspaper [1]; however, "The Hindu" carries the banner of "India's national newspaper".

[1] http://en.wikipedia.org/wiki/Newspaper_circulation#World_new...

uast23 7 hours ago 0 replies      
This is a very recent development among media houses in India; they have suddenly started realizing that people here are actually earning money by doing "valid" business on the Internet. You can also find a lot of bloggers being covered by national newspapers very frequently. I say this is a very good move, because more than anything else it will help reduce the taboo of leaving a secure job to do a startup.
ThomPete 9 hours ago 1 reply      

I can't help but think that the discussion that was on here a while back had something to do with it all.

HN has turned into quite an impressive ecosystem (even if my assumptions about your success are wrong :) )

karterk 7 hours ago 1 reply      
Congrats! Curious - what is your technology stack?
brown9-2 3 hours ago 1 reply      
Is Google Website Optimizer really 5 years old?
maheshs 7 hours ago 1 reply      
Congrats paras, you are a big inspiration for every startup especially in India.
_ankit_ 7 hours ago 0 replies      
Congrats! Way to go! You guys have a product with great potential.

I can say that seeing your current implementation.

savrajsingh 5 hours ago 0 replies      
great job paras!
zygtot 6 hours ago 1 reply      
Congrats Paras - for gaming HN yet again! You are my hero!
Journal rejects studies contradicting precognition newscientist.com
20 points by wallflower  4 hours ago   8 comments top 4
tokenadult 2 hours ago 0 replies      
Richard Wiseman, interviewed for the submitted article, is a psychologist who studies human cognitive biases. His concern regarding Bem's purported studies on precognition is that if journals reject studies showing no effect, while accepting flawed studies appearing to show precognition, this results in a file drawer problem, in which publication bias among the published studies will then skew any subsequent attempt at meta-analysis of the published studies. The file drawer effect is of particular concern in meta-analysis of purported extrasensory perception.

By the way, Wiseman is perhaps best known online as the author of the Colour-changing Card Trick website, featuring videos of an experiment that demonstrates selective attention in human perception. If you haven't seen the video before, it is well worth watching.

I hope the more responsible journals of psychology will step up their efforts to publish commentaries on the study design of ESP studies, and publish in general more studies showing failures of replication of earlier study findings.

schrototo 2 hours ago 3 replies      
The TL;DR version: "This journal does not publish replication studies, whether successful or unsuccessful"
sigzero 3 hours ago 0 replies      
Did they reject the studies BEFORE they were submitted or AFTER?
Semiapies 1 hour ago 0 replies      
I'm curious how this will play out in the scientific world - if there's a slew of failed attempts to reproduce the findings, this will be the new cold fusion, and such an embarrassment may actually affect the way papers get published.

In the layman world, of course, the original paper will be used as justification for whole shelves of pseudoscience books, and the public won't actually hear about any failed attempts to reproduce the claims.

Foreman: Run Complex Apps with Ease daviddollar.org
15 points by ddollar  55 minutes ago   2 comments top 2
pvh 37 minutes ago 0 replies      
I'm a huge fan of Procfile. When I use a Procfile app, I know that regardless of the language or the configuration of processes, workers, clocks, or what-have-you, I can always get it up and running with a single `foreman start`.
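For context, a Procfile is just a plain-text file mapping process-type names to the commands that run them; a hypothetical two-process app might declare (names and commands are illustrative only, not from the article):

```
web: bundle exec thin start -p $PORT
worker: bundle exec rake jobs:work
```

Running `foreman start` then launches both processes and interleaves their logs.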
amalag 28 minutes ago 0 replies      
This is a great tool; glad to see something this useful. My webapp has 2-3 background scripts that need to be run, and this is far better than nohup!
       cached 6 May 2011 18:02:01 GMT