Hacker News with inline top comments (5 Jul 2012)
Corrupt App Store binaries crashing on launch marco.org
75 points by shawndumas  4 hours ago   27 comments top 8
smokey_the_bear 3 hours ago 1 reply      
This is also happening to our app, Gaia GPS, and it's pretty much ruining my holiday.

It seemed to affect US users around noon PDT, and then a batch of international users around 5 pm PDT.

markerdmann 15 minutes ago 1 reply      
One of my apps went into "In Review" status at midnight (19 hours ago) and still hasn't been rejected or approved. The rejection or approval has always happened in less than 12 hours for me, so it seems like Apple might be holding all approvals until they've resolved this issue. At least, I hope that's the case... I really feel for the developers who got a slew of one-star reviews because of this.
taligent 2 hours ago 2 replies      
I suspect the issue isn't the store corrupting binaries but the application servers being under heavy load and dropping connections to the user. It raises the question of why they aren't doing MD5 validation of the binaries before launching and notifying the user.

It is the 4th of July holiday, after all. A lot more traffic.
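The validation step proposed above could be sketched like this (a hypothetical client-side check in Python, not anything Apple actually ships; the function name and flow are invented for illustration):

```python
import hashlib

def verify_download(data: bytes, expected_md5: str) -> bool:
    """Recompute the MD5 digest of a downloaded binary and compare it
    against the digest the store published for that build."""
    return hashlib.md5(data).hexdigest() == expected_md5

# A truncated or corrupted download fails the check, so the client
# could re-download instead of installing a binary that crashes on launch.
payload = b"app binary contents"
published = hashlib.md5(payload).hexdigest()
print(verify_download(payload, published))       # True
print(verify_download(payload[:-1], published))  # False
```

In practice a stronger digest such as SHA-256 would be the better choice; MD5 is only what the comment names.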

scottchin 3 hours ago 2 replies      
I have two waiting-for-review updates that have been sitting in the queue for 9 days now. This seems much longer than my past experience of 2-4 days turn-around. I wonder if these delays are related to the corrupt updates issue.
scottchin 2 hours ago 2 replies      
Any developers know how to change an app update that is "Waiting for Review" from "Automatically Release" to "Hold for Developer Release?"
alttab 1 hour ago 0 replies      
Hopefully no one was relying on that income or those good reviews. All the more reason for developers to diversify their product strategy.
samstave 1 hour ago 0 replies      
I just updated the "CUE" app in the last few days and it crashes on launch every single time.
bobbypage 55 minutes ago 0 replies      
I don't think that's possible.
Why IQ Breakthroughs Aren't: Algernon's Law gwern.net
60 points by kiba  3 hours ago   44 comments top 11
tokenadult 2 hours ago 4 replies      
I guess I will have to be the first here to join issue directly with the thesis statement of this interesting article:

"The lesson is that Mother Nature knows best. Or alternately, TANSTAAFL: 'there ain't no such thing as a free lunch.' Trade-offs are endemic in evolutionary biology. Often, if you use a drug or surgery to optimize something, you will discover penalties elsewhere. . . .

"In 'The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement' 12 (Human Enhancement 2008), Nick Bostrom and Anders Sandberg put this principle as a question or challenge, 'evolutionary optimality challenge' (EOC):

"If the proposed intervention would result in an enhancement, why have we not already evolved to be that way?"

The answer to the evolutionary optimality challenge (EOC) comes from any properly taught Biology 101 course: evolution is not a teleological process, and it neither seeks nor is driven by "enhancement," but results in haphazard adaptations of ancestral systems through stochastic survival of genes.

I will make the counterclaim here that it is by no means clear (and it is certainly not conclusively shown by any of the examples in the interesting submitted article) that anything that can properly be called "optimizing" of human beings or of humanity as a whole necessarily results in a "penalty."

The staggering additions to human well being (and to the number of living Homo sapiens individuals) as a result of cultural innovations around the globe look to be mostly gain with remarkably little pain. I can eat foods that are grown in environments I have never visited, use this Internet device to communicate with all of you, fly to places far away without sprouting wings, and learn the thoughts of deep thinkers who are long dead. The progress of humankind in the last few thousand years has been a story of casting off natural constraints.

I may have to use the full Hacker News editing window for comments to "optimize" this reply some more after I spend some time with my family, so please don't penalize me yet for the brevity of this reply. Before someone builds a chain of conclusions on a supposed natural law to reach other conclusions, it is first logically necessary to demonstrate the truth of the claimed natural law. "The only way of discovering the limits of the possible is to venture a little way past them into the impossible."


Thanks for bringing up this topic of discussion.

First edit: I see another comment mentions group selection. That is not a widely accepted idea (as contrasted with kin selection) in evolutionary theory. See three recent posts (there are more where those came from) from Jerry Coyne's Why Evolution Is True website:

24 June 2012 "The Demise of Group Selection"


26 June 2012 "Did human social behavior evolve via group selection? E. O. Wilson defends that view in the NYT"


28 June 2012 "I (and others) comment on Steve Pinker's discussion of group selection"


Oh, and since we are talking about human intelligence as a big part of the discussion here, I should mention the Wikipedia user bibliography "Intelligence Citations,"


a reference pathfinder that will always need more editing, as there is continually new research on this topic, but which already gathers many of the best monographs on the subject and some good review articles in one place.

Second edit: pjscott's thoughtful reply asks,

I don't think you've actually disagreed with the article. Did you read the 'Loopholes' section?

I've reread the article again, now that you've asked, and I think what I see here is a certain degree of rhetorical incoherence. I wouldn't compose an article this way if I wanted to make a tight argument, but perhaps the author desires to "essay," and try out ideas, and is still making up his own mind.

Anyway, based on the author's yeah-buts in some of the examples given after the "Loopholes" section of the article, and on the article's title, and on the first wave of comments received here on Hacker News, I have to take a stand and strenuously disagree with the idea that there is any "evolutionary optimality challenge (EOC)" to be met by human beings endeavoring to improve themselves, to improve human society, or to improve the world in general. There is plenty of scope for further optimization of individual human beings and of the human condition.

vibrunazo 1 hour ago 4 replies      
> If the proposed intervention would result in an enhancement, why have we not already evolved to be that way?

Really? This is getting upvoted here? How disappointing.

"Evolution doesn't have a goal, it doesn't make future plans. Evolution is an accident."
- Richard Dawkins in The Selfish Gene

Did you know giraffes have a nerve connecting two points in the head that are about 5cm apart? But instead of connecting those 2 points directly, the nerve goes all the way down through the neck, then all the way back up, to reach its end point. Is there any advantage in this design over just connecting directly? No, there isn't; giraffes are that way only because, historically, that's how they evolved. Evolution doesn't make intelligent plans. Mutation happens randomly, then natural selection will sometimes prune out bad mutations. That's all. Evolution is imperfect. Suggesting there's anything intelligent about evolution is alluding to intelligent design and creationism.

Cushman 1 hour ago 2 replies      
This is an interesting article, but there's something about it that bothers me that I'm trying to put my finger on. I think I'm a little wary of using this sort of analytic argument to validate a theory like this ex post facto while ignoring a much simpler theory that trivially explains the same results:

Medicine is hard.

It has nothing to do with evolution; take an intelligent person with no mechanical experience, tools, or instructions, give them a car which is behaving oddly, and ask them to fix it. It will be a miracle if they manage to properly diagnose and fix the problem without destroying at least a couple of engines. The fundamental rule is "Interfering with a complex system for which you don't have the manual by trial and error is as likely to do harm as good."

I guess I don't see why this is treated as an insight into human intelligence. No one with mechanical experience thinks you can make a car go faster by using hotter gasoline. No one with technical experience thinks you can make a computer run faster by boosting the input voltage. For the same reason, no one with medical experience thinks there's a drug that can make you smarter. It just doesn't make sense.

But that doesn't mean you can't make a faster car, or computer, or brain, or that it need be terribly difficult. For all of these, the answer lies in structure. And in this respect, brain modification may be the easiest of the three, because modifying brain structure is one of the primary functions of the brain.

ghshephard 2 hours ago 1 reply      
Excellent article all around - I wish more people wrote this way. The author initially puts forth a thesis, "Any simple major enhancement to human intelligence is a net evolutionary disadvantage." - and then sets out to support it while simultaneously identifying all of the major issues with this concept, the possible loopholes, and touching on all the popular memes that the HN crowd will likely want to bring up (modafinil), examining them through that lens of "If it's so great, why didn't we evolve that direction?"

The author is opinionated, controversial, entertaining, and educational - with citations to boot.

tikhonj 1 hour ago 0 replies      
This is slightly off-topic, but I've always been suspicious of the "TANSTAAFL" principle. Maybe reading some books by Heinlein soured me on it :P. (He's normally a great writer, but either I didn't understand To Sail Beyond the Sunset, which is likely, or it wasn't any good, which is unlikely but possible.)

Basically, the assumption there is that the current equilibrium is optimal or close to optimal. The problem is that this is often untrue. If I'm being inefficient, there is no reason for an improvement not to be completely free, after all.

It also smacks of "conventional wisdom" which is often much more conventional than wise.

More pertinently, I think this can apply to evolution. Now, clearly, I am no expert on evolution, so I could be completely off-base. But my understanding is that evolution can get caught in local maxima. That is, there could be some sufficiently remote global maximum in the fitness function that isn't reached because any probable mutation hits a lower value between the current state and this possible maximum.

I suppose an example could be about how evolution never came up with the wheel. I think there are cases where wheels would increase fitness, but they are simply too remote from existing organisms to reasonably evolve.

If such an effect exists, then some sort of design process could overcome it. In this case, we could strictly improve on evolution. Now, I'm not sure if this is an actual effect, but it seems plausible. If anyone has any actual studies on the subject, I would like to see them.
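The local-maximum effect described above can be made concrete with a toy greedy hill climber (purely illustrative; the landscape numbers and function names are made up):

```python
# A toy fitness landscape with a small local peak (index 2, fitness 5)
# and a taller global peak (index 7, fitness 10), separated by a valley.
landscape = [1, 3, 5, 2, 1, 4, 8, 10, 7, 2]

def hill_climb(fitness, start):
    """Greedy one-step hill climbing: move to a neighbour only if it is
    strictly fitter - roughly, selection pruning single bad mutations."""
    pos = start
    while True:
        neighbours = [p for p in (pos - 1, pos + 1) if 0 <= p < len(fitness)]
        best = max(neighbours, key=lambda p: fitness[p])
        if fitness[best] <= fitness[pos]:
            return pos  # stuck: every single step downhill is rejected
        pos = best

print(hill_climb(landscape, 1))  # 2 -- trapped on the local peak
print(hill_climb(landscape, 5))  # 7 -- reaches the global peak
```

Starting on the wrong slope, the climber never crosses the valley to the better peak, even though a designer looking at the whole landscape could place it there directly.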

Goladus 12 minutes ago 0 replies      
Among other things, the author of the OP does not adequately address the role of environment in survivability, reproductive success, and genetic expression itself. He mentions birth control. Birth control is a valuable adaptation in certain environments, particularly where sex plays an important social role beyond reproduction (see: bonobos) or where resources are scarce and would be wasted on excess offspring. But obviously, birth control can be severely maladaptive for an individual's genes, and populations that overuse it may be at greater risk of extinction than those that don't.
spindritf 2 hours ago 0 replies      
The changed trade-offs regarding nutrition are especially interesting. Right now in the West we can meet pretty much arbitrarily high calorie intake requirements with decent control over their composition, but we're still in energy-conserving mode, making us fat and "lazy".

Another trade-off completely changed by modern civilization is risk aversion. We can afford much more risk because of modern medicine, because of living in populous, anonymous cities, because of immense wealth to fall back on... And yet it takes a lot of courage for most people to approach and talk to a stranger (an attractive stranger of the opposite sex in particular), fight in a martial arts tournament, or stand out in any significant way (by dressing differently, by speaking loudly). Those who do usually aren't making a calculated decision but are simply unadjusted.

Alex3917 2 hours ago 3 replies      
"Any simple major enhancement to human intelligence is a net evolutionary disadvantage."

An individual who is more intelligent might be less likely to pass on their own genes, but may well make the species as a whole more resilient. Looking at evolution from the perspective of the individual is a mistake.

Xcelerate 53 minutes ago 0 replies      
The whole premise of his argument is based on the statement "There's no such thing as a free lunch". His logical conclusions are then derived from this. However, he never actually proves the statement, and I for one don't believe it.

Why must there necessarily be a tradeoff for anything good? There's no law requiring that, and what makes one person happy may make another person miserable.

EDIT: I just realized there's about 15 people on HN that said the same thing. And that's why I visit this place!

rubashov 2 hours ago 1 reply      
"The Bell Curve" way back in the 90s touched on the many ways substantially above-average IQ appears to be maladaptive. Once the parents are much more than 1.5 standard deviations above the mean on IQ, infant mortality actually spikes.

Also, various neurological diseases appear strongly correlated with the IQ of the parents - Tay-Sachs, for example.

APelletier 2 hours ago 1 reply      
I believe there is far too much emphasis placed on genetic and/or chemical/biological explanations of intelligence. Whether you agree with the sentiment of the linked essay or not, the only real and true control we can ever have over intelligence through genetics/biology is via eugenics.

Until such a practice is accepted (and here's hoping it never is) we need to focus on what we can control: culture. The unfortunate thing is that this is difficult to approach in a genuinely scientific/empirical way. The fortunate thing is that some common sense can go a long way: feed babies/children well, set the bar high from an early age, start teaching them a second language early, expose them to new and many experiences, etc.

Stephen Hawking lost $100 bet over Higgs boson discovery bbc.co.uk
98 points by mproud  5 hours ago   23 comments top 6
teamonkey 7 minutes ago 0 replies      
This is not the first time Hawking has made such a bet.


garyrichardson 4 hours ago 2 replies      
His smile at the end of the video is priceless.
olalonde 4 hours ago 2 replies      
Eliezer Yudkowsky (AI researcher) also lost his bet it seems (http://lesswrong.com/lw/1dt/open_thread_november_2009/17xb).
DanBC 1 hour ago 1 reply      
What else are they going to do with LHC now?
s_henry_paulson 4 hours ago 3 replies      
Seems like he was betting $100 on altruism.

I have no idea how to feel after watching this video.

DigitalSea 3 hours ago 1 reply      
Fair enough, they've found proof that the Higgs boson exists, but this is only the beginning. Nothing will come of this discovery for a very long time; it's a start, but there are many more years of research before any benefit of this discovery is seen or felt. Great news though, I knew they'd find it eventually.
Tarsnap outage post-mortem daemonology.net
112 points by cperciva  6 hours ago   34 comments top 12
gleb 4 hours ago 3 replies      
Acunote is a customer of tarsnap and so am I personally. The professionalism and transparency that you can see in this RFO are a big reason why I use it and recommend it. The quality of the product/solution is the bigger reason.

What's not mentioned is that throughout the outage the customers were getting timely emails from Colin letting us know what's going on. We didn't have to go to a blog, twitter, facebook or some other ungodly place to find out what's happening. This is how it should be done.

knowtheory 4 hours ago 0 replies      
I love reading about Tarsnap. This is the sort of post that exemplifies someone who has a deep knowledge of his craft, a responsibility to his customers, and probably most noteworthy, a measured perspective and appreciation for life outside of just the business and web service he runs.
raghus 3 hours ago 0 replies      
For many of us, a datacenter losing power is the only effect we will see from this storm. For most of the people who were directly affected by the storm, it's the least of their worries.

Very well put. Good to put things in perspective.

16s 4 hours ago 2 replies      
My power just came back on today. Amazing levels of damage from the storm here in Virginia. Some homes are expected to be without power for another week or more. To add to that, every day the temps are in the mid 90s with 50% humidity. It's been miserable but a good learning experience. I need to build an outdoor camp shower and a latrine. Believe it or not, having those would have been a great luxury.
moe 4 hours ago 1 reply      
Thank you.

That's a nice post-mortem and reinforces my trust in tarsnap.

jbellis 4 hours ago 2 replies      
What design improvements are you considering to make this go faster, should it be necessary again?
rbancroft 4 hours ago 0 replies      
Great description and I have to say I really liked the broader context of the outage. The severity of the storm was something I had been curious about but hadn't looked up myself yet, and so I appreciated the extra education!
robryan 2 hours ago 0 replies      
File system corruption is mentioned here, and I read the Amazon posts on the issue mentioning that they let customers check EBS disks for potential corruption in a read-only state (I think it was).

Is there some kind of guide to this process? I feel like I am underprepared if my EC2 instances get hit by a similar failure.

davidbanham 1 hour ago 1 reply      
Tarsnap is a _great_ product, but I worry that I'm not paying you enough money for it. It's super cheap for what we get from it. The very last thing I want is for you not to be making enough from the business to warrant putting in the kind of effort and energy that you do, and to then have the service go away. If the current pricing structure achieves that, then brilliant, but if it doesn't, please raise your prices and keep kicking goals.
da_n 3 hours ago 0 replies      
"...after which Amazon wrote in a post-mortem that 'We have also completed an audit of all our back-up power distribution circuits'"

Not saying this was unpreventable, but there is a tone of disingenuous doublethink in Amazon's recent status updates. Fail-proof is failing.

serverascode 4 hours ago 0 replies      
Great work going on at tarsnap. I've known about it for a while; should start using it.
baconhigh 4 hours ago 0 replies      
I really admire how well written this outage report is, and how transparent the whole process is. +1
4 min Emacs screencast inspired by Bret Victor's Inventing on Principle emacsrocks.com
49 points by magnars  4 hours ago   11 comments top 9
mark_h 1 hour ago 0 replies      
Awesome!! That guy is steadily doing some amazing publicity for emacs.

This is a "live" video he did, slightly longer at 18 minutes, but with lots of javascript refactoring, etc:

espeed 39 minutes ago 0 replies      
This is cool -- definitely digging into this tomorrow. I use swank-clojure every day, and it's incredibly productive to see real-time results as you type. I didn't know there was a swank-js. Is there by chance a swank-python out there somewhere?
wavephorm 37 minutes ago 0 replies      
Now he needs to do a 4 hour step-by-step tutorial on how he set up Emacs to do this.
tikhonj 2 hours ago 0 replies      
Wow, that is awesome!

I think it would be cool to combine it with a small JavaScript library that could make things like the transformation between animation and just drawing a path automatic. I doubt this would be too difficult, and it would make going between them much easier if you didn't have to change your code significantly.

There is no question that I'm using Swank-js for all my web development now!

sdgs86 2 hours ago 2 replies      
I've been a vim user since I was ten years old, but emacs is impressing me more and more. The learning curve for setting up some of these plugins is pretty high; does anyone have a tutorial so I can get this up and running?
michael_michael 2 hours ago 0 replies      
A note for the author regarding Dropbox's "uncool" headers: Adding `?dl=1` to the Dropbox link will automatically prompt users to download the file upon clicking, and save you from having to write the disclaimer asking users to right-click and "Save As..."
mattdeboard 2 hours ago 0 replies      
Oh man, definitely going to get this up-and-running tomorrow. This is awesome, potentially.
chamakits 2 hours ago 0 replies      
This is some amazing stuff.
As a fan of emacs, but having no experience with javascript, I would like to be able to recreate this personally, and possibly use this as my training ground. I know the purpose of the post is to show a short video of something impressive, but could you consider providing the code samples and maybe even a guide of the code?

Regardless of whether you can or not, it truly is some amazingly impressive work. Thanks for sharing!

Aykroyd 1 hour ago 0 replies      
I love these kinds of demos. REPLs are great... it makes me want to go back to using emacs.
ACTA killed in EU parliament: 478 votes to 39 falkvinge.net
441 points by charliesome  16 hours ago   79 comments top 12
rickmb 14 hours ago 1 reply      
As expected.

Even most pro-ACTA parties voted against it to save their countries' leaders from having to find another way out of this mess now that public opinion has turned against it. Blaming the EU parliament is a relatively cheap and easy way to nullify their previous commitment (and signatures!), and one that won't result in US sanctions against individual member states.

It's basically a get-out-of-jail card for those countries that did a 180 and would no longer ratify ACTA despite signing it.

rlpb 16 hours ago 14 replies      
Is there a list of the MEPs who voted yes? I'd like to make sure to vote these MEPs out if I have the opportunity, and I'm sure others would as well.
willvarfar 16 hours ago 4 replies      
Commissioner Karel De Gucht said recently he'd re-field it until it passed:

> If you decide for a negative vote before the European Court rules, let me tell you that the Commission will nonetheless continue to pursue the current procedure before the Court, as we are entitled to do. A negative vote will not stop the proceedings before the Court of Justice.


hobin 16 hours ago 1 reply      
I'm very happy about this. Not only because ACTA was bad, but also because this shows the views of the parliament were fairly congruent with the views of the public - and that's always a good thing in a democracy.

Of course, the battle ain't over yet.

gouranga 16 hours ago 1 reply      
A victory of common sense. It's nice to see a logical outcome rather than a sponsored outcome (which is how the US and its intensive lobbying works).
debacle 16 hours ago 1 reply      
It's strange to live in a world where sanity feels like insanity.
TazeTSchnitzel 16 hours ago 2 replies      
As someone else pointed out, the concept of Intellectual Property, its goals, and how it is enforced continues to diverge on the two sides of the Atlantic, which is probably a good thing.
andion 2 hours ago 0 replies      
Great, thanks to this thread's comments I have realized the only representative "my" party has in the EU Parliament did not vote... but her last blog post says she is happy they won't let ACTA be approved. I feel... sad.
kahawe 12 hours ago 2 replies      
> The European Commissioner responsible for the treaty, Karel de Gucht, has said that he will ignore any rejections and re-table it before the European Parliament until it passes

Is it just me with my limited political understanding, or is making this claim publicly just an outrageous slap in the face of EU citizens and democracy? Could someone with better understanding shed some light on the processes at hand and how de Gucht could make that statement without instantly being ousted from office?

belorn 13 hours ago 0 replies      
An important event for subjects outside the intellectual property context. The next time someone has the bright idea to create a treaty in secret, people can point to ACTA as a cautionary tale. That alone is maybe the most important victory here.
mtgx 15 hours ago 1 reply      
Next up: TPP. Bring it MPAA/Obama administration!
malandrew 9 hours ago 0 replies      
Even though ACTA is dead, I would like to see those 39 that voted for it lose their seat in office over it.
Adobe Flash API ported to Dart github.com
39 points by cnp  5 hours ago   2 comments top 2
gauravk92 54 minutes ago 0 replies      
AS3 -> Dart -> JS, a bit much don't you think? I'm not sure who is trying to use a web stack three levels deep, and Dart-based no less. I'm skeptical of how long Dart is going to last at Google, but this is definitely an impressive piece of work regardless. If it were AS3 -> JS and worked as well as it does now, I feel the web would collectively lose its shit. Unfortunately, though, this is tied to Google, and I don't think many people want to be stuck being supported solely by Google anymore.
DanielRibeiro 43 minutes ago 0 replies      
Easeljs does this pretty well, but in pure js:


A Gentle Introduction to Algorithm Complexity Analysis discrete.gr
58 points by gtklocker  6 hours ago   12 comments top 6
mbenjaminsmith 14 minutes ago 1 reply      
Great article. I've understood the why and what of Big-O but never how to do the analysis.

I have a question for the informed however:

The article says that we only consider the fastest-growing parts of an algorithm. So, counting instructions, n^2 + 2n would just be n^2. But why do we do that? Imagine an algorithm where we have n^12 + n^2 + n^2 + n^2 + n^2, etc. Do we really ignore each n^2 section?
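For what it's worth, the usual answer to the question above: yes, every lower-order term is dropped, because as n grows they contribute a vanishing fraction of the total count. A quick numeric sanity check (throwaway sketch):

```python
def exact(n):
    return n**2 + 2 * n   # the full instruction count

def dominant(n):
    return n**2           # only the fastest-growing term

# The ratio tends to 1, so the 2n term changes nothing asymptotically.
for n in (10, 1000, 1000000):
    print(n, exact(n) / dominant(n))
# 10 1.2
# 1000 1.002
# 1000000 1.000002
```

The same logic collapses n^12 + n^2 + n^2 + n^2 + n^2 to n^12: the four n^2 terms sum to 4n^2, and both the constant factor 4 and the lower-order term are discarded relative to n^12.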

ssurowiec 6 hours ago 0 replies      
For anyone interested in this subject I'd highly recommend the Algorithm Design Manual by Steven Skiena[0]. He goes over this in pretty good detail in the first ~1/4 of the book. This was my first introduction to Algorithm Complexity Analysis and by the end of that section I already found myself looking at my code in a different light.

[0] http://www.amazon.com/dp/1849967202

tsurantino 2 hours ago 0 replies      
I've been doing the Coursera course on Algorithms (from Stanford by Tim Roughgarden). He's going over concepts like Big-O notation as well as analyzing algorithms. This particular article is a refreshing read, and I would highly suggest the Coursera course as an accompaniment if someone wants to go deeper.
rlu 5 hours ago 1 reply      
I wonder if the title is in jest. Clicking on this link and then being greeted with a wall of text (literally! It fills my entire screen) is hardly gentle.

I'm sure it's a good read just not right now >_>

captaincrunch 6 hours ago 0 replies      
A gentle intro? It's a bloody novel!
sodelate 2 hours ago 0 replies      
A similar topic is covered in many computer algorithm books.
HTC wins swipe to unlock patent dispute against Apple bbc.com
130 points by saket123  10 hours ago   60 comments top 11
azakai 8 hours ago 4 replies      
> The judge said that HTC's "arc unlock" feature - which also involves a predefined gesture along a path shown on-screen - would have infringed Apple's technology had it not been for a device released in 2004.

No, no, no. It is clearly obvious, the fact that there happens to also be prior art just adds insult to injury. If there had not been prior art, it would still be a frivolous, trivial patent.

This is exactly what's wrong with the patent system - you don't need prior art to tell you something is obvious and should be unpatentable.

scott_w 6 hours ago 1 reply      
Interesting how "prior art" can require a product to be released in a country to apply.

So, an international company can just see what is developed in another market, copy it and patent it in their own country?

I can understand this being the case in 1912, but we have the world wide web. Surely this concept is out of date?

dataminer 8 hours ago 0 replies      
The following video of the Neonode N1 (cited in the article) shows the slide-to-unlock feature predating the iPhone


oraj 8 hours ago 1 reply      
"We remain disappointed that Apple continues to favour competition in the courtroom over competition in the marketplace."

This. I do think that Apple is an innovative company which creates great products. But this does not in any way justify its actions in courtrooms all over the world trying to exploit a system that is clearly outdated.

mrkmcknz 8 hours ago 2 replies      
It infuriates me when I hear of these 'swipe to unlock' and 'pull to refresh' patents that actually get issued.

Defensive use only when patents as pathetic as these are used is all well and good. Wasn't the patent system created to protect real innovation?

A fucking slide to unlock gesture is not innovation.

Now let me go and patent that 'dance to pay' gesture.

MichaelApproved 6 hours ago 2 replies      
I see a lot of comments saying swipe to unlock is obvious but I disagree. Just because something is simple doesn't mean it's obvious.

Now, that doesn't mean I think Apple deserves a patent for swipe to unlock but I do think people are mixing up the terms "obvious" and "simple".

novalis 6 hours ago 3 replies      
"Apple declined to comment on the specifics of the case.

Instead it re-issued an earlier statement, saying: "We think competition is healthy, but competitors should create their own original technology, not steal ours.""

This reeks of disastrously lazy damage-control PR.

rodion_89 8 hours ago 1 reply      
I'm curious, can Neonode now sue Apple over the use of "swipe to unlock"?
petitmiam 6 hours ago 0 replies      
In tennis and cricket, you get a set amount of challenges. Once you've used them up, you can't make any more.

Could the courts implement something similar for patent disputes?

studio625 8 hours ago 2 replies      
What would the world look like if these patent trolls got their wishes?
jiggy2011 6 hours ago 0 replies      
whooopeee, does this mean my HTC phone will now get rid of that stupid "pull ring to unlock"/"pull ring to answer call" crap?
Replacing text in the DOM… solved? padolsey.com
47 points by bpierre  5 hours ago   12 comments top 6
xutopia 4 hours ago 0 replies      
I did something similar as a jQuery plugin here: https://github.com/garyharan/jquery-replace-utilities
tantalor 22 minutes ago 0 replies      
Why not make this a method of Element?

  myElement.findAndReplaceText(/foo/, 'em');

albertoavila 3 hours ago 1 reply      
I actually did something like that a few months ago: https://gist.github.com/3050333

Only the problem was a bit different: I wanted to be able to select a section of a blog post to generate a URL fragment representing that selection, so that if you shared it, that section would be highlighted when entering the page. But the underlying problem is the same. Example:


jQueryIsAwesome 3 hours ago 0 replies      
One of the reasons not to use the innerHTML prop is because (almost) all interactivity gets broken; for example a CSS3 animation restarts if you set innerHTML of one of its parent elements.

Another reason is that IE (even version 9) likes to delete whitespace when you set innerHTML, even in PRE tags!

mkmcdonald 5 hours ago 0 replies      
Oddly enough, I was working on a similar problem earlier today.

I've found that collecting the Text nodes and returning them in an Array is preferable. All it requires is some simple traversal. The developer (that should know what text goes where) can then map text to nodes.

wslh 5 hours ago 1 reply      
The thing is, it also depends on the browser version. You couldn't do that in IE 8 and below because there is no real DOMRange support.
Things I like about programming in Go jgc.org
134 points by jgrahamc  11 hours ago   70 comments top 8
cletus 8 hours ago 3 replies      
I have a hard time getting past it being mostly unusable on 32-bit Linux [1]. That seems like a fundamental flaw in the garbage collector; moving to 64-bit simply kicks the can down the road.

And before anyone suggests "just use 64 bit", that's actually a really crappy solution. In most cases it merely (almost) doubles your memory footprint for no real gain.

[1]: https://groups.google.com/group/golang-nuts/browse_thread/th...

ardit33 8 hours ago 6 replies      
I tried it, and was excited about it, but the lack of easy IDE integration and the lack of a proper debugger were a turn-off for me. For now at least.

Two questions to people that use Go:

1. Does Go have a good debugger that is easy to use? I.e., I shouldn't have to mess with the command line, but should preferably be able to use it directly from something like Eclipse. It should work even on Windows.

2. Is goclipse working properly? Last time I used it, while it would show the syntax fine, build and run were not working. I had to use the command line.

I think a language should be developed together with its tools. Right now Go is lacking in this department.

jbert 10 hours ago 4 replies      
Will Go really multiplex goroutines which are blocking on system calls as well as those blocking on Go-level entities (e.g. channels)?

The underlying OS threads will do that, but if goroutines aren't 1-1 with OS threads (which I don't believe they are), how is this achieved? Are all current syscalls intercepted?

What about calls out to external C libs which in turn call blocking syscalls?

(I could test, but if Go happens to be running all my goroutines in their own OS threads, I could get a false-positive "pass".)

phasevar 10 hours ago  replies      
I've been a Python developer for a decade. Go is the first language to successfully pull me away from Python. I've been developing in it for almost a year now and for every reason this blog post points out, I'm in love with it.
eternalban 3 hours ago 0 replies      
CSP is an interesting take on concurrency, but it lends a degree of opacity to the scheduling of tasks. So far (working in Go), in minor or well-understood patterns it is a non-issue, but I wonder what would happen at large scale. It is also worth noting that in the Go 1 runtime, switching from 1 "processor" to N can certainly alter the behavior of your program: the same precise [1] binary may or may not hit CSP/Go's version of "deadlock" ("all goroutines are asleep") depending on just one bit of difference in the binary. The current runtime also gives rise to some interesting (and very unintuitive) performance differences when hacking with the CSP mechanism, e.g. peppering Go sources with unnecessary sleeps to hack the scheduler.

[1]: well nearly the same. Somewhere in the binary, a call to set maxprocs is present. Same exact program differing only in the param to this call (i.e. 1 vs 2).

eta_carinae 6 hours ago 2 replies      
> And the fact that it's missing exceptions seems like a win because I much prefer dealing with errors when they occur.

This person doesn't seem to have much experience with error handling.

You shouldn't deal with errors the way you prefer it, you should deal with them the right way. Sometimes, it's where they occur, but very often, it's somewhere up the stack.

We learned this lesson in the 90's.

jwingy 10 hours ago 2 replies      
Maybe I'm not understanding it correctly, but on the point about "it's ok to block", does that mean Go will continue to serve other requests normally, but at the point of the blocking code, it automatically has a "callback" to the next command? (effectively saving the programmer the pain of having to write code in callback fashion)

Having no prior experience with Go, I found the rest of the article informative!

didip 8 hours ago 2 replies      
Does Go have package management? How do you guys find third-party libraries?
Software patch to avoid Galaxy Nexus ban coming soon cnet.com
18 points by eplanit  3 hours ago   6 comments top 3
ghshephard 2 hours ago 1 reply      
I'm of two minds about this - on the one hand, I'm disheartened at the continuing saga of a patent system that allows such ludicrous ideas as "swipe to unlock" to be patented. If there was ever a "system or process" that didn't need patenting to have its "methods" revealed to the public to promote the "Progress of Science and useful arts", "swipe to unlock" would be it.

On the other hand, actually, no, I'm just of one mind on this one.

cargo8 53 minutes ago 0 replies      
The patent they are getting caught up with is for the Universal Search. I read that the patch will dumb down the voice + google search bar to only perform google searches and not search through apps, contacts, etc... Supposedly the suit does/will not affect Google Now, though.

Just another example of how terrible it is that patents are stifling progress. Also hilarious since Google's core is basically universal search.

idspispopd 1 hour ago 0 replies      
My hope is that the patch doesn't just replace the feature with an arbitrary and unpleasant equivalent.

Rather, I'd prefer if they took their time and presented a solution that beats "swipe to unlock", making it a real lemons-to-lemonade scenario.

Unfortunately, if the past repeats itself, the financial pressure of getting to market tends to favor the first option over the second.

Modeling real life actions for FB's Open Graph geekli.st
3 points by mathrawka  36 minutes ago   2 comments top
alttab 33 minutes ago 1 reply      
This seems interesting. However, I want less of my life integrated into Facebook, not more.

Useful for those who have strong overlap between people who would care about their commits and friends on Facebook that would give a shit.

The Perfect Compliment esquire.com
163 points by Firebrand  15 hours ago   49 comments top 13
jswanson 13 hours ago 5 replies      
Kind of an odd article: the writer spends the first part of it trying to come up with compliments, which usually fail.

Instead of manufacturing a compliment, just pay attention and realize what you think looks nice, what you like, or what you think is cool. Instead of holding back and not saying anything, which I think is what a lot of us do, tell them.

If the person has a funny shirt on and it brightens your day, say 'I really like your shirt.'

Instead of trying to compliment somebody, just try to appreciate them, and then relax enough to tell them what you appreciate.

A compliment you manufacture for the sake of giving a compliment will probably come across as stilted and fake.

elzr 9 hours ago 0 replies      
On a completely different level of mundanity, here are 3 beautiful compliments:

Jon Stewart to SpaceX founder Elon Musk: You have invented a rocket, and a spaceship on the rocket, and you've launched this into orbit already, and brought it back. The four entities that have done that are: the United States, China, the Soviet Union, and Elon Musk.

Peter Forbes to physicist David Deutsch: To read him is to experience the thrill of the highest level of discourse available on this planet and to understand it.

Cicero to historian Thucydides: He so concentrates his copious material that he almost matches the number of his words with the number of his thoughts. In his words, further, he is so apposite and compressed that you do not know whether his matter is being illuminated by his diction or his words by his thoughts.

kenjackson 12 hours ago 1 reply      
There's a case here in the Seattle area where a guy was put in a coma after complimenting another guys rims: damanlehman.com

Given the multitude of ways a compliment may be taken, I tend to only give them to people I know. And even then with great care.

zobzu 5 hours ago 1 reply      
Meh, I hate such compliments. A compliment like this is not one.

It's empty. It's just a "look i'm cool and nice I say something nice just to say something nice"

I dislike when I hear those. On mean days, I'd often go "oh, thanks! so what's so good about my shoes compared to yours?"

And the person has no clue. They usually don't even come up with a lie, like "I like the shade/tint" or "the logos are awesome" or what not.

Because, again, it was empty and had no meaning. Generally, they did not like the shoes; they just "wanted to be nice". It happened that the shoes/umbrella/whatever weren't 100% usual, but they did not find anything they actually liked. Oh so wrong.

dekz 1 hour ago 0 replies      
More often than not, saying something isn't required. A simple smile can be a day-brightening compliment.
power 14 hours ago 1 reply      
I think a good compliment comes from your understanding of a person and in general you don't know enough about a stranger from a few seconds' observation to be able to make a meaningful one.
And to follow a stranger so you can compliment them is just creepy in my opinion.
hkon 12 hours ago 1 reply      
I get suspicious when someone compliments me. I guess for me, sarcasm and irony are both the rule, rather than the exception when it comes to "compliments".
gooddaysir 5 hours ago 0 replies      
Reminds me of when I did the Rejection Therapy challenge[1] (the game forces you to interact with strangers and get rejected). The trick for me was to try and find context in the situation - a shared experience we could both relate to that was non personal.

For example, if it's in line at a grocery store, I'd make a joke about the trash tabloids that are set up as an impulse buy. It's a safe way to start a convo, and a lot more natural sounding.

[1] http://rejectiontherapy.com/rules/

Tycho 8 hours ago 1 reply      
You can't really go wrong with 'I like your hair.'
aangjie 15 hours ago 0 replies      
Interesting... I like the way you spent time and attention on the details of the person you're observing and/or complimenting, both time and attention being the most rarefied resources nowadays. It reminded me of the quote "the best gift you can give somebody is your undivided attention".
torrenegra 12 hours ago 0 replies      
"You are beautiful... to me"
geuis 12 hours ago 0 replies      
That's not even funny nor appropriate for HN. This isn't reddit. Don't say anything unless you have something to contribute to the discussion.
The Invisible Bank: How Kenya Has Beaten the World in Mobile Money nationalgeographic.com
129 points by ramabk  15 hours ago   62 comments top 17
droithomme 11 hours ago 3 replies      
I read about halfway through this article thinking it sounded good, but they used insubstantial marketing terms like "new innovations", "big ideas", "safely and securely", "flexible adaptable technologies", and "dreaming big but thinking locally" so many times that it gave the game away. In addition, no downsides are described, the alternatives are demonized, and there is no real technical explanation. It's indistinguishable from a full-page magazine ad. It is clearly a paid placement by advocates for a concept, not a real article from a journalist. I then glanced up to see what site was running this sort of puff piece and was genuinely surprised to see that National Geographic is now doing this sort of thing. I guess the journalist needs some practice so he can learn to hide his tells in future product placements.
technotony 2 hours ago 0 replies      
I previously founded a credit and savings bank using mobile money in the Philippines. M-Pesa is a fantastic success in Kenya, but replicating its success in other countries has been very difficult. There are several reasons for this:
1) There were no alternatives, e.g. Western Union, when it was launched. The best way to send money home to your family was literally to trust it to a bus driver; you can imagine how well that worked.
2) The regulators were very 'soft' touch and allowed Vodaphone to launch a service which was illegal in many other countries (for money laundering reasons)
3) Vodaphone had an 80-90% market share in most segments, this prevented the 'cross-network' problem

I believe that until android (or other) smartphones become affordable to poor people, freeing innovation from the carriers, we won't see M-Pesa's success get replicated around the world. Then we will see rapid growth shortly after that tipping point is reached.

abenga 14 hours ago 3 replies      
(Kenyan here)
I think that the fact that Kenya did not have such a large banking industry helped M-Pesa succeed. By the time the local banks realised that it threatened to eat into their business, it had become too big and popular to beat. We had the local banking association petition the Central Bank to introduce regulations on it, the cost of compliance to which would have made it a lot more costly than it is, and subject to a lot more bureaucracy. Luckily, the proposal did not go through.
lifeisstillgood 13 hours ago 1 reply      
Can anyone explain how it actually works?

I am guessing you give Vodafone 10 dollars and they give you a text with a random code; anyone with that random code and your number can request the 10 dollars to come off your account and onto theirs.

but ...

What is the strength of the code? What is the security around the transmission to your phone? It sounds like Bruce Schneier might not like it.

gcb 5 hours ago 0 replies      
In the US you'd have to pay $30/mo for unlimited payment via mobile phones on a 2-year contract.

And it would only work for the first $1000, then it will cost extra per dollar.

And it doesn't matter that it works over means already provided (and charged for) by the telcos; they will go out of their way to make it billable.

grayrest 9 hours ago 4 replies      
Does anybody have a term for the phenomenon where a technology achieves enough critical mass among early adopters for network effects to arise only to have the technology improve as it matures so the late adopters wind up leapfrogging the early adopters?

I see this all the time, a simple example being self check out machines. A chain in Atlanta adopted them in the late 90s (this is from memory, could be later) and they were fairly common in the city in early 2000s. They were awful/clunky to use but 5-6 years later I'm in the middle of nowhere South Carolina and find the process to be comparatively painless on an obviously newly-installed machine. Meanwhile the original technology is still installed/operational in the city.

onedognight 8 hours ago 0 replies      
It look like the fees can be summarized as ~2% (with a ~$1 minimum) split between sender and receiver with the full fee covered by the sender for out of system transfers. http://www.ifc.org/ifcext/gfm.nsf/AttachmentsByTitle/Tool6.7...
ramabk 14 hours ago 1 reply      
In Africa, many homes do not have electricity but increasingly everyone has a cell phone. Prepaid cell phone minutes have become a defacto currency in many African countries. Africans are in many ways more acclimated with using privately issued digital currencies (such as M-PESA)...
Canada 4 hours ago 0 replies      
It's a hawala network. It moves debt. It's not at all innovative. The only reason we don't do this here is because it's illegal.
ImprovedSilence 13 hours ago 0 replies      
That is a very interesting article; I love to see the innovation in these scenarios. I also kept clicking around on articles there; it seems National Geographic has some very good/interesting articles. I particularly enjoyed this one involving pay-as-you-go solar power, for lights and phones: http://newswatch.nationalgeographic.com/2012/04/17/pay-as-yo...
nowarninglabel 6 hours ago 0 replies      
M-Pesa is awesome. It's been intriguing to me that at Kiva we've been able to do mobile payments for microloans in Kenya long before such transactions will be easy in the U.S.
edoloughlin 4 hours ago 0 replies      
With this system, you're giving your location, real social graph and financial transaction data to a single company. I think I'd pass.
hrayr 4 hours ago 0 replies      
Was I the only one avoiding this article because they thought it was about Kanye West?

This trend of third world countries taking a technology and running with it doesn't surprise me one bit. It's much more difficult to disrupt an established industry with new technology than to adapt a mature technology to create new industries.

onoj 11 hours ago 1 reply      
I thought the Philippines has had cash transfer and payment by SMS since 2006 (the article is from 2007). My friends use this, so it is not vapour.


vindicated 14 hours ago 0 replies      
I'm not sure if it's exactly the same thing, but this sort of service has been available in Pakistan for a while now - http://www.youtube.com/watch?v=5kL--YSnFPo
nodata 14 hours ago 0 replies      
I thought I read a story recently about how the Kenyan system was being plagued by mistrust. I'll see if I can find a link.

Edit: can't find the story.

DiabloD3 14 hours ago 6 replies      
No offense, but Bitcoin already did this.
CERN experiments observe particle consistent with long-sought Higgs boson cern.ch
207 points by sdiwakar  19 hours ago   50 comments top 10
pessimist 18 hours ago 2 replies      
What stood out to me from watching the presentation was the incredible integrity of the physicists involved. The CMS group had 2 sets of data with > 5 sigma significance, but chose to also show weaker data that actually reduced the significance slightly. Consider the enormous effort spent to show that the result was not some background fluke - two separate detectors, running 2 completely different means of detection, each with their own sets of computer programs verifying the results. Finally both show almost exactly the same result (although the masses are slightly different at this point)!

Given that the Tevatron also sees similar (although weaker) results, the confirmation is beyond doubt. And what an achievement - the first fundamental particle observed since the quarks in the 1980's! An incredible victory for theoretical models developed almost 50 years ago (no wonder Peter Higgs had tears in his eyes)!

Combining the brilliance of the theoreticians with the integrity of experimentalists is what makes science the pinnacle of human achievement (IMO), and makes me proud to be human today.

mkr-hn 12 hours ago 1 reply      

This is a great comment:
"Think about it this way. Let's say you're at the target range, and the Lone Ranger is shooting at clay pigeons right nearby. Obviously you'll want to know if he's shooting silver bullets, right? But you can't look at them while they're still tied up in the gun. You can't look at them after they've hit the pigeon. And they're traveling too fast to study while they're in flight. The only way you can see if they're silver bullets is based on how the pigeon gets blown to pieces."

"The Higgs boson has a very short lifetime outside of other subatomic particles. The only way for us to study them is to smash those particles together and see the results of the decay. Based on how the Higgs decays (blows itself to pieces), we can infer its existence."

sanxiyn 18 hours ago 0 replies      
It's cute that ATLAS spokesperson says 126 and CMS spokesperson says 125. Then CERN says 125-126.
esusatyo 3 hours ago 0 replies      
So, will we get more understanding about the shape of the universe?


Can we now conclude what'll happen if we go right to the edge of the universe at the speed of light? Without any further debate?

mkr-hn 14 hours ago 2 replies      
I'm going to summarize this based on my limited understanding, and hopefully someone can confirm or deny.

This is important because it confirmed the accuracy of the way we conceptualize the structure of the universe.

That means both the people who start and fund projects know that the basis of modern physics is sound. So the time and money won't be wasted by a surprise "nope, Higgs-boson isn't there" in the middle of a project based on the belief that it is.

mrpollo 8 hours ago 5 replies      
Can someone please explain to me what the immediate technological advancement would be if this is true? What applications would this have? (I already watched the video and read the FAQ; a lot is still not clear to me.)
confutio 18 hours ago 2 replies      
Can someone please dummify this result a little bit? What does this mean for particle physics in the future? Does it confirm any theories?
hessenwolf 16 hours ago 1 reply      
Really interesting article: so we're basically back to studying the Aether, right? Or did I get that wrong? The only cause of mass is the particles moving through a vast pervasive invisible field?
sdiwakar 19 hours ago 7 replies      
Amazing, I wonder what this means for creationists and more importantly - our understanding of our universe?
Create 8 hours ago 0 replies      
Just for the record, and to please the prospective downvoting mob, here are my experimental observations, consistent with the CERN experimental domain, in order to warn any non-Westerners:

"The cost [...] has been evaluated, taking into account realistic labor prices in different countries. The total cost is X (with a western equivalent value of Y) [where Y>X]

source: LHCb calorimeters : Technical Design Report

ISBN: 9290831693 http://cdsweb.cern.ch/record/494264

about integrity:


FYI: https://secure.wikimedia.org/wikipedia/en/wiki/Spin_(public_...

Independence apptentive.com
18 points by rganguly  5 hours ago   1 comment top
brandall10 19 minutes ago 0 replies      
Ha... 3.5 weeks ago I left my enterprise .NET job to work as a Rails contractor. Yes, I indeed got some work done today while visiting the folks :)

Happy Independence Day indeed.

Wiki inventor Ward Cunningham develops federated wiki wired.com
112 points by dctoedt  14 hours ago   67 comments top 22
nicholassmith 13 hours ago 3 replies      
I think this could lead to a lot of interesting possibilities but I'll focus on one.

Let's say you're on Wikipedia and, for example, want to know about the assassination of JFK. The wiki article contains most of the stock facts and so on, but say you want to know about the really crazy, out-there conspiracy theories about Hoover and aliens; a federated system would allow people to set up their own page, and you can continue through to it. The sort of content that sends most Wikipedia moderators into catatonic seizures, indexed and available.

There's plenty of other uses out there; I just think it's an interesting point that it'll allow anyone with enough time on their hands to create an archive of what they think is relevant as a counterpoint to what someone else thinks is relevant. Don't like the standard description? Normally you'd have to fight a wiki mod; here you can fork it off and have it as a counterpoint.

derleth 14 hours ago 1 reply      
I don't see how this is anti-Wikipedia. But I suppose conflict sells, so linkbait (to be clear, linkbait on the part of Wired, not the submitter) titles like these are inevitable.

Also, for a lot of people, 'wiki' is simply short for 'Wikipedia'. That's unlikely to change at this point. I don't know what Ward can do to make a Federated Wiki (great name, BTW) achieve any kind of success with that kind of barrier.

drone 13 hours ago 1 reply      
... and then the spammers have a field day: on every page, a list of 10,000 forks, re-titled with names like "Luis Vitton CHEAP$!$" and "U 2 can aFf0rd Rolex!"

Interesting idea when all players are playing by the same rules and with the same intent. Not as appealing when the most active are there simply to generate noise.

mark_l_watson 14 hours ago 0 replies      
A little off topic, but a friend of mine had a dinner conversation with Ward Cunningham and refactoring expert Ralph Johnson a few months ago and posted it on youtube: http://www.youtube.com/watch?v=jqGYoKvekik The video could have used some editing (shorter!) but has some interesting parts.
duskwuff 9 hours ago 0 replies      
As I see it, one of the biggest benefits of traditional wikis' linear history was that it fostered quick back-and-forth collaboration between users. Unless there's a strong mechanism for discovering changes made by other users, a forking model seems like it'd rapidly result in a bunch of different (and potentially non-mergeable) versions of any page.
roguecoder 11 hours ago 0 replies      
It seems odd to me to focus on Wikipedia as the comparison point, but then again I was on C2 back-in-the-day and have always seen the quintessential wiki as a superior version of a forum rather than Wikipedia. If we are comparing to currently-active technology, I would argue that the federated wiki concept has the potential to bring tumblr or del.icio.us style contributions to a different, longer-form audience.

I do worry that the federated concept will make authorship too important. The lack of permanent credit on C2 was part of what made it valuable: people more often wrote things to contribute value when they didn't have the motivation of scoring points, gaining karma or making a name for themselves.

emperorcezar 13 hours ago 3 replies      
"To run The Simplest Federated Wiki, you'll need your own web server, which Cunningham thinks is an important part of the project."

Sounded great till that point. He's putting his own nerd-centric bias into it. If I'm a history guy, not a tech guy, and I want to "fork", asking me to run my own "server" or whatever is a non-starter.

e12e 13 hours ago 1 reply      
TFP: http://wardcunningham.github.com/

in the spirit of slashdot: the fucking project

rocky1138 13 hours ago 2 replies      
Also: Wikipedia offers its entire content as a tarball. Fork it and run with it as you wish. While not "federated", it's pretty damn open.
pessimizer 13 hours ago 1 reply      
Could this be layered on top of the current Wikipedia through browser plugins? And how would discoverability work in the context of 10,000 current versions of a wiki page? It's hard enough when there's 30 different forks on github.

edit: federated search, federated social? How would the system prevent search gaming and social gaming?

irunbackwards 5 hours ago 0 replies      
Much better headline than the one actually on Wired, "Wiki Inventor Sticks a Fork in His Baby."
kzrdude 14 hours ago 1 reply      
When I talked with Ward in 2005, he mentioned that his vision of a wiki had a continuum of pages: you could take a page and browse the different "directions" people had edited it in.
6ren 7 hours ago 1 reply      
With always-on, always-connected devices everywhere, we are not that far from every device being a web-server by default, and every device being a router for web traffic.

Data storage, and even computation may be centralized in the cloud... but why not distributed distribution?

Vitaly 9 hours ago 0 replies      
Instead of a plain EC2 installer it needs a Heroku installer. Then anyone could have it running essentially for free in no time.
zimbatm 13 hours ago 1 reply      
So where is the link to the actual project?
Nux 9 hours ago 1 reply      
"To make Federated Wiki easier to adopt, there's a one-click installer to deploy a server to Amazon Web Services." <- oh yeah, super-federated.
p_sherman 14 hours ago 0 replies      
Holy mother of misleading titles.
webwanderings 12 hours ago 0 replies      
Didn't I see something from Dave Winer doing similar thing for Twitter, personal stream?
Jhonbxl 13 hours ago 0 replies      
If the problem is having the knowledge hosted in one place only, controlled by "one" person only, the solution should be to decentralize, not make everyone able to copy it everywhere. Plus, this solution seems to add more work to the contributors, since you'll need to review each fork/edit and choose which to merge or not.

On the other hand, a wiki working like a P2P network, where everyone hosts the same version and edits are automatically propagated across the web, would solve the "problem" without adding more work for contributors.

themonk 9 hours ago 0 replies      
J3L2404 14 hours ago 1 reply      
Forking does not lead to spooning. This sounds like it will quickly become an impenetrable rat's nest. Better to fight it out on the one true source, Wikipedia.
sdfjkl 11 hours ago 0 replies      
At last you can fork the truth and make your own version of it. I imagine this will be a hit with governments everywhere.
Jean-Louis Gassée: Nokia should fire Elop and the board should go too computing.co.uk
76 points by mtgx  12 hours ago   60 comments top 8
mtgx 9 hours ago 2 replies      
My favorite part is that Nokia didn't want to go with Android because they would have to depend on someone else, and yet they've completely given up control to Microsoft. At least with Android they would've had some control of what they put on their devices and how different they look from the competition's devices, in both hardware and software.
jeswin 10 hours ago 5 replies      
If I had Gassée's track record I would refrain from criticizing.

1981-90 - Exited when Apple was on a downward spiral.

1991-02 - BeOS, didn't get anywhere.

2004- - Palm, didn't get anywhere either.

Come on.

Spearchucker 10 hours ago 1 reply      
Having done a freelance gig at Nokia, working on some Windows Phone stuff pre-Mango, I can say that Nokia is, and always has been, fully aware of the WP roadmap. The Lumias were brought out quickly, but have never been representative of what Nokia is capable of.

Don't fire Elop, retain the board as-is, deal with the fact that there will be incompatibilities. And wait for a Nokia running Windows Phone 8 and a Pureview camera. This strategy has legs.

aeturnum 10 hours ago 4 replies      
The more time goes by, the better Elop's bet on Windows Phone looks to me. Now that Google owns a device maker, developing for Android is less attractive, as you'll always be dealing, in part, with Google's hardware division's priorities. On the other hand, Nokia is, more or less, Microsoft's hardware division. Other people make Windows phones, but Nokia is the biggest player and their best bet. That's a pretty good place to be if you're not going to make your own OS.

Of course, maybe Windows Phone will be a total failure, but given that the alternative is being a "me too" Android developer, I think it's a reasonable strategy.

ChuckMcM 5 hours ago 0 replies      
I find these sorts of things (chewing over previous decisions) to be rather painful and less than productive. I'd much rather talk about solutions moving forward since really, that is all you can do. I'm all in favor of figuring out what information or skill might have given you better insight in the past but that's really as far as I would go there.

Nokia's bread and butter has always been 'feature' phones, and that is something they really can't afford to give away. One strategy I could certainly see them taking would be to start with Android, replace the user land part with an application to run a feature phone, and push the footprint of that software down to allow for the least expensive hardware to run it.

Then leverage the core competence in the Android kernel to create the best of class kernel for a Nokia branded 'smart' phone.

I do wonder however if Elop is the guy to push such a strategy.

edwinnathaniel 9 hours ago 0 replies      
... and replace them with who?

... and the new regime will do what?

... so Elop can't do X,Y,Z but why not ask him to hire the right people to do X,Y,Z?

How fast can Nokia turn Symbian around into some sort of magical software that can please developers and users all-around the world?

Going with Windows Phone may be the not-so-bad alternative for short-term while stabilizing the company (i.e.: moving from old regime to a new regime is very very very tough, if you know what I mean).

Once the company has stabilized (if they can...), even though you get a hit by siding with Microsoft, start your plan B: build your own ecosystems.

You don't bulldoze your way out of a mountain of problems. You come up with a step-by-step plan.

JVIDEL 6 hours ago 0 replies      
Maybe he knows what he's talking about: Apple was doing great when he was in charge of products, and after Sculley fired him Spindler almost destroyed the company.
correctifier 6 hours ago 0 replies      
Nokia had reached the end of the road on Symbian and needed a new direction.

This is a risky thing to do, and going with Windows Phone gave Nokia the backing of a very large and still influential company. Going with anything else would have meant going it alone. This includes Android, which would have meant going alone into an already crowded market.

The current situation isn't great for Nokia, but they are in a deal that has the potential to help both companies: Nokia with short-term financial help, and Microsoft with a strong vendor to create showcase phones and the distribution network to get them into consumers' hands.

Contrast this to RIM who tried to go in alone.

EDIT: fixed typo

Is personal funding viable? gittip.com
54 points by whit537  9 hours ago   49 comments top 9
MrFoof 4 hours ago 1 reply      
>Is personal funding viable?

Here's a question: What verifiable documented successes exist in the world?

The one that always comes to my mind is Dwarf Fortress. It's not open source, but Tarn Adams has gone with the donation-ware model since January of 2007. I've been tracking those numbers since, and recently have attempted to start getting data on community size where that's also verifiable: https://docs.google.com/spreadsheet/ccc?key=0AhPaW9RBi5v4dGF...

>Instead we should expect a few people to tip a lot, and a long tail of people to tip a little.

Based on anecdotes, this doesn't surprise me, but I'd certainly be curious to have a look at hard data. I know in the case of Dwarf Fortress about 1% of Bay 12's income is from one person: me. Additionally, I'm aware of other big tippers as well.

However to get back to my original question, if folks could provide links or other sources to other successful donation-funded endeavors that aren't humanitarian efforts or fitting the typical charity mold, I'd be interested in hearing about it.

ForrestN 6 hours ago 1 reply      
I think that it will be quite difficult to make this possible exclusively from individuals. Unless there were a threat that the projects will stop, or a pretty clear offer of doing more, most don't have a strong incentive to give. Something like Kickstarter helps to solve this in a way tipping doesn't.

What about a sponsorship component? Facilitating companies sponsoring a programmer? Their GitHub page gets a logo or something, and they get a big check each month. It's not a new model. Skateboarding, for example, which has no practical use aside from entertainment, often works this way I think. Many skaters earn their living from sponsorship and make next to nothing from competitions or whatever else.

If there were a major github competitor (maybe there is, I'm not so much in a position to know), the rivals might also pay these figures to work on the projects they control on their platforms. It helps Github immensely to have all these projects on its platform.

TimJRobinson 13 minutes ago 0 replies      
I love this concept!

Another idea that could work well is partnering up with some influential bloggers / open source evangelists and holding an international "Feed a coder" day, which brings awareness to the open source community and asks people to make a donation to their favourite projects / libraries / Wordpress plugins on that day.

There are a lot of Wordpress Bloggers out there who would be happy to donate to their favourite plugin authors but just haven't and this could push them over the line (this is just one community I'm quite involved with).

jlarocco 5 hours ago 1 reply      
This seems really strange, to me. Isn't it basically working around an arbitrary, self-imposed restriction to give code away?

As a developer, if I want to make money from one of my projects, I'll charge people to use it or to buy a copy. If I can't sell enough copies to fund development, I don't see how I would ever be able to fund development with "tips."

As a user, I don't mind paying for software, and I don't think many people do. Most people realize that it takes time and skill to create a piece of software, and that the people creating it could have spent that time doing something else.

jay_kyburz 1 hour ago 0 replies      
Hey Whit, kill the weekly payment thing and make tips one off payments. Have a look at the freemium model that is all the rage these days. Apparently players hate subscriptions and would much rather pay for one off consumables.

Also, allow tips of any size. $2048 or $4096 would not be crazy.

pdeuchler 5 hours ago 0 replies      
This is pretty cool. Hopefully this will not only encourage more people to develop for open source, but also encourage open-source-heavy developers to invest more, or even all, of their time.
Draiken 4 hours ago 1 reply      
Unfortunately I think only the internet stars have even a chance of living like this... The irony is that these superstars normally already have huge paychecks :/
kiba 8 hours ago 2 replies      
I see that it supports credit cards, but it doesn't support bitcoin.
juanbyrge 8 hours ago 3 replies      
What's wrong with them getting jobs?
Refer.ly (YC S12) Lets You Earn Cash and Donate It To Charities techcrunch.com
14 points by dmor  5 hours ago   1 comment top
twodayslate 1 hour ago 0 replies      
It is not very clear how much you actually earn.
Try git in your browser github.com
85 points by julien  12 hours ago   10 comments top 3
rjsamson 12 hours ago 1 reply      
Glad to see this done with Code School - their stuff is always top notch!
jurre 10 hours ago 1 reply      
I was hoping this would let me actually use it in my browser since my work doesn't allow me to install any software. Anyone have any ideas?
andrewfiorillo 9 hours ago 1 reply      
This looks great, but it doesn't seem to be working. For instance, I type git init in the first lesson and hit enter, but nothing happens. Same thing goes for git status in the second lesson.
Treating JavaScript like a 30 year old language github.com
87 points by toni  13 hours ago   78 comments top 25
greggman 10 hours ago 7 replies      
I use the Google style guide as well, since it's required by my job, and while I have learned some things from it I'm not a fan.

80 characters? I never had that limit for the 27 years of programming preceding my use of the Google style guide, and it never caused me any grief. I find that naming things descriptively and an 80 character limit are at odds.

I'd rather read

    maxCombinedUniformVectors = maxFragmentUniformVectors + maxVectorUniformVectors;

than

    maxCombinedUniformVectors =
        maxFragmentUniformVectors + maxVectorUniformVectors;

or

    maxCombinedUniformVectors = maxFragmentUniformVectors +
        maxVectorUniformVectors;

I admit it's kind of useful for side-by-side diff tools, but in my previous 27 years of programming I never felt like "If only this code was 80 characters I could read the diff".

More importantly though, the Google style guide is written by Java programmers to try to make JavaScript into Java, totally ignoring all the benefits of treating JavaScript like JavaScript. That has its benefits, especially for Java programmers. They don't have to learn some of the cooler things about JavaScript. They can go on treating it like a traditional OOP language. And they get static type checking.

On the other hand, all of these FP concepts are out


Even common JS concepts like encapsulation

    var ErrorLogger = (function(){
        var privateErrorCount = 0;

        return {
            error: function(msg) {
                privateErrorCount++;    // illustrative body; the original elided it
                console.error(msg);
            },
            getNumErrors: function() {
                return privateErrorCount;
            }
        };
    })();
Are not allowed by the Google Style guide as well as many other JSisms.

Another nit: the Google style guide disallows formatting for readability except for comments?!?!


    var kStateRun = 1;        // character is running
    var kStateRunToWalk = 2;  // character is transitioning from run to walk
    var kStateWalk = 3;       // character is walking

Not allowed

    var kStateRun       = 1;  // character is running
    var kStateRunToWalk = 2;  // character is transitioning from run to walk
    var kStateWalk      = 3;  // character is walking

Either lining things up makes them easier to read or it doesn't. Comments are not some exception. If lining up comments makes them easier to read then lining up anything makes it easier to read.

Terretta 11 hours ago 2 replies      
"At some point in computer history, somebody (arbitrarily?) created an 80 character line limit for code. ... I've been writing JavaScript for three-ish years"

For a couple decades, and not ending until the late 90s, most text terminals and text modes for graphics cards were 80 characters wide[1], and dot matrix printers also had an 80 character line length (plus margins) dating back to 80 characters per line punch cards from 1928[2]. If a line was longer, you had to scroll that individual line. To let your code be read easily anywhere including your own screen, you stuck to that line length.

1. http://en.wikipedia.org/wiki/Text_mode#PC_common_text_modes

2. http://en.wikipedia.org/wiki/Punched_card#IBM_80-column_punc...

josephg 10 hours ago 1 reply      
The google javascript style is overly verbose and really awkward. Javascript isn't a typed object-oriented language. If you fight javascript until it looks like C++, you make an awful mess. Most of the javascript I've read from google is overly verbose, takes ages to compile(!!) and it avoids javascript's best features - anonymous functions (closures), object literals and dynamic typing.

If the author writes his javascript as if it were C++, it's really no wonder he hates the language. Javascript is a wonderful little language - but it's not C, and if you pretend it is, you're going to have a miserable time.

crazygringo 8 hours ago 1 reply      
I'm a huge fan of an 80-character limit. But more importantly, of having a rigorously-defined limit.

It has nothing to do with history. Horizontal scrolling is a usability nightmare, and in programming, word-wrapping introduces ambiguities.

Code should be designed to be read. You should be able to set your editor to the width defined by your project standards, and know that the code will always look the same to everyone. Clear coding is an art, and makes use of well-chosen indentation and linebreaks. Having different people use different widths in a single project destroys that art and legibility.

Now, why 80 characters? Here's my personal reason. According to "The Elements of Typographic Style", by Robert Bringhurst:

> "The 66-character line is widely considered ideal."

Assume that you'll often have 8 spaces of indent on the left side, and a ragged right edge of perhaps 6 characters, and you get 66 + 8 + 6 = 80. There's nothing perfect about it, but an 80-character width is basically what's generally comfortable for comprehension by the human eye.

skrebbel 11 hours ago 1 reply      
> I would contend that readability generally has more impact on the success of a project than micro-optimizations and stylistic experimentation. If that means writing like a C coder in the 80's, then so be it.

Excellent attitude. I need to keep reminding myself of this when I come up with them fancy oneliners again.

frankus 10 hours ago 1 reply      
The use of intermediate variables is something I'm conflicted about.

On the one hand, they can make code easier to reason about. Also, when using a crappy debugger that doesn't display return values you can more easily see what's going on. In some cases they make code that at least looks like it ought to run faster.

On the other hand, ditching intermediate variables makes refactoring more straightforward. You can immediately see the complete set of dependencies of a line of code and extract common bits of code into helper methods with less hassle.

In general I think I prefer the "cram a bunch of nested functions into one line" approach, but then that might be language-dependent (I mostly do Objective-C these days, and Xcode has a pretty smart automatic line-wrapping feature).

Lisp would be an extreme example where that's pretty much all you do.
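The trade-off the comment describes can be sketched in a few lines (all function names here are hypothetical, invented for illustration):

```javascript
// Helpers assumed for the sketch: a sum and a tax function.
function sum(xs) {
    return xs.reduce(function (a, b) { return a + b; }, 0);
}

function applyTax(amount, rate) {
    return amount * (1 + rate);
}

// Nested style: the full dependency set of the expression is visible on
// one line, which makes extracting it into a helper straightforward.
function totalNested(prices, taxRate) {
    return applyTax(sum(prices), taxRate);
}

// Intermediate-variable style: each step gets a name you can inspect in
// a debugger that doesn't display return values.
function totalNamed(prices, taxRate) {
    var subtotal = sum(prices);
    var total = applyTax(subtotal, taxRate);
    return total;
}
```

Both compute the same thing; the choice is purely about which reads and refactors better in a given codebase.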

watt 11 hours ago 3 replies      
I disagree with the point about 80 chars limit.

If the line consists of boilerplate code, I will allow it to go over 120-160 chars most of the time. The idea is just to get that code out of sight, as nobody should have to read it anyway. If it's important, I will break the line up. If not - like a long exception message that gets constructed with a bunch of context information - I'll let it grow out of the window. It's not something you will need to look at, read and grok every day. And I am happy to have it out of sight most of the time.

I break up lines when thinking about stepping through them in debugger (though javascript debuggers seem to be able to step statement by statement, not line by line).

kemiller 11 hours ago 1 reply      
Totally aside, but interesting: 80 characters was not an arbitrary limit, at least not directly. It was the size of the IBM standard punched card, and the terminals that succeeded them. Famously, versions of COBOL (FORTRAN too?) well into the 1990s would not even recognize input past column 80 even though it had long since graduated to text files.


I still like to use 80 characters even today because it means I can fit two or even three pages of code side-by-side without wrapping. Great for merging.

Xcelerate 11 hours ago 2 replies      
Maybe I'm an exception, but I generally find code with LESS syntax to be more readable.

    var makeAdder = function(x) {
        return function(y) {
            return x + y;
        };
    };

versus

    makeAdder = (x) -> (y) -> x + y

Is anyone else like this? Do you think people are hard-wired to prefer one form of syntax to another, or do you think it's a "whichever you have more experience with" kind of thing?

vorg 2 hours ago 1 reply      
> an 80 character line limit for code [...] eliminates the need to horizontally scroll

There are other ways to fit your first example within 80 chars:

    setTransformStyles(context, buildTransformValue(this._transformOrder, _.pick(state, transformFunctionNames)));

How about?...

    ""型(势, 做"值(这._"订购, _.'(态, "函•名)));

ricardobeat 10 hours ago 1 reply      
Those are all fairly standard practices (except for the compile step which brings portability issues). I got the impression that the author has some underlying reason for this post that went unexplained.

If the lack of default parameters and style consistency are your main peeves with Javascript, it's faring quite well :)
Ironically, those are problems that CoffeeScript solves, yet it seems this guy would be the last person to try it.

Roboprog 10 hours ago 1 reply      
Semicolons: it's too bad Netscape didn't choose to use colons instead (but then it wouldn't sorta look like C, would it).

As opposed to common dogma as this is, JavaScript is essentially a line oriented language, like Ruby, Shell Script, BASIC, Groovy or dBASE. OK, so it's more like Ruby, where the line will continue if it doesn't look done.

Had they used colons instead of semicolons, it would have been obvious that "oh, I'm using this punctuation to add another statement to this line", but of course, it's usually the null statement in JavaScript.

It's a shame JavaScript (and Ruby) didn't adopt the line continuation character convention (e.g. - backslash or ampersand) for incomplete lines. As it is, one might be best served by learning what incomplete statements look like, just in case, and alternately, where the parser might think your statement is shorter than you intended.

Yes, I add the semicolons at work to avoid the <<trivial criteria>> contest.
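A small sketch of the trap described above, where the parser decides a statement is shorter than you intended (function names are made up for illustration):

```javascript
// Automatic semicolon insertion (ASI) puts a semicolon after the bare
// `return`, so the object literal below it becomes an unreachable
// block statement and the function returns undefined.
function broken() {
    return
    {
        value: 1
    };
}

// Opening the brace on the same line keeps the statement visibly
// incomplete, so parsing continues onto the next line.
function working() {
    return {
        value: 1
    };
}
```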

correctifier 10 hours ago 0 replies      
I have worked with a strict 80 character rule, and I find that it usually causes worse looking code with excessive multi-line statements, poor variable naming and makes refactoring more labour intensive.

I have a soft limit of around 100 characters for C++, which is good for readability and still allows me to have two side by side editing windows.

samspot 11 hours ago 0 replies      
One of my favorite things about the 80 column rule is it makes it even more effective to split your window vertically.
tomjen3 12 hours ago 0 replies      
If you are writing types in Javascript, you owe it to yourself to use webstorm, as it can do code completion in javascript (it can also do reasonable inference).
AjJi 12 hours ago 2 replies      
AFAIK, using 'new' is not a matter of preference. Omitting the keyword leads to different results. Am I missing something?
azakai 8 hours ago 1 reply      
> I don't like JavaScript. I write an enormous amount of JavaScript – in fact I write it almost exclusively – but I don't really like it as a language.

Why would you work "almost exclusively" in a language you don't like? There are plenty of opportunities on software with all sorts of languages.

altrego99 10 hours ago 0 replies      
Though some are good suggestions for writing maintainable code, I do not see a lot of opportunities to improve JavaScript from this.

The pieces I would like to have are a) optional arguments and b) strict type checking. They are actually syntactic sugar in a way, because you can get the same effect using a typeof check.

But restriction to 80 characters... would definitely be recommended as coding practice, but should never be forced by the language!
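The typeof check alluded to above looks roughly like this (the function is made up for illustration):

```javascript
function greet(name, greeting) {
    // Emulate an optional argument: fall back to a default value when
    // the caller omits the second parameter.
    if (typeof greeting === 'undefined') {
        greeting = 'Hello';
    }
    return greeting + ', ' + name;
}
```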

mkmcdonald 10 hours ago 1 reply      
I'm pleased that someone else favours a sensible line limit.

I stick to 72 columns for width and 20 lines per function body. The result has been very concise code that's easy to follow. Only exceptional cases such as heavy recursion have eluded the line limit.

If popular JavaScript projects wrote code with cleanliness in mind, maybe more people would take the language seriously.

anuraj 10 hours ago 0 replies      
I have always felt that scripting languages should stay clear of OO syntax largely. Try to do small files which are procedural. OO is for large monolithic modules - good to collaborate and manage complexity - scripting should stay where it needs to be - simple and standalone, no threading, but asynchronous where possible. This makes life easier for everybody.
antidoh 12 hours ago 1 reply      
I liked the bit about using new. As he says, some people go with capitalizing their constructors, but capitalization is not enforced by the compiler. 'new' is explicit, it can't be anything but a new object.
hackNightly 12 hours ago 0 replies      
I thoroughly enjoyed this article. I'm always looking for ways to improve the readability and "share"ability of my code, and the author has provided several tips that I'll be taking forward. The part on strict-ish typing is brilliant. Very good read.
Kartificial 12 hours ago 1 reply      
I just cannot wrap my head around the fact that readable code is considered boring?
frugalmail 10 hours ago 0 replies      
Have you taken a look at http://haxe.org ? Seems like a good solution against javascript's flaws.
scottschulthess 12 hours ago 2 replies      
Doesn't even mention CoffeeScript
Higgs Boson Explained by Cartoon nasa.gov
601 points by ColinWright  1 day ago   125 comments top 23
tatsuke95 1 day ago 3 replies      
Slightly off topic:

I admire this method of conveying ideas and information (animation). It's a great way to consume these clips.

The RSA has a whole series of 10 minute lectures which they animate on a whiteboard in this style. The illustrations are brilliant.


(search for RSA Animate)

kcima 1 day ago  replies      
From the video around :30 seconds in, "...this is when surprises might happen. Any day could be the day that changed the world."

I am still looking for an answer as to how the world could be changed by the discovery of the Higgs Boson particle. What are some possible outcomes for society? I do not doubt that it will change, and I agree fully with its value; however, I can't find any specifics on the ways it might change or what new technologies might be created with or without the Higgs Boson.

Also, at a 9 Billion USD price tag, how were our governments convinced? There must be something beyond scientific intellectual curiosity. Those of us with this curiosity may be happy to pay for it, but how were politicians convinced? What value will this provide to the governments of the world who made the decision to purchase this answer.

I'm sure it's not this...

Scientists: "We need 9 Billion to find out if the Higgs Boson particle exists."

Governments: "OK, here is your 9 Billion."

... 15 years later

Scientists: "The answer is yes. The Higgs Boson does exist."

Governments: "Oh, that's really great."

Update: I understand and agree fully with the value of this research. I am asking if there are any specific technologies that are expected to be advanced or if it is just added knowledge that could lead anywhere. I am also wondering how it was explained to politicians who don't have specific interest in science.

runn1ng 1 day ago 6 replies      
I still don't get it.

I still don't get how they jumped from "We have this Higgs field" to "and hey, the field is a particle."

mietek 3 hours ago 0 replies      
The cartoon says "Interestingly, you can't have negative mass, or repulsive gravity." It would be interesting to hear why.
femto 1 day ago 0 replies      
It seems as if a Higgs confirmation announcement is going to happen in about 12 hours [1]. A video was briefly up on the CERN site, before disappearing behind a password. Apparently it is part of preparations for an announcement at the International Conference on High Energy Physics, which started today in Melbourne, Australia.

[1] http://www.smh.com.au/technology/sci-tech/weve-observed-a-ne...

DanBC 1 day ago 1 reply      
Depressingly this cartoon is more complex than almost all of the BBC science output.

Broadcasters with the BBC's remit need to have science programmes that are far beyond my understanding. Almost everything on the BBC can be followed by a reasonably smart 14 year old.

hazov 1 day ago 0 replies      
I watched it here some months ago:


ColinWright 1 day ago 0 replies      
Found via midko's comment here:
seanalltogether 1 day ago 6 replies      
So if I've got this right, mass shouldn't be thought of as the "bulk" of a thing, it's simply thought of as a charge within that thing. So a photon is a particle of substance that contains no mass charge on it, despite the fact that it has some amount of volume?
krrrh 8 hours ago 0 replies      
If this is still too advanced for you, and you need to brush up on the atom, proton, neutron, and electron, this video of Venus Flytrap explaining the concept in 2 minutes might be a good place to start. http://www.youtube.com/watch?v=hhbqIJZ8wCM
fromdoon 1 day ago 1 reply      
Can someone explain or point to sources where the implications of finding/not finding the Higgs Boson are clearly quantified?

Or is it that they are not sure what they would do with it when they find it.

Further, how would this discovery affect the modern day/upcoming tech?

Jun8 1 day ago 0 replies      
My takeaway from this: CERN's cafeteria beats the shit out of ours! Years ago I had the chance to visit there and didn't, talk about the road not taken.
eevilspock 1 day ago 0 replies      
Shy physicists explain dating and relationships in extremely roundabout way.
elorant 1 day ago 5 replies      
What I don't understand about quantum mechanics is the reason nature had to make things so damn complicated. What was the fundamental problem that led to the solution of quantum entanglement, or to wave-particle duality?
retube 1 day ago 1 reply      
great cartoon, shame about the cafeteria audio.
smeg 1 day ago 0 replies      
So if mass and charge are both just attributes of particles, what is the "charge equivalent" of the Higgs boson? And if there isn't one, then why is it assumed the HB exists? Why can't particles have an innate "mass charge" in the same way they have an "electric charge"?
SonicSoul 1 day ago 0 replies      
Love this! It's like Khan Academy on steroids. It must have taken a lot of work to create. Great way for Jorge Cham to get his name out. I'll be subscribing to PHD Comics
josscrowcroft 1 day ago 0 replies      
Fantastic animation, but doesn't give any indication as to why it's such an important search!
Zaheer 1 day ago 0 replies      
Awesome video! I had Daniel Whiteson as a Physics professor at UC Irvine last year. My favorite professors ever. Very great at explaining concepts and makes the class fun.
modernise 22 hours ago 1 reply      
I has it.


elorant 1 day ago 0 replies      
Another particle, another Nobel prize.
tdskate 1 day ago 2 replies      
waste of my fucking time
jalanco 1 day ago 0 replies      
All I want out of it is an anti-gravity device.
       cached 5 July 2012 04:02:01 GMT