hacker news with inline top comments    18 Nov 2012
You can do it alone ryancarson.com
166 points by ryancarson  3 hours ago   66 comments top 21
pg 2 hours ago 8 replies      
You can certainly start a business without a cofounder. What's hard to do, empirically, is to start one that gets really big. There are a handful of counterexamples, like Amazon, but Treehouse is not one of them yet.

We have a lot more data about what happens in startups than any individual founder does. What interest would we have in mischaracterizing it?

petenixey 2 hours ago 1 reply      
As I think Ryan would be the first to agree, he may have built Treehouse on his own but his early businesses have all been very much built with the support and partnership of his wife Gill.

Ryan has also been a master of bootstrapping businesses upon business. He built Carsonified on the foundation of simple workshops he ran, he built ThinkVitamin on the foundation of Carsonified and he built Treehouse on the combination of both of them.

I say that not as a critic but as an admirer. I think he's done a first class job of building his reputation, wealth, influence and expertise through all of these.

However I think it's somewhat disingenuous for Ryan to state that because he has done Treehouse on his own, "so too can you". It's what the people want to hear and it will bring the enthusiasm of the bulk of HN readers who are going it alone.

However the risk for most startups is not that they will exit for millions and the CEO will only own a paltry 20% of the company rather than 70%. It's that they will die. It has taken me many years to realise it but the presence of a true co-founder dramatically reduces this likelihood.

It feels odd writing this all these years later because I knew Ryan at the very start of my career and almost the start of his. I then was a single-founder, he was working with Gill. I feel like we've passed each other going the opposite direction.

I gave only a small fraction of equity to my first co-founder and paid a high price for that. I have now come full circle and realise the importance of a good co-founder and of an even split between you.

Ryan is right in that he can do it without a co-founder. However if you fit that mold you probably already know that and are quite possibly running a business as the sole influencer already. If you don't, then I think there is good reason for the astonishing faith of the valley in the co-founder. There is a degree of groupthink, sure, but there are a lot of very, very sound reasons not to go it alone.

rauljara 3 hours ago 0 replies      
"I believe Paul and David are stressing the importance of co-founders because they're talking about young founders with no previous business experience."

"I funded the business with cash from my previous business"

Ryan has thoroughly convinced me that if you've run some successful businesses in the past, it's probably better to go it alone on your new one.

However, I don't think he convinced me that PG was wrong (and I'm not sure he was trying to), just that PG's advice doesn't necessarily apply to serially successful entrepreneurs. All of PG's cautions about the dangers of going it alone still seem completely valid. However, it's not like there aren't dangers to having a co-founder (the marriage analogy, though a little cliche, is cliche because it rings true).

I suppose, as with all things, the thing to do is take stock of your own particular situation and weigh the risks. But if you don't have direct experience overcoming at least some of the challenges PG refers to, a co-founder still seems like the more sensible route.

Kiro 2 hours ago 0 replies      
"What's wrong with having one founder? To start with, it's a vote of no confidence. It probably means the founder couldn't talk any of his friends into starting the company with him. That's pretty alarming, because his friends are the ones who know him best."

Wow, pg is so wrong. Whenever I get an idea I want to execute it myself. Bringing in friends is not something I even consider.

nodesocket 1 hour ago 0 replies      
I just applied to TechStars Cloud in San Antonio for http://commando.io and was denied as a single founder. I knew it was a long shot, but had to apply just for the amazing opportunity that is TechStars and the mentors and experience.

Being a single founder is bloody hard. Mostly, though, it is lonely. Nobody to talk with, nobody to bounce ideas around and debate features or implementation with. Should we use MongoDB or Riak? Being a single founder, you make all the decisions. Also, investors believe that if you can't convince anybody else to join your company, then it's probably a bad idea. I don't necessarily believe this, since I am myself a single founder.

So why is finding a co-founder hard? I moved up to San Francisco over a year ago, left my pool of friends, and drove up in my car with everything I owned. Finding people in San Francisco who are not already doing their own startup, or already working at a badass company, is extremely difficult. Even more, there is the catch-22: I don't have any capital to pay you, but I have equity. Again, not a really convincing proposition for a rockstar developer or designer.

Startups are hard, the hardest thing you will probably ever do. So being a single founder is just not as mentally healthy or as productive as having co-founders.

nhangen 21 minutes ago 0 replies      
If it hadn't been for my co-founder, I would be insane at best, and at worst, divorced and nearly dead. We keep each other going, pick up each other's slack when we're having a bad day/week/month, and keep each other energized throughout the day.

I'm sure you can do it alone, but I wouldn't want to.

fitandfunction 2 hours ago 1 reply      
I'd love to see someone (ryancarson or pg?) write a complementary article about "what if you have to do it alone?"

In other words, sometimes it's not your choice to be a solo founder. Many people have compared finding a co-founder to finding a spouse. In both life decisions, I don't think anyone seriously advocates for "sucking it up and going with the least bad option."

Sometimes you're poorly geographically positioned, or in a "strange" market, or later in life (friends are already "matched up" or in secure jobs), etc.

For myriad reasons, you could sincerely try to recruit a co-founder and come up short.

The question then becomes ... do you make the best of it and go for it anyway?

Or, is the lack of a co-founder a signal (to yourself and others) that your idea / plan is unworthy?

I hope the answer is the former because that is what I am doing. Someone remind me to write this article when I figure it out.

muratmutlu 3 hours ago 0 replies      
I think that many founders won't be in the same position as you, having come from a successful business and being well known and respected in the industry.

I think either way is cool, I'm happy to share 50% with my co-founder because money isn't my primary goal and he's a fun and clever guy to work with which makes the journey even more enjoyable

webwright 2 hours ago 1 reply      
I don't think anyone has ever said that you can't. Just that it's harder and success is less likely. Yes, certainly co-founders add additional risks to the equation and do indeed reduce the magnitude of success, should you find it. But as others have said, the more important number to optimize is your chance of success.

I'd love to see data to the contrary versus an (admittedly inspiring) anecdote... All of the data that I've ever heard about (from PG and other sources) seems to support the idea that people who find a co-founder have a better shot.

photorized 1 hour ago 0 replies      
Yes you can.

Best part is focus, clarity, and instant decision making. If you feel you have a great idea, just go for it. Don't waste time convincing others.

goldfeld 3 hours ago 1 reply      
The point that rings truest with me is that of a single life goal. If your startup is the love of your life, your ultimate passion and purpose on Earth, it gets very hard to find someone to share in that passion. And taking someone on board for half the equity because they see it as an exit opportunity doesn't feel right.
jonathanjaeger 3 hours ago 0 replies      
Mark Suster agrees that you don't need the 'typical' co-founder split. Here's his take: http://www.bothsidesofthetable.com/2011/05/09/the-co-founder...
dutchbrit 1 hour ago 0 replies      
I'd hate to have a cofounder. I'd really have to find someone with the same amount of passion who's on the exact same line. I don't like people messing with my vision or having too much control. Don't get me wrong, I listen and take in other people's advice and views, but I want the final say.

Also, I think a lot of people look for funding, while they don't even need it, but that's another issue..

bencoder 3 hours ago 1 reply      
Somewhat off topic but I'd like to ask about this:

> We've grown from three people to 54, and $0 revenue to $3.4m+, all in just two years.

I'm completely naive about these things and I'm not involved in business, but isn't this very risky? That works out to only ~$63k/head in revenue. I guess this is banking on future growth, but is this a standard pattern for a growing startup?
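For what it's worth, the back-of-the-envelope arithmetic behind that per-head figure, using the numbers quoted in the comment, is just:

```python
# Revenue per head from the figures quoted above:
# $3.4m+ revenue spread across 54 people.
revenue = 3.4e6
headcount = 54
print(round(revenue / headcount))  # roughly 63k per head
```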

rvivek 55 minutes ago 0 replies      
Startups are a function of the morale of the founder(s). When it becomes zero, the startup dies. And there will be hopeless times when it'll almost approach zero. Empirically, having a co-founder can be a huge boost during those times which in turn means you're increasing your odds of success by playing longer. You can definitely do it alone (kudos), it's all about increasing your odds.
vlokshin 3 hours ago 1 reply      
I think I speak for most young, yet-to-be-successful, founders when I say: I'd rather keep 15-30% of something that has a 5% chance of success than 70-100% of something that has a .01% chance of success.

I thoroughly believe doing it alone (even if you're REALLY good) is a .01% chance, and doing it with complementary talents that have skin, heart, reputation in the game brings that up a ton.

That being said, congrats on being in the 0.01% of that equation.
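Taking vlokshin's numbers at face value, and assuming a purely hypothetical $10m payout just for scale, the expected values compare like this:

```python
# Expected payout under the two scenarios in the comment above.
# The $10m outcome is an invented number, used only to give the
# percentages something to multiply against.
outcome = 10_000_000
team = round(0.05 * 0.20 * outcome)    # 5% success odds, keep 20% equity
solo = round(0.0001 * 0.70 * outcome)  # 0.01% success odds, keep 70% equity
print(team, solo)  # prints: 100000 700
```

Under these assumptions the co-founded path has an expected value over a hundred times higher, which is the comment's point in one line of arithmetic.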

eande 1 hour ago 0 replies      
Yes, you can do it alone, and I had some success building up my hardware company. But at some point I decided to change and bring in co-founders, and our momentum built up many times higher.

First-hand experience tells me, too, that starting up a company alone is not only really hard but slower as well, and these days speed is more crucial than ever.

I have to agree with the common wisdom and recommendation here: if you want to start a company and create something with impact, try to partner up. Finding the right partners is another whole chapter by itself.

namank 3 hours ago 1 reply      
Doesn't exactly instil confidence when "You can do it alone" is prefaced by "The Naive Optimist".

That said, upvoted!

timedoctor 3 hours ago 1 reply      
A big factor if you are trying to succeed while doing it alone is whether you can hire a great team to complement your skills. This usually costs more money in salaries in the beginning stages than having a co-founder. It means the founder must have some money behind them, and usually means they must have succeeded in business before.
calgaryeng 3 hours ago 0 replies      
I'm with ya - you just have to be slightly crazier than any "normal" entrepreneur :)
duncanwilcox 2 hours ago 0 replies      
Show HN: v0.1 of my book "Why programmers work at night" leanpub.com
39 points by Swizec  2 hours ago   17 comments top 11
Swizec 2 hours ago 2 replies      
Hey everyone,

I'm looking for some feedback on my as-yet-unfinished book - felt like this was a good time. The most developed chapter right now is "About flow" so I'd really love your thoughts on that one.

You can get it for free with this coupon code: HN0.1

Zenst 1 hour ago 0 replies      
The main reasons programmers work at night would be some or all of the following:

1) Access to resources - be that free internet, the home computer or faster internet.
2) Time zones: it's the internet and it operates across all time zones; people interact across them, and night always seems to cover many cross-over times.
3) Distractions: there are fewer distractions and less noise, making it easier to focus.
4) Coffee overshoots, and with that the late hours become productive time.
5) Whatever reason they want, code knows nothing about daylight.

As for the sample of your book I glanced at, I would wonder if any programmer would want to read it, and any manager who would gain from even a hint of insight into programmers would not be motivated to pick up such a title.

If anything it is more a chapter subject for a larger book on how to deal with modern personas in a work environment.

That would be my approach, but my main reason for coding late at night is covered in all of the above, with fewer distractions being the primary one. Now if you had a chapter on how to justify to your boss that you will be working weird hours, that would be something a programmer would be interested in.

daurnimator 12 minutes ago 0 replies      
Hmm, I just used his punchcard thing @ http://nightowls.swizec.com/

And apparently I do 60% of my work on Wednesdays... wtf.
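A punchcard like that presumably just buckets activity timestamps by weekday; a rough sketch of the idea, with invented timestamps, would be:

```python
# Bucket activity timestamps by weekday, the way a commit "punchcard"
# presumably does. The timestamps here are made up for illustration.
from collections import Counter
from datetime import datetime

timestamps = ["2012-11-14 23:10", "2012-11-14 01:30", "2012-11-15 22:05"]
days = Counter(
    datetime.strptime(t, "%Y-%m-%d %H:%M").strftime("%A") for t in timestamps
)
total = sum(days.values())
for day, n in days.most_common():
    print(f"{day}: {100 * n / total:.0f}%")
```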

dageshi 1 hour ago 0 replies      
Of interest perhaps: personally I find that as winter rolls around, my sleep patterns get later and later. In the summer I'd guess I sleep 2am-10am; in winter it's more like 4am-12noon, and I can't for the life of me shift this. Every year as winter rolls in, almost like clockwork, it changes.
sdfjkl 1 hour ago 1 reply      
I used to work an 8 hour day, then go home, have dinner and go to bed immediately after. I then got a few hours sleep and woke up refreshed and with a clear brain between midnight and 2am. I then coded for a few hours until I started feeling tired again, went to bed, caught an hour or three of sleep and went to work. This had the benefit of me being better rested for my nightly hacking than for my daytime job, which didn't require my full alertness (I was stuck in a dull job at the time). The downside was that I had no social interaction with people in my timezone (other than at work).
smoyer 2 hours ago 1 reply      
I'm one of "those morning programmers" who likes to get up before the rest of the world. It's great to be well-rested and it's also quiet enough to work. I guess what I have in common with the vampire programmers is that caffeine is most definitely involved.
jph 1 hour ago 0 replies      
Great idea. I'm often coding at 5 a.m. and that time is magically good for flow.

Have you investigated the history of "second sleep" for clues?

Also try looking at polyphasic sleep, software like RedShift, and products like the Zeo for ideas and people who like night/morning work.

JonnieCache 1 hour ago 0 replies      
Very nice indeed. Looking forward to the finished product. However,

"Perhaps there's just some worrying stuff going on and you aren't Irish enough not to worry about it."

What does this mean? I've never heard that expression. Anyway, aren't the Irish traditionally meant to be neurotic and guilt-ridden?

shortlived 1 hour ago 1 reply      
I was a midnight programmer until I had a family and, more importantly, kids. I've always been a somewhat nocturnal person, even before I programmed. Now with a family I'm more of the early-bird programmer, who starts working at 6am. You get the same peace and quiet... but there is still something different. Maybe it's that with midnight programming you really can have a long stretch, while with early-bird you are limited to a relatively short period before others start showing up on the radar. I think the pitch black of night also helps.
batgaijin 19 minutes ago 0 replies      
"Because my brain is hemorrhaging during my day job."
DustinCalim 1 hour ago 1 reply      
If you enjoy working at night and use Chrome, this extension has been a huge help for me (and my eyes):


532x Performance Increase for MongoDB with fractal tree indexes tokutek.com
12 points by rdudekul  1 hour ago   1 comment top
carterschonwald 12 minutes ago 0 replies      
Please note that the actual name for a "fractal tree" in the research literature is "Streaming (cache-oblivious) B-tree" (or at least they're very, very closely related).

Writing good code with respect to memory locality is SUPER important for writing high-performance code, whether it's in-memory work or larger-than-RAM work (e.g. for the DB). Also a fun exercise to try to understand how!
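To illustrate the locality point, compare traversing the same 2D array row-by-row versus column-by-column. This is only a sketch: in Python the cache effect is muted by interpreter overhead, while in C it is dramatic.

```python
# Row-major traversal touches memory roughly sequentially; column-major
# traversal strides across rows. Both compute the same sum, but the
# access patterns differ, which is what cache-friendly structures exploit.
import time

N = 1000
grid = [[1] * N for _ in range(N)]

def row_major(g):
    return sum(g[i][j] for i in range(N) for j in range(N))

def col_major(g):
    return sum(g[i][j] for j in range(N) for i in range(N))

for f in (row_major, col_major):
    start = time.perf_counter()
    total = f(grid)
    elapsed = time.perf_counter() - start
    print(f.__name__, total, f"{elapsed:.3f}s")
```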

They Cracked This 250 Year-Old Code, And Found a Secret Society Inside wired.com
256 points by pstadler  11 hours ago   23 comments top 11
kens 7 hours ago 4 replies      
I suspect there's a second code hidden in there. From the article, describing the code symbols that are Roman letters:

    These unaccented Roman letters appeared with the frequency
    you'd expect in a European language. But they don't
    represent letters: they mark the spaces between words.

It's implausible that these characters just happen to appear with a language-like frequency distribution and are all meaningless spaces. I suspect they actually have a meaning and provide a second message.

To clarify, it's like taking "SthisEisCtheRfirstEmessageT" and assuming all the capitals just indicate spaces.
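The toy example above can be checked mechanically; here is a small sketch (illustrating the idea only, not the actual Copiale scheme):

```python
# Split the toy string on uppercase "spaces", then read the uppercase
# letters themselves as a possible second message.
s = "SthisEisCtheRfirstEmessageT"
visible = "".join(c if c.islower() else " " for c in s).split()
hidden = "".join(c for c in s if c.isupper())
print(" ".join(visible))  # this is the first message
print(hidden)             # SECRET
```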

danso 9 hours ago 3 replies      
A wonderful read. I know a little bit about frequency analysis and was surprised to see how straightforward its application was (in theory). I'm even more surprised that after a decade of Google, this approach wouldn't be one of the first things tried given the length of the text. As the OP describes, it was a chance encounter at a conference through which machine learning was finally introduced into the problem. Until that point, the linguist had been trying in vain to decipher the text... is there still such a gap between the researchers and the computational experts who know how to implement solutions?

* to put it in a less-polite way: how the F else would you solve a problem like this, with non-computational methods?
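The frequency analysis danso refers to is simple to sketch: count the symbols in a ciphertext and compare their ranking to a language's expected letter order. A minimal illustration, using a Caesar-shifted pangram as a stand-in ciphertext:

```python
# Rank symbols in a ciphertext by frequency; in a simple substitution
# cipher the ranking roughly tracks the plaintext language's letter
# frequencies, which is what makes the attack work.
from collections import Counter

def frequency_rank(text):
    letters = [c.lower() for c in text if c.isalpha()]
    return [sym for sym, _ in Counter(letters).most_common()]

# "the quick brown fox..." Caesar-shifted by 3
ciphertext = "Wkh txlfn eurzq ira mxpsv ryhu wkh odcb grj"
print(frequency_rank(ciphertext)[:5])
```

Here "r" (the shifted "o") tops the ranking; against a longer, more natural text the top symbols would map to e, t, a, and so on.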

gebe 5 hours ago 0 replies      
Wow, it's not often that accomplishments from people you actually know and have had as teachers end up on the front page of HN. I was at the same talk by Kevin Knight as Schaefer, and I can vouch that it was a mighty interesting one! I actually changed my curriculum a bit (to include cryptography) as a result of his talk.
nnq 6 hours ago 0 replies      
this: "The unaccented Roman letters didn't spell out the code. They were the spaces that separated the words of the real message, which was actually written in the glyphs and accented text." makes me think of a cyphertext within a cyphertext, something like an ancient form of steganography.

...maybe the symbols used as spaces are not actually random and there's another message hidden there, with another cypher, offering the writers "plausible deniability" regarding its existence: they could give only the way to decipher the first level of encryption and say that's all there is, while the really important information was hidden in the "space characters"...

(... now putting my tinfoil hat back in the closet :) )

keithpeter 8 hours ago 0 replies      
Good catch, nice read, with a computational angle.

Take a walk down some of the older lanes in London, say near Borough Market or back up towards Southwark, or the other side between Brick Lane and Petticoat Lane, and imagine yourself back in the 1700s.

Coffee houses, close groups having meetings, private rooms upstairs in narrow houses. The feeling that true knowledge was being passed on. The meaning people found in the processes of the primitive technology.

It strikes me that the boring bits of the decoding (tokenising the symbols, entering the tokens) could be farmed out using a web site hosting scans of texts. The computational resource could perhaps be spare cycles on a PC with an appropriate application. Scope for lay science of a particularly interesting kind, and the refinement of algorithms as they are applied to a larger corpus of texts.

Turing_Machine 2 hours ago 0 replies      
The next time I'm at the eye doctor, I'm going to be wondering what that eye chart really means. :-)

Another poster mentioned the Voynich manuscript. It's available on archive.org if anyone wants to try their hand:


Here's a list of others:


Jun8 7 hours ago 1 reply      
And now if only someone cracked the Voynich manuscript!
tsunamifury 7 hours ago 0 replies      
This introduction feels eerily similar to an opening interview at Google.
Leszek 6 hours ago 0 replies      
> Eventually we turned to the last items in the Oculist trove: nine copies of a four-page document written in a mixture of old German, Latin, and the Copiale's coded script. The message was more or less identical in every set.

I feel kind of sorry for them, that at the end of their journey they found what was essentially a Rosetta Stone for the code they were decoding.

BerislavLopac 10 hours ago 1 reply      
I'll be calling my rock band "Quiet Bulldozer". ;-)
BaconJuice 8 hours ago 0 replies      
Enjoyed reading this. Thank you.
The Year 2512 antipope.org
160 points by huetsch  9 hours ago   68 comments top 33
cletus 7 hours ago 5 replies      
It is of course hard to predict 500 years out. Hell, it's hard to predict 20 years out. Did anyone really see the world of today even 20 years ago?

But I'll take my own fanciful stab.

I don't foresee either an energy or a climate crisis. There is a hard limit on how expensive energy gets because at some point you can turn totally renewable energy into a fuel of some sort, ideally taking CO2 out of the atmosphere to do it. It's not cost effective now because energy is so cheap. But like I said: there's a limit to how expensive it can get.

The bigger problem (IMHO) is going to be certain elements and metals that aren't so easily replaced. I agree with the author that getting certain elements from space is going to be economically tricky (rather than technologically tricky) compared to how cheap it is to pull stuff out of the ground.

You can recycle iron to a degree but a certain amount is lost through corrosion/rust. Rare earth elements are harder to replace.

I do foresee there being a lot less of us and that is probably going to be a traumatic change.

Sadly I don't foresee a huge presence in space. The energy costs, particularly when you look at even the most optimistic models of interstellar travel in particular, are just too extreme even with perfect mass-to-energy conversion.

Change, like evolution, is often perceived to be smooth but it's not. Our world, like life itself, is shaped by key, often small, events. Europe in 1914 was a powderkeg, but one man's death triggered a sequence of events that resulted in World War One, the armistice for which sowed the seeds for World War Two. One could argue that if the Archduke had lived something else would've triggered the war, and you may well be right. Still, how different might the world be if, say, JFK had been killed by a chance bullet in World War Two?

As far as longevity goes, that's a tough one. I expect there'll be a certain class of people who live much better and longer than others but then again the history of the world thus far is those kinds of technological advancements always trickle down eventually. Living forever? I have my doubts.

Artificial intelligence as always is the sleeping giant of the future. I believe that to be inevitable and the effects could be profound to put it mildly.

I too believe the nation states of today mostly won't exist in 500 years.

InclinedPlane 6 hours ago 1 reply      
Assuming that there is no great WWIII or equivalent cataclysmic event the world of 2512 is beyond our faintest imaginings and would likely be frightening to us.

I don't speak about nanotechnology or even brain-uploading and synthetic sentience, I speak about rather more mundane trends that are almost certain to continue.

For example, manufacturing. Today manufacturing is still rather similar in nature to the way it was in the 17th century; we just have a whole crap-ton more of it and it's easier to ship manufactured goods around the globe. But I believe we are reaching an inflection point on manufacturing. We will soon reach a point where manufacturing becomes entirely automated for huge classes of devices. All you'll need to do is upload a set of files to a server somewhere and press a button, and then a factory will produce whatever it is you've designed, on very short notice and in arbitrary volumes. This alone is a transformative technology, but let's take it a step further, toward fully automated creation of machine tools and of factories themselves. The idea of an assembly line as this huge, fixed entity is due to the nature of our manufacturing technology, but it's possible that manufacturing facilities will themselves become disposable (likely recyclable) and transient. Manufacturing won't be something that people consume, it will be something that people do. More so, the ability of a small amount of capital machinery to bootstrap into the manufacturing capabilities of a developed nation will rapidly eliminate almost all remaining undeveloped parts of the globe. Imagine what happens when you can ship a few containers of equipment to, say, Antarctica and start building out factories, tractors, automobiles, houses, etc., with only an input of crude raw materials.

How this will transform the world is beyond me, but it will certainly change our perception of wealth and scarcity and the people living in a world with this technology will be as unfamiliar to people of today as people of today would be to stone age tribes. And this technology is not a 500 year technology, it'll likely arrive in the next hundred years at most.

Let's talk about drugs and surgery and self. Modern medicine is at best a century old, and in some ways perhaps even less. There will come a time, certainly within the next 500 years, when medical technology in the realm of mood alteration, behavior alteration, and cosmetic surgery are at a level which we would describe from the perspective of today as nearly perfectly effective. Imagine what happens when people can change their personalities and their mental capabilities at whim? If you find you're depressed you can fix that, effectively and permanently. If you have a mental illness such as, say, schizophrenia or pedophilia then you can fix that too. And if you are dissatisfied with your mood or your personality you can change that too. Do you want to be an alpha personality? Do you want to be a thrill-seeker? Do you want to be bubbly and happy all the time? Easy peasy. Do you dislike the way your face or body looks or works? You can change that too. You can have a stunningly attractive and physically fit body with ease, and you can look like a movie star.

To say that this will change society is a gross understatement. In many ways I think this will be a bigger challenge to the world than any other technological or environmental challenge. To be honest I think it will be a larger challenge for our species than even trying to co-exist with thermonuclear weapons.

As for space, I think it will affect our future a great deal but perhaps not as much as these other things. One thing a lot of people get wrong about space is imagining that it's hard. It's not, we've just been doing it very, very badly. For the same exact amount of money the world has spent on space so far we could have easily built orbital cities and moon bases housing hundreds. Not with revolutionary technology, not with some alternate and hugely more cost effective programs, but merely with applying proven and existing systems and technologies in a sensible way instead of the haphazard way we have done so the last 4 decades or so. For example, for the same cost as the Shuttle program we could have continued launching Saturn Vs (at least 150 of them) which would have allowed us to easily put living quarters for hundreds of astronauts in Earth orbit and to build out moon bases (or Mars bases, frankly) quite easily. There are two other important factors people miss. First, once you have a substantial off-Earth industry then it's no longer reliant on the cost of launch from Earth's surface. You only have to launch the equipment for an automated space mining operation once, afterward you only need to keep it operational. The potential return in terms of mass launched from Earth vs. resources returned to Earth or to Earth orbit could be a great many orders of magnitude (millions or billions), much like it is for mining equipment here on Earth. Second, the world of the future will be unimaginably wealthier than we are. The parts of the world which are today developed will be even wealthier in the future, and much of the developing world will have developed within the next 100 and certainly 500 years. 
Even without factoring in technological and industrial advances which could make orbital launch cheaper (incidentally, things which are already running at a rampant pace of advancement even today) the simple factor of having a much, much larger total economy will mean that the amount of resources for space exploration will be larger than today by a factor of tens to hundreds. The idea that this doesn't translate into a substantial permanent off-Earth human population is, to me, patently ridiculous.

Overall, the idea of trying to predict the world of 500 to even the tiniest degree is probably a losing prospect, but it should be an interesting ride regardless.

lsc 0 minutes ago 0 replies      
comment 13:

"In the future your major political affiliation will not be the nation state or even the corporation. It will be your IT infrastructure provider IE Apple, Google, Microsoft or their 2512 counterparts."

would be an absolutely /awesome/ sci-fi novel.

Of course, for it to be realistic, consumer needs would have to grow dramatically faster than Moore's law. As it is now, it's too easy to start a new consumer IT provider business; the infrastructure is too cheap. I spend rather more compute resources per customer, dramatically more than Google, and I've got two thousand customers, me being some nobody kid.

If current trends continue (e.g. consumer demand for compute power trails Moore's law by quite a lot), the per-customer cost of providing IT infrastructure will be so low that those providers will not be able to demand much by way of payment; otherwise some kid like me will show up and do it cheaper. If you notice, most of the online consumer infrastructure providers, right now, are not in a position to charge their customers anything at all.

tokenadult 4 hours ago 1 reply      
It was kind of Charlie Stross, a participant here on HN, to take time out from writing his latest novel to post the interesting blog post shared here. Thanks too to the HN participants who shared the link and have commented already while I was coming back from work. I especially like about this post that Stross looked back at Earth 500 years ago to show readers what time scale he is talking about, and that he was boldly definite about technological and social changes.

I will be boldly definite in disagreeing in part with one of Stross's conclusions in this interesting post. Stross writes, "I'm going to assume that we are sufficiently short-sighted and stupid that we keep burning fossil fuels. We're going to add at least 1000 GT of fossil carbon to the atmosphere, and while I don't expect us to binge all the way through the remaining 4000 GT of accessible reserves, we may get through another 1000 GT." I fully agree with this premise. There are no effective incentives in place today, nor any likely in the next few decades, to prevent further consumption of fossil hydrocarbon fuels, and that will surely result in a substantial increase in the atmospheric concentration of CO2.

Stross's next step in prediction is, "So the climate is going to be rather ... different." That's a safe prediction any time, because over 500 year time scales, we have often observed climate change in historic times. Over longer time scales, but since Homo sapiens populated much of the earth, rock art in the Sahara Desert shows that the Sahara was once much less arid than it is now, and cave art in Europe shows that the climate of Europe was once much more frigid than it is now.

Stross goes on to write, "Sea levels will have risen by at least one, and possibly more than ten metres worldwide."

An interesting series of online maps shows projections of flooded land based on various degrees of sea level rise for places of interest such as New York City, San Francisco, the Netherlands and England, and Chesapeake Bay. In all cases, the maps default to showing seven meters of sea level rise and do not project any civil engineering projects to protect existing infrastructure.

Having read Matt Ridley's blog post "Go Dutch" back when it was published, I wonder if the most dire predictions about the Netherlands are true, or if the Netherlands, the land of polders, can continue to be "living proof to climate pessimists that dwelling below sea level is no problem if you are prosperous."

Stross writes, "Large chunks of sub-Saharan Africa, China, India, Brazil, and the US midwest and south are going to be uninhabitably hot."

I live in the United States Midwest, and my mother grew up in a hotter part of the United States Midwest during the Dust Bowl era. Most of her family is still near the family farm on the windswept Great Plains. I don't expect any part of the earth to become uninhabitably hot. We have, according to the best developed models of influences on world climate, a sure prospect of a generally warmer Earth, warming currently lethally cold areas into areas that will be habitable. My experience living in subtropical east Asia suggests that global warming will do more to make cold areas temperate than to make hot areas unbearable.

Stross continues, "London, New York, Amsterdam, Tokyo, Mumbai - they're all going to be submerged, or protected by heroic water defenses" and my prediction is that New York, at least, will be fully protected by civil engineering projects. New York City is sufficiently prosperous to attract some of the world's brightest minds to live there (I know some young people who have moved there recently) and the current city administration actively encourages making New York City a technology hub. New York will thrive, whatever the climate.


Stross wraps up this prediction by mentioning, "Venice and New Orleans (both of which will be long-since lost)." Venice and New Orleans have been in long-term decline for quite a while, from bad governance, and will surely suffer further relative decline, regardless of sea levels. There will still be a great port at the mouth of the Mississippi-Missouri river system, and it will be a thriving and cosmopolitan city, but it may well be in a different place along the river delta from the current location of New Orleans. Venice may basically vanish.

There is much more interesting content in Stross's post, but allow me to explain why I think the high end of global warming predictions (and thus the high end of sea level rise predictions) is unlikely. We already have a known model for induced global cooling from the "natural experiment" of volcanos erupting and ejecting much dust high into the atmosphere. If the climate change we now experience produces more pain than gain (where I live, at 800 feet above sea level in a continental dry, cold winter climate zone, global warming has so far mostly produced gain), then there will be political and economic incentives to sequester greenhouse gases, or directly shade the Earth with high-altitude dust, or to do whatever else science discovers to slow and perhaps eventually reverse global warming. Over a 500-year time span, I would expect enough of an increase in understanding of climate models to bring about a world climate that is more moderate in more places than today's. Thanks for the chance to think about the far future.

reasonattlm 6 hours ago 2 replies      
The point of the technological singularity insofar as it interacts with reasonable prediction of the future is that reasonable predictions tell you that it is next to impossible to make any sort of reasonable cultural/climate/landmass/population/other soft prediction much past this century.

Hard takeoff scenarios seem to be unlikely (no self-improving AI going from human project to godlike status in a couple of hours while rolling its own molecular nanotechnology foundation). The reasons for this are the same reasons that make rapid global takeover of the internet by a viral monoculture unlikely today: results take effort, some results are opposed, some results are intrinsically hard, no breakthrough happens in a vacuum.

But: by 2040 it will be possible to emulate human brains the hard way. By all means tell me that every human culture will refrain from taking full advantage of all that can follow from that over the decades that follow. The economic benefits of human and built-from human intelligences instantiated to order are incredible. The possibilities spiraling out from that are so much greater than everything that has come before that it becomes very, very hard to say what comes next.

You could see a world in which there are trillions of entities of human and greater intelligence by 2100. With their own cultures, so much greater and broader and more varied than ours as to make us the first snowflake in the blizzard. They may or may not have access to molecular nanotechnology and as much of the solar system as they care to begin making over by then. What will they build? How can you say? Culture determines creation.

Equally, you might not see that world. But it looks most plausible to me that software life will erupt from our culture in much the same way as we erupted from Greek tribes thousands of years ago - but much more rapidly. If you can show me you can sensibly predict the details of today's world by an examination of the Mediterranean Bronze Age, then I might be more inclined to think it possible to talk about what lies on the other side of emulated human intelligence.

aresant 7 hours ago 1 reply      
That's what I call fun Saturday reading . . .

"GM mangroves that can grow in salinated intertidal zones and synthesize gasoline, shipping it out via their root networks, is one option."

That one sentence overloaded my system with a visual day-dream about the potential for our future - the way it's written evokes that famous Blade Runner line:

"I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhauser gate. . ."

A few dozen words in both quotes that evoke such richness. Beautiful work.

rogerbinns 6 hours ago 0 replies      
I'm glad he didn't take the doom and gloom approach. We do keep seeing the bad side (overpopulation, disease, environmental degradation, wars etc) but slowly and surely the opposite has been happening. Prosperity has been improving things for (virtually) everyone, not just a few westerners. And people do strange things once prosperous - they protect the environment, have more greenery (compare richer versus poorer neighbourhoods), buy organic, electric cars, contribute to charity etc. The question is can we continue to improve prosperity, and there is no reason to believe it won't keep happening. Matt Ridley of "The Rational Optimist" has a lot of material to substantiate that.

Here is a quick TED talk and a transcript of the opening to get you going.


"When I was a student here in Oxford in the 1970s, the future of the world was bleak. The population explosion was unstoppable. Global famine was inevitable. A cancer epidemic caused by chemicals in the environment was going to shorten our lives. The acid rain was falling on the forests. The desert was advancing by a mile or two a year. The oil was running out, and a nuclear winter would finish us off. None of those things happened, (Laughter) and astonishingly, if you look at what actually happened in my lifetime, the average per-capita income of the average person on the planet, in real terms, adjusted for inflation, has tripled. Lifespan is up by 30 percent in my lifetime. Child mortality is down by two-thirds. Per-capita food production is up by a third. And all this at a time when the population has doubled."

rndmize 8 hours ago 2 replies      
Five hundred years is an awfully long way into the future. I could see most of this happening in 200 years, or probably less.

One of the problems with making predictions like this is that technology begins to compound and affect itself in weird ways - a book I have discusses how, once you have proper mind-machine interfaces and can copy a person at will, one of the most efficient ways to travel becomes transmitting yourself at light-speed and getting a new body once your persona has been downloaded at your destination, rather than traveling in a physical body. This is something I had never considered before encountering it in that book, even though it is a logical step from cybernetic brains and being able to back yourself up.

Similarly, racism, sexism and language issues begin to disappear as you approach a higher level of computer integration. Racism and sexism become quaint ideas when most people can change to a body of the opposite sex whenever they want, and skin color becomes a matter of aesthetic choice. You might end up with wholly different types of racism (or perhaps species-ism) due to deliberate genetic changes to adapt to different environments resulting in wildly different types of humans, or due to experiments to bring certain species to human levels of intelligence (apes, dolphins, octopi?)

I find it interesting that he would have geopolitical boundaries exist at all. The idea of nations may well be an outdated one a couple hundred years hence. As we continue to improve our abilities to manufacture and grow things on ever smaller and more controlled scales, there may come a point where we no longer need massive structures of human organization like nations, corporations, etc. On the flip side, these things could become more ingrained and efficient such that we approach hive-like efficiency/societal structure (group minds etc.)

I suppose I find most of these speculations rather tame. I think that things will change a lot faster, and in a lot bigger ways, than described here.

ChuckMcM 6 hours ago 0 replies      
Definitely a fun read, lots of things to nod and shake your head at. For example, if we've mastered fusion power generation, then rather than carbon remediation we may find we have to burn things occasionally to boost atmospheric CO2. The reasoning goes: we've basically converted all of the arable surface area to 'farm' land; we have converted all of our industrial and motive mechanisms (cars/trucks/trains) to electricity; we use the Fischer-Tropsch process, which can take CO2 out of the air, to create jet fuel; so not only anthropogenic sources of CO2 but natural sources (forest fires) are removed from the system. And if we are pulling energy geothermally out of volcanic hot spots, that will leave their tops just frozen enough not to erupt (removing another giant source of atmospheric CO2).

It is really really hard to predict past a point where the energy problem becomes 'solved.'

I also expect that all of our computing / electronics devices will be essentially 3D printed out of carbon in various forms (tubes, balls, graphene) providing the various roles of switch, conductor, gate, and substrate. Those will be connected by a mesh of networking that is a couple of gigabits wireless and perhaps a terabit when hard connected. The low marginal cost of bandwidth will make it pretty much non-blocking bandwidth everywhere.

I expect we'll be eating a manufactured food product that is tasty and nutritious and the domestication of livestock and the use of any other living organism (including plants) will be considered 'quaint'. No one will have to go hungry because the combination of low cost energy and the ability to assemble food will allow for free 'food' (although not designer, "high end" food).

I think the more interesting question though comes from biology, which is to say if we have completely decoded cellular biology then there won't be any excuse for being sick or not 'healthy' (and by that I mean optimal function of all organs including the brain). At some point during the development of that capability, the aspects of one's genetics which determine sexual orientation will be completely mapped out and understood, and there will be a big debate about what we do about that: do we 'cure' homosexuality, do we offer to make everyone 'omnisexual', etc. There will be huge and heated debates about what is and what isn't normal.

jhuckestein 3 hours ago 0 replies      
This all makes sense if you think of the future as a linear progression. Technology, however, progresses exponentially (I don't want to explicitly invoke Kurzweil here because his theories have their own faults, but the exponential part he gets right). That means 500 years from now, we will probably have solved the problem of survival. We will probably be much more intelligent, not have to work, and almost certainly be exploring space, etc.

There's almost no way to predict what life will be like in 500 years (look at the predictions people made 500 years ago!). When I think about this, the most interesting questions are philosophical. If you didn't have to die and could simulate whatever pleasure you desire whenever you want to, what will the point of living be? What will the definition of a human, a life and consciousness be if you can simulate/augment it using computers?

stretchwithme 1 hour ago 0 replies      
I sort of think we'll figure out how to remove CO2 from the atmosphere. Richard Branson has offered a $25 million prize for it.
The world uses 85 million barrels a day and it's currently $85 a barrel. That's $7.2 billion spent on oil per day, and $2.6 trillion a year.

$25 million is less than 1/100,000 of what we spend on oil per year. If every American chipped in 8 cents, we could double the incentive.
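The arithmetic in this comment checks out; a quick sketch (the figure of roughly 310 million Americans is my own assumption, not stated in the comment):

```python
# Check the oil-spend arithmetic above.
barrels_per_day = 85e6                 # world oil consumption, barrels/day
price_per_barrel = 85.0                # dollars per barrel
daily_spend = barrels_per_day * price_per_barrel  # ~$7.2 billion/day
yearly_spend = daily_spend * 365                  # ~$2.6 trillion/year

prize = 25e6                           # Branson's CO2-removal prize
# The prize really is under 1/100,000 of annual oil spending:
assert yearly_spend / prize > 100_000

# Eight cents from ~310 million Americans roughly doubles the prize:
chip_in = 310e6 * 0.08                 # ~$24.8 million
```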

Sounds like a Kickstarter project I could get behind.

abecedarius 8 hours ago 0 replies      
Seems inconsistent to bring up nanotech, etc., yet keep climate hell as a fixed background. Of course attempted fixes will run into issues, small or catastrophic, but they're bound to exist.
paulsutter 3 hours ago 0 replies      
Why would we have massive climate change if we have nanotechnology (either wet or full on drexlerian)? The author seems to have completely missed the mark. The only interesting conclusion (massive climate change) is contradicted by his own assumptions.

This is a pessimistic, probabilistic, poorly thought-through vision of the future. The Elon Musks of the world will steer things in a different direction.

startupfounder 5 hours ago 1 reply      
"I'm also going to ignore space colonization, because I want to focus on this planet."

Europe changed when explorers "discovered" the new world. Saying you are only going to focus on "this planet" is like saying I am only going to focus on the "old world" when talking about earth 500 years ago.

In my mind the rest of the article is pointless because the author is using the old world way of thinking about this planet.

The fact is, the exploration of space is very similar to what happened 520 years ago. What happens when the price of getting to orbit drops significantly because of reusable rockets? Already there are companies planning to mine asteroids. Saying that this is not going to affect Earth in a major way is not really looking at where Earth will be in 2512.

Space exploration is going to define the next 500 years of humanity and of this planet just as exploration of the new world defined the last 500 years.

mukaiji 6 hours ago 0 replies      
I think it's a bit off on the predictions relating to energy. The best way to explain why is to borrow from Vinod Khosla's theory of energy black swans, and assume that the forms of energy we know and make use of today are going to be replaced by forms of energy we either don't know of or haven't yet managed to master.

500 years is simply a long, long, long time from now in terms of human progress. I think the energy description provided here might possibly fit a model of our energy mix 100 years from now. However, it's very unlikely to be the one we follow 500 years from now, simply because the basis for energy-related discoveries dictates that every few decades an entirely new form of energy is discovered and gets subsequently iterated upon until economically viable. It simply isn't factually reasonable to assume that we have already discovered all possible forms of energy production.

By the way, I did energy-related research, which is why I wanted to point this out. Regardless of these flaws, I thoroughly enjoyed reading the essay.

tl;dr: energy-predictions 500 years out are not reasonable because of Vinod Khosla's theory of energy black swans.

dave1010uk 4 hours ago 0 replies      
I think it's quite likely that within a couple of hundred years there will have been multiple changes that are completely beyond our current comprehension, that we couldn't even begin to speculate on. Nevertheless, I'll add my speculation to this rather interesting discussion:

The ease and volume of communication is bound to increase. Perhaps we communicate through technological telepathy, with anyone we want to. We share thoughts and senses with groups of people and solve problems by adding more brain power. The Mythical Man Month is no longer mythical. Learning and "news" become instant. Communication is probably faster than the speed of light.

Physical objects are only slightly constrained to their form and location. They can be transformed and moved almost as easily as energy can. Having something only requires thought and currency.

pourush 4 hours ago 0 replies      
I'll write this before I read the article:

Assumptions: I think a lot of what we "know" is going to be wrong. It's just a thing which seems likely to me. Not in a "things fall up now" sort of way, though a little of that, since the laws of physics have been revised quite a bit, and I don't see that trend stopping. But more in a "We were pretty much crazy to think these things" way. You know about alchemy's position politically today, and how some of the church's actions were perceived? Some things which we consider important today are going to be treated like that.

History: We'll be better at this. Assuming historians haven't mysteriously vanished as a profession, I think we're going to know more about history in the future, and knowing more about the present in the future. As a collective, I mean, not every individual.

Screwing the world up: Will happen a lot less. We gained raw power in the last 500 years; we're going to learn wisdom now. Or die. That's a possibility, it's been discussed. But I'm assuming we survive.

Culture: Will have finally recovered from British expansionism. There will be lots of strong local cultures again.

Government: Will be competent. And not vitriolic. I'm predicting a break from history again.

Population: Will be ignored. Won't be a problem.

Tech: People will get what they want here. Even if what they want is something they've never heard about. And if they don't want it, that will stop it. We got the atom bomb because we wanted to kill people. That will happen less. No flying cars, but maybe hover-boards. Lots of the stuff that we usually relegate to philosophy, or say that is impossible to know, and won't affect anything even if we know it will be known and become part of science. And we'll be better, way way way better, at biology and ecology.

Intelligent Aliens: Will be found, will be relevant to some people's careers, but won't be all that important. Not the main driver of events.

Planet: Will be better, much better. Things will turn around here. People will care about it. The majority doesn't really care about it now, except in a kind of abstract way as it relates to government. But they will care about it later.

Intelligent Aliens: Will be found within a hundred years, won't be important until at least 200 years in.

cyanbane 6 hours ago 0 replies      
The author keeps comparing now to 500 years ago. While I think a lot of his prognostications might be right on target, I think these advances will come much quicker than 500 years. If we took the magnitude of humanity's advances from 1512-2012 and applied that rate today, I think it would all happen within the next 100 years (5:1). I do like his non doom-and-gloom approach (disclaimer aside). I agree with some of the other comments that if you ask any human at any time whether the world is in its worst state in history, the answer will be yes, and I think the author understands this isn't always the case. Great read.
curt 7 hours ago 2 replies      
Highly doubt any of the countries that exist today will exist 500 years from now. Honestly, I doubt most of them will exist 50 years from now. Most developed countries are headed for their day of reckoning as the bills for their welfare states come due. Combined with the fact that every developed country has a negative birthrate due to these policies delaying adulthood, many countries will collapse from demographic changes alone in the next 50 years.

How do people still believe in runaway global warming? There's been absolutely ZERO warming for the last 16 years; the Earth warmed for 15 years before that, then cooled for 40 years before that. Cloud formation, the major environmental influence on global climate, now seems to stem from cosmic rays.

I do believe space travel, specifically mining, will become much more prevalent. This will eliminate any resource problems. As for energy, advances in solar technology and nuclear (fusion or fission) should drastically lower the cost of energy, by an order of magnitude from today's prices.

hdivider 7 hours ago 0 replies      
I think one of the only things we can be reasonably certain about is energy. Many or even most of the changes Charlie listed would require changing much of the technology deployed on our planet (except of course those changes that are more or less inevitable, like rising sea levels). Changing any of the hardware in our world on a large scale requires massive amounts of energy, and energy follows rules that don't change at all over a 500 year timescale.

Fusion seems to be inevitable. I can't say I agree with people who say it'll never be competitive with other energy sources. All that has to be done is to solve the engineering hurdles required to make fusion scalable, and perhaps to add the capability to use fusion reactions that make use of a greater variety of elements. (And yes, those are huge challenges, but we're talking 500 years of advanced engineering operating on something that already works in a simple prototype system.) Once that has been achieved, fusion power can outperform >any< other terrestrial energy source (except perhaps fission), as a matter of physics. I imagine the economics of that will fall into place once that massive supply of energy is made potentially accessible, since there will undoubtedly be demand for titanic amounts of cheap and reliable energy.

depoll 7 hours ago 1 reply      
"The half-life of a public corporation today is about 30 years: ten half-lives out " 300 years hence " we may expect only one in a million to survive."

Am I the only one who read this and went "Wait, 10 half-lives... that's 1/(2^10)... that would mean about 1 in 1,000 survive -- not 1 in 1,000,000."?
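The commenter's correction is right: after n half-lives the surviving fraction is (1/2)^n, so ten half-lives leaves about one in a thousand. A quick check:

```python
import math

half_life_years = 30   # Stross's claimed corporate half-life
horizon_years = 300    # "ten half-lives out"
n = horizon_years // half_life_years
survival = 0.5 ** n    # fraction of corporations still alive

assert n == 10
assert survival == 1 / 1024  # ~1 in 1,000, not 1 in 1,000,000

# One-in-a-million survival would take about 20 half-lives (~600 years):
n_for_millionth = math.log2(1e6)  # just under 20
```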

robomartin 5 hours ago 1 reply      
Great Mambo Chicken and the Transhuman Condition
cpeterso 1 hour ago 0 replies      
When I read science fiction about people living hundreds of years from now, I wonder what science fiction they read. The characters in Star Trek, for instance, conveniently read Shakespeare and watch mid-20th century film and TV.
guard-of-terra 5 hours ago 1 reply      
Is a solar shade so much harder to launch than retooling all of biology to live at 45C while still dealing with a world that sucks? Launching a large, slightly opaque mirror to shade select areas of Earth does not seem impossible to me.

It seems he bet everything on climate being out of control.

pdubs 4 hours ago 0 replies      
Since this is really just educated fiction, if you like this you'd probably enjoy most of the stuff written by Alastair Reynolds. Not "hard" scifi exactly, but maybe "firm".
cwe 7 hours ago 0 replies      
With all the talk of engineered intelligence and huge biotech advancements, I think there will be a massive space expansion, and I'm surprised to see it doubted.

First off, a human settlement on Mars, while technologically challenging, would need a relatively small initial population to get started sustainably (say, 2000 people). All of the advancements in food and energy production mentioned in the article could be used to provide for a colony there.

The OP talks about genetically engineered animals for food production, but they could also be engineered to better work and thrive in space-based industries; collecting raw materials, zero-gravity manufacturing, energy collecting, etc. Sophisticated, autonomous machines could do all that as well, so that actual humans have very little need to spend much time out in space, other than traveling between planets and settlements. Or perhaps all travel is virtual, using telepresence to see the solar system.

Machines built in space don't have the costs to get up there in the first place, other than the initial factories and material harvesting equipment.

Great thought-exercise, though. I love thinking about this stuff, and I think our generation has to start anticipating these changes. Some other commenters pointed out this all may happen far sooner than 500 years, so we just might need to be ready.

noiv 3 hours ago 0 replies      
Reads like a fair extrapolation of the past 500 years. Although, assuming everybody adapts to climate change, the unknown unknowns are possibly underestimated. With 2000 GT more CO2 this is a completely different planet, and there might be no technical solution to the full spectrum of rising social tensions when a billion people and their jobs are forced to move because of rising sea levels or too much or too little rain. To develop sophisticated solutions like synthetic biology you need places where weather is of no concern, and only a few such places will remain when temperatures rise by 4 or more degrees.

So, in short, Stross paints a rosy future where technology - as in the past - solves everything. But he overlooks game changers like the permafrost bomb, a burning Amazon rainforest and all the social implications that follow.

I'll give an example: in 2010 the jet stream stalled over southern Russia and Pakistan, bringing a heat wave to Russian fields and devastating floods to Pakistan. As a result food prices exploded, Russia stopped exports, leading to food riots in the Arab world that finally sparked revolutions.

Sure, there is no proof that one event caused the other. And there is no science available to estimate the social consequences of climate change, but does that mean there will be none? Just think of the guarded gas transports in NYC after Sandy. How many more days of limited supply before it would have turned worse? Now, answer one question: which technology stops gas riots?

In the end the author may not be wrong with his vision of 2512, but what scares me are the next 50 years, with an unleashed economy going into a frenzy over externalized environmental costs.

teebs 6 hours ago 0 replies      
I enjoyed this article and I agree with most of the predictions on this timescale. I'm surprised, though, that he didn't mention one issue in particular: the continued development of human-computer interaction and its impact on the world's socioeconomic makeup.

Over the past 20-30 years, computers have completely changed the way people interact with the world. Most highly-educated people's lives center around their iPhones, laptops, iPads, etc. As time goes on, automation will likely continue to advance. As computers surpass humans in efficiency for more and more jobs, what role will the uneducated play? Clearly, wealth will continue to concentrate in the hands of fewer and fewer highly educated individuals. Will the rich exploit the poor, or will the need for consumers cause the wealthy to redistribute wealth just so that people have money to buy their goods? Will ordinary people end up like the passengers of the spaceship in WALL-E? Let's go a step further: if the so-called singularity occurs, what is the need for people in general?

alanctgardner2 4 hours ago 1 reply      
Not to be the PC crowd here, but the author is a little bit flippant about modern history. The Holocaust didn't just 'suck', nor did the Battle of the Somme. Furthermore, in his brusque dismissal of contemporary Middle Eastern culture as primitive, he misses out on the fact that Islamic extremism is a very modern issue ( < 40 years old ), and definitely does not define the region as a whole.

It's fine and well to speculate wildly about technological advances, but the future of the human race is ultimately about humans. If you're going to ignore the human aspect, do it completely. Don't trivialize millions of deaths in the race to talk about how cool nuclear fusion will be.

geori 6 hours ago 0 replies      
I highly recommend reading through Charlie's comments. They're just as good as the article and touch on everything from building construction to Scottish independence to creating an atmosphere in the Valles Marineris rift valley on Mars.
rms 1 hour ago 0 replies      
I predict global scale climate engineering rather than our current coasts under water.
guscost 5 hours ago 0 replies      
Hard to miss the alarmist undertone here. He seems much more confident about the effects of climate change (or is that global warming?) than the effects of politics and technology. Compare:

"Sea levels will have risen by at least one, and possibly more than ten metres worldwide."

"Fission: will be in widespread safe use or completely taboo."

melling 6 hours ago 0 replies      
I don't really see the point of speculating about 500 years out. Wouldn't it be a lot more useful to figure out how to increase the rate of innovation and discovery now?

For example, if innovation happened in flight and most people could fly at hypersonic speed within 10 years, the world becomes even smaller. Cure most cancers within 10 years instead of 50 and maybe the "next Steve Jobs" will get another 2-3 decades.

There are lots of big problems that we could solve decades sooner if we could find better ways to innovate now.

Show HN: We're siblings in high school taking on SongPop, thoughts on our V1? mgw.us
7 points by NTJPaulCarole  30 minutes ago   discuss
Show HN: ChargeBack.cc - Get your money back chargeback.cc
33 points by myotherthings  3 hours ago   33 comments top 16
aristidb 3 hours ago 1 reply      
That seems a bit sketchy to me - blackmailing merchants into signing up for your "service" of not sending them chargebacks?! Maybe you should explain why it's not.
notatoad 2 hours ago 1 reply      
chargeback only works because most users don't know it exists. If you start telling people all they have to do to get their money back from a merchant is to click a button, it won't be long before credit card companies are forced to get rid of it. please don't abuse this.
jasonlotito 16 minutes ago 0 replies      
So, who are your customers? Businesses with chargeback problems or customers filing chargebacks?

The followup is how are you intending to step between customers filing chargebacks and their banks which are a phone call away?

wilfra 3 hours ago 1 reply      
This is the online equivalent of "protection" money the mafia asks for when they say they're going to burn down your store if you don't pay them.

I applaud making it easier for people to file chargebacks but shame on your business model.

Edit: after reading the explanation given below perhaps the business model is not as bad as it first seems - if that's the case, you need to make it more clear! It looks like you are encouraging people to file chargebacks and then shaking down the merchants for money with the threat of the chargeback getting filed if they don't pay you.

malbs 21 minutes ago 0 replies      
Well, I had a disputed charge I was planning to have overturned, so I've just tested the chargeback.cc system with that dispute as a trial
lionhearted 1 hour ago 1 reply      
Feedback / thoughts about potential pitfalls:

You're probably going to get a cease-and-desist letter at some point if you haven't already talked with the various financial institutions and made contacts there... you're almost certainly violating their terms of service (and maybe the people filing through you are too).

You might want to be proactive about reaching out and making some contacts with the financial institutions.

Or maybe not, maybe it's OK. Just uninformed intuition there.

Also, you probably want to add some pretty serious language in bold saying "You must be telling the truth, not telling the truth here can cause serious harm, etc."

You probably also want to do some basic confirmation of a person's identity so you don't get wacky results. Ask for a phone number maybe, and occasionally spot-check with calls? I could see this being used for pranking, harassment, or other inappropriate use (disgruntled employee, uninformed spouse/boyfriend/girlfriend, etc).

saurik 2 hours ago 1 reply      
How is this different from BillGuard.com? (edit:) Well, I mean, beyond the features this site offers; BillGuard also proactively scans your bills to help you deal with charges, but the end of the process seems fairly similar: I (the merchant) receive an e-mail from them rather than a chargeback, along with information that might be useful for looking into the matter and fixing the problem. (I only started dealing with BillGuard yesterday, so I don't know much about them yet, and certainly not much about this new site.)
breck 1 hour ago 0 replies      
I think there is a big need for this type of service. The majority of my transactions are fine, but there are times when I have a problem and getting it resolved is a huge hassle. Like last month when the NYTimes charged me $15 but a bug in their database prevented me from actually using my account. Took 2 painful hours to get a refund.

In those cases I assume the merchant has better things to do as well, and it seems like a service like this could offload some work from their support staff and, by adding things like exit surveys, turn that small number of bad experiences into positive, constructive experiences for all parties.

ericcholis 3 hours ago 1 reply      
Ugh...chargebacks. I work in an industry that has high chargeback rates and amounts, most of the time because people are disgruntled.

Most people don't realize how easy a chargeback is.

robryan 3 hours ago 1 reply      
It is better if chargebacks are seen as somewhat of an inconvenience, so customers will think more about whether they really want to charge back. Lots of things can and do go wrong in ecommerce; if anyone slightly annoyed reached for a chargeback rather than trying to resolve the issue with the merchant, it would be a big hit for merchants.
mikeash 1 hour ago 0 replies      
I've never had trouble with chargebacks, personally. I try the merchant, and if they don't play ball, I contact my card issuer. It's been pretty painless, so I don't see the point of this service. Is my experience just abnormal?
eCa 1 hour ago 0 replies      
I think a page called "Who we are" [1] should answer that question, and not describe "what we do" (again). Especially with something involving such sensitive information.

[1] https://www.chargeback.cc/who-we-are

tyrelb 1 hour ago 0 replies      
How would consumers find out about you? If I buy something online, I would call the merchant first, then the bank. How would I find out to use you first instead of going to the merchant and/or bank? Thanks!
Sire 2 hours ago 1 reply      
This business will fail even though the idea to improve the chargeback process is a good one (for both merchants and customers).

Most customers don't know what a chargeback is. Those who do will never find your site. Only if you sell your service to the credit card companies will this work.

kkt262 1 hour ago 0 replies      
One thing that popped right out to me was how similar the top banner looks to the Paypal top banner.
hnwh 3 hours ago 1 reply      
Free services.. hmm.. what's in it for you?
Android 4.2: December month missing in calendar code.google.com
57 points by hendi_  5 hours ago   20 comments top 9
erydo 1 hour ago 2 replies      
According to this comment: https://code.google.com/p/android/issues/detail?id=39692#c17

It's caused by the convention of the Android datepicker to store month values as 0-11 instead of 1-12.

Dates are given in 1-31 as expected; and years are given in e.g. 2012 as expected; but months are given in 0-11 which is totally inconsistent with how people normally number months and how the rest of the API works.

So for January 1st, 2012, printing 'getDate()', 'getMonth()', and 'getYear()' would yield '1-0-2012'. It's a badly designed API and I'm surprised there aren't more instances of this breaking.

(I ran into this once in our codebase, but we managed to fix it pretty quickly).
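The 0-11 convention described above isn't unique to the datepicker; java.util.Calendar, which Android's date handling builds on, numbers months the same way. A minimal sketch of the trap:

```java
import java.util.Calendar;

public class MonthIndexing {
    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance();
        // The month argument is 0-based: Calendar.DECEMBER == 11
        cal.set(2012, Calendar.DECEMBER, 18);

        System.out.println(cal.get(Calendar.DAY_OF_MONTH)); // 18, as expected
        System.out.println(cal.get(Calendar.MONTH));        // 11, not 12
        System.out.println(cal.get(Calendar.YEAR));         // 2012, as expected

        // For display you must remember the off-by-one adjustment:
        System.out.println(cal.get(Calendar.MONTH) + 1);    // 12
    }
}
```

Any code that forgets the `+ 1` (or adds it twice) produces exactly the kind of missing or shifted month seen in this bug.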

moondowner 2 hours ago 0 replies      
Note for everyone checking only the screenshots and not reading all the comments. The Calendar app is fine, the People app has this problem.

"It's only broken in the People app (like #29 said, Calendar is fine).

I'm running JOP40C, and I can confirm this bug (in the People app) with all date formats."

AYBABTME 2 hours ago 0 replies      
Thanks to Google, there shall be no 12-12-2012 apocalypse.
olgeni 3 hours ago 0 replies      
Perhaps Google thought it was a good idea to insert a leap month to check how the kernel would behave.

Did anybody notice heavy CPU load?

chmod775 3 hours ago 1 reply      
Incoming iOS advert: "Supports all months, January to December"
javis 3 hours ago 0 replies      
Wow. I was just feeling pretty stupid about some of the bugs in my code. Thanks for the pick-me-up, Google.
nemof 3 hours ago 2 replies      
I wonder if there's been any movement on the dreadful performance of 4.2? It made me very aware that there is little to no support for Android/Nexus, and nowhere to take problems unless you know what a bug tracker is, which many will not.
rickylais 3 hours ago 1 reply      
Looks like we will be seeing Android 4.2.1 very soon
csense 1 hour ago 1 reply      
It's rather ironic that a company so high-tech it's looking hard at self-driving cars and asteroid mining has trouble implementing a calendar, a technology that's been around for thousands of years...
Gravitational lens magnifies earliest galaxy yet seen arstechnica.com
21 points by Reltair  3 hours ago   4 comments top 3
iwwr 2 hours ago 0 replies      
Speaking of gravitational lenses, our very own sun can be used as one:


Key facts:

can work with existing technology

$5bn cost

long travel time, a century or so to the primary observation point (500-750au) but can visit several distant icy bodies before that point

100x magnification for infrared and visible-light (50-80x for radio and microwave) (only for object directly opposite the sun though)

a good precursor for an interstellar mission and a good study platform for the interstellar medium

andrewcooke 2 hours ago 0 replies      
paper - http://arxiv.org/pdf/1211.3663v1.pdf (the link and redshift, 10.8, are just under the image).

it's a photometric redshift derived from the lyman break. rest-frame ultra-violet emission less than 912 A is "completely" absorbed by intervening neutral hydrogen, and between 912 and 1216 A partially, in lines. so objects are dark at shorter wavelengths than 1216 A (in the frame of the galaxy). their observations show that in our frame there's no emission short of 1.46 um (infra-red). and 1.46e-6 / 1216e-10 ~ 12 = 1+z, so redshift is approx 11.

if it's correct (photometric redshifts are not as reliable as those obtained from spectra, but are technically easier to achieve, and this is really pushing the limits of what is possible - my partner, who is still in astronomy, is sceptical that this is real), then it's the most distant object known.

i guess the above isn't very clear. i'll try again. hydrogen gas just floating around in space absorbs ultra-violet (UV) light. so you don't see much UV from galaxies.

now distant galaxies are redshifted so much (by expansion of the universe) that the UV ends up in the infra-red (IR). so what you observe are things that are only visible in the IR - everything shorter (optical and UV) in our frame was absorbed (UV) in the galaxy's frame.

so one way to find extremely distant objects is to find things that can only be seen in the IR. what you're actually seeing is the redshifted optical; what you don't see in the optical is what, in the galaxy's frame, is absorbed UV.

but these galaxies are very faint, so they are hard to detect. using a gravitational lens boosts the brightness and so makes this technique more powerful.

i'm not sure that helps (a diagram would make things much clearer). the technique, well, the resulting objects, are called "lyman break galaxies". but i haven't found a good reference googling.
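The redshift arithmetic above is easy to check; a small sketch using the same numbers (Lyman-alpha rest wavelength 1216 A, shortest observed emission 1.46 um):

```java
public class LymanBreak {
    public static void main(String[] args) {
        double lambdaRest = 1216e-10; // Lyman-alpha rest wavelength, metres
        double lambdaObs  = 1.46e-6;  // shortest wavelength with emission, metres

        // Cosmological redshift: lambda_obs / lambda_rest = 1 + z
        double onePlusZ = lambdaObs / lambdaRest;
        double z = onePlusZ - 1;

        System.out.printf("1+z = %.2f, z = %.2f%n", onePlusZ, z);
        // prints: 1+z = 12.01, z = 11.01
    }
}
```

which matches the "approx 11" estimate (the paper's more careful photometric fit gives 10.8).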

Aardwolf 1 hour ago 1 reply      
Isn't that awesome: the galaxy is huge, our planet is tiny, and yet all the objects in the whole galaxy have photons that reach our little planet, even the lens of that specific telescope. The photons traveled that far, for that long, just to finally be absorbed by that telescope. What are the chances of a photon from an object that far away hitting specifically this location?
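A back-of-the-envelope answer to that question: for an isotropically emitting source, the fraction of photons that land on a telescope is the mirror's area divided by the area of a sphere at the source's distance. The numbers below are assumptions for illustration only (a Hubble-like 2.4 m aperture and a distance of order 10^26 m):

```java
public class PhotonOdds {
    public static void main(String[] args) {
        double apertureDiameter = 2.4;  // assumed mirror diameter, metres
        double distance = 1e26;         // assumed distance, metres (order of magnitude)

        double mirrorArea = Math.PI * Math.pow(apertureDiameter / 2, 2);
        double sphereArea = 4 * Math.PI * distance * distance;

        // Fraction of the source's photons intercepted by the mirror
        double fraction = mirrorArea / sphereArea;
        System.out.printf("%.1e%n", fraction); // prints: 3.6e-53
    }
}
```

Vanishingly small per photon, but a galaxy emits so many photons that a measurable trickle still arrives, especially with a gravitational lens boosting the flux.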
Google Nexus 4 Actually Has an LTE Chip ifixit.com
30 points by crenk  4 hours ago   3 comments top 2
iloveponies 1 hour ago 1 reply      
Plausible reasons have already been suggested elsewhere[1]; in summary, chipset makers are creating dual-purpose chips with dies that support LTE, but with fuses burnt and a cheaper price tag to restrict their usage. Even if you could reverse the fuses, there is still a lack of hardware and calibration for it to be possible in the Nexus 4.

[1] http://www.reddit.com/r/technology/comments/13cods/nexus_4_t...

Zenst 6 minutes ago 0 replies      
Not like they couldn't release a model with twice the storage at the same price, along with another model with LTE working. Maybe later, maybe not.
Crowding out OpenBSD lwn.net
67 points by Xyzodiac  6 hours ago   41 comments top 12
ghshephard 5 hours ago 2 replies      
" OpenBSD simply does not have enough developers to influence the direction of projects like X.org, GNOME, or KDE. "

I, and several of my colleagues, have been running dozens of OpenBSD systems for 10+ years. In particular, OpenBSD had an elegant IPv6 firewall/failover mechanism about 5 years before Cisco finally decided to port Active/Failover to their ASA platform - so we were forced through sheer necessity to deploy OpenBSD in what was otherwise an all-Cisco shop. Further to that, OpenBSD's ability to statefully track several hundred thousand short-lived UDP sessions on inexpensive x86 systems saved us several tens of thousands of dollars over the equivalent Cisco systems.

At one point, all of HP's internal infrastructure was transitioned off of the Cisco ASA onto OpenBSD firewalls - OpenBSD is reliable, and industrial.

Needless to say, I'm a fan of OpenBSD and consider it critical to the various infrastructures that we deploy.

I've never been tempted (nor, to my knowledge, have my colleagues) to even consider installing X-Windows on an OpenBSD system. So the entire thesis of this article is beyond silly to me.

ari_elle 6 hours ago 1 reply      
I am very disappointed by this article, since, in my opinion, it clearly misrepresents the things Marc Espie said:

If you look at his original:

-) "Those vendors say "we're not in the distribution business, distribution
problems will be handled by OS vendors. We can break compatibility to
advance, and not think about it, this is not a problem." [...]

"This is a mindset we need to fight, and this has to be a grass-roots

-) "in some cases, you even have some people, who are PAID by some vendors,
agressively pushing GRATUITOUS, non compatible changes. I won't say names,
but you guys can fill the blanks in.

-) "Either you're a modern linux with pulseaudio and pam and
systemd, or you're dying.

Source: https://lwn.net/Articles/524608/

Not being a BSD guy myself, but being a fan of minimalistic Linux systems, of keeping dependencies low, and of not necessarily throwing out software that has done its job for 10+ years just to get the newest gadget in, I actually think he's right about many of the things he says.

graue 5 hours ago 1 reply      
The article suggests OpenBSD lacks support for newer hardware. Not sure if that's true today, or in certain categories (graphics cards?).

But, credit where credit is due: around 2005-6, I chose to run OpenBSD on my desktop computer at home because its support for wireless network interfaces was far and away better than Linux or any other open source OS. At that time, getting on my home network with Linux was a complete no-go, while OpenBSD worked flawlessly out of the box.

At that time there were several OpenBSD devs doing the hard, ugly work of reverse-engineering the crappy binary blobs that were accepted in mainstream Linux distros (and FreeBSD), and instead turning out reliable, open-source drivers.

Today I find Linux more practical to run on my laptop, but I really hope OpenBSD never goes anywhere. We need different approaches like theirs. (Actually, the non-availability of Flash was a big reason I switched back to Linux, and that's becoming less of an issue with HTML5...)

dfc 2 hours ago 0 replies      
I'm surprised that nobody has brought up the 5.2 song[1]. The 5.2 song is about exactly these problems with upstream and with confusing Linux for POSIX. From the "liner notes" for "Aquarela do Linux!":

"Just as the original song professed its love for Brazil, "World, you'll love my Linux" is the passionate call of an idealistic dreamer who can't bear the thought of software that will only run under Windows, and yet loves the situation with software that will only run under particular Linux distributions.
This problem has proliferated itself into the standards bodies, with Posix adopting Linuxisms ahead of any other variant of Unix.

Posix and Unix have made it where you can write reasonably portable software and have it compile and run across a multitude of platforms. Now this seems to be changing as the love for Linux drives the standards bodies into accepting everything Linux, good and bad.

We also are faced with groups writing software that only works with particular distributions of Linux. From this we get software that not only isn't very portable, but often not particularly stable. Our idealistic dreamer in the song loves running one, or more than one distribution of Linux for a particular purpose. Unfortunately, the rest of us are left with the unattractive choice of doing the same, or relying on herculean efforts to port software that is being actively developed in a way to discourage porting it to other platforms."

[1] http://openbsd.org/lyrics.html#52

zaius 5 hours ago 1 reply      
Part of the reason Linux has such a huge number of devs is because the community is welcoming and forgiving of noobs.

OpenBSD was my first Unix, and as much as I tried to contribute, I didn't last in their toxic developer community long enough to become a useful contributor.

This high bar is required to keep the system as secure as they want, but the trade-off means scaring off devs, which is the real core of the BSD/Linux divide.

dschiptsov 5 hours ago 0 replies      
OpenBSD isn't for desktop, it is a small networked server. (And what modern X environments you're talking about without nvidia/radeon drivers and accelerated OpenGL?)

I built a firewall from an old slow 1U Sun Netra "server" with OpenBSD/sparc64 and it is still in production after almost 7 years. Why? Because punks cannot hack it with Linux/x86 exploits.) Because it has enough resources to be a gateway (firewall, openvpn, secondary dns, etc.)

Well, nowadays anyone can buy a $50 box with Linux flashed inside to do some firewalling and some routing, and the art of making BSD-based gateways and servers has almost disappeared.

Nevertheless OpenBSD is a multi-platform network server, secure and stable, in the first place. Modern X11 is irrelevant.

btw, they finally implemented kernel pthreads in the last release, so, our postgres...))

thaumaturgy 5 hours ago 0 replies      
Previous discussion: http://news.ycombinator.com/item?id=4772133 (104 comments).

The lwn article here is pretty vacuous.

edit: I'm happy to see some people in this thread already coming to OpenBSD's defense. It is really really fine software, built by a team of really smart people. If you haven't donated to the project, or at least bought one of their CD sets, please do. It does help.

zokier 5 hours ago 1 reply      
Let me guess, this is about Gnome3 and systemd (and other poetteringisms)? I think that maybe dropping Gnome3 and focusing on alternate desktops would be the way to ensure survival. Trying to keep up with Gnome3 is an uphill battle. And in smaller projects BSD developers would have proportionally larger voice.
antirez 4 hours ago 0 replies      
Nonsensical article: in the game of big numbers Linux is almost irrelevant on the desktop as well, but it is winning like crazy in the server market, where BSD could compete.

So BSD is being marginalized for other reasons, not desktop software.

Zenst 5 hours ago 1 reply      
It would be nice if there were a unified driver model that OS developers could easily add a wrapper layer to, to accommodate their needs. If hardware companies had fully open source drivers then this would be less of an issue, but that is not the rosy situation we have, and in many areas we have binary blobs. Binary blobs targeted at one OS and CPU.

Now with the advent of ARM, open source drivers become more palatable and hopefully saner. More platforms for your hardware to run on and be sold on means more sales. If you open source things and let the community help, then they help and you get more win-win. The sticking point is the areas where companies want to protect IP above and beyond patent protection. There are cases where they are using others' IP in their product, which they pay to use, and that prevents them from releasing the source; at best they can do binary blobs. If we had binary blobs that you could add your own wrapper around to accommodate an OS's needs, then you would still have more platforms than not open to you.

But this really mostly comes down to fancy networking cards, graphics cards, and anything with a radio in it. There are always options, though, and with the right purchasing you can vote with your money. Support the ability to change your OS even if you don't plan on it today; think of the children :).

riffraff 3 hours ago 1 reply      
> BSD is a place where developers can experiment with different approaches to kernel design, filesystems, packaging systems, and more.

that is most certainly true, but I am wondering, has any of the work done in BSDs in recent years influenced linux development in any way?

D9u 1 hour ago 1 reply      
If Linux developers were to adhere to the POSIX standard, would compatibility be an issue?
Ingress proves once again: Google gets its users imtheirwebguy.com
25 points by mtgx  3 hours ago   3 comments top 2
munificent 2 hours ago 1 reply      
Whenever people think about Google doing stuff like this, they often get freaked out. Like, "OMG Google is using me for its own benefit!"

Well, yes, that may be true. But the important thing to keep in mind is that you aren't losing anything here. Yes, Google may be using your GPS data to improve its maps. But that doesn't mean you are having any less fun playing the game because of it.

This is, I think, one of the things that's deeply fundamental to Google's culture and really great about the company: Google is always looking for non-zero-sum solutions. Where many companies think, "What's the most I can take from my customers to make us money?", Google thinks, "How can we maximize the benefit for both us and our users?"

Look at ads, for example. Where many sites are constantly playing, "what's the most ads I can cram into my site before people start leaving?", Google is thinking "how can we make the ads as relevant as possible so that users actually want them to be there?"

muoncf 0 minutes ago 0 replies      
At the risk of incurring massive downvotes: if anyone happens to have a key for the game laying around, I would be very interested in getting one. I'm sitting on the edge of my chair, waiting for this game to go public. :D
Tutorial: How to build your own peer-to-peer chat app (like Couple) hipmob.com
33 points by kunle  5 hours ago   3 comments top 2
AYBABTME 3 hours ago 1 reply      
So my understanding is that this is not really peer-to-peer, in the sense that connections are made with a server in the middle of the two clients. Unless I got something wrong, connections are not handled strictly from one client to another.

Or am I completely lost?

kunle 3 hours ago 0 replies      
Hey everyone - Ayo from Hipmob here. We published Couple a few days ago and thought it would be helpful to show how we put it together. We've also published the source on github (https://github.com/Hipmob) so you can take a look at the server & client side bits if you want. Would love feedback at ayo@hipmob.com
Twitter's descent into the extractive 37signals.com
119 points by zdw  9 hours ago   45 comments top 13
markokocic 5 hours ago 7 replies      
I don't understand why so many people here on HN are criticizing Twitter for trying to monetize. Twitter is not a startup anymore; they have real investors and have to earn real revenue, and thus they cut everything that could potentially affect that.

Is trying to be profitable now considered a bad thing? Is selling a startup to a big company that will shut it down the only exit strategy HN praises?

I know that some people feel betrayed by Twitter for cutting something that used to be free. But what should Twitter do? Jeopardize its own business in order to make others happy? That's not how business works.

edit: s/that/they/

antirez 6 hours ago 2 replies      
It's as simple as this: it is extremely hard to do a big business on something that is trivially reproducible (like Twitter is) just because there is big momentum at some point. The outcome is one of the following three possibilities:

1) You ruin the experience because of the business model. People switch to a competitor that is annoyance-free, as you were.

2) You ask for money. People switch to a competitor that is free, as you were.

3) You invent a business model that is added value for users instead of being a problem. You win.

To make "3" work you need to be open-minded and design the business model over months, with creativity, thinking about your users. It's hard, but you could do it; unfortunately there are these guys who gave you millions and who will ruin this process. So "3" is very very very hard for Twitter IMHO.

mariusmg 4 hours ago 0 replies      
I find it ironic to see the usual "follow me on twitter" at the end of the post. Yeah, people will bitch @twitter but nothing will change.

Also, Twitter has 1500 employees. That's absurd.

Void_ 3 hours ago 0 replies      
I'm sad that a link like this makes the frontpage of Hacker News.

It contains nothing we didn't know before, it's not interesting at all and there's no value in it. David just talks shit about another company.

fierarul 7 hours ago 3 replies      
I don't understand how such a simple service is able to block 3rd-party clients. Just scrape the web site if need be!

I have a user, I can access the site via a simple text-based protocol; who cares what weird client I'm using?

smoyer 8 hours ago 2 replies      
Why are so many sites crashing Safari on my iPad1 now?
joelthelion 3 hours ago 0 replies      
Someone should build a decentralized (p2p-based?) alternative to twitter/reddit.

This way we will finally have something we can settle on, we will have multiple competing clients and no censorship or arbitrary rules, and no monetization.

Only problem is that it may be a harder technical problem than it sounds.

rdl 8 hours ago 0 replies      
At least they didn't go public and tank subsequently, and their downfall looks like it is happening before any IPO would be a risk.

Big consumer IPOs which tank hurt investor confidence across the market, and particularly in the tech IPO sector in the future.

aes256 8 hours ago 1 reply      
The business model is broken, the monetization prospects aren't there, and ultimately, Twitter is a fad.
ianstallings 4 hours ago 1 reply      
I think I'm more curious about what they need 1500 people for. That blows my mind.
ed209 8 hours ago 0 replies      
I wonder if jaiku.com could have kept twitter honest?
georgeorwell 3 hours ago 0 replies      
When did the word 'extractive' become the opposite of the word 'inclusive' anyway?
lewisflude 8 hours ago 0 replies      
Wonderfully written.
Fast JVM launching without the hassle of persistent JVMs github.com
148 points by riffraff  12 hours ago   34 comments top 14
6ren 10 hours ago 1 reply      
So obvious - trade off space for time - yet I wouldn't have thought of it... I mean, I've thought about this problem, written a persistent JVM solution, and didn't think of it. Memory is cheaper than my intuition realises.

I wonder how many other "obvious" solutions I'm missing like this?

EDIT: for the code I tried, user time is almost 3 times faster, but real time is only around 10% better... I don't understand Linux well enough to know why - anyone care to explain? EDIT: Yes, drip had already run. (I picked typical times from about 10 runs each).

   $ time java...
real 0m1.466s
user 0m1.216s
sys 0m0.180s

$ time drip...
real 0m1.378s
user 0m0.412s
sys 0m0.260s

BTW: For server-like workloads, an advantage of a persistent JVM is that it gets dramatically faster over repeated runs of the same code, as it improves its hotspot-style adaptive optimisations.
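That warmup effect is easy to see even within a single process; a toy illustration of JIT warmup (not drip's mechanism, and timings will vary by machine and VM flags):

```java
public class Warmup {
    // A small hot method for HotSpot to notice and JIT-compile
    static long work() {
        long s = 0;
        for (int i = 0; i < 5_000_000; i++) s += i % 7;
        return s;
    }

    public static void main(String[] args) {
        for (int run = 1; run <= 5; run++) {
            long t0 = System.nanoTime();
            long result = work();
            long ms = (System.nanoTime() - t0) / 1_000_000;
            System.out.println("run " + run + ": " + ms + " ms (result " + result + ")");
        }
        // Later runs are typically much faster once work() has been
        // JIT-compiled - the benefit a persistent JVM keeps around.
    }
}
```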

I really like his quickstart "standalone" installation.

WARNING "drip kill" crashed my system. The kill functions are kill_jvms and kill_jvm (https://github.com/flatland/drip/blob/develop/bin/drip). I'm using an older ubuntu 10.04 LTS.

zmmmmm 2 hours ago 0 replies      
> It keeps a fresh JVM spun up in reserve with the correct classpath and other JVM options so you can quickly connect and use it when needed

So I assume it doesn't help if you are launching JVMs very rapidly (like, scripting stuff in a tight loop). Slow JVM launching has pretty much killed languages such as Groovy for scripting for me, because once I start using them in loops things get horribly slow.

yason 10 hours ago 2 replies      
Hasn't anyone ported zygote over to desktop Linux/Windows? You just keep the preconfigured jvm process running and fork it indefinitely for each new process. You'd still suffer some overhead depending on each application but that's just expected anyway. The jvm startup overhead isn't.
sandGorgon 11 hours ago 3 replies      
   $ time ./drip -jar /home/user/research/jruby-complete-1.6.0.RC3.jar --1.9 -e 'a=1;puts a'
   0.03s user 0.03s system 4% cpu 1.282 total

   $ time java -jar /home/user/research/jruby-complete-1.6.0.RC3.jar --1.9 -e 'a=1;puts a'
   3.65s user 0.12s system 183% cpu 2.056 total

Interesting!! If it runs Rails effectively, this could be awesome for JRuby.

mitchi 9 hours ago 0 replies      
I read his explanation but I don't have a great understanding of why a JVM would need cleaning up like this. Doesn't it have a garbage collector just for that? Why does it get slower over time? The only thing I know that gets slower over time without you doing anything special is my dad's word-only Mac OS X MacBook Pro.
honr 9 hours ago 1 reply      
Interesting idea (especially, hashing based on command line options, and probably also the libraries in the classpath); I should add this to Clove (http://hovel.ca/clove : a small persistent-jvm that I wrote for *nix, with VERY fast connection time, e.g. suitable for scripts; sorry for shameless plug).
dcolgan 11 hours ago 1 reply      
This seems like a really clever solution. I remember trying to set up nailgun a while ago back when I was experimenting with Clojure because the startup time for running a program was so long, but I could never quite get it to work. I missed being able to run python myprog.py and getting instant feedback.
georgeorwell 3 hours ago 0 replies      
As a further optimization, why not memcpy an uncorrupted JVM to some backup place in memory and then when you want to 'reboot' just memcpy the image back again?
isbadawi 2 hours ago 0 replies      
olaf 9 hours ago 1 reply      
My experiences with drip were very mixed (on Ubuntu 10.04), it did not make a stable, reliable impression on me. I wouldn't recommend it for professional use.
pbiggar 9 hours ago 1 reply      
Does anyone know how to use with with lein?
wiradikusuma 10 hours ago 0 replies      
Sweet! I immediately updated my runner scripts for Scala and Groovy to use it!

EDIT: Hmm, I encountered this: "Could not connect to compilation daemon after 300 attempts." but re-running it ("scala") works.

pjmlp 10 hours ago 1 reply      
What about just using AOT compilation?
z3phyr 11 hours ago 0 replies      
Happier Clojure hacking days ahead :
Server-less Stripe push notifications with webscript.io coovtech.com
4 points by smarx  43 minutes ago   discuss
Learning to Love Volatility wsj.com
17 points by tokenadult  4 hours ago   4 comments top 2
jpdoctor 2 hours ago 1 reply      
> The Romans forced engineers to sleep under a bridge once it was completed.

This is exactly what is missing from the banking system. The mortgage madness would never have occurred if banks were forced to retain a portion of every loan (and put that portion in the first-loss position).

RickHull 3 hours ago 0 replies      
> Rule 1: Think of the economy as being more like a cat than a washing machine.

This one made me smile, thinking of the classic characterization of a development manager's primary task: herding cats.

ImageShack uploader IP addresses visible mikescoding.com
44 points by eurodance  8 hours ago   18 comments top 5
0x0 7 hours ago 2 replies      
Glancing at the source code http://mikescoding.com/imageshack/index.phps for 30 seconds, it seems the way this works is that the uploader IP address is retrieved from some XML file on the imageshack servers. It seems every image on imageshack has a corresponding metadata XML file stored at a secret location, but the algorithm to calculate this URL was exposed during the earlier pastebin leak?
ComputerGuru 7 hours ago 2 replies      
Does anyone still use imageshack for anything serious any more? I'm surprised to hear they're still around.

Here's what their compete chart looks like, for what little it's worth (login required, so screenshot instead): http://cl.ly/image/3z1v152G3r1l

thejosh 8 hours ago 2 replies      
Oh boy. Everyone who has uploaded screenshots of illegal movies onto forums is gonna be majorly shafted now.
tsheeeep 7 hours ago 0 replies      
The API for videos is described here: http://code.google.com/p/imageshackapi/wiki/YFROGxmlInfo

For images it should be hidden.

eurodance 2 hours ago 0 replies      
Show HN: Decora (beta) - Online Interior Design for Small Projects getdecora.com
14 points by neilsharma  4 hours ago   1 comment top
ebaum 4 hours ago 0 replies      
very interesting idea
Security Incident on FreeBSD Infrastructure freebsd.org
107 points by dous  15 hours ago   31 comments top 9
meaty 6 hours ago 1 reply      
I have a couple of FreeBSD machines which pulled binary packages between those dates. I'm not overly worried. The packages have been removed and installed again from ports after a fresh portsnap dump and the systems have been verified with "freebsd-update IDS" against known good signatures. Any modified files were manually checked. I use MAC on each machine and pf up front on firewalls so I know what is going in and out as well.

The fact that these mechanisms are available is the reason I use such a system.

Also, if you consider any problems like this happening to a closed source vendor, you may never know it's happened. And don't tell me they don't do it as I've worked for a couple of companies that felt that burying security fuck ups was acceptable practice. It's why I don't work for them any more.

Zenst 14 hours ago 0 replies      
This is how you tell people about a security breach: inform them as soon as you know, with what you know, and assume the worst in your approach to restoring things.

Much respect; this defines the word professional for many.

lifeguard 10 hours ago 0 replies      
Take note:

"We unfortunately cannot guarantee the integrity of any packages available for installation between 19th September 2012 and 11th November 2012, or of any ports compiled from trees obtained via any means other than through svn.freebsd.org or one of its mirrors. Although we have no evidence to suggest any tampering took place and believe such interference is unlikely, we have to recommend you consider reinstalling any machine from scratch, using trusted sources."

0x0 14 hours ago 3 replies      
Interesting choice that some machines will not be reinstalled, only "thoroughly audited".
lhm 13 hours ago 3 replies      
I'm a bit surprised that the affected machines were powered off instead of just disconnected. Would that not make an audit more complicated?
darkf 14 hours ago 0 replies      
FreeBSD reports are always extremely professional, I love it.
chmike 3 hours ago 0 replies      
Excuse the naive question, but how does one detect an intrusion when using key-pair authentication?
bulibuta 10 hours ago 1 reply      
Scary stuff.

Please use passwords for your keys and allow key access only to a small set of known IP addresses.

Also do share other security techniques you're using besides the ones above.

DrCatbox 14 hours ago 2 replies      
They use SVN still?
What Happens When A Twitter Client Hits The Token Limit marco.org
478 points by DaNmarner  1 day ago   177 comments top 37
jmilloy 1 day ago 6 replies      
Twitter: Don't build core-feature Twitter clients, we probably won't approve them.

Atta: I built a core-feature Twitter client!

Twitter: Sorry, we're not approving your core-feature Twitter client.

Who is surprised? How is this news? Were you expecting them to not apply their own rules? It seems like a clear-cut case, and concluding "don't build anything for Twitter" is just throwing a temper tantrum.

hnriot 1 day ago 14 replies      
*The effective rule, therefore, is even simpler: “Don't build anything for Twitter.”

Exactly, that's precisely the message they wanted you to have.

What's wrong with using twitter.com on Windows 8? Do we really need a special client just for Windows 8? This is exactly what the web is supposed to do.

I don't get why anyone is surprised; it's Twitter's ecosystem, and if you're duplicating their functionality then it's perfectly reasonable of them not to make any special exemption. If you wrote a client that exposed Twitter to new markets or something that added value to Twitter, then they'd likely give you a higher limit, but that's not the case...

nollidge 1 day ago 2 replies      
Sort of amused by the wording in this line:

> It does not appear that your service addresses an area that our current or future products do not already serve.

How can your future product already serve an area?

jusben1369 1 day ago 3 replies      
I've watched this from a distance with interest. Developers have a special place within the overall Internet ecosystem. As a non-developer, everything Twitter has said and done in the last 12 months or more tells me "We don't need a healthy 3rd party ecosystem and we don't want one. Hobbyists can stay, filling odd niche requirements, and here's our cap. Everyone else though? Sorry." I have no emotion around this as I'm not a developer. I feel as though many developers can't wrap their minds around this concept of not being wanted. They're used to being very wanted initially, and then at best still wanted but with a few controlling parameters around activity (see "all App Store/Developer discussions"). I suspect a slightly over-exaggerated sense of self-importance is why it's taken so long for the obvious to set in. Perhaps that's why Marco only just connected the dots? (As usual I'm not talking about all developers - I've seen many who got it right from the get-go.)
Pewpewarrows 1 day ago 2 replies      
If my salary depended on the Twitter API right now I'd be scared shitless.
quotemstr 1 day ago 5 replies      
You know, back before APIs were all the rage, people wrote clients for web services by scraping. Twitter really wouldn't be able to do anything about a Twitter client that pretended to be IE9.
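A minimal sketch of what such a scraper-style client boils down to: send a browser-like User-Agent and parse the returned HTML yourself. The markup and the `extractTweets` helper below are hypothetical, not Twitter's real page structure:

```javascript
// Headers that make a request look like it comes from IE9.
const browserHeaders = {
  "User-Agent":
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
};

// A real client would fetch a page with these headers, e.g.:
//   fetch("https://twitter.com/someuser", { headers: browserHeaders })
// Parsing the body is the fragile part. This regex and class name are
// invented for illustration; real scraping needs a proper HTML parser.
function extractTweets(html) {
  const re = /<p class="tweet-text">(.*?)<\/p>/g;
  return [...html.matchAll(re)].map((m) => m[1]);
}

const sample =
  '<p class="tweet-text">hello</p><p class="tweet-text">world</p>';
console.log(extractTweets(sample)); // [ 'hello', 'world' ]
```

The obvious downside is that a markup change on Twitter's side breaks the client overnight, which APIs were supposed to prevent.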
lancewiggs 1 day ago 1 reply      
It's so sad to watch such a lively lovely service remove the fun by destroying the values that made it great.
Meanwhile we are sitting here saying "charge us money, you fools", and they are deaf to us.
Twitter: your online site is usability hell, your own clients are dated and painful. Above all we have one question: why? Why are you intent on this path of foolishness based on placing customers and developers last?
droithomme 1 day ago 0 replies      
> Now we know: “work with us directly” means “die”.

Very good summary by Marco. It's really annoying when these companies have secret policies that have to be discovered rather than are clearly stated. It just wastes people's time to try to discover the policy, having to do costly time consuming experiments to find out what the policy is as if this was an unknown branch of particle physics.

taude 1 day ago 0 replies      
While I've disagreed a lot with a lot of Marco's blog posts, gotta say he's totally right on this one. Actually this is bigger than Twitter, as any new platform that comes out that wants Devs to develop for their API needs to be treated with a certain cynicism: if the platform gets big enough, they'll likely cut you out.

I wonder if this trend between FB, Twitter, etc. is going to ruin the ability for new companies and new platforms to attract free development by third parties?

uptown 1 day ago 2 replies      
Aside from pissing off Twitter, and possibly getting a cease and desist, what's preventing developers from building a translator that sits between Twitter's web layer and their native application client? Couldn't something be developed that loads Twitter into a hidden webview that's locally scraped for the purpose of re-display however the developer pleases on their client? This implementation wouldn't require the tokens, and wouldn't be constrained by their arbitrary limits.

But I suppose they'd just wind up getting sued.

javajosh 1 day ago 1 reply      
Goddamn, I've never seen a clearer example of the colloquial term "butt hurt". Twitter is a company, they built something, they support it, they have the right to control it, and they have arbitrary rights over it. More tellingly, they have a very good point.

It reminds me of the craigslist haters, and my response to them. I don't hate craigslist for stopping third parties from using their data because, frankly, it hurts their brand if "druggycriminalroommates.com" starts syndicating their apartment ads.

That said, don't think that I'm some sort of right-wing capitalist fascist. No, I don't think everything should be privately owned and controlled. There are some things that should remain public: internet infrastructure being one of them. My personal belief is that the only real egalitarian, open system is one that relies on that infrastructure, and ONLY on that infrastructure. This vision requires that people either a) run their own servers, or b) pay money to someone else to run servers (or parts of servers). (Other possibilities for payment exist, of course, such as bartering information for service, etc.)

I mean, twitter is free to control, the OP is free to complain about that control, but the solution presented (don't develop anything for twitter) is ridiculous and immature.

koide 1 day ago 1 reply      
I wonder why something like "If you want more than 100k usernames, either pay us $x per username or use our advertisement API and put whatever tweets/ads we push where we tell you to" is not an option.

It would be refreshingly honest and for some people/clients it could work, plus it could earn them some of the needed cash.

ianstallings 1 day ago 1 reply      
Wait, you mean you can't just wrap someone's service in a fancy UI and then sell it as your own? OH THE HUMANITY!

What happened to innovation? All I see these days is a derivative of a derivative of a derivative. Hell even the memes these days are derived from other memes.

zaidf 1 day ago 0 replies      
The problem with core-twitter limitation is that twitter's own product sucks and has basic features missing or implemented very poorly...years after launch.

This is just shitty all around. Sometimes I wish I could buy 1,000 twitter tokens for some price and use it in some "core-feature" 3rd party app because twitter sucks at implementing the very core features.

keithpeter 1 day ago 0 replies      
When you have to find 1500 monthly pay checks, I suppose you have to get the money somehow.


Found via


Seems a sensible position to me (old guy, non-coder and won't use twitter or fb).

JohnTHaller 1 day ago 2 replies      
If you build a company around someone else's free API, you're either a high stakes gambler or a moron.
ChuckMcM 1 day ago 0 replies      
It is surprising that their response does not include a call to action from their business development team. Why isn't there a 'click here to sign up to buy 10,000,000 tokens' link in that email? Now granted, the result might be the same (people not developing for their API), but at least they could get valuable pricing information as part of the transaction.
TheCapn 1 day ago 0 replies      
This is the inherent problem with building your product with dependencies on other products. You are tied to their system in such a way that your existence relies on the faith that they keep doing what they do and do not change to a system that blocks you or inhibits your functions.

I see the need/desire for hackers to make things useful and unique in the way they envision, but you will never hear me apologize for my remarks on this subject. If you are building your business or product on someone else's, and are not some form of contractual partner, you can be kicked in the ass down the line, and they owe you no accountability.

rsaarelm 1 day ago 0 replies      
I liked the Internet better when we had things like irc and news to talk with that weren't controlled by any single corporate entity.
dpeck 21 hours ago 0 replies      
I'm amazed we haven't seen some developer say screw it and build things using Twitter's own keys. There have been multiple instances of them being published over the last couple of years, and I'm having trouble seeing what incentive a dev has not to just use them.

If you're working against Twitter's interests already, then why not go for broke?

rubynerd 1 day ago 1 reply      
I agree with the "Don't build anything for twitter" motto (and it would look good on a t-shirt), but part of me wonders if twitter could get by charging 25 cents a token past 100k, so developing apps is still possible, and twitter still gets its slice of the pie.

Although, it still kills the advertising cash cow.

SoftwareMaven 1 day ago 0 replies      
Not exactly. Twitter doesn't want you to build anything that will take a single eyeball from them. There are still services that can be built that don't fall in that category, though, such as sentiment analysis.

At least, for now. Twitter has shown they are willing to be hostile towards their developers. Even if I fit in the "don't take their eyeballs" category, I wouldn't build on their platform because I don't trust them.

yaix 1 day ago 0 replies      
> that our [...] future products do not already serve


k-mcgrady 1 day ago 0 replies      
I still don't understand why everyone is getting angry about this. It doesn't affect clients made before they introduced the new rule. The only people who get hurt are the idiots who ignore twitter and build a business which uses their API in a way they have said not to.
TazeTSchnitzel 14 hours ago 0 replies      
I'm surprised nobody has mentioned yet that Windows 8 has built-in Twitter integration that works very well.

Why do you need another client? Just go to the People Hub and click What's new. Need to reply or retweet? The buttons are right below the message. Need to see mentions or replies? Click Notifications.

ishansharma 1 day ago 5 replies      
This is really idiotic. What are they trying to do? Make sure that all the top users flock to App.net or somewhere else?
why-el 14 hours ago 1 reply      
Does anybody know of any Facebook apps that sort of do the same thing, i.e. recreate the Facebook newsfeed experience for users?
nileshbhojani 12 hours ago 0 replies      
Tweetro should be happy that Twitter is not blaming them for using their platform in a way they prohibit. They want to use what Twitter has worked hard to build, make easy money, and then also want Twitter to spend extra to help them do it (by allowing more API calls etc) - why don't they build something original?
babesh 16 hours ago 0 replies      
How would you compare the Twitter ecosystem to how the Facebook ecosystem is doing lately? Seems that Facebook sign on/identity is doing well. Seems that the Facebook hosted apps are increasingly less relevant than apps just getting sign on and possibly newsfeed flow? Twitter ecosystem seems pretty much destroyed. Tweeting and Facebook newsfeed seem to be doing fine.
jeremysmyth 1 day ago 0 replies      
Now that it has the userbase, it's milking its situation. The talk in the beginning was "How are they gonna make money out of this?" and the answer is gradually unfolding. Kudos to them for starting the way they did, shame on them for closing out the very things that brought them to where they are.

Late arrivals like identi.ca might not be as polished, but they offer a similar product with open APIs. Being based on open and federated standards like status.net, it's extremely unlikely that identi.ca will ever go on the ego trip Twitter did; in fact it's much more likely to become more open over time, and more useful even if other competition arises. What it doesn't have is users and content, and we (consumers as well as devs) can help with that by heading over there and giving them both.

kalleboo 1 day ago 1 reply      
I'd like to see what would happen if a third-party client just started to use the API token extracted from the official client. What could they do?
myWordBiLLY 11 hours ago 0 replies      
So, based on their guidelines, if we made and sold Twitter ID BiLLYS (custom-made wooden signs for the home or office, and BTW a cool present), would Twitter have a problem with this?
marblar 11 hours ago 0 replies      
It seems to me this is an issue of pricing. Reach 100,000 tokens with demand to spare? Congratulations, you left money on the table. Charge more next time.
camus 1 day ago 0 replies      
make the twitter api a paid api, and everything will be clearer.

Businesses that rely on twitter will have a contractual relationship with twitter , meaning less uncertainty ,less competition ( since you'll have to pay upfront to access twitter's data , less clients ).

That's the solution that makes sense , instead of this half baked situation twitter api developers are in.

crististm 1 day ago 0 replies      
I don't understand why people bend over backward, and even provide the grease, for FB, Twitter, the App Store & co?
fidz 1 day ago 0 replies      
If developers are prohibited from developing apps with the Twitter API, then why did they build the API?
smirksirlot 1 day ago 1 reply      
I think some people might call this "biting the hand that feeds you". Or at least the hand that got you started.
Silk - Interactive generative art weavesilk.com
170 points by bawllz  21 hours ago   62 comments top 37
jamesbritt 10 hours ago 0 replies      
Some amazing generative art, done with Processing, here: http://complexification.net/gallery/

You can grab the source for these and play around with them, tweaking values to see how things work.

One of my favorites is Sand Traveler. The underlying algorithm is relatively simple, but the results are stunning.

These are presented as Java applets, but Processing 2.0 now lets you export code to JavaScript (processing.js).

It also exports to Android apk files, so you can build Android apps with Processing.

ChuckMcM 18 hours ago 5 replies      
Ok this was mine : http://new.weavesilk.com/?czgl

Would love to see a retina iPad version of this.

hcarvalhoalves 19 hours ago 0 replies      
Very interesting. It's random, but not enough to keep you from controlling the brush to create natural images.

Trunk on fire:

Flower: http://new.weavesilk.com/?cz7z

gpmcadam 37 minutes ago 0 replies      
Probably obvious. A nuclear explosion: http://new.weavesilk.com/?dayj
cwilson 15 hours ago 1 reply      
I've literally had this on repeat for almost 3 hours now: http://new.weavesilk.com/?d06a

So cool.

anigbrowl 17 hours ago 1 reply      
Quite impressed with this; only thing 'wrong' with it is that the gray/black color should erase rather than add. Thanks for the source link Void_.

I do a fair bit of generative music stuff, so I'm impressed with that part as much as the pretty colors: http://new.weavesilk.com/?czw6

jQueryIsAwesome 19 hours ago 1 reply      
This app is consuming my soul: http://new.weavesilk.com/?czkk

And Ctrl+Z would be nice.

Void_ 18 hours ago 1 reply      
This is implemented with canvas. Cleaned source: http://pastie.org/5391134
phate 1 hour ago 0 replies      
jff 16 hours ago 2 replies      

Simply doodling, when out pops the angel of death. Lovely :)

geuis 4 hours ago 2 replies      
Works great on the iPhone 5. The ad is a bit annoying in that it covers the entire interface.
zoba 11 hours ago 1 reply      
These make a good 'just because' mini gift :)

Here is a heart: http://new.weavesilk.com/?d38q

Zolomon 17 hours ago 1 reply      
I am sorry, but what am I missing - where is the interaction? I can't interact with the art I create. What is the difference between this and Photoshop (or any other image editor for example) except for it being browser based and playing some sounds in the background?

It is very well done, and what you can make with it is very impressive. Good work!

fumar 11 hours ago 0 replies      
pwenzel 11 hours ago 1 reply      
pnewman2 18 hours ago 0 replies      
My attempt at Ringo: http://new.weavesilk.com/?czix
hendi_ 18 hours ago 0 replies      
twodayslate 19 hours ago 1 reply      
This is pretty old but still pretty fun. http://new.weavesilk.com/?cz33
pirateking 13 hours ago 0 replies      

Very awesome. Needs undo!

yonilevy 9 hours ago 0 replies      
With the symmetry setting disabled, this is an interesting way of freestyle sketching (more so with a Wacom) http://new.weavesilk.com/?d4e6
casinaroyale 13 hours ago 0 replies      
Ha, here is mine. Fire and ice. http://new.weavesilk.com/?d1q5
spyder 6 hours ago 0 replies      
one more:

(needs 1920x1200 or above)

ofca 12 hours ago 0 replies      
Super! I'd love the option to change the color of the background. Any way to do that?
ohashi 13 hours ago 0 replies      
Everytime I play with Silk it feels wildly beautiful. I don't know what it is about it, but I love it.
SenorWilson 15 hours ago 0 replies      
Look, I made a cat http://i.imgur.com/KK9fk.png
hankScorpi0 18 hours ago 1 reply      
Interesting idea - I had implemented a similar symmetry concept in one of my ios apps (http://gravitypaint.com - if you want to check it out).

Why not extend it to also cover radial symmetry - should be easy to add and you can let the user set the angle etc...

Kiro 17 hours ago 0 replies      
Works really well on the Android browser, where things like this usually lag out (Galaxy S3). Smooth.
fidz 18 hours ago 0 replies      
Could someone list other apps that can make "beautiful" graphics like this app? (Or at least, something that could be done by a non-artist; no Photoshop / pattern / texture needed to generate the graphic.)
alan-saul 14 hours ago 0 replies      
This is superb; thanks for posting, I must have missed it the previous times. What is the ambient music playing in the background?
likeclockwork 19 hours ago 0 replies      
Pretty cool, reminds me of this:
Dilan 17 hours ago 0 replies      
It would be great if you could easily set the picture as your desktop background.
ghostblog 9 hours ago 0 replies      
This is so dumb. Why is this art, because it's pretty? God listen to the music. So cheesy. It's the "Alienware" aesthetic. Nerd
Techasura 16 hours ago 0 replies      
I would keep this background music running during my work; it makes me feel refreshed.
apha 14 hours ago 0 replies      
I think I made a demonic, dual-wielding Samurai: http://new.weavesilk.com/?d119

Or it may just be a mess.

earroway 9 hours ago 0 replies      
nmb 15 hours ago 0 replies      
hi yuri! :)
Show HN: I'm 15 years old, and I released my first NPM module: Wizardry github.com
88 points by remixz  10 hours ago   79 comments top 27
SoftwareMaven 4 hours ago 0 replies      
Well done. One question, though: Shouldn't the commands be a list instead of an object? I would think you would want to be sure to keep order for image processing. For instance, I don't want my image down-sized for the web until after all the processing is done to it.

The ECMAScript spec leaves object attribute iteration order undefined (though it appears most implementations iterate in the order attributes are added).
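The ordering concern can be sketched in plain JavaScript; the `op`/`args` shape below is illustrative, not Wizardry's actual API:

```javascript
// Object form: key order is not something the spec historically
// guaranteed, so "resize before sharpen" is only a hope.
const asObject = { sharpen: [2], resize: [200, 200] };

// Array form: order is explicit and guaranteed.
const asList = [
  { op: "resize", args: [200, 200] },
  { op: "sharpen", args: [2] },
];

// A runner can then apply steps strictly in sequence.
function plan(steps) {
  return steps.map(({ op, args }) => `${op}(${args.join(", ")})`);
}

console.log(plan(asList)); // [ 'resize(200, 200)', 'sharpen(2)' ]
```

With the array form, "down-size for the web last" is expressible by construction rather than by implementation accident.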

orangethirty 8 hours ago 2 replies      
On behalf of all the idiots that decided to rain on your parade, I would like to apologize. All of you should be ashamed. How dare you insult this young person whose only wish is to share his work with us?

In terms of the module, can't really comment much on your code. It looks clean and well written. I'll try and run it during my free time to get a good feel for it. Well done. Now go back and build something bigger.

PS. Shoot me an email (in profile). You might enjoy hanging out with the Nuuton team.

ComputerGuru 9 hours ago 5 replies      
Sorry, I don't see that your age is relevant.
remixz 8 hours ago 5 replies      
Hi all! Thanks for the great feedback. I did realize the title might have been controversial, but I have a small argument for it.

A few weeks ago, there was the 14 year old who posted their rad iPhone game on HN. Their post did inspire me to post my own work. I have a tiny hope that someone else who's doing something like I am will see this and post their own work. I doubt it, but you never know! :D

bashzor 5 hours ago 5 replies      
Oh a 15 year old, how cute. So, what about that module? Why is it special enough to be posted to HN?

The age is not relevant. Imagine someone of 36 made this module and included his age. If he had gotten into programming at 35 and this was some kick-ass thing, then yeah that would be kinda neat. Now you could have been programming for five years or so, which gives you a big advantage.

If you had been twelve or so, then I'd say it rocks. But fifteen is a fine age to develop something.

I don't mean to discourage you at all, just let the product speak and not your age.

bdcravens 7 hours ago 2 replies      
Many didn't like posting the age in the title. Yeah, no one ever words their HN submissions to be inflammatory and get to front page, right?

I'll take a million hackers showing their projects and trying to win brownie points with their age than a single freakin smart phone troll blog post any day of the week.

smoyer 8 hours ago 0 replies      
Hmmm ... when I was 15, I spent all my time playing ultimate frisbee and riding my bicycle. Except when I was poring over the schematics and ROM code for the 1802-based COSMAC Elf.
geuis 6 hours ago 1 reply      
Guess I'll leave one of the few comments about the project itself.

I'm going to evaluate this when I get home. If it works as described, I think I'll be integrating this into an imaging service we're building. The interface looks great.

Keep coding man. This looks really good.

nkohari 8 hours ago 1 reply      
Already at 15 you've done more than (I would wager) most of the people on this site -- you've shipped open source.

Congratulations, and ignore the haters. Remember that it doesn't matter what you think or say, it matters what you do. Creating software is more important than talking about it.

lewisflude 8 hours ago 2 replies      
Here we go with another one of these "I'm (under 18) and I did x" posts and an equal amount of people complaining why that isn't relevant.

But yeah, this is a really cool little module, congrats.

chill1 9 hours ago 1 reply      
"Wizardry is a task-based library for GraphicsMagick / ImageMagick that focuses on simplicity and getting one thing done right: processing images."

Why I like these words: It's not enough to be able to write code, or even to package up a module for a framework. Knowing that you can't do everything, and that you should not try to do everything, with a single module, is a promising sign in and of itself. Having a clear goal to reach makes getting there all the more possible.

josephagoss 8 hours ago 1 reply      
Whoa quite a few comments here seem a bit negative, remixz you shipped something, that is good, power on! :)
Skywing 5 hours ago 0 replies      
One thing to notice about this module is that it spawns a sub-process out to ImageMagick itself. I'm not saying this is good or bad, I'm just pointing it out. There are also other modules that wrap the ImageMagick libraries themselves and do not spawn sub-processes. Just be mindful of the different implementations.
gtmtg 3 hours ago 0 replies      
Looks great - nice job!

I'm 13 and I've created a node.js command line app (http://gtmtg.github.com/view-test) and an iOS control (http://gtmtg.github.com/MGDrawingSlate) among other things, but none of them are nearly this advanced...

Again - looks really cool...

wilfra 2 hours ago 0 replies      
You didn't need to give your age, it was obvious from the name you chose for the module ;)
homakov 8 hours ago 1 reply      
I am 19 years old and I don't give a fuck
mkr-hn 8 hours ago 0 replies      
This thread is an early lesson in how age often makes people focus too much on little things and miss what's important.
kmfrk 8 hours ago 0 replies      
The font-weight on your link is very close to being too small to be legible (in Opera on Windows - hello to edge-case asshats like me!)

And my vision is pretty decent.

I know you're probably using a default or something, but it's really bothersome to someone like me to read it.

Great job on the project itself, though.

KaoruAoiShiho 8 hours ago 2 replies      
Why this instead of gm?


measure2xcut1x 3 hours ago 0 replies      
ImageMagick FTW Go buddy go!
shaunxcode 7 hours ago 0 replies      
This is rad, keep it up! Seriously when others detract remember minor threat: "what the fuck have you done?"
vaidik 7 hours ago 0 replies      
Good one! Seriously! The work you have done, for your age, is tremendous. I certainly was not able to do anything even close to it when I was 15. So I'd say, hats off!!
joshbrody 5 hours ago 0 replies      
You've got one hell of a future.
robertli 6 hours ago 0 replies      
I'm 12 and what is this
tonywok 8 hours ago 0 replies      
That's rad man. Haters gonna hate. Keep shippin'
webmech 9 hours ago 0 replies      
Kids these days lol
ryanbraganza 6 hours ago 1 reply      
You know how I know you're 15? Light grey text on a white background.

I remember when I was younger and discovering ImageMagick - a perennial favourite for building little tools on top of.

Former Google lawyer to lead Silicon Valley patent office gigaom.com
27 points by kjhughes  8 hours ago   3 comments top 2
benwerd 7 hours ago 1 reply      
Presumably all that'll happen is that firms will send their erroneous or overreaching patents elsewhere? Or is there a process to avoid this?
CodeCube 5 hours ago 0 replies      
I wonder how much those patent examiners in particular will use http://patents.stackexchange.com/
Twitter is Pivoting daltoncaldwell.com
290 points by olivercameron  1 day ago   116 comments top 29
nemesisj 1 day ago 9 replies      
I hate to be that (negative) guy, but I'm starting to get really tired of Dalton constantly pissing all over twitter. Particularly because he's competing against them. I'm an App.net backer, and I think it's cool what he's doing, but FFS, let someone else carry the water, if it even needs carrying at all. This all just feels really petty and whiney, particularly when you're already at work solving the problem.
danso 1 day ago 4 replies      
Of all the rhetorical points that Dalton makes, this one was the most damning for me:

>>> His announcement was formatted as a direct reply to the official Twitter account.

This means the announcement would only be seen by his followers that also follow the official Twitter account. I don't get the feeling he did this on purpose. An experienced Twitter user would know to add a “.” at the beginning of his message so that his followers would see it.

It seems a bit pedantic. But when top-down leaders don't get even the basic details of their operations right, then there are a lot of other big-picture things that they seem to get wrong as well. In the case of MySpace's crushing defeat by Facebook, the difference really was in the details, not in the overall ambitions of the two companies.

zaidf 1 day ago 5 replies      
An experienced Twitter user would know to add a “.” at the beginning of his message so that his followers would see it.

Signed up for twitter on the day it launched(I think) and did not know that. Twitter is a painful product to use. It isn't made for humans.

Twitter doesn't have to show a username in tweets; they can easily translate it to the name.

Twitter doesn't have to require each reply to appear like an out of context note. They can easily group them as complete conversations(like facebook allowing comments).

Twitter doesn't have to make lists so hard to use. They can easily make it very similar to facebook(except on twitter there is much more need to use this since they do not filter out tweets).

Twitter doesn't have to insist on this 140-char limit that looks funnier with every passing day and results in butchered communication.

Twitter doesn't have to subtract 100 characters if I post a URL that is 100 characters; it could automagically shorten it or not count against the 140 at all. Instead, I am forced to manually use bit.ly to shorten it.
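
The alternative suggested here - counting each link at a fixed shortened length instead of its raw length - is simple to sketch (the 20-character wrapped length is an assumption for illustration, not Twitter's actual value):

```python
import re

URL_RE = re.compile(r"https?://\S+")
WRAPPED_LEN = 20  # assumed fixed length of an auto-shortened link

def effective_length(tweet: str) -> int:
    """Character count if every URL were auto-shortened to a
    fixed-length link rather than counted at its raw length."""
    return len(URL_RE.sub("x" * WRAPPED_LEN, tweet))

def fits(tweet: str, limit: int = 140) -> bool:
    """Would the tweet fit under the limit with wrapped links?"""
    return effective_length(tweet) <= limit
```

Under this scheme a 119-character URL costs only 20 characters, so "see &lt;long url&gt;" counts as 24 characters, not 123.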

Twitter doesn't have to show me a stream filled with url strings; it could easily show the title of the page or something similar to facebook.

Dear Twitter, PLEASE stop this stubbornness in your product philosophy. It is hurting your users and it is hurting Twitter Corporation.

jmduke 1 day ago 0 replies      
For those unaware: Dalton Caldwell is the creator of app.net, a direct competitor of Twitter (with a subscription fee model instead of an advertising/data model).

I think it's important to read the post with that context in mind.

ChuckMcM 1 day ago 1 reply      
Ok, third comment here, guess the previous two comments (nemesisj and jamesmoss) pretty much define 'polarizing' :-)

Dalton raises some interesting questions. What exactly is Twitter? And perhaps more importantly, what does Twitter think it can become? The churn in API restrictions, usage and messages certainly can be confusing.

sethbannon 1 day ago 1 reply      
This is how I use Twitter now -- to consume news and discuss said news with my social circles. I certainly wouldn't be upset if Twitter took this as the core use case of the product.
jonathanjaeger 2 hours ago 0 replies      
Dalton was just on This Week in Startups. Great to hear more in-depth insights on iMeem, social, ad sales, and app.net: http://www.youtube.com/watch?v=j6ZsIlfzSBU

While I don't necessarily agree with his wording in every blog post, this is an awesome interview.

wmf 1 day ago 2 replies      
I get the impression that social media douchebags already pivoted Twitter in that direction a few years ago, and now Twitter is just confirming it. The sad part IMO is that it seems like they could fix their developer relations problems and their business model by charging to send relevant tweets to existing followers, not unrelated ads. Of course, I don't even use Twitter so you probably shouldn't ask me.
dm8 1 day ago 0 replies      
I have to disagree with the author on some of the points. And it looks like he doesn't like the fact that FB and Twitter are part media, part software companies.

"a media company writing software that is optimized for mostly passive users interested in a media and entertainment filter."

What's wrong with being a media company? We all agree that software is eating the world, so why is it bad if Twitter is "disrupting" real-time media consumption? I loved Twitter's Olympics coverage. Even though I was thousands of miles away from London, I could feel the excitement.

Same for Hurricane Sandy. It was so useful to get the latest news updates in such a terrible time (for everyone involved). I was caught in another disaster a few years back, and the biggest problem was not getting important news updates from credible agencies/people. Twitter solved that problem for Hurricane Sandy coverage.

Twitter/FB are becoming the "breaking news" source for every kind of news. Be it earthquakes, celebrity gossip, World Cups, Olympics or new product launches!

lordlarm 1 day ago 2 replies      
I have actually used and am still using Twitter. It is a horrendous user experience - but I know how to fix it.

My main problem is my diverse interest in different subjects and Twitter's current inability to let me organize and follow what I like.

I'm following approx. 200 people, divided roughly 30% technology, 30% cycling and 30% friends/locals. For me, it would be impossible to imagine following more than 250 or 300 people with today's interface - because they are all thrown in together, and reading the raw feed is a cluttered mess of subjects.

You would think, considering their main goal is to get people following their interests, that they would get this part of the interface right. But on the contrary - it is what is worst about Twitter.

The solution (and problem) I'm hinting at is of course lists, and as Facebook, G+ and virtually every other social network have already found out: people like to organize interests, people and subjects into different "buckets". Facebook had a lame interface for this for many years, but does a better job now.

My point is, as an experienced twitter user, I know where the pains are and my first day in office I would make sure that the accessibility of lists were greatly improved.

The second day I would use to fix a decent conversation view and comprehensible reply scheme.

EDIT: To point out the inaccessibility of lists today, here is the general way to read up on a subject: tool-icon > lists > choose list. That's 2 clicks too many.
You could also use the shortcut "gl" and save 2 clicks, but still, it is 1 request too many and way too complex for the regular user.

nsns 1 day ago 0 replies      
Perhaps a true commercial social network is a contradiction in terms; the friction from the user=product formula necessarily becomes unbearable with time. Perhaps an open source, non-profit solution would have suited it much better.
jusben1369 22 hours ago 0 replies      
So just reading the post I got a little confused (and then I found out he's the creator of App.net and wondered if his passion isn't clouding his best judgement). Firstly, the quote: "Given that most of their traffic comes from us, if we build adequate if not superior competitors, I think we ought to be able to match them if not exceed them." I just wasn't quite sure what this quote meant - I certainly didn't assume it meant we're going to block them, though. Maybe they did block them, but shouldn't the correct quote then end with "I think we ought to be able to block them"?

He's been using it for a long time to consume news and information. Ok, makes sense. Yet this is apparently objectionable, or at the very least damning. I think it's damning because he says he's a consumer, not a producer, of tweets. Is this news to anyone?

"Admit failure and give up on trying to get normal people to tweet" The balance in Twitter's tweet creation and consumption happened organically. Kudos to Twitter for allowing it to happen vs forcing unnatural acts ("You should tweet more!")? I don't look too closely, but it seems like it's been an open secret for 2+ years that 80% of all tweets come from 5-10% of users or whatever.

I guess I wouldn't call it a pivot if Twitter is focusing heavily on the 10% that do 90% of the tweeting vs trying to get the other 90% to tweet more.

lotso 1 day ago 0 replies      
"What is post-pivot Twitter supposed to look like?

The best way to consume “news and information”.
Important content is mostly created by media companies, whether they are blogs, television, radio or movies.

The main reason that “normal users” would write messages is as a backchannel to discuss media events such as the Olympics, Election Coverage, or a new television show. “Normal user” tweets are something akin to Facebook comments.

Even though this backchannel exists, it's not expected that brands and celebrities are supposed to pay much attention to everything that is said. Chernin himself hasn't replied to the numerous replies he received."

That's funny because that's how I used Twitter from the beginning (5 years ago).

jamesmoss 1 day ago 0 replies      
Although I'm fairly indifferent about app.net, I could read Dalton Caldwell's blog posts all day. He's a clever guy who puts his points across well.
state 1 day ago 1 reply      
To me, what's implied by this piece (since it's written by Dalton) is that Twitter is leaving behind an opportunity that he plans to take advantage of. What remains to be seen is whether the thing they're leaving behind can be fully realized.

Do people really need a short-form messaging platform for communication?

cwp 1 day ago 0 replies      
Interesting, but it's reading an awful lot into a single tweet.
pbreit 23 hours ago 0 replies      
The one point that really did not resonate with me is that companies will take over. Currently, companies are by far the worst tweeters and are totally dominated by individuals, whether they be celebrities, experts, citizen journalists, etc. Even the good tweeters who sort of tweet under a company umbrella define themselves more than their companies. It seems to me that Twitter will remain the anti-company network, since it provides so much advantage to the collection of individuals.
jsilence 14 hours ago 0 replies      
So are we finally moving to our own status.net instances?
michaelkscott 21 hours ago 0 replies      
If you want more insights about the things discussed here, there's an interview that Mark Suster did with Joel Spolsky last year where they talk about the "API wars":


They cover everything from the early days of Excel's API to the downfall of MySpace and the rise of YouTube and Photobucket, and how Twitter took off. It's worth your time.

winstonian 22 hours ago 0 replies      
Imagine a Hacker News without Dalton Caldwell or 37 Signals.... Mmmmmmmm
saumil07 18 hours ago 0 replies      
1. I like, support and (pay for) App.net
2. One tweet, by one famous media guy (admittedly a board member, yes, but really only known for being a great media executive) does not a strategy make.
3. The title "is pivoting" is extremely assertive and not really backed up by, well, a preponderance of facts or data.
ghostblog 9 hours ago 0 replies      
"Admit failure and give up on trying to get normal people to tweet."

What are you saying? Fourteen year olds and ethnic minorities use this website. How normal can you get?

"An experienced Twitter user would know to add a “.” at the beginning of his message"

Thanks for the protip, Dalton.

iomike 1 day ago 0 replies      
Hard to say you're a "long time user" if you've only been on since 2010. I've been on 6 years; that line made me laugh.
diedsj 1 day ago 0 replies      
I think it's outrageous that a member of the board of directors of Twitter has absolutely no idea what he's doing on Twitter.
I have no knowledge of Dalton's other business than this blogpost, and therefore it doesn't strike me as annoying, just a well-written critical analysis of what Twitter is doing wrong. I really hate the protectionist (i.e. stupid) way Twitter is doing business, and I hope if enough people vocalize it, Twitter might do something about it.
mullingitover 23 hours ago 0 replies      
I'm a longtime twitter user. He should've checked out my account for an example of best practices - http://twitter.com/mullingitover
hayksaakian 1 day ago 1 reply      
I like that twitter forces you to be concise.
Codhisattva 1 day ago 1 reply      
Sounds like Chernin wants to make a news wire.
fudged71 1 day ago 0 replies      
TIL what the '.' in front of '@' is for. Interesting.
stephenhandley 22 hours ago 0 replies      
they're just gonna concentrate on content/media sharing via url, and that isn't necessarily big-company driven. people sharing links and talking about them etc.
NVD3 Erased From Existence loopj.com
126 points by foobar2k  19 hours ago   60 comments top 15
pygy_ 15 hours ago 1 reply      
It's a strange case to begin with:

The readme of the first public release says:

nv.d3 - v0.0.1

A reusable chart library for d3 by Bob Monteverde of Novus Partners.

The license later said that the copyright belonged to Novus (not Monteverde), under the GPL v3.

This means that they couldn't (nor could anyone) use the Free contributions in closed source products.

Since Monteverde is responsible for ~95% of the code (https://github.com/RobertLowe/nvd3/graphs/contributors) and he sounds embarrassed by the ordeal, it looks like a dick move by someone above him at Novus.

lazyjeff 18 hours ago 4 replies      
Reading the google groups discussion raises some interesting questions:

What prevents other open source projects from being taken down with a "management did not authorize this" notice? For example, what prevents Twitter from saying Bootstrap was released by a rogue employee, invalidating the open source license and rendering millions of websites in copyright violation?

What happens to the commits by other authors to the source tree? Do they own the copyright to their commits, even if they modify invalid open source code?

How does the open source community react when this happens? Do they fork and pretend the source code is legit open source? (From reading the discussion, it seems like many developers have already forked the code and encouraged others to work off it.)

Perhaps there are reasonable solutions to these, but I'm interested to see how this story unfolds, since it may affect how people think of companies open sourcing code in the future.

kevingadd 18 hours ago 2 replies      
The discussion thread is interesting - it is strongly implied that NVD3 was up publicly and widely used for ~9 months, and its open source release seemed to have been approved by management.

Are there any other notable examples where a project was 'open' for such a long period of time and then the company that claimed to own the copyrights tried to un-open it? It seems like there's a huge potential for nasty side effects when something like this happens. 9 months is long enough for lots of people to start relying on a library that's been released under a permissive license like Apache2 and then suddenly have the rug pulled out from under them because a vendor either did a terrible job of protecting their copyrights or decided to take their toys and go home.

fuzzythinker 17 hours ago 0 replies      
My trust in nvd3 pretty much ended when they pulled the finance part of the library out a few months ago without any notice. That tells me they are capable of doing it again in the future.

EDIT: Now that I've thought about this more: since they pulled the finance part of the library out before, it is very likely that they _did_ know about the library being open sourced. That makes the story much harder to believe.

yason 11 hours ago 2 replies      
Putting the specifics of this case aside, the whole question underlines once again the questionable sanity behind copyright and intellectual property. Corner cases like these are a signal that copyright thinking isn't entirely in alignment with reality. With physical goods it's very clear: if an employee had gone rogue and given away a prototype device built by the company, any resale of that device would naturally be illegal (it's illegal to buy and sell stolen goods) and the device could eventually be returned to the company.

However, with bits, things are different. Bits can be copied, they can't be stolen, and bits aren't unique things whose possession can be controlled. Thus, the idea of copyright is to "own" the copyrighted works so as to control making copies of it. The company tried to assert that it owns the library and extrapolate from there that they could control the bits that represent copies of the library. But if the thing companies intend to control is the idea or "the works" instead of the physical bits then we're faced with another dilemma.

Consider if the leaked thing was a trade secret, which is an idea with no physical presentation. If the trade secret was published without permission by a rogue employee, it wouldn't be a secret any longer; how could the company possibly claim it could be restored somehow? How could anyone who had read about the trade secret explicitly unmemorize it? There are no physical copies or bits to destroy; the idea would simply live in people's minds and eventually travel to the company's competitors. The cat's out of the bag, what can you do.

I think that in this case, the only plausible view of what actually happened is just that: the culprit is the employee, who should be liable for the damages if it turns out that he actually did publish the source code without permission. (Based on the comments, even verifying that is still uncertain.) Similarly, if an employee smuggles GPLv3 code into the company's codebase, the company can't just shrug that off, and must release their proprietary source code as GPLv3.

Both are quite harsh conclusions. It seems that any company larger than a few dozen people would eventually bump into one of these two cases. Employees would have to require written permission from their managers to release source code. (What if their managers didn't have the permission to give that permission?) Companies would have to audit all new source code before adding it to their version control system. (Nearly an impossible task, unless a commit lag of months would be considered agile in their line of business.)

In practice, neither way works unless copyright is removed from the realm of bits, data, and software, and the concept of intellectual "property" is disintegrated from the beginning. When companies stop relying on those delusions and base their business on things that actually work in real life, they are relieved of much suffering.

btipling 17 hours ago 3 replies      
NVD3 leaked memory terribly. For us creating and removing a small number of charts quickly ate memory in the tens of megabytes. While the code was readable, it was not a very efficiently written library. I also took issue with how it used a global shared function to throttle chart generation. This feature did not seem to work very well but I did not spend much time with it once I saw the memory footprint.

NVD3 is one of many chart libraries that placed more emphasis on design than robustness. Having gone through many charts I wonder if any of these developers have heard of the Profiles tab on web inspector.

Something like NVD3 can be used on a static page that isn't live updated for a short time. But a long living application will have problems.

In other words, don't worry. NVD3 wasn't very good. Go look at the basic chart examples on the d3 examples site. It is not hard to build graphs with d3. You don't need NVD3.

Having said this, I thought the NVD3 editor was pretty cool. Better than the actual library.

Void_ 18 hours ago 2 replies      
Does this mean we have to stop using it? If they once released it under a permissive license, can they just change it and sue me if I still use it?
rcthompson 17 hours ago 1 reply      
I don't know the back story behind this, but I just want to say that this is by far the most respectful and reasonably-worded takedown request I've ever seen.
fuzzythinker 17 hours ago 0 replies      
Found this dc.js library reading the thread. Looks interesting.


yenoham 18 hours ago 1 reply      
I find it remarkable that the 'management' would want to do this. This makes their company look ridiculously out of touch; by now LOTS of people have seen and edited this code themselves, and have copies; you can't put that genie back in the bottle.

They could have used this to their advantage by simply allowing it to stay open but requiring that their company/brand name be used in the project (like Twitter Bootstrap), thus allowing the company to be seen as a supporter of the open source community without much effort on their part. Now they look the exact opposite of that, by doing something that would require huge effort and resources to achieve/maintain.

RyJones 16 hours ago 0 replies      
As someone that works on releasing open source products from a closed source company, this is scary reading. Suddenly, all of the checks and balances we have to hurdle seem reasonable.
charlesboudin 4 hours ago 0 replies      
All of the comments seem to be very USA-oriented, but if one wants to learn a lesson from this we should also discuss other POVs. Does anyone know how a similar case would be handled in the EU? Or, just using a fork after the cease-and-desist - does estoppel and so on exist there?
mangler 16 hours ago 0 replies      
Some publicity is bad publicity, Novus Partners....
dkural 17 hours ago 3 replies      
I am very concerned that, due to GitHub relying on private repositories for revenue, it has been all too eager to comply with this very legally questionable takedown request. Do we need an "open" GitHub that is truly on the side of open source software?
spiritplumber 16 hours ago 3 replies      
Anyone got a backup?
Z3 : An LLVM backed runtime for OCaml github.com
52 points by Raphael_Amiard  12 hours ago   19 comments top 4
pascal_cuoq 3 hours ago 0 replies      
Just checking that you are aware of OCamlCC. If you aren't, good news! You will have plenty of notes you can exchange with Benoît.


pjmlp 10 hours ago 1 reply      
C++ instead of OCaml?!

Especially taking into consideration how much better OCaml is suited for compiler development, and the existence of LLVM bindings for OCaml.

ulber 11 hours ago 2 replies      
Bad name: Z3 is an SMT solver from Microsoft and a very good one at that. I'm assuming this LLVM OCaml thing is newer.
andrewcooke 10 hours ago 1 reply      
how close to ocamlopt do you think you can get? (ie do you have any handle on what the returns would be for further work on this? what are the main limiting factors?)
       cached 18 November 2012 02:02:01 GMT