hacker news with inline top comments    18 Feb 2014
Irrational Games (Bioshock Infinite) is shutting down irrationalgames.com
99 points by piratebroadcast  2 hours ago   52 comments top 16
beloch 15 minutes ago 0 replies      
You've had critical success. You've made so much money you could retire, buy an island, and still have enough left over to turn it into a supervillain lair! I get it. You're only in it now for the love of creating, so why not leave the headaches of AAA titles behind? This is perfectly sensible. Handing off Irrational to a protégé, taking your buddies, and spinning off a smaller studio would be a great way to do this. Firing half the company that brought you success, however, is a bit of a dick move.
tibbon 23 minutes ago 2 replies      
Maybe it's just me, but there is something deeply flawed with the game industry's hiring/firing practices.

If a game does well, it's time to lay off half (or more) of the team. The same happens if a game does poorly, of course. But it seems the only way to 'win' is to be at the top, or simply not play.

I've seen this now with everything from Harmonix to Irrational Games. There seems to be a huge amount of money made with these blockbuster games, but vanishingly few companies seem able to manage their game development cycles efficiently enough to always need a staff. It always comes off as terrible management/project management.

For example, Harmonix's Rock Band was huge. There was around $299 million of bonuses paid to people at the top. Yet I had friends who worked there get laid off repeatedly (once right before Christmas), sometimes shortly after the people at the top got their bonuses. Why in the world didn't they think to diversify a bit, run a few concurrent development cycles, etc.?

The most sane way to do game development seems to be to start your own indie studio and keep your expenses very low. Everything else seems... irrational.

hawkharris 3 minutes ago 0 replies      
This isn't the end...

Irrational Games will enter the waters of baptism, and a new studio will be born. An infinite number of Irrational Games studios are opening and closing at this moment, like lighthouses on an ever-expanding ocean. The only difference between past and present is semantics.

If what I'm saying sounds crazy, you owe it to yourself to play Bioshock Infinite. It's without a doubt one of the most beautiful and surreal games ever created.

Argorak 1 hour ago 2 replies      
Irrational created some of my favorite games, because of the amount of thought and attention to detail they poured into them. I loved most of them.

* SWAT 4: How cool is a multiplayer shooter where you actually have to breach a room from multiple sides to pressure the enemy into _not shooting_? And hold your fire until you saw any indication they would shoot? We played that game for nights in one room for better communication.

* System Shock 2: Deeply flawed in some regards, but also the first game that creeped me out in a _perfectly well lit and bright environment_. Shodan, as always, was a great enemy.

* Freedom Force Series: A comic strategy game. It wasn't that hard (it wasn't easy, either), but it had "comic" written all over the place. The description you got when you hovered the cursor over a mere building was "A proud participant of the Patriot City skyline." Someone put an ironic joke about the patriotic theme of the game in the description of a boring apartment block... How fun is that?

BioShock was a culmination of all that. Would you kindly pay them your respects?

mjn 1 hour ago 7 replies      
Kind of a strange letter, especially given that Irrational Games is a subsidiary of Take-Two. It makes it sound like Irrational is doing great and Ken Levine just wants to try something different. But if that were the case, demolishing Irrational to try his new thing doesn't make a whole lot of sense. It'd be more sensible for him to just leave Irrational, start a new endeavor (either another subsidiary under Take-Two, or his own independent thing), and leave Irrational intact.

Possible explanations include: 1) there is not as much success going on at Irrational as implied; 2) Ken Levine is just really attached to the name, and so wouldn't let it continue in present form while he leaves to do something else under a new name; 3) ...?

zacinbusiness 46 minutes ago 0 replies      
I truly understand not wanting to do "more of the same" after 17 years (I do something different nearly every day). But I really hate it for their developers and artists. Having only ever played the original Bioshock (which was beyond amazing to me), I know that at least the senior devs and artists have the chops to get hired somewhere new, or to start their own companies. But the jr. guys might have a tough time.
ChuckMcM 1 hour ago 1 reply      

   > we will focus exclusively on content delivered digitally. 
Another one bites the dust. Sad to see Irrational closing up, but I agree that 17 years is a long time for anything. My hope is that he isn't out to build the next Candy Crush thing.

dave_sullivan 1 hour ago 1 reply      
Man, these guys made some great games.

    > To make narrative-driven games for the core gamer that are highly replayable.
Candy Crush with a story it is... Or, less cynically, a totally awesome RPG with procedurally generated storylines so no playthrough is ever the same?

venomsnake 57 minutes ago 1 reply      
It was expected. He probably got fed up with modern publishing, and Bioshock Infinite didn't do that great either. The two Bioshocks were on the verge of greatness, but it slipped from their hands, mostly due to publisher interference (like not releasing modding tools, and shipping with encrypted and signed content packs) when all of the community was begging for them. Bioshock had the potential to be the pre-Skyrim Skyrim...

And Bioshock Infinite was treading on too-safe ground. I really hope that his new studio will have bursts of creativity and success, and that the laid-off employees find better jobs soon.

lectrick 1 hour ago 0 replies      
I don't know if anyone from Irrational Games is watching this, but from an oldish gamer, thank you so much for your creative and entertaining efforts over the years and I wish you all the best in your next adventures.
deletes 1 hour ago 0 replies      
    > my passion has turned to making a different kind of game than we've done before. To meet the challenge ahead, I need to refocus my energy on a smaller team with a flatter structure and a more direct relationship with gamers. In many ways, it will be a return to how we started: a small team making games for the core gaming audience.

I just read that as: we are gonna make an even more narrative-based System Shock 2 equivalent.

winslow 32 minutes ago 0 replies      
Well this is sad. I've been thoroughly impressed with Bioshock Infinite's gameplay, plot, story, AI, art, and attention to detail. Hopefully the artists/devs/writers will find another position elsewhere.

I wonder what happened. It sounds like Bioshock Infinite didn't bring in the cash they thought it would? Reminds me of Ensemble Studios closing after AoE:III and Halo Wars.

minimaxir 1 hour ago 4 replies      
This is likely correlated with the absurd sales Bioshock Infinite received after release (down from $60 to $20 less than 6 months after release). They probably needed money.
david_otoole 48 minutes ago 0 replies      
I had the privilege of working at IG for a few years back during the SWAT4 / FFV3R / Bioshock 1 days, when the people who made System Shock 2 were largely still there.

It's sad to see the name being retired, but it's better than seeing the name ruined by a flop or diluted by endless sequels.

The announcement is pretty opaque; I expect the rumor mill to churn for a while.

johnny635 58 minutes ago 1 reply      
Get out of the gaming industry while you still can. Otherwise, you'll be looking at getting out of the software industry in no time.
wnevets 41 minutes ago 0 replies      
I can't forgive them for Tribes: Vengeance.
Game Theory: How 70,000 Pokemon Players Sabotage Themselves minimaxir.com
117 points by minimaxir  3 hours ago   28 comments top 12
jfasi 2 hours ago 3 replies      
This is a fascinating example of the power of biased randomness. The gameplay challenges this article points out are notable because they require sequences of inputs that are both specific and lengthy. Even if the input stream were random, there would still be a nonzero probability that the correct sequence of instructions would be realized.

On the other hand, this stream isn't random. If it were truly random, the player would just move pointlessly in a horrible Brownian motion. It's nonsensical, to be sure, but in some weird way it encapsulates knowledge about the game, and as a result the game makes progress.

It sheds light on other areas where true randomness is required, and where the presence of any information or understanding radically changes the behavior of a system. In cryptography, even the slightest weakness in the probabilistic underpinnings of a cryptosystem can render it useless. In finance, even the slightest edge over the market can be leveraged to produce gains.
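
A toy sketch of the parent's point (my own illustration, not from the article): model the input stream as a walk toward a goal, where a purely random stream moves in the right direction half the time, and a stream that "encapsulates knowledge about the game" moves in the right direction just slightly more often. Even a small bias cuts the time to reach the goal dramatically.

```python
import random

def steps_to_goal(p_right, goal=50, max_steps=200_000, seed=None):
    """Walk on a line starting at 0 (against a wall): each input moves
    +1 with probability p_right, else -1. Count inputs until `goal`."""
    rng = random.Random(seed)
    pos = steps = 0
    while pos < goal and steps < max_steps:
        pos = max(pos + (1 if rng.random() < p_right else -1), 0)
        steps += 1
    return steps

# Average over a few seeded runs: pure noise vs. a slightly biased stream.
trials = 20
unbiased = sum(steps_to_goal(0.50, seed=s) for s in range(trials)) / trials
biased = sum(steps_to_goal(0.55, seed=s) for s in range(trials)) / trials
print(f"unbiased: {unbiased:.0f} inputs, slightly biased: {biased:.0f} inputs")
```

The unbiased walk wanders Brownian-style and takes on the order of goal² steps; the biased one makes steady progress in roughly goal/drift steps.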

KVFinn 2 hours ago 0 replies      
Some of the trolling isn't trolling, it's a tactic -- specifically the constant start spamming.

Since twitch.tv recently added up to 30 seconds of stream lag, you want to spam Start during the trickiest movement sections to minimize latency between the stream and the game state. This is most important on ledges and in mazes!

After the Start-spam delay, more of the chat catches up to the current position and starts putting in the right input, which then actually has a higher chance of being accepted.

Dylan16807 1 hour ago 0 replies      
It's not the fighting and trolling that causes 90% of the problems, it's the delay on the video that means everyone has to guess where the game will be half a minute in the future.

Such a shame that twitch changed streaming technologies recently. It used to be easy to get as low as 2-3 seconds of latency. A world of difference in something like this.

For example, on that ledge it was easy to get enough 'right' movement to overpower malicious 'down' commands, because the first input in a direction only turns the character. But some 'down' commands were needed to get to the ledge, and the lag caused them to keep pouring in after they were no longer needed.

This article is a nice overview of the spectacle but its premise is fundamentally flawed.

Globz 3 minutes ago 0 replies      
This is so fascinating. I am almost certain that a new genre of gaming experience has been created, and people will demand more games like this one! This is complete madness!
skizm 33 minutes ago 0 replies      
I wonder how long it would take if the game went round-robin: everyone in the chat gets a chance to enter a move, the move is guaranteed to work, and everyone sees the outcome before the next person gets to press their button. I predict much faster progress.
normloman 2 hours ago 2 replies      
Proof that crowdsourcing doesn't work in every damn situation.
wudf 1 hour ago 0 replies      
Would be nice of you to credit the artists whose work you rehosted, minimaxir.
yayitswei 1 hour ago 0 replies      
Not much on game theory here; the article was mostly a summary of what happened so far on stream.
shittyanalogy 2 hours ago 1 reply      
It's a game, not everyone is playing it for the same reason and the input is anything but random. There aren't really any conclusions you can draw from this other than it's entertaining.
zwdr 2 hours ago 0 replies      
Using the twitch chat for anything constructive is guaranteed to fail. pls no coperino ravioli.
muyuu 36 minutes ago 0 replies      
This is what happens with democracy when you execute it badly.
nobodysfool 2 hours ago 3 replies      
should have just had a vote every 10 seconds... most voted answer wins?
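
A tally loop like this is simple to sketch (function name and button set are my own; something along these lines is what the stream itself later tried as a "democracy" mode):

```python
from collections import Counter

VALID = frozenset({"up", "down", "left", "right", "a", "b", "start", "select"})

def plurality_input(window):
    """Tally one voting window of chat messages and return the
    most-voted valid button press, or None if nothing valid came in."""
    tally = Counter(msg for msg in window if msg in VALID)
    return tally.most_common(1)[0][0] if tally else None

window = ["up", "up", "left", "start", "up", "kappa", "left"]
print(plurality_input(window))  # "up" wins with 3 of the 6 valid votes
```

Non-button messages are simply ignored, so trolling only dilutes the vote rather than steering the character.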
Abacus (YC W14) Wants to Make Expense Reports Obsolete techcrunch.com
41 points by tgoldberg  2 hours ago   19 comments top 8
crazygringo 1 hour ago 2 replies      
I agree with the other commenter that real-time daily approval doesn't seem to be a big selling point (I'm not sure it's even worth mentioning in marketing, because it might actually elicit negative reactions).

But the idea that the employee gets the money immediately/overnight upon approval at the end of the trip, or maybe weekly (for long trips), instead of waiting weeks for a paper check, is huge. As is offloading a lot of the accounting 'grunt work' of it to another company. And snapping photos of receipts with location and time data and categories directly attached, and not having to worry about where they're stored (it's all in the app and on servers) is a huge convenience too.

Sounds like an exciting idea.

eykanal 1 hour ago 3 replies      
So, I'm a manager, and the last thing I want is fifteen different notifications coming to me in real time asking me to approve a team member's gas receipt. I'd much rather get the batch after the trip. Not sure how this would be an improvement on the manager's end.
goronbjorn 28 minutes ago 1 reply      
> The New York-based startup is competing in the same space as expense report software such as Concur and Expensify. Abacus founders Omar Qari, Ted Power, and Joshua Halickman say their service differentiates by making it easy for managers to approve expenses in real-time through its app, instead of in batches at their desks.

This just isn't true. You can approve expenses through Concur's app: https://www.concur.com/en-us/mobile

cubix 1 hour ago 0 replies      
Our company has been using Nexonia for a few years. Not to say there isn't room for competition, but I don't see any differentiator.

One feature I would like to see is better integration with my calendar. Nexonia will figure out my mileage based on the addresses I enter, but it would be nice if it would simply pull this information in from my calendar, and perhaps take a guess at what meeting I just attended based on my current location.

petenixey 1 hour ago 0 replies      
I have half paid attention to this space but can never quite figure out what everything does.

Xero has an app which scans receipts but as far as I can tell doesn't do expenses.

Shoeboxed does tonnes of stuff with receipts (including parsing them and integrating with Xero) but doesn't seem to do expenses (is that right?)

All I want is one app that integrates with Xero, scans a receipt, parses it and lets me classify it as a business receipt or a personal one (to go on an expense form). I know that's not what this app does (it's expenses only) but it feels like it should already be out there - am I missing something?

thejteam 1 hour ago 2 replies      
So if I read that right, if the company I'm working for wants to use this system then I have to give Abacus direct access to my bank account. Why wouldn't it integrate with the company's existing system? I hope that's just a bad writeup on techcrunch's part. Or I hope nobody I work for ever requires me to use this.
mathattack 1 hour ago 0 replies      
I hope they're right. I hate expense reports. It would be even better to autosort the credit card data. "You use the card, and just tell us what's not a business expense. We'll figure out where it goes."
zt 1 hour ago 0 replies      
I have known the Abacus guys for a while. Their product is cool, solves a real problem, and has seen some great traction. It's a classic innovator's-solution type product: limit the feature set, have a wonderful product, and sell slightly down-market. I can't wait to see the progress they've made by alumni demo day -- I think it's going to be amazing.
The Engineer Crunch samaltman.com
111 points by clarkm  1 hour ago   126 comments top 30
tptacek 1 hour ago 10 replies      
Broken record: startups are also probably rejecting a lot of engineering candidates that would perform as well or better than anyone on their existing team, because tech industry hiring processes are folkloric and irrational.

I co-manage a consultancy. We operate in the valley. We're in a very specialized niche that is especially demanding of software development skills. Our skill needs also track the market, because we have to play on our clients' turf. Consultancies running in steady state have an especially direct relationship between recruiting and revenue.

A few years ago, we found ourselves crunched. We turned a lot of different knobs to try to solve the problem. For a while, Hacker News was our #1 recruiting vehicle. We ran ads. We went to events at schools. We shook down our networks and those of our team (by offering larger and larger recruiting bonuses, among other things).

We have since resolved this problem. My current perspective is that we have little trouble filling slots as we add them, in any market --- we operate in Chicago (where it is trivially easy to recruit), SFBA (harder), and NYC (hardest). We've been in a comfortable place with recruiting for almost a year now (i.e., about half the lifetime of a typical startup).

I attribute our success to just a few things:

* We created long-running outreach events (the Watsi-pledging crypto challenges, the joint Square MSP CTF) that are graded so that large numbers of people can engage and get value from them, but people who are especially interested in them can self-select their way to talking to us about a job. Worth mentioning: the crypto challenges, which are currently by far our most successful recruiting vehicle (followed by Stripe's CTF #2), are just a series of emails we send; they're essentially a blog post that we weaponized instead of wasting on a blog.

* We totally overhauled our interview process, with three main goals: (1) we over-communicate and sell our roles before we ever get selective with candidates, (2) we use quantifiable work-sample tests as the most important weighted component in selecting candidates, and (3) we standardize interviews so we can track what is and isn't predictive of success.

Both of these approaches have paid off, but improving interviews has been the more important of the two. Compare the first 2/3rds of Matasano's lifetime to the last 1/3rd. The typical candidate we've hired lately would never have gotten hired at early Matasano, because (a) they wouldn't have had the resume for it, and (b) we over-weighted intangibles like how convincing candidates were in face-to-face interviews. But the candidates we've hired lately compare extremely well to our earlier teams! It's actually kind of magical: we interview people whose only prior work experience is "Line of Business .NET Developer", and they end up showing us how to write exploits for elliptic curve partial nonce bias attacks that involve Fourier transforms and BKZ lattice reduction steps that take 6 hours to run.

How? By running an outreach program that attracts people who are interested in crypto, and building an interview process that doesn't care what your resume says or how slick you are in an interview.

Call it the "Moneyball" strategy.

Later: if I've hijacked the thread here, let me know; I've said all this before and am happy to delete the comment.

GuiA 1 hour ago 11 replies      
I'm starting to be a bit disillusioned with this whole "we can't find great people" spiel that a lot of startups put up.

I have friends who are extremely good engineers (i.e., a mix of: contributors to major open source projects used by a lot of SV startups, have given talks at large conferences, published papers at ACM conferences, great portfolio of side/student projects, have worked at great companies previously, frequently write high quality tech articles on their blog, have high reputations on sites like Stack Overflow, etc.) who have been rejected at interviews by those same companies who say that they can't find talent. (It also certainly doesn't help that the standard answer is "we're sorry, we feel like there isn't a match right now" rather than something constructive. "No match" can mean anything on the spectrum that starts at "you're a terrible engineer and we don't want you" and ends at "one of our interviewers felt threatened by you because you're more knowledgeable, so he vetoed you".)

Seriously, if you're really desperate for engineering talent, I can give you contact info for a dozen or so of friends who are ready to work for you RIGHT NOW (provided your startup isn't an awful place with awful people, of course) and probably another dozen or two who would work for you given enough convincing.

I'm honestly starting to believe that it isn't hard to hire, but that there's some psychological effect at play that leads companies to make it harder on themselves out of misplaced pride or sense of elitism.

Unless everyone wants to hire Guido van Rossum or Donald Knuth, but then a) statistically speaking, you're just setting yourself up for failure, and b) you need to realize that those kinds of people wouldn't want to do the glorified web dev/sysadmin'ing that a lot of SV jobs are.

pg 1 hour ago 5 replies      
"I have never seen a startup regret being generous with equity for their early employees."

Same here. I always advise startups to err on the side of generosity with equity.

x0x0 1 hour ago 1 reply      
This post, I think, makes 2 mistakes.

First, SF and the valley simply don't pay engineers well enough. This is the second most expensive housing market in the United States, striving to become the first. $150k sounds great here until you look at it as a fraction of your housing cost and compare to anywhere else in the country, including Manhattan (because unlike here, NYC isn't run by morons, so they have functioning transportation systems). I don't want to just quote myself, but all this still applies: https://news.ycombinator.com/item?id=7195118

Second, immigration is a crutch to get around paying domestic employees enough. I see net emigration from the valley amongst experienced engineers in their 30s who start having families and can find better financial lives elsewhere. If companies paid well enough that moving to the Bay Area wasn't horrid financially, they'd find plenty of software engineering talent already in the United States. But consider my friend above: $165k total income in the Midwest is (compared solely to housing cost) equivalent to approx $450k here, when holding (housing costs / post-tax income) constant.

edit: not to mention, companies still don't want flexible employment arrangements or remote work. I'm a data scientist and I'm good at my job (proof: employment history, employers haven't wanted me to leave, track record of accomplishments). I'd rather live elsewhere. There are 66 data scientist posts on craigslist (obviously with some duplication, but just a quick count) [1]; jobs that mention machine learning fill search results with > 100 answers [2]. Now check either of the above for telecommute or part time: zero responses for remote or part-time workers. So again, employers want their perfect employee -- skilled at his or her job, willing to move to the valley even at a big hit to net living standards, doesn't have kids, and doesn't want them (because daycare or a nanny or an SO who doesn't work is all very expensive).

[1] http://sfbay.craigslist.org/search/jjj?zoomToPosting=&catAbb...

[2] http://sfbay.craigslist.org/search/jjj?zoomToPosting=&catAbb...

jfasi 1 hour ago 0 replies      
> if people are going to turn down the certainty of a huge salary at Google, they should get a reward for taking that risk.

I often see a disconnect between perceptions of expected success of founders and engineers. I've observed this is particularly pointed for non-technical founders. To generalize, a young entrepreneur with some success under his belt is starting a company. As far as he's concerned his company is all but guaranteed to succeed: he's got the experience and sophistication necessary to make this happen, the team he's hired to his point is top-notch, he's got the attention of some investors, the product is well thought out, etc. He approaches an exceptional engineer and extends an impassioned invitation and... the engineer balks.

What happened? Is he delusional about the company's prospects, thinking he's got a sure fire hit when he's actually in for a nasty surprise once his hubris collides with reality? Is the engineer a square who would rather work a boring job at a big company than live his life, and wouldn't be a good fit for the team anyway?

I propose a different resolution: our confident businessman is certain about the success of the company, not the success of the engineer as part of the company. He knows the company's success is going to rocket him into an elite circle of Startup Entrepreneurs. The engineer, on the other hand, doesn't see the correlation between the company's success and his own: even if the company takes off to the tune of eight to nine digits, his little dribble of equity is just barely breaking even over the comfortable stable position he's in now.

lkrubner 22 minutes ago 0 replies      
Regarding this:

"Sometimes this difficulty is self-inflicted."

I want to emphasize how strong this point is. In most ways, the computer programming industry is a shrinking industry in the USA. There are fewer computer programming jobs in the USA than there were 20 years ago.

Stats from the Bureau of Labor Statistics (USA):


1990: 565,000 jobs
2010: 363,100 jobs
2012: 343,700 jobs

There is a tiny subset of the industry that is growing, and we associate these with the startups in San Francisco and New York. But so far these startups have not created enough jobs to offset the jobs lost due to other factors.

This suggests that there must be a vast reservoir of programmers who would like programming jobs but can't work as programmers, because the jobs have disappeared.

If the numbers were smaller, you could argue that the loss of jobs was due to inaccuracies in the way Bureau of Labor gathers statistics. But the drop from 565,000 jobs to 343,700 is too large to be a spurious blip.

This is a shrinking industry. Computer programming jobs are tied to manufacturing, so as manufacturing leaves the USA, so too do the computer programming jobs. Don't get caught up in the hype about startups: look at the actual numbers. The government tracks these jobs. The numbers are shrinking.

Especially worth a look:


"In its 1990 Occupational Outlook Handbook, the U.S. Department of Labor was especially bullish: 'The need for programmers will increase as businesses, government, schools and scientific organizations seek new applications for computers and improvements to the software already in use [and] further automation . . . will drive the growth of programmer employment.' The report predicted that the greatest demand would be for programmers with four years of college who would earn above-average salaries.

When Labor made these projections in 1990, there were 565,000 computer programmers. With computer usage expanding, the department predicted that 'employment of programmers is expected to grow much faster than the average for all occupations through the year 2005' . . .

It didn't. Employment fluctuated in the years following the report, then settled into a slow downward pattern after 2000. By 2002, the number of programmers had slipped to 499,000. That was down 12 percent, not up, from 1990. Nonetheless, the Labor Department was still optimistic that the field would create jobs, not at the robust rate the agency had predicted, but at least at the same rate as the economy as a whole.

Wrong again. By 2006, with the actual number of programming jobs continuing to decline, even that illusion couldn't be maintained. With the number of jobs falling to 435,000, or 130,000 fewer than in 1990, Labor finally acknowledged that jobs in computer programming were 'expected to decline slowly.'"

OmarIsmail 1 hour ago 1 reply      
"The only thing you need to do is fix immigration for founders and engineers. This will likely have far more of an impact than all of the government innovation programs put together." - this is so darn true it's not even funny.

Conversely, and I know this is pretty out there, this is what I think will be the killer app of virtual reality. If I can ship a $5K "pod" to a developer somewhere in the world which allows us to work together 90% as well as we can in person, then you're damn right I'm going to do that.

I believe VR tech will get good enough (3-5 years) before immigration issues will be sorted out (10-20).

jamesaguilar 55 minutes ago 2 replies      
Given the offers I've heard people getting from early stage startups (engineer 2-5), I don't really get why someone would join them. Below market rate salary? 0.1% of a company that's going to exit at $80M with a 10% chance and $0 with a 90% chance? That comes out to $80k over four years of work, before taxes, in the typical successful case, which is itself atypical. And for what amounts to a relatively small bonus, I'd be expected to work 50-70 hour weeks? Sign me up!
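
The parent's arithmetic is easy to verify, and risk-adjusting it makes the offer look even worse (a back-of-envelope sketch using only the figures in the comment above):

```python
# Figures from the comment: 0.1% equity, $80M exit with 10% odds, 4-year vest.
equity = 0.001
exit_value = 80_000_000
p_success = 0.10

success_payout = equity * exit_value          # payout if the exit happens
expected_payout = p_success * success_payout  # risk-adjusted, pre-tax
per_year = expected_payout / 4                # spread over the vesting period

print(success_payout, expected_payout, per_year)
```

So the $80k "success case" payout shrinks to an expected $8k pre-tax, or about $2k per year of below-market salary and long weeks.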
cia_plant 45 minutes ago 0 replies      
What's up with the cliff anyway? You're already asking me to take a much higher level of risk and a much lower level of liquidity than I'd like in my compensation, by giving me stock options. In addition to that you want me to take on 100% of the risk of our working arrangement not working out, and in fact you insist on giving yourself a strong incentive to fire me before the first year is out? It seems ridiculously exploitative.
grandalf 1 hour ago 0 replies      
This is the most insightful thing I've read about the engineer crunch in a while. The market needs to realize that good engineers have lots of options and that 0.1% is just not a meaningful amount. I'd like to see 5-8% for key engineering hires, even as companies approach Series A.

Does the founder really want to get greedy and keep that extra few percent when so much depends upon solid engineering execution? Also, don't forget that 4 year vesting with a 1 year cliff is standard, so it's not as if the worth of a meaningful equity offer isn't fully obvious before the shares are "spent" on a key hire (also, before vesting, the risk is totally borne by the employee).

I think the ideal situation for engineers would be to earn a solid equity offer and then have a secondary market to use to trade some of it (once it's vested) for fractional ISOs of other promising startups.

fidotron 1 hour ago 0 replies      
There isn't so much a shortage of engineers as a shortage of people willing to relocate to a stupidly expensive area and gamble what they have in the process. From the outside, but as a reasonably regular visitor, the Bay Area has lost the plot completely.
Xdes 1 hour ago 3 replies      
    > There are great hackers all over the country, and many of them can be talked into moving to the valley.

For all this "we work remote" stuff that is flying around, this seems to be a direct contradiction. Is moving to the valley really necessary? I could see coming out for a face-to-face interview, but I would never want to move to California.

trustfundbaby 30 minutes ago 0 replies      
The other thing that is interesting to me is the ludicrously high bar, and the "weed out" interview techniques, that some of these companies have for recruiting engineers.

They're looking for someone to work on a Rails app, but they won't hire them unless they have demonstrated Linus Torvalds-like ability and knowledge. But the question is, why would someone with that kind of skill level want to work for you?

What if you were able to grab smart engineers on their way to becoming engineering stars? Why not aim for getting a solid lead/architect and adding midlevel guys who you know are going to turn into superstars? Why not develop talent instead of competing all the way at the top of the market for the most expensive ones? Why not figure out an interview technique that can let you identify exactly these kinds of people?

It's all about being resourceful and nimble enough to adjust. After all, isn't that what a startup is all about?

jfasi 53 minutes ago 0 replies      
This implies a somewhat one-way relationship between companies and their engineers: engineers give their time and talents, and in return companies give their money and equity. Under this system, why not be selective about who you hire? If you want great work, you need to hire a great engineer.

In actuality, great people aren't found; they're made. The role of a good leader isn't to squeeze great work out of his employees, but rather to develop within them the capability to do great work. Applied to hiring, this means having an understanding of the support and growth capabilities within your organization, and finding candidates who have the most potential to gain from it, rather than hiring those who are already well-developed. Applied to hiring rockstars, this makes them even more valuable: not only would they be producing outstanding work on their own, they would actually be improving the quality of the work their peers produce.

Pxtl 1 hour ago 1 reply      
I'm surprised that "get out of the valley" isn't an option for that. I mean, there are a lot of cities that produce lots of talented developers that cost a lot less than Silicon Valley rates.
sheetjs 1 hour ago 1 reply      
The one part of the problem that the author missed is salary. Offering a small amount of equity is fine if you are offering an above-market salary, but it's the combination of below-market salary and minimal equity that causes the perceived crunch.

"You get what you pay for"

mathattack 1 hour ago 0 replies      
"Finally, most founders are not willing to spend the time it takes to source engineering candidates and convince them to come interview. You can't outsource this to a recruiter until the company is fairly well-established--you have to do it yourself."

This is very important. It needs to percolate into their immediate reports too. I've seen high-tech companies lose great candidates because the first-line managers were too busy to interview them right away. If the right talent is available, you have to make the time.

jjoe 1 hour ago 3 replies      
What protects an equity-holder employee from being viciously or prematurely fired prior to the exit or cash out event?
djb_hackernews 1 hour ago 1 reply      
Wait, I thought we all agreed that there isn't an engineering crunch?

I am not sure what immigration has to do with this. We make plenty of STEM graduates each year, and we'd make more if the professions didn't look like they were under attack by every employer and politician. The smart kids you want to hire are smart enough to go into more protected professions. If they knew their jobs wouldn't be shipped overseas or their market flooded with foreign competition, then maybe we'd be able to attract and keep them.

I worry that focusing on equity will just exacerbate the problem, because I think that a lot of people are becoming wise to the equity lottery and just don't see a difference between 0.1% and 5% of nothing. The problem will most certainly be solved by $$, but no doubt it is a tough pill to swallow for a business to pay $150k now for what was $100k a few years ago...

Keep in mind I am a software developer and self interested.

wpietri 18 minutes ago 0 replies      
My approach to equity was to offer a range. We'd offer base equity and salary numbers, and then give them an ability to trade salary for equity. If I recall rightly, they got a modestly better deal than investors did.

In doing that, it became clear that not everybody even wants more equity. That was a little hard for us as founders to take, because we of course thought the equity was awesome, and wanted engineers to feel a real sense of ownership. But from the numbers, it was clear that some people would rather we sold more equity to investors and just gave them the cash.

I get that. If you've been around the industry for a while, you can accumulate quite a collection of expired startup lottery tickets. Landlords, mortgage-holders, and kids' orthodontists don't take options; they take cash.

pistle 11 minutes ago 0 replies      
>"Don't hire outside your network."

If nobody in your network has a track record of results, become a sycophant? What if all the hackers in your network are in the US as well? You've got less than a 5% chance of doing anything, according to Altman.

michaelt 1 hour ago 1 reply      

"I frequently hear startups say [...] can't find a single great candidate for an engineering role no matter how hard they look."
While I agree that offering better compensation is a wise move for individual companies, if the market has 10 job openings and 9 engineers, then regardless of how much pay they offer, one of the companies won't be able to fill its opening.

Offering more money might fix hiring problems for one company, stopping one person complaining, but to stop all people complaining the only solution is to increase the supply or reduce the demand.

(Increasing the supply doesn't have to mean immigration reform - it could mean training or lowering hiring standards or a bunch of other things)

rqebmm 25 minutes ago 0 replies      
The interview process is still something that hasn't been solved. A lot of companies make hiring decisions based almost entirely on how well someone does non-coding work under extreme social pressure, which is about the worst possible way to measure the prospects of a developer!

Personally I really like the interview process a previous employer used: shortly before the interview starts, we have them look over a roughly CS 102-level programming project, then we ask them to design the architecture on a whiteboard while the interviewers ask questions/give guidance. What we're really looking for here is:

a.) How they handle the social aspect of working with a superior who will often (gently) criticize their work and/or ask them to thoroughly explain why they're doing what they're doing.

b.) That they have enough chops to architect a simple program.

If they can pass those tests, I'm confident they'll be an effective team member, because at the end of the day all you really want is someone who is competent enough to be useful and fits into your culture/team. Everything else will shake itself out. You don't need some "superstar/rockstar/ninja" (unless you're solving a particularly hard domain-specific problem), so stop looking for them and excluding everyone else.

Instead start building an effective team.

j_baker 1 hour ago 1 reply      
> In fact, probably less than 5% of the best hackers are even in the United States.

Startups clearly need to be basing more of their decisions on unfounded conjectures. I have to say that startups seem to have unreasonable expectations of what kinds of programmers they can hire. We have plenty of viable hackers in the US, but startups don't want to hire them because they're not the next Knuth or they're "not a good cultural fit".


Matt_Mickiewicz 1 hour ago 0 replies      
"Third, if you're going to recruit outside of your network (usually a mistake, but sometimes there are truly no other options), focus on recruiting outside of the valley. There are great hackers all over the country, and many of them can be talked into moving to the valley."

UPVOTE!! I'm shocked at how many companies are unwilling to pay for a $500 Southwest ticket to fly someone in for a day to interview from Texas or Georgia... relocation costs are easily offset by a slightly lower salary, and the person you're interviewing is unlikely to have 4 or 5 paper offers in hand.

_random_ 11 minutes ago 0 replies      
There is a gold crunch as well - I can't find 99.9% gold at $1 per kg.
rgarcia 1 hour ago 2 replies      
"If someone performs and earns their grant over four years, they are likely to increase the value of the company far more than the 1% or whatever you give them."

This argument seems flawed. If I think someone is going to double the value of my company, should I be comfortable giving them up to 100% equity? Put another way: percentage growth of your company from an early stage to some point in the future most often exceeds 100%.
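rgarcia's objection is easier to see with numbers in hand. A quick sketch, using hypothetical figures (not from the article): even when a hire drives the growth, the founders keep the rest of the upside, so comparing "value added" to "percent granted" is apples to oranges.

```python
# Hypothetical numbers illustrating the equity-vs-value-added trade-off
# discussed above; the figures are invented for illustration.

def founder_stake_value(company_value, growth_multiple, grant_fraction):
    """Value of the founders' stake after the company grows by
    `growth_multiple` and a `grant_fraction` slice goes to the hire."""
    return company_value * growth_multiple * (1 - grant_fraction)

# A $10M company doubles after granting a key hire 1%.
with_hire = founder_stake_value(10e6, 2.0, 0.01)    # ~$19.8M left for founders
without_hire = founder_stake_value(10e6, 1.0, 0.0)  # $10M if it stagnates

# The hire "only" got 1%, yet the founders are up ~$9.8M. Growth from an
# early stage usually exceeds 100%, which is exactly why "they added more
# value than their grant" can't justify arbitrarily large grants.
print(with_hire - without_hire)
```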

zallarak 1 hour ago 0 replies      
Excellent essay. This really nails what I'd want in a dream job; knowing my effort influences my payout (via legitimately sized equity grants) and working on a mission I empathize with. Best article I've read in a while on hiring.
crassus 50 minutes ago 0 replies      
Double the equity. 9 month vesting cliff. Market salary.
w_t_payne 43 minutes ago 0 replies      
Or, y'know, maybe locate outside of the Valley?
Complaint-Driven Development codinghorror.com
154 points by dieulot  5 hours ago   57 comments top 18
foolrush 4 hours ago 2 replies      
The issue here is that it cleaves toward designing for the audience you have, as opposed to the audience you seek to appease.

Often, the design goals might exist on an alternate axis.

Random people offering random opinions amounts to random noise. However, the random noise will not appear neutral; it will appear as information.

If we consider many different things designed for particular audience members, such as jet cockpits, medical tools, and racing automobiles, we will see traits that may seem nonsensical or arbitrary when we divorce them from their designed contexts.

Bill Buxton covers this in Sketching User Experiences when he describes Inuit coastal maps: "The Inuit have used [...] tactile maps of the coastline, carved out of wood. They can be carried inside your mittens, so your hands stay warm. They have infinite battery life, and can be read, even in the six months of the year that it is dark. And, if they are accidentally dropped into the water, they float. What you and I might see as a stick, for the Inuit can be an elegant design solution that is appropriate for their particular environment."[1]

Focusing on complaints of the above design in all likelihood would, given the mad rabble of audiences online, result in discarding a solid bit of design.

[1] Bill Buxton, Sketching User Experiences, pg 37

jorgeleo 5 hours ago 1 reply      
I like the fact that this blog talks about the elephant in the room.

It really does not matter what methodology or tools you use if, at the end, the result does not pass the user acceptance test.

So why consider it a failure if you have users telling you what they want and how far off you are? Better to embrace it and make a better product.

andyl 1 hour ago 2 replies      
I also practice CDD. But my experience is that few people complain. Maybe 1 out of 20. The rest silently endure a bad UI, walk away from the app without comment, or badmouth the app to their trusted friends.

Complainers have to be cultivated. Complainers can be your most valuable asset.

igravious 2 hours ago 1 reply      
Tried installing Discourse on Gentoo. Turns out to be a non-trivial project and I'm no Ruby on Rails newb. I'm no code-jock but I'd love if the install story was a tad easier. Maybe I should quit whining and figure out how to turn my pain into an ebuild :)
mathattack 5 hours ago 1 reply      
This process takes a lot of humility. It's not easy to say, "We are smart, but we don't know, so let's just get something out there." It's antithetical to the planning mania in so many big companies, and also why innovation like this is best from small places. I wish them the best!
RogerL 3 hours ago 2 replies      
While I grasp the reasoning in the article, and sometimes practice it, I also think it is often either counter productive or impossible.

For example, I have friends that released a rather ho-hum mobile app. They quickly garnered something like a 1.5 star rating, scathing reviews, and almost no conversions. The business cycle on this was a year, and they are still trying to claw back reputation and win users. It's a debacle. (the problems weren't their fault, but that is irrelevant to this point).

Then you have companies with secrecy, like Apple. I think this advice would be terrible for them (I have never worked there, and am open to correction). They can't dog food it widely due to the internal silos, and they certainly cannot test it with the public.

Then there are electronic systems - iterations on SW is easy, iterations on HW expensive and hard, even with simulations, mock ups, and what have you. I worked on an augmented reality hardware thingy several years ago; we went from foam cutouts to a couple of very expensive prototypes, and that was it.

It is awesome when we can completely sidestep a problem, and this process lets you sometimes sidestep the serious difficulty of UI design. I worry when it gets bandied about as a truism, or The One True Way (not saying Jeff is doing that, I'm remarking on the wider industry). Yes, Agile lets you sidestep the problem of scheduling and estimation - sometimes. Try that when you are making a new airliner, building a cloverleaf interchange, making a car entertainment system, and so on.

edit: the converse problem is equally as large. Someone below mentioned the 'planning mania' of companies. I don't mean to downplay that problem, just to point out the need to evaluate each situation on its particular needs, as opposed to an unthinking 'best practices' (oh, how I hate that term) approach.

wavesounds 2 hours ago 2 replies      
This is great for a startup but can be depressing for long term consulting.

I've had the unfortunate experience of building a product for someone else where the process was driven by a combination of Complaint-Driven Development and upper-management wish lists. This alone might have been fine, but at the same time anything positive, like the real analytics about how successful the product was or any non-complaint communication coming from the users, was hidden from me for fear, I suppose, that I might try to use that information to get my company more money.

This became incredibly depressing. Every day you show up to work, putting more and more hours into something that comes back with more and more complaints. It was hell, and I did everything I could to end Complaint-Driven Development, to no avail, because that's how the customer liked to work. Eventually I just gave up and left to find something more rewarding and less soul-crushing.

gingerlime 5 hours ago 3 replies      
I totally agree. The problem for us however is actually deciding what's the most complained-about request, and how to categorize those requests. I find that we each have our pet hates and loves, and even if we don't mean to, we develop selective hearing for complaints or praises.

We tried to add tags to support emails (via helpscout), but it's also hard to remember to tag things, and it's easy to use different tags for similar complaints.

I wonder about the best strategy to quantify complaints / suggestions and 'bucket' them correctly, so you can really choose the top ones.
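One low-tech way to get the consistent bucketing gingerlime is asking about is to take tag choice out of human hands: map each support message onto a fixed category list by keyword and count the buckets. A minimal sketch (the categories, keywords, and messages below are invented; real triage would need something less brittle than substring matching):

```python
# A minimal sketch of keyword-based complaint bucketing: map free-form
# support messages onto a fixed category list and count. The categories,
# keywords, and messages are invented for illustration.
from collections import Counter

BUCKETS = {
    "billing": ["invoice", "charge", "refund", "payment"],
    "performance": ["slow", "timeout", "lag"],
    "ui": ["confusing", "can't find", "button", "layout"],
}

def bucket(message):
    """Assign a message to the first bucket whose keywords it mentions."""
    text = message.lower()
    for name, keywords in BUCKETS.items():
        if any(k in text for k in keywords):
            return name
    return "uncategorized"

def top_complaints(messages):
    """Count messages per bucket, most common first."""
    return Counter(bucket(m) for m in messages).most_common()

msgs = [
    "The dashboard is so slow",
    "Why was my card charged twice?",
    "Requests keep hitting a timeout",
    "I can't find the export button",
]
print(top_complaints(msgs))  # [('performance', 2), ('billing', 1), ('ui', 1)]
```

Because the keyword lists are fixed and shared, two people "tagging" the same complaint can no longer land on different tags, which is the failure mode described above with free-form helpscout tags.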

jchung 5 hours ago 1 reply      
This definitely resonates. While feedback from Discourse users naturally takes place on Discourse, gathering feedback from users can often be much more difficult. We've enabled Usersnap recently, which has been somewhat helpful, but it hasn't created the kind of vociferous user-to-user and user-to-developer debates that you see on meta.discourse.
yoha 5 hours ago 1 reply      
Something is not clear about the character count requirement: did they set it to one or just finally found the right way to present it? If this is the former as the messages from the dialogs suggest, I think he should have highlighted the fact that the problem was not one of design (i.e. making users understand the limit), but functional (i.e. not forcing users to pad a too-short message).
dodger 3 hours ago 0 replies      
One time I was talking to a designer about doing it this way. The designer said "Yeah, that's a great way to find a (look of total disdain) _local_maximum_." As usual, some truth to both points of view.
sopooneo 4 hours ago 0 replies      
I feel like responding to user complaints can help bring you closer to the nearest local maximum. But it often won't bring you beyond that unless your users are themselves UX designers or developers.

So it's useful to make your MVP as good as possible, so that your users aren't forced to complain about the results of fundamental design shortcomings, which can result in more and more complex fixes, none of which should be necessary.

jackson1988 5 hours ago 0 replies      
The only thing I've ever seen work is getting down deep and dirty in the trenches with your users, communicating with them and cultivating relationships. That's how you suss out the rare 10% of community feedback that is amazing and transformative.

This is simple but practical advice I think far too many people ignore. You've inspired me with this article. Glad to see you've been successful from it!

johnny635 53 minutes ago 0 replies      
I've seen this work extremely profitably in the past. Hard to argue with results.
the1 5 hours ago 3 replies      
Why not hire good UX experts who can capture these complaints in the product design phase?

Contact me if you need help. I'm a good UXpert.

leaxdc 4 hours ago 0 replies      
At our company we prefer chair-driven development: if an estimate fails, the chief lashes out at you with his office chair.
sergiotapia 5 hours ago 3 replies      
Offtopic: I find it really tacky how this blog constantly links back to other posts of his hidden in the text.
Rails XSS vulnerability in number formatting (CVE-2014-0081) groups.google.com
16 points by bensedat  54 minutes ago   5 comments top 4
dave1010uk 13 minutes ago 0 replies      
I'm not sure I understand this. Are the number helpers now escaping for an HTML context? Isn't it best practice to escape user input just before outputting it (so you know the context) rather than in every helper function?

Disclaimer: not a RoR developer.
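The principle dave1010uk is describing can be sketched outside Rails: keep formatting helpers context-free and escape at the point of output, where you know you're emitting HTML. A Python illustration of the idea (this is not the Rails API; the helper and template names are invented):

```python
# Illustration of "escape at the output boundary, where the context is
# known," rather than inside every formatting helper. Not the Rails API;
# names here are hypothetical.
from html import escape

def number_with_unit(value, unit):
    """A formatting helper: returns plain text, performs no escaping."""
    return f"{value:,.2f} {unit}"

def render_html(template, **values):
    """The output boundary: escape everything as it enters HTML."""
    return template.format(**{k: escape(str(v)) for k, v in values.items()})

# An attacker-controlled unit is neutralized at output time, not in the helper.
unit = '<script>alert(1)</script>'
html = render_html("<td>{price}</td>", price=number_with_unit(1024.5, unit))
print(html)  # <td>1,024.50 &lt;script&gt;alert(1)&lt;/script&gt;</td>
```

The same helper output could be written safely to a log file or a plain-text email without double escaping, which is the advantage of deferring escaping to the context that needs it.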

dmix 13 minutes ago 1 reply      
Looks like some rarely used view helpers. I doubt many of these are used in production apps.

Hopefully no Bitcoin apps use the currency helper. But I imagine in the context of an exchange the numbers come from the blockchain or a wallet, and aren't user controlled in the way that could be exploited.

bensedat 39 minutes ago 0 replies      
Also just wanted to mention: if you run a Rails app it's worth subscribing to the rubyonrails-security google group. Low traffic except for blasts like these to alert you to urgent patches.
hayksaakian 17 minutes ago 0 replies      
cool helper functions, now that I want to use them, I'll need to update.
Is it time to move away from silicon-based solar? arstechnica.com
13 points by turing  46 minutes ago   1 comment top
outworlder 1 minute ago 0 replies      
What about lower efficiency, but much cheaper panels? What is it that drives price up so much? Production capacity? Market forces?
Why I Chose Academia ketyov.com
25 points by ohblahitsme  2 hours ago   12 comments top 4
michaelhoffman 28 minutes ago 0 replies      
This blog post is a year old. The writer is now an assistant professor at the University of California, San Diego.


cryoshon 1 hour ago 3 replies      
Love of science and research does not pay the bills, nor does an academic level salary.

The "choice" that the author made wouldn't really crop up unless he had help paying the bills.

Create 56 minutes ago 0 replies      
"How should we make it attractive for them [young people] to spend 5,6,7 years in our field, be satisfied, learn about excitement, but finally be qualified to find other possibilities?" -- H. Schopper

The numbers make the problem clear. In 2007, the year before CERN first powered up the LHC, the lab produced 142 master's and Ph.D. theses, according to the lab's document server. Last year it produced 327. (Fermilab chipped in 54.) That abundance seems unlikely to vanish anytime soon, as last year ATLAS had 1000 grad students and CMS had 900.

In contrast, the INSPIRE Web site, a database for particle physics, currently lists 124 postdocs worldwide in experimental high-energy physics, the sort of work LHC grads have trained for.

Let's not confuse students and fellows with missing staff. [...] Potential missing staff in some areas is a separate issue, and educational programmes are not designed to make up for it. On-the-job learning and training are not separated but dynamically linked together, benefiting to both parties. In my three years of operation, I have unfortunately witnessed cases where CERN duties and educational training became contradictory and even conflicting.


An unsatisfactory contract policy

This will be difficult for LD staff to cope with. Indeed, even while giving complete satisfaction, they have no forward vision about the possibility of pursuing a career


Pensions which will be applicable to new recruits as of 1 January 2012; the Management and CERN Council adopted without any concertation and decided in June 2011 to adopt very unfavourable measures for new recruits.


And a warning to non-western members:

"The cost [...] has been evaluated, taking into account realistic labor prices in different countries. The total cost is X (with a western equivalent value of Y) [where Y>X]"

source: LHCb calorimeters : Technical Design Report

ISBN: 9290831693 cdsweb.cern.ch/record/494264

ancientrepeat 1 hour ago 1 reply      
the author needs to chime in a few years later when he fully understands how big science actually works:

- most work is done by untrained and inexperienced graduate students, good luck understanding/reproducing the process

- most faculty are little more than grant submitting machines trying to land a grant at all costs regardless of what actually interests them

- most research review processes are incredibly biased, with countless people doing terrible jobs (the reviled "reviewer number 3"); a single negative review can sink a grant/paper acceptance

- most institutions are grossly monolithic and the rules and regulations are such that incompetent individuals can never be removed from any given position.

- most institutions are run as medieval lordships, with many smaller decision makers like deans, head of departments that have incredible influence on someone's career. It is great when the dictator is benevolent and unbearable if not.

Note how instead of paying a good salary the University chooses to give out handouts (lower childcare fees, lower rentals) - because those in turn are paid via taxpayer grants. It hides the fact that they pay so little that the people would qualify for food stamps.

A Sochi Olympics API kimonolabs.com
27 points by pranade  2 hours ago   19 comments top 6
bbx 2 hours ago 1 reply      
I wanted to build a simple calendar for the London 2012 Olympics (and actually built another one for the Sochi Olympics). Both times I ended up using a JSON file found on NBC's website.

Like the linked article suggests, it would be nice to have access to an official (and even simple) API, to dig up some interesting statistics or just have some fun playing with the data.

untog 2 hours ago 2 replies      
"Despite the expense and interest, there is no API"

There most definitely is. It's just not free.

blackdogie 2 hours ago 1 reply      
Using their Olympics logo is one quick way for you to get a cease and desist letter. http://registration.olympic.org/en/faq/detail/id/25 IANAL

Interesting data though !

johns 2 hours ago 0 replies      
Where is the data sourced from? Having had to secure rights to Olympic data at a past company, it's a minefield of legal and usage restrictions.
tonystark 2 hours ago 0 replies      
Sochi-mash anyone?
Eventjoy (YC W14) Is A One-Stop Shop For Organizing Events techcrunch.com
36 points by kevin  3 hours ago   15 comments top 4
_sentient 2 hours ago 4 replies      
Ridejoy, Pathjoy, Homejoy, Eventjoy. There's something about YC and a certain suffix...
brianbreslin 1 hour ago 1 reply      
As someone who organizes a lot of events and uses Eventbrite now, I would love to see a price break for sub-$10 tickets. On a $5 ticket, the $0.99 fee is almost 20%, and with the 5% in CC fees on top it gets tough to swallow.

So things I would want to see in order to switch from EB:

- Better ticket sales at the door (like Square integration)

- Better check-in/scanning systems

- More flexible ticket types

Hmm, I could go on, just need more coffee right now.

sidenote: I generate hundreds of dollars a month for eventbrite but their support is abysmal.

_delirium 2 hours ago 1 reply      
Obviously only one angle, but for paid events, looks slightly cheaper than the main incumbent, EventBrite: 5% + $0.99 (including CC processing) rather than 5.5% + $0.99.
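The flat fee is what makes the sub-$10 complaint above bite: it dominates at low prices and fades at high ones. A quick check of the rates quoted in the thread (assuming both fees apply per ticket):

```python
# Quick check of the fee numbers quoted in the thread: Eventjoy at
# 5% + $0.99 vs. EventBrite at 5.5% + $0.99, assumed to apply per ticket.

def effective_rate(price, pct, flat):
    """Total fees as a fraction of the ticket price."""
    return (price * pct + flat) / price

# On a $5 ticket the flat $0.99 alone is ~20% of the price, so all-in:
print(round(effective_rate(5.00, 0.05, 0.99) * 100, 1))   # 24.8 (%)
# On a $50 ticket the flat fee fades into the percentage:
print(round(effective_rate(50.00, 0.05, 0.99) * 100, 1))  # 7.0 (%)
```

So the 0.5-point difference between the two platforms is small compared to what a reduced flat fee on cheap tickets would be worth to organizers like brianbreslin above.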
kumarski 31 minutes ago 1 reply      
Who's the customer base for something like this? Why would I use this as opposed to one of the many other event apps out there?
Your Docker image might be broken without you knowing it phusion.github.io
102 points by jballanc  6 hours ago   59 comments top 21
xal 5 hours ago 3 replies      
The comments about the init process are true. It makes sense to run a proper PID1 system here such as runit.

I'd argue with the rest of the post. The problem is that Phusion makes the common mistake of thinking of containers as faster VMs. That's fine; this is where almost everyone starts when first looking at the Docker paradigm.

A good rule of thumb is: If you feel like your container should have Cron[1] or SSH[2], you are trying to build a VM not a container.

VMs are something that you run a few of on a particular computer. Containers are something that you will run thousands or tens of thousands of on a single server. They are a lot more lightweight, and loading them up with VM cruft doesn't help there.

[1] Cron: use the cron of the outer machine with docker run

[2] SSH: use lxc-attach

nailer 2 hours ago 1 reply      
Fascinating. My first inclination, when I started running Docker, was to run /sbin/init and launch a full systemd and all services.

I even asked on ServerFault (ie, StackOverflow for servers) about it and was told, quite aggressively, that running a full OS is wrong:


Addressed individually:

1. Reaping orphans inside the container.

Yup. If your app's parent process crashes, its child processes may now be orphans. However, in this case your monitoring should also restart the entire container.
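For anyone unfamiliar with the reaping issue under discussion: when a process's parent dies, the orphan is re-parented to PID 1, and if PID 1 never wait()s on it after it exits, it lingers as a zombie. A minimal sketch of the reaping half of an init process, in Python for illustration (this is not the baseimage-docker implementation):

```python
# Minimal sketch of zombie reaping, the part of PID 1's job being
# discussed: collect every exited child so none linger as zombies.
# (Illustrative only; not the baseimage-docker implementation.)
import os
import signal

def reap_children(signum, frame):
    """SIGCHLD handler: wait() on all exited children without blocking."""
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            return  # no children at all
        if pid == 0:
            return  # children exist, but none have exited yet

signal.signal(signal.SIGCHLD, reap_children)
# A real init would now spawn the workload; any orphaned descendants
# that later exit get collected by the handler above instead of
# accumulating as <defunct> entries in the process table.
```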

2. Logging.

Assuming you run your docker image in a .service file (which is what CoreOS uses as standard), systemd-journald on the host will log everything as coming from whatever your unit (.service) name is. So if you `systemctl start myapp`, output and errors will show up in `journalctl -u myapp` in the parent OS.
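The setup described here can be sketched as a unit file; the unit name, image name, and docker flags below are hypothetical and depend on the app:

```ini
# Hypothetical /etc/systemd/system/myapp.service. With this in place,
# the container's stdout/stderr land in journald under the unit name,
# so `journalctl -u myapp` shows them on the host.
[Unit]
Description=My application container
After=docker.service
Requires=docker.service

[Service]
# The leading "-" tells systemd to ignore failure if no old container exists.
ExecStartPre=-/usr/bin/docker rm -f myapp
# Run in the foreground (no -d) so systemd tracks and logs the process.
ExecStart=/usr/bin/docker run --name myapp example/myapp
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
```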

3. Scheduled tasks.

For things like logrotate, it really depends whether you're handling logs inside or outside the container. Again, I'd use systemd-journald in CoreOS, rather than individual containers, for logs, so they'd be rotated in CoreOS. For other scheduled tasks it depends.

4. SSHd

It depends. SSH isn't the only way to access a container, you can run `lxc-attach` or similar from the host to go directly to a container.

I do mention CoreOS here because that's what I use, but RHEL 7 beta, recent Fedoras, and upcoming Debian/Ubuntus would all operate similarly.

josh-wrale 4 hours ago 3 replies      
Cross-distro support notwithstanding, why not just skip Docker, LXC and VMs. Instead, use cgroups on bare-metal to make processes behave. On that note, forget bridging, use SR-IOV virtual functions with VLANs for QoS and _Profit_.

Edit: It seems this comment has been voted down. I think perhaps this is seen as irrelevant, but I would disagree, because Docker uses LXC and masks its function in much the same way as LXC uses cgroups and masks their function. cgroups can be used to achieve similar goals without these many layers of abstraction. In this way, I believe this comment to be relevant to the discussion of full vs. application containers on Linux. There are certainly many reasons for using containers, but one of the leading reasons is process limits (e.g. RAM, network namespace). Limiting process usage of those resources using only cgroups is quite easy in comparison to everything Phusion has gone through here to achieve something with similar (though admittedly different) aims. Example: http://www.andrewklau.com//controlling-glusterfsd-cpu-outbre...

Edit 2: I would also appreciate constructive criticism. That is, I've been downvoted without useful feedback. Specific feedback as to what is wrong with my comment would enable me to contribute more constructively to this discussion. Without such feedback, I believe the downvote can be seen as a simple and tribal "go away".

thu 4 hours ago 1 reply      
It may be a matter of opinion, but advocating running cron, sshd, and so on in your containers (let alone in every single one, by providing a base image to do that) seems plain wrong.

Let's take an example. You have Nginx, a web app, and a database. You can put everything in the same container or not. If you choose to put everything in different containers, you will be able to use tools at the Docker level to manage them (e.g. replace one of those processes).

And the fundamental idea is that we expect to have plenty of Docker images around that you can pick and play with, and those Docker-level tools will be able to manage all those things.

Now if you put everything in the same container, you're back to square one, reinventing the tools to manage those individual processes. You can say that you don't need to re-invent anything, because you're used to full-fledged operating systems. Still, if you have a nice story to deploy containers on multiple hosts, to send logs across those hosts, and so on, the road will be more straightforward when you decide to use multiple hosts.

This is about uniformity. I want processes (and containers around them), and hosts, that's it. I don't want additional levels. I don't want processes, arbitrarily grouped inside some VMs (or containers), and hosts. Two levels instead of three.

philips 5 hours ago 0 replies      
You really should not run ssh in your containers. If you have a ton of containers then key management and security updates of SSH will be a pain. There are two tools that can easily help out:

- nsenter lets you pick and chose what namespaces you enter. Say the host OS has tcpdump but your container doesn't. Then you can use nsenter to enter the network namespace but not the mount namespace: sudo nsenter -t 772 -n tcpdump -i lo

- lxc-attach will let you run a command inside of an existing container. This is lxc specific I believe and probably not a great long term solution. But, most people have it installed.

ewindisch 2 hours ago 0 replies      
I disagree with the premise that using Docker to run individual processes is "wrong". Phusion is doing a disservice by suggesting as much. There ARE use-cases where such a base image is useful, but I believe these should be the uncommon case, not the common one. Even so, if running multiple processes in a container is needed, it's preferable to use Docker-in-Docker.

I suppose part of the problem is that the two benefits of Docker and containerization are frequently confused. Docker provides portability and build bundling, but ALSO provides loose process isolation. You should want to take advantage of that process isolation and, by doing so, should want to run SSH or cron in their own containers, not in a single container with your application process. If your application has multiple processes, each should have its own container. These containers can be linked and share volumes, devices, namespaces, etc. Granted, some of the functionality one might desire for this model is still missing or in development, but much of it is there already, and that's the model I aspire for Docker to follow.

It might also be to some degree a matter of legacy versus green-field applications. For instance, I've been deploying OpenStack's 'devstack' developer environment (which forks dozens of binaries) inside of a single Docker container. In this case, the Phusion base-image might make sense. However, the proper way of using Docker would be to run dozens of containers, each running a single service.

The reason I don't do this is because the OpenStack development/testing tools provide this forking and enforce this model, using 'screen' as a pseudo-init process. From the Docker perspective, this is a legacy application. I could and probably will change those development tools to create multiple containers, but until then, it's easiest to stick to a single container.

markbnj 2 hours ago 1 reply      
I've only been working with Docker for a couple of months, and I find this discussion really interesting. The goal of trying to get containers to behave more like a full system across various lifecycle events is somewhat orthogonal to my own aims, which have been to get my containers as close to stateless as I can.

Like some other posters here I view containers less as a lightweight VM, and more as a process sandbox. In the context of a scalable architecture I would like a container to represent a single abstract component, which can be spun up (perhaps in response to autoscaling events), grabs its config, connects to the appropriate resources, streams its logs/events out to sinks, reads and writes files from external volumes, and runs until it faults or you shut it down.

Ideally there would be nothing inside the container at shutdown that you care about. After shutdown the container, and potentially the instance it was running on, disappear. Spinning up another one is a matter of launching a new container from a reference image.

So far, in cases where I have needed daemons running in the container, I have pointed my CMD at a launch script that starts the appropriate services and then launches the application components, typically using supervisord. That has worked fine, but I admit to not understanding the PID1 issue well enough up to this point.
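For reference, the launch-script pattern described above usually boils down to running supervisord in the foreground as the container's CMD, with a config along these lines (the program names and paths here are illustrative, not from the comment):

```ini
; /etc/supervisord.conf -- minimal sketch; program names/paths are made up
[supervisord]
nodaemon=true              ; keep supervisord in the foreground so it stays PID 1

[program:app]
command=/app/run-app       ; the main process, started without daemonizing
autorestart=true

[program:cron]
command=/usr/sbin/cron -f  ; -f keeps cron in the foreground too
```

The Dockerfile then just points at it, e.g. CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"].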

tel 2 hours ago 0 replies      
How does this play with the CoreOS premise, where each container should host a single process, managed intelligently through something like systemd?

Under this model I'd expect that systemd's cgroup support should help with zombie processes and generally take over many of the services that baseimage-docker is suggesting here. As others have mentioned in this thread, there's a fairly large difference of opinion between running containers like fast VMs or like thin layers around single processes; does baseimage-docker make sense only in the latter?

DanHulton 1 hour ago 0 replies      
Off-topic, but for a moment I thought I'd screwed up my DNS and this article had redirected to the silly side project I've been working on: ipaidthemost.com.

I guess we borrowed the same template?

brokenparser 2 hours ago 1 reply      
So if you run anything other than Ubuntu inside Docker, this is useless because the steps to build your own aren't outlined.

I find Docker to be horribly counter-intuitive and ass-backwards anyway, so not much harm done there, as people are in general better off with something else entirely (plain lxc, libvirt, virtualbox, xen, openvz...). I recommend steering away from it at least until 1.0 is out.

EDIT: I put it in my .plan to build a better BusyBox image aimed at running statically compiled programs with minimal baggage, but I'm not sure when I'll get a round tuit*

*: http://i.ebayimg.com/00/s/NDgwWDY0MA==/z/z-4AAOxyUrZSr82N/$_...

tomgruner 1 hour ago 1 reply      
Docker is a container for running processes, or a process. Containers should be disposable and transient. I have begun to think of it in terms similar to OOP: images are your classes, containers are your class instances. When you are done with an instance, you discard it and make a new instance. So don't go shoving all kinds of crap into the instance, like crons and sshd, that doesn't belong there.

Most devs working in interpreted languages don't expect their code to be free of memory leaks; the process just gets restarted. Likewise, Docker containers don't need to worry about child processes being stopped - they should just be disposed of, and you make a new container from your image. Keeping containers around would be like trying to pickle a Python class instance perpetually, along with its references to who knows what. Just make a new instance when you need it, and a new container when you need one. I use named containers and a Makefile that stops and deletes existing containers with the same name before starting a new one.
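The stop-delete-run cycle mentioned at the end might look roughly like this (the container and image names are invented for illustration):

```make
# Makefile sketch: replace any running container of the same name
NAME  := myapp
IMAGE := myorg/myapp:latest

.PHONY: run
run:
	-docker stop $(NAME) 2>/dev/null   # leading '-' tells make to ignore errors
	-docker rm $(NAME) 2>/dev/null     # e.g. when no such container exists yet
	docker run -d --name $(NAME) $(IMAGE)
```

`make run` is then idempotent: it always leaves you with exactly one fresh container built from the current image.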
the_mitsuhiko 2 hours ago 2 replies      
I don't understand the PID1 case. If you are running a single process, why do you have to collect zombies?

In fact, I understand none of these points. This all seems very hard to relate to. These are containers, not VMs. Most of that stuff should run in a separate container.

krakensden 3 hours ago 1 reply      
I'm pretty suspicious of using runit instead of Upstart- nobody tests Ubuntu with runit, and you're liable to get in trouble if you depend on some other service running on the machine. Although clearly it works well enough for them.

I also sort of suspect that the closer you are to running a full distribution in your containers, the less benefit you're getting from the containers.

zimbatm 5 hours ago 1 reply      
It will work but things are addressed on the wrong level in my opinion.

syslog: each container now has its own logs to handle. If you want them to be persistent/forwarded, it might be better if all containers could share the /dev/log device of the host (not sure of the implications though).

ssh: lxc-attach. Docker should expose that.

zombies: it's a bug in the program to not wait(2) on child processes.

cron: make a separate container that runs cron.

init crashes: bug in the program again. It's possible to use the host's init system to restart a container if necessary.
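To make the zombies point concrete, the failure mode is easy to reproduce outside Docker too. A rough Linux sketch (relies on procps `ps`; timings are arbitrary):

```shell
# A parent that forks a short-lived child and then never wait()s:
# 'exec sleep 2' replaces the shell, so nothing is left to reap the child.
bash -c 'sleep 0.2 & exec sleep 2' &
PARENT=$!
sleep 1                                    # give the child time to exit
STATE=$(ps -o stat= --ppid "$PARENT" | tr -d ' ')
echo "child state: $STATE"                 # 'Z' marks a zombie
kill "$PARENT" 2>/dev/null
```

Inside a container whose PID 1 behaves like that sleep, the Z entries accumulate until the container is destroyed; a PID 1 that wait()s on children makes them go away.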

rschmitty 2 hours ago 2 replies      
Is there a "explain Docker to me like I'm 5" post?

This seems like the old "I have problems with managing everything I need for my app so I'll just run docker containers. Now I have 2 problems"

hrjet 5 hours ago 2 replies      
Why not just use

CMD ["/sbin/init"]

And start your app through an init.d script?

The article says "upstart" is designed to be run on real hardware and not a virtualised system. If that is true, then perhaps there is value in baseimage-docker, but details are lacking.

willvarfar 5 hours ago 1 reply      
I've been trying to get some tools to run in a docker for a few days now. So far the problems have been that there isn't a convincing HOME folder and user, and that the locale isn't set (only explodes if there are unicode filenames, but there are plenty of those e.g. for SSL certs).

Does this script sort out those kind of things?

peterwwillis 3 hours ago 0 replies      
They just described implementing an OpenVZ VM.
jaybuff 2 hours ago 0 replies      
"Note that the shell script must run the daemon without letting it daemonize/fork it. Usually, daemons provide a command line flag or a config file option for that."

fghack is an anti-backgrounding tool: http://cr.yp.to/daemontools/fghack.html

pini42 4 hours ago 1 reply      
I think it is not related to Docker itself, but to the fact that it is using all-purpose Linux distributions. I'm pretty sure that very soon we will see an explosion of new distros addressing exactly this problem, built explicitly for running inside containers.
kapilvt 2 hours ago 0 replies      
Just use the ubuntu-upstart stackbrew image; it's compatible with all the packages, etc.
FundersClub Reports Unrealized Net IRR of 41.2% mattermark.com
29 points by dmor  2 hours ago   23 comments top 3
patmcc 1 hour ago 2 replies      
A whole lot can happen between 'unrealized' and 'realized'
robertk 2 hours ago 4 replies      
My application for membership just got rejected for not making enough money.

Rich get richer, I guess. If I was in a different mood I would write a blog post in outrage.

ZoF 2 hours ago 0 replies      
There's a typo on this[0] page. Just Ctrl-F: fo


How The Guardian successfully moved its domain to theguardian.com theguardian.com
106 points by malditojavi  6 hours ago   39 comments top 11
lawl 6 hours ago 3 replies      
The only clever thing they did was this:

> the Identity team started laying cookies on www.theguardian.com in advance. This was a nice touch because it meant that visitors would still be logged into the site when we eventually changed domain.

Everything else? Yeah, um, not very interesting. As they wrote themselves, there's a thing called 301 - Moved Permanently.

nodata 6 hours ago 1 reply      
> We attempted to speak with all our major referrers including search engines and social media.

Reworded: "Google don't have a phone".

eponeponepon 4 hours ago 2 replies      
The Graun's address will forever be www.grauniad.co.uk to me.

(note to confused and/or non-UK people: look up the magazine Private Eye)

cowchase 5 hours ago 0 replies      
I'm surprised they went live with this without an expiration date on their permanent redirects. Now there's no way back, even if anything breaks. Looks like an unintentional Big Bang launch to me.


nhangen 4 hours ago 0 replies      
Not a very helpful article. I was hoping they'd share how they managed the SEO portion in a way that would prevent a drop in rankings. They glossed over almost every point.
macspoofing 2 hours ago 1 reply      
>Our goal was simple: to serve all desktop and mobile traffic on www.theguardian.com and no longer serve any content on www.guardian.co.uk, m.guardian.co.uk or www.guardiannews.com"


So is the consensus that .mobi was one of the worst ideas in existence?

lukasm 5 hours ago 1 reply      
They need to talk to Sean Parker. He'll convince them to drop "the"
MichaelTieso 5 hours ago 1 reply      
Interesting that they contacted Yoast for SEO advice.
ChrisArchitect 5 hours ago 1 reply      
In previous years, say 5+ years ago, this was a scary concept to anyone working on sites and still believing in nonexistent SEO voodoo. But it has become commonplace and more than simple to 301 a site from one domain to another, updating the usual suspects like Google etc. to make sure it all goes smoothly. So nothing really that super here. Just nice to hear about the process behind the scenes and that everything was taken into account etc., as it should be.
paromi 5 hours ago 2 replies      
Their first byte is not that fast:


Also, many requests on the page.

napcae 5 hours ago 1 reply      
>If the host was www.theguardian.com, we would rewrite all the URLs on the site to be www.theguardian.com. If the Host was www.guardian.co.uk we would rewrite all the URLs on the site to be www.guardian.co.uk.


Fun with Zurl, the HTTP/WebSocket client daemon fanout.io
22 points by jkarneges  2 hours ago   discuss
Automated deployment with Docker lessons learnt hiddentao.com
43 points by goblin89  4 hours ago   2 comments top 2
jfoutz 2 hours ago 0 replies      
I'm just starting to grok the process container instead of VM model. It's valuable to read a writeup like this, it really gives a sense of the different flavor process containers have in production.

I'm still a little mystified by complex setups. Installing stuff as the correct user, adding a big group of database users, stuff like that seems pretty tedious in shell. I guess that's more of a provisioning issue though.

Articles like this make me realize i don't care much what the environment actually is, I care about getting that environment configured correctly with as little effort as possible.

andyl 2 hours ago 0 replies      
We're starting to move our system into containers - early days and there is a lot to learn. It feels like we are in the Cambrian explosion of containerization - many different theories/opinions on best practice. Articles that describe real-life experiences are really valuable.
Rails Data Injection Vulnerability in Active Record (CVE-2014-0080) groups.google.com
8 points by gkop  52 minutes ago   discuss
A Simple Pair Programming Setup with SSH and Tmux collectiveidea.com
43 points by trestrantham  4 hours ago   14 comments top 6
nviennot 46 minutes ago 1 reply      
http://tmate.io/ solves that problem, and does it a little better since it goes through firewalls :)
digitalsushi 2 hours ago 1 reply      
My huge takeaway is that you can specify the shell for each key you allow to connect. I have been tweaking /etc/shells like a dope for the past 15 years. So what this means is that, when you add a key to your authorized_keys file, you can also set an optional parameter that forces the command they are going to run (overriding any command they thought they would run instead).
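Concretely, an authorized_keys entry with a forced command looks something like this (the tmux session name and key material are placeholders):

```
# ~/.ssh/authorized_keys -- the command= option pins what this key may run,
# overriding whatever command the client asked for
command="tmux attach -t pair",no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... pairing-key
```

The extra no-* options are optional but a good idea for a key you're handing to a pairing partner.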
skywhopper 2 hours ago 1 reply      
Pretty slick stuff. Fair warning: your "sed -i.bak" lines are only going to wind up with the next to last version backed up since each update to sshd_config will overwrite the last sshd_config.bak.

You could chain all the sed changes together into one command (tested on Linux, OSX's sed might need some tweaks):

    sed -i.bak 's/^#\?\(\(ChallengeResponse\|Password\)Authentication\).*$/\1 no/' /etc/sshd_config
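One way to sanity-check that pattern against a scratch copy before pointing it at the real config (GNU sed; the path is just an example):

```shell
# Build a throwaway config with the lines the pattern should rewrite
cat > /tmp/sshd_config.test <<'EOF'
#PasswordAuthentication yes
ChallengeResponseAuthentication yes
Port 22
EOF

sed -i.bak 's/^#\?\(\(ChallengeResponse\|Password\)Authentication\).*$/\1 no/' /tmp/sshd_config.test
cat /tmp/sshd_config.test
# PasswordAuthentication no
# ChallengeResponseAuthentication no
# Port 22
```

Both authentication lines get forced to "no" (commented or not), and unrelated lines like Port pass through untouched.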

wging 3 hours ago 4 replies      
Letting people onto my machine makes my skin crawl. I'd rather use something like Syme: https://syme.herokuapp.com

Seajure, the Seattle-area Clojure user group, uses Syme, and it seems pretty effective.

carrja99 2 hours ago 0 replies      
Another tool I tried out for pair programming remotely with great success was Floobits (https://floobits.com/) and their tmux-like terminal plugin, flootty.
busterarm 3 hours ago 0 replies      
As someone who would prefer to not have to use my Mac anytime I want to pair with people (I'm primarily a command-line-only Arch guy), this is super useful to me.

I was way too lazy to figure this out on my own.

Show HN: An easy-to-use Text Analysis API NLP and Machine Learning aylien.com
95 points by parsabg  7 hours ago   63 comments top 26
gklitt 7 hours ago 2 replies      
Cool stuff! It's nice to see platforms like this which abstract out good algorithms, so that developers can worry about thinking of interesting applications. Open source libs are even better, but pragmatically speaking, I think these types of platforms probably move faster and get better results.

One major competitor (well known for anyone who's looked into this stuff) is Alchemy [1]. I tried a New York Times link [2] on Aylien and Alchemy, and Alchemy performed much better -- in fact, Aylien didn't even successfully find the article body. I'm sure you guys will be iterating on improving the algorithms, but just wanted to flag that as a potential turnoff for anyone comparing your website demo with Alchemy.

Best of luck!

[1] http://www.alchemyapi.com/products/demo/

[2] http://www.nytimes.com/2014/02/18/world/middleeast/bombings-...

fnl 5 hours ago 0 replies      
Seen quite a few times (NLP web APIs), and my opinion is that this kind of stuff tends not to be scalable: to be useful, such web APIs have to be able to process entire articles in a split second. Although I am not sure (because of the HN storm the API is down), it does not seem this tool will live up to those expectations either. In the end, my choice has always been to include/wrap an off-the-shelf tool in my own pipeline rather than relying on an external service that might be too slow for end-users and mass mining alike...
crypto5 19 minutes ago 0 replies      
Maybe somebody will find my pet project useful and relevant: https://github.com/crypto5/wikivector . It uses machine learning and Wikipedia data as a training set, supports 10 languages, and is completely open source.
mattmcknight 6 hours ago 1 reply      
These sorts of things are typically better offered as libraries, particularly as the training is usually specific to a corpus, or a particular context.

It would be nice to offer a library with a bootstrapped training set.

iamwithnail 13 minutes ago 0 replies      
Annnnnnd that's my thesis sorted. Part of it anyway.
bane 3 hours ago 0 replies      
Playing around with it, I seem to have killed it by pasting the text from this WP article (http://pastebin.com/AtCU7E8H) and hitting analyze. It's been spinning for a while.

Edit: I see from another response that the server room is in meltdown; I'll wait for a bit.

blueblob 4 hours ago 1 reply      
What do you use for the extraction of entities (if you don't mind saying)? I entered "The Cat in the Hat" is a good book. It didn't recognize any entities. Are you using an ontology for named entity resolution, or just extracting NPs?
zvanness 7 hours ago 1 reply      
Hey guys! Congrats, NLP is a huge problem that needs as many minds working on it as possible.

Just tried a few links:



Am I missing something here? It seems like it's just parsing text; I'm not seeing any context (keywords, categories, summaries).

edit: It's giving fantastic results when pasting the raw text! :)

Are you guys using DBpedia? It's giving very similar results to a system I was working on in the past: http://www.zachvanness.com/nanobird_relevancy_engine.pdf

polskibus 3 hours ago 1 reply      
There are more and more text analysis APIs; would you mind comparing your feature set to something like TextRazor (http://www.textrazor.com) or OpenCalais?

What is special about your project?

kenshiro_o 6 hours ago 2 replies      
Unfortunately the web site is still analyzing the example Techcrunch link (it's been 3 min already).

Is something broken? Maybe you could cache some recurring analyses.

cliveowen 7 hours ago 3 replies      
"There was a time when men could roam free on earth, free from concrete and tarmac. Now it's all gone to shit."

Classification: arts, culture and entertainment - architecture. (WTF?)

Polarity: positive. (Nope)

Polarity confidence: 0.9994709276706056. (Well...)

Looks pretty rough to me.

syllogism 6 hours ago 1 reply      
Do you publish accuracy figures? Any information about what domains your training data is from?
drakaal 3 hours ago 0 replies      
This is a much better Noun Phrase / Entity extractor.


We don't rely on CoreNLP, or NLTK, we have our own sentence disambiguation, and our own part of speech tools. So we are a lot faster.

Our other api's let you piece together a lot of cool NLP projects with very little code.

analytically 6 hours ago 0 replies      
Another player in this space, from Oxford, UK: http://apidemo.theysay.io/
skiplecariboo 7 hours ago 1 reply      
Super nice !

This is a very interesting area... Good to see something new apart from Alchemy and OpenCalais!

mrg3_2013 5 hours ago 1 reply      
I tried bbc.com and nothing shows up. Is it supposed to work on top-level links and summarize?
cglace 6 hours ago 1 reply      
I posted a couple of paragraphs from a financial blog and the tool interpreted SEC to mean Southeastern Conference.
parsabg 6 hours ago 0 replies      
Thanks for the feedback, folks. FWIW, here's the documentation (/ NLP crash course!): http://aylien.com/text-api-doc
lukasm 7 hours ago 3 replies      
HN - the ultimate DDOS machine
afshinmeh 7 hours ago 0 replies      
One of the most stunning things I've seen. Good job.
adventured 7 hours ago 1 reply      
How is this superior to Alchemy?


moron4hire 7 hours ago 1 reply      
Should I have not tried it with a 3000 word essay I wrote? It has been beachballing for the last 5 minutes or so.
mm0 6 hours ago 0 replies      
sell it to a bank $$$
jhbellz 7 hours ago 1 reply      
pretty cool - what languages does your API support?
jackson1988 5 hours ago 0 replies      
This is incredible!
hamed_r 7 hours ago 0 replies      
Java SE 8 Date and Time oracle.com
56 points by javinpaul  5 hours ago   73 comments top 9
bane 4 hours ago 6 replies      
I've recently been sitting down and relearning Java. Not good enough to be idiomatic in the language, but whatever. One of the first things I banged into was the unbelievably bad date/time libraries that are built in. After banging my head for a couple hours trying to figure out the magical combination of classes I needed to assemble to get a date formatter to work, I just ended up rolling my own in half an hour.

I'm frequently surprised at the really rough and inconvenient bits in Java. Weird inconsistencies in the libraries; not having a really convenient set of file read/write methods, so you have to cobble together bits and pieces of an I/O system to get a directory listing or read in a file; or how variable performance is between two similar-looking pieces of code (if you haven't done lots of benchmarking on Java standard library containers, I urge you to do it and make careful container selections based on that - it's frequently surprising how differently otherwise identical-looking code runs).

Considering there are what, 3 complete GUI toolkits built in, why isn't there a built in CSV parsing library, or a "read file to string" static method somewhere? Why do I have to put so much effort into basic tasks? It's such a weird and uneven and sloppy feeling thing.

There are all these weird, aggregated ways to do the same thing, each built to fix some problem with an older solution, but the older, broken ways were never really deprecated, for compatibility. Unless you run across some guide that says explicitly "use this instead of that because of <reasons>", you might never know the newer version exists. Yet the new releases include so many compatibility-breaking syntax changes that it hardly matters in practice.

There's bits and pieces of related but complete solutions piled all over the library as well. e.g. Regex bits are in String, java.util.regex.* (Pattern and Matcher) and probably elsewhere...and don't get me started if you're moving back and forth between arrays and the various containers that make arrays more usable, and then all the utilities to help with that which are scattered all over the place. I spend half my time writing code to abstract all that nonsense away so I can write the main code logic in peace.

And then over the years the concepts about how to design an API have changed or something, because you can feel different stylistic concepts in different places. Here you instantiate an object, then set it up, then build another object of this type to catch the results and do some other magic. There you instantiate the object with all the important bits and manipulate it with local methods. It's like each class requires its own style guide. I can understand that with 3rd-party work, but it doesn't make sense for the batteries that come with the language to be so uneven.

It was probably 10 years ago that I last tried Java, and it sucked back then, with all the verbosity etc. But with modern IDE support I actually kind of like the flow and style of it to some extent. It's a beautifully simple syntax to use at its core. But then again I don't care about all the FactoryFactory nonsense. And I'm avoiding lots of the new stuff that doesn't really fit into the language.

I've actually started to become convinced that it's getting to the point that Java 10 or whatever should be a single minded house cleaning. Jettison all the broken old shit, clean up the style and usage, build decent syntax into the language for doing common tasks so the coder doesn't have to boilerplate themselves to death. Take 5 years to do it, the enterprise will survive that long.

edit I wonder if the idea of an "API editor" to vet the interfaces for consistency and style in these large standard libraries makes sense?

apaprocki 4 hours ago 1 reply      
Is there any talk in the Java world of decoupling the runtime from the tzdata distribution? AFAIK, it is still coupled together and requires a tool to update the data files or you have to apply JRE updates (which must be published in time).

The tzdata is updated 10+ times a year, and a company actually operating in most of those zones needs a straightforward way to push out updates not only across all production systems, but across all production languages. This makes solutions that "transpile" the binary data into native source, or otherwise embed the data into resources, much more operationally expensive. Native solutions (e.g., pytz in Python) that can be pointed at a directory to pick up the tzdata binary files are a good middle ground, as they decouple the logic from the data.

Operationally, you need to be able to deploy the updates very quickly. Every year there are data updates that occur a few days before the DST change. If you have a lengthy dev/beta/prod rollout process to a very large number of machines, this can bite you. Just this week, Chile and Turkey are making changes, so prepare to update :)

al2o3cr 5 hours ago 1 reply      
"Time zones, which disambiguate the contexts of different observers, are put to one side here; you should use these local classes when you don't need that context. A desktop JavaFX application might be one of those times."

WAT. Even if you assume that your user has exactly one TZ, how does this work with DST / summer time?

bluedevil2k 5 hours ago 2 replies      
java.util.Date wasn't broken per se, since people have been using it since Java first came out. It's just not pleasant to work with: the naming conventions are wrong, it's short-sighted in terms of international dates, etc. Classic example of what's wrong with the old API - the months are 0-indexed.
beachwood23 5 hours ago 5 replies      
If Java Date was so broken before this, what have businesses been using the past few years? The article mentioned Joda-Time, but I'm curious if companies have used other solutions, or even built in-house solutions.
gldalmaso 3 hours ago 1 reply      
>> The new API avoids this issue by ensuring that all its core classes are immutable and represent well-defined values.

Is there any talk of adding immutability to the core language? The "final" keyword doesn't really cut it for objects, and immutability by convention is not easy to enforce on large teams.

fenesiistvan 4 hours ago 3 replies      
Good, but we expect much more from a major Java release (I know there are also some other improvements, but not too many). I would even welcome an applet comeback (with some fresh ideas and some strong answers to the browser-security issues, keeping the good old Java stack, not just the name, as in the case of JavaFX).
nogridbag 4 hours ago 0 replies      
For those anticipating upgrading to Java 8 but would like to start using the new API now with JDK7, you might want to look at using the threeten backport so you can transition to the new Java 8 date API easily in the future.



moron4hire 5 hours ago 3 replies      
20 years to finally get this right. TWENTY YEARS, MAN!
Random Seeds in Ubuntu 14.04 LTS Cloud Instances dustinkirkland.com
24 points by jcastro  3 hours ago   8 comments top 3
zimbatm 2 hours ago 1 reply      
TL;DR: PRNG seeds in the cloud are somewhat predictable. Because sshd generates its keys on boot, it's possible to guess the private key on a fraction of cloud hosts.

Ubuntu 14.04 LTS solves the problem by adding a new source of entropy: an early-boot (before sshd) service that fetches data from an external server. In short: `curl http://some-server > /dev/urandom`

EDIT: Looking for the default server but launchpad seems to be down. Ideally it would be a trusted source like the cloud provider themselves.

EDIT2: https://entropy.ubuntu.com/ and the public cert is provided with the package.

bmm6o 2 hours ago 0 replies      
The links in the slides are borked; there seem to be 3 stray bytes at the end of each shortened URL.
hiphopyo 1 hour ago 4 replies      
Choose OpenBSD for your Unix needs. OpenBSD -- the world's simplest and most secure Unix-like OS. Creator of the world's most used SSH implementation OpenSSH, the world's most elegant firewall PF, and the world's most elegant mail server OpenSMTPD. OpenBSD -- the cleanest kernel, the cleanest userland and the cleanest configuration syntax.
Bizarre Shadowy Paper-Based Payment System Being Rolled Out Worldwide ledracapital.com
206 points by user_235711  5 hours ago   225 comments top 20
hapless 4 hours ago 15 replies      
Satire works when you hold folly up to ridicule. Central banking is possibly the greatest human invention since sanitation. The U.S. central banking system ranks among the finest of its kind.

Bitcoin handles 7 transactions a second on a good day, has no reliable institutional actors, and I can neither pay taxes nor satisfy court judgments with it. It is an impressive proof-of-concept for decentralized trust in cryptosystems, but it is hardly a currency.

yoha 4 hours ago 4 replies      
I don't think it will get popular. It needs too much infrastructure for emission and tracking. You will need to manually find a way to get to the right amount using such a restricted set of values [1]. Additionally, you cannot make backups and it is very easy for lose or destroy some of these "bills". It's a nice idea, but there is not a chance it will gain value. I don't think it's worth investing any real money in it.

[1] https://en.wikipedia.org/wiki/Change-making_problem

Tloewald 3 hours ago 0 replies      
The article is idiotic and disingenuous. Paper money didn't attempt to replace coin in one fell swoop. Paper money evolved from letters of credit which make perfect sense, and bitcoin fundamentally relies on similar concepts anyway.

You can't buy something online by just anonymously transmitting credits into an account -- who gave us the money? what do they want? You fill in an order form, a bunch of stuff happens, you get a receipt and a product and money gets transferred at some point. Very little of that is the currency transaction. You needed to generate a letter that said I want X and I'll pay you Y, then you paid Y and sent confirmation of the money transfer, and so on.

The bitcoin bulls think that Credit Cards are a terrible way to buy things because they charge too much for what they do (and their security sucks). This may be true, but bitcoin is only solving the easiest part of the problem. Banks can already transfer money around cheaply and securely. Indeed even market transactions are so fast and inexpensive that high speed trading is now a huge economic force.

The fact is that credit cards (a) let you buy stuff on credit, (b) provide transactional support allowing commerce to proceed smoothly, and (c) already work. A credit card is merely automated checking with overdrafts, which is a direct descendant of the letter of credit (from which cash evolved), which is in fact a more fundamental method of trading than barter. (Tracking people's accounts is the reason writing was invented.)

leobelle 4 hours ago 5 replies      
Ever since POS hacks on Target and other retailers I've completely switched to cash only. Not only can't you be tracked, something I'm not sure I care about, but it reduces the chances of somebody stealing your card info.

And I only use wells fargo ATMs, because they have a nice green glowing card input, so you know nobody put a malicious card scanner on the ATM.

It's a complete reversal from a few years ago where I wouldn't carry cash and wouldn't go anywhere that was cash only. Going cash only also reduces the fees for stores where you buy things.

Cash is the way to go, despite all of our technology.

kordless 4 hours ago 3 replies      
This article is entertaining, interesting and relevant. Humans are exceptionally good at rationalizing their biases, especially if they involve trust, and interest is required to alter those biases in a positive way. The biggest challenge for cryptocurrencies today is raising interest and trust in the general population. We can best effect that by working hard to raise interest in cryptocurrencies in a positive way.

This is more commonly known as "marketing".

acconrad 4 hours ago 1 reply      
My issue with fluff pieces like this is that there are plenty of established tools in our lives that would not be available if they started out today. Acetaminophen (Tylenol, by its brand name) is often touted as a drug that would never pass modern FDA regulations if it were introduced today, due to its issues with liver toxicity and link to Reye's Syndrome.

We make products and services with what we have available at the time; that doesn't necessarily mean that they are an indefinite solution, nor does it mean that we are going to hold our archaic solutions to the same standard we do our new and improved solutions.

ashray 5 hours ago 1 reply      
Amazing post! I wonder why everyone is called Mike Smith in the article though? Mike Smith - VP of this, Mike Smith - a tourist, Mike Smith - etc.?

Something else that wasn't covered is that cash is also dirty and carries grime and infections. Ideally you should wash your hands before you eat if you handle any cash. (this is less of a problem with laminated/plastic notes like the CAD or EUR)

Imagine a super virus propagating and killing people through cash. Ouch!

talmir 3 hours ago 0 replies      
This reminded me of the fact that I haven't handled, or even touched, real paper money for at least a year now. Here in Iceland paper currency is slowly being phased out, and has been for a few years.
teddyh 3 hours ago 0 replies      
The I Can Barely Draw webcomic has recently had a number of comics in the same vein as this, starting here:


beobab 4 hours ago 1 reply      
Sounds like a great idea. I'd love to get some of this "cash" stuff everyone keeps talking about.
the_watcher 3 hours ago 0 replies      
Obviously hyperbolic and not a perfect comparison, but it gets enough right to be pretty damn clever. The "hardware wallet" security flaws and the anonymity-facilitating-criminals bits are particularly enjoyable.

The more policy-oriented economic points aren't nearly as strong, but overall, I thoroughly enjoyed it.

Just to be clear: I like Bitcoin a lot, but I don't think it will ever replace national currencies. I just enjoy pointing out the hypocrisy of a lot of bitcoin detractors.

keithgabryelski 4 hours ago 3 replies      
go home bitcoin -- you're drunk.

bank notes are a very simple system, so simple that they have been used for over a thousand years.

raverbashing 4 hours ago 1 reply      
This is the premise behind the subreddit r/ActualMoney

Satire is an interesting way to analyze things, especially in this case.

Sheepshow 1 hour ago 0 replies      
What a circle jerk.
bertil 5 hours ago 1 reply      
[This is satire, imagining cash introduced now and mocking Bitcoin's critics. Some ring true; some miss the two-sided market issues behind adopting new payment forms.]

To be honest, I've been in Nordic countries for a year (where all transactions are card-based, and there is no minimum amount for payment) and that situation rang surprisingly true, especially after yesterday, when I was facing a coat-check attendant who was expecting ten crowns in cash.

Eleutheria 3 hours ago 0 replies      
The good.

Money is good, in any form. Banks are good, they help allocate resources in a capitalist economy. Central banks are good, they can help other banks in need.

The bad.

Fractional reserve banking is robbery. Inflation is robbery. Printing money is robbery.

The ugly.

Government, the Fed, banksters: they are a mafia, a band of thugs writ large.

jcslzr 2 hours ago 1 reply      
They say that is why they sank the Titanic: on board were all the powerful people who opposed the creation of the Fed (I personally don't think an iceberg can do that). Anyway, one year later the Fed was born.
justinzollars 2 hours ago 0 replies      
I'll take any unwanted bills. :)
williamcotton 4 hours ago 2 replies      
Our currently existing currencies evolved over time. Central banking and fiat currency were institutions delivered by liberal democracies by the process of public debate.

They were not harebrained ideas established by anonymous individuals engaged in private debate outside of the forums of democracy.

Look, I know all of you are very excited about these experimental economic and political systems, but please realize that the existing world that you are "suffering" under was mainly developed as a slow process of evolution.

Most extreme revolutionary ideas take a long time to work their way into the existing societal structures and way of life. When they're quickly forced onto people, all fucking hell breaks loose.

So for fuck's sake, get some balance and control your zealotry, people!

And read some history books! The future is built from the past no matter how long or hard of a process that seems to be. Go with the flow!

Engage with other people. Engage with the existing institutions. Even if it is a better idea, there are billions of people who rely on the current system in ways that you can't predict.

alayne 5 hours ago 5 replies      
Bitcoins (and others) aren't backed by a government. One of the major exchanges used to trade Magic the Gathering cards (You know, for kids!). Bills also have serial numbers that provide some amount of traceability, at least between Federal Reserve branches and banks.
Ladonia: An Illegally Created Nation Where Creativity Rules slate.com
41 points by wikiburner  5 hours ago   37 comments top 8
ivanca 1 hour ago 3 replies      
Such a moronic title: all nations are created illegally. If you believe otherwise, show me the paperwork for the USA's territory, signed by all previous residents (Native Americans)... and no, their graves do not count as a signature under any legal definition.
dijit 3 hours ago 6 replies      
I had considered the possibility of purchasing an island and having actual people living and working with me.

a bunch of my friends on IRC were very keen on the idea, although I suspect they don't know what burden it would entail.

yes, we'd be able to set up our own privacy policies and, yes, we could have super-fast internet, lay our own fiber, and build infrastructure of that nature.

however, agriculture, the bureaucratic hoops we'd have to jump through to successfully secede, and the general hard work and labour that would have to go in are, I believe, unaccounted for.

this is a cool concept, and micronations are a nice idea, I just wish I could find a plot of land that's not owned, I'd definitely put a lot of hard work into getting out of my country.

supersystem 1 hour ago 0 replies      
"Goaded, Vilks ignored the announcement and decided to take control of the area and secede from Sweden"

The whole secession would be somewhat more believable if the police protection of Vilks didn't cost the Swedish taxpayers about a million USD per year.

stevesearer 2 hours ago 4 replies      
I might be reading this story incorrectly, but it seems like Ladonia is just a normal piece of property which is owned by artists who issue "citizenship".

Is there some other quality I am missing that separates it from any other piece of property someone owns, creates a website for, and calls a micronation?

donretag 2 hours ago 1 reply      
At first, I thought it was a nation created by those that speak Ladin, which would not be surprising considering the autonomous status of the region.

[1] http://en.wikipedia.org/wiki/Ladin_language

owenversteeg 2 hours ago 1 reply      
Does anyone else hate how the horizontal scrollbars make you want to scroll left (with your arrow keys) and then it turns out that this is the command to go to another article?
lcasela 4 hours ago 2 replies      
We need more micronations.
marc0 1 hour ago 0 replies      
I consider it an enormously antiquated idea to bind nations to a piece of land. We should try to arrive at a more abstract definition. And yes, of such nations we need many more.
There Are Whales Alive Today Who Were Born Before Moby Dick Was Written smithsonianmag.com
187 points by spking  12 hours ago   71 comments top 13
srean 9 hours ago 9 replies      
Moby Dick's kin, the sperm whales, are incredibly interesting. One of their remarkable abilities is to dive deep, fast and long.

Among all free-diving warm-blooded animals they go the deepest. They dive to depths 25 times deeper than their other equally famous and endangered cousin, the blue whale. The blue whale is the largest known animal ever to have inhabited the earth.

To give an idea of how deep they dive, here is a picture: http://i.imgur.com/ESp2j.jpg (it needs to be magnified for perspective, and for the little surprise at the bottom).

It is interesting how they manage to hold their breath for so long and yet manage to survive the bends (decompression sickness).

The whales are seriously challenging our assumptions about animal intelligence, empathy, society, culture and language. For a long time we believed that the primates were at the top. Search TED talks and YouTube for dolphin intelligence; don't miss the Attenborough ones. For lack of a better word, they are just amazing.

Dolphins are for example known to build difficult to make toys (air bubble vortex rings) just to entertain themselves.

They have to discover how to make it. Sometimes they can be quite possessive; they will break the toy if someone not so knowledgeable wants to play with it. Once a dolphin figures out how to make one, his/her peers eventually figure it out too. So it kind of spreads within a group, like fashion. This behavior has been observed both in captivity and in the wild.

Dolphins in captivity try to imitate us and seem to have no trouble mapping our body parts to theirs. A story goes that a scientist observing a young dolphin from an underwater portal had blown a cloud of cigarette smoke at it. The dolphin promptly went to the mother and did the same to the scientist with milk! It is now strongly believed that they call each other by name. They try to imitate human speech, which takes enormous effort on their part because, unlike parrots for example, their vocal tract is not conducive to this at all. People believe this to be an indication of their strong desire to communicate with us.

And they originated from ungulates: hoofed, warm-blooded animals. It came as a surprise to me that there were hoofed carnivorous animals.

exratione 7 hours ago 0 replies      
The Methuselah Foundation has funded one of the research groups interested in comparative studies of the genetics of longevity, helping them to obtain the resources to sequence bowhead whales. This is one of those lines of work that is next to impossible to get funding for from the normal institutional channels at the present time:



Given the declining costs of DNA sequencing, all kinds of research that used to be prohibitively expensive even a few years ago is now becoming possible. For example, we recently awarded a $10,000 research grant to Dr. Joao Pedro de Magelhaes at the University of Liverpool to sequence the genome of the bowhead whale in order to study mechanisms for longevity in this warm-blooded mammal whose lifespan is estimated at over 200 years.

Not only are bowhead whales far longer-lived than humans, but their massive size means that they are likely to possess unique tumor suppression mechanisms. "These mechanisms for the longevity and resistance to aging-related diseases of bowhead whales are unknown," says Dr. de Magelhaes, "but it is clear that in order to live so long, these animals must possess aging prevention mechanisms related to cancer, immunosenescence, neurodegenerative diseases, and cardiovascular and metabolic diseases."

The bowhead whale study will be conducted at the state-of-the-art Liverpool Centre for Genomic Research and results will be made available to the research community.

samizdatum 9 hours ago 4 replies      
The long lifespan of whales could actually shed some light on human evolution.

Whales, along with many other mammalian species (including humans) exhibit a perplexing divergence of somatic and reproductive senescence. Female whales hit menopause long before their lives are over, in some cases spending the majority of their lives in a non-reproductive state, which prima facie seems rather maladaptive.

A number of hypotheses have been proposed to explain what seems like widespread evolutionary selection for menopause, and none of them are completely satisfactory. The "grandmother hypothesis", for example, posits that experienced grandmothers assist in the care of their grandchildren, increasing their odds of survival.

Certain species of whales, including Orcinus orca, the killer whale, exhibit early-life menopause, and form stable matrilineal groups, making them ideal candidates for testing the grandmother hypothesis. Interestingly, studies on killer whales observe no significant correlation between living grandmothers and grandoffspring survival rates, though there are plenty of unaddressed confounding factors.

Humans are the only species where the grandmother hypothesis is supported by data, but the dearth of corresponding data in whales suggests the dramatic disparity in our somatic-reproductive senescence might be more strongly selected for by factors we are not yet aware of.

madaxe_again 7 hours ago 0 replies      
There are clams alive today who hatched while the Ming dynasty was extant.

There are trees alive today which sprouted ten thousand years ago. Hell, Pando (albeit a clonal colony) could be 1,000,000 years old. http://en.wikipedia.org/wiki/Pando_(tree)

Astounding depths of time for a single organism to persist over - but ultimately dependent on a very sedate pace of life.

zipfle 8 hours ago 1 reply      
Interesting that their evidence -- a stone arrowhead found in a whale -- is actually also described in Moby Dick itself:


It so chanced that almost upon first cutting into [a whale, not Moby Dick] with the spade, the entire length of a corroded harpoon was found imbedded in his flesh, on the lower part of the bunch before described. But as the stumps of harpoons are frequently found in the dead bodies of captured whales, with the flesh perfectly healed around them, and no prominence of any kind to denote their place; therefore, there must needs have been some other unknown reason in the present case fully to account for the ulceration alluded to. But still more curious was the fact of a lance-head of stone being found in him, not far from the buried iron, the flesh perfectly firm about it. Who had darted that stone lance? And when? It might have been darted by some Nor' West Indian long before America was discovered.

gutenberg.org full text:


antimagic 7 hours ago 1 reply      
And they still haven't read past the first fifty pages because, like everyone else, they get bored and put the book down.

Seriously though, I find it remarkable that bowhead populations have come back so fast considering how long they live. The oceans must have been absolutely teeming with them back in the day, if they reproduce that fast and live that long.

the_watcher 3 hours ago 0 replies      
Aquatic mammals are fascinating. They are so clearly mammals, yet they've adapted the mammalian traits that we take for granted as land-based evolutions to living in the water. I'm reminded of the bomb sniffing dolphins right now.
001sky 10 hours ago 0 replies      
Not related to the content, but the construct and display of this article is less than conducive to reading it.
whistlerbrk 4 hours ago 0 replies      
Somewhat off topic... but I encourage everyone to watch the Blackfish documentary. These are incredible, amazing, highly intelligent animals. Different nations have their own languages. They know who we are, they know what we're doing to them, they know when we're making them do tricks for food for others' amusement.
lifeisstillgood 10 hours ago 1 reply      
And today is the day of vengeance !

"Planet of The Cetaceans"

vrypan 8 hours ago 0 replies      
Slightly off-topic, but I can't help it: This is exactly why we picked the bowhead whale for our mascot at www.longaccess.com. :-)
chris_wot 9 hours ago 1 reply      
We really should kill them to examine them for scientific purposes. Plus, whale blubber is yum!
Are we shooting ourselves in the foot with stack overflows? embeddedgurus.com
218 points by nuriaion  13 hours ago   118 comments top 21
kens 12 hours ago 8 replies      
If I'm reading the testimony correctly, there is actually no evidence that a stack overflow caused unintended acceleration. The idea is that Toyota used 94% of the stack, and they had recursive functions. If (big if) the recursive functions used enough stack to cause an overflow, memory corruption could happen. If (big if) that memory corruption happened in exactly the right way, it could corrupt variables controlling acceleration. And then, maybe, unintended acceleration could occur.

But that's a far cry from the stack overflow actually causing any cases of unintended acceleration.

bjourne 10 hours ago 2 replies      
Even if you work in a gc language with a vm and all memory errors are checked, here is the major, MAAJOR, wisdom you should take with you:

The crucial aspect in the failure scenario described by Michael is that the stack overflow did not cause an immediate system failure. In fact, an immediate system failure followed by a reset would have saved lives, because Michael explains that even at 60 mph, a complete CPU reset would have occurred within just 11 feet of the vehicle's travel.

We have seen this scenario played out a million times. Some system designers believe it is acceptable to keep the system running after (unexpected) errors occur. "Brush it under the rug, keep going and hope for the best." Never ever do that. Fail fast, fail early. If something unexpected happens, the system must immediately stop.

Gracana 28 minutes ago 0 replies      
Are there any downsides to having the memory set up the "safe" way that they describe? It seems like a win-win situation.

[edit] I guess I was thrown off by the shoot-yourself-in-the-foot scenario, where the stack grows toward fixed data structures. If the heap and stack grow towards each other, you have quite a bit of flexibility (though with some danger of collision). If you have the stack grow towards fixed data structures, its size is fixed and it can cause a dangerous overflow. The only disadvantage of the safe example is less flexibility, but for a critical embedded system, that is fine.

rcfox 6 hours ago 1 reply      
Talking about how to catch stack overflows and protect your data against them isn't useless, but it misses the point. There are rules/guidelines, like MISRA[0] (which the testimony mentions 54 times!), for the automotive industry that prohibit recursion, and tools that will check for conformance.

Toyota should not have been using recursion in the first place, and it seems they were too cheap to invest in analysis tools like Coverity.

[0] http://en.wikipedia.org/wiki/MISRA_C

cognivore 5 hours ago 0 replies      
Stack overflows hate the elderly:

http://www.forbes.com/2010/03/26/toyota-acceleration-elderly... (forbes.com)

tragomaskhalos 8 hours ago 2 replies      
I had an "unintended acceleration case" in my old Austin Morris 1300; the cable connecting the pedal to the throttle snapped, jamming it at a fixed (fairly high revs) level, requiring me to control the speed using the brake.

The solution was to pop open the bonnet and swap in a replacement cable, which probably cost a couple of quid.

This recollection combined with the Toyota story merely convinces me that automobile automation has got completely out of control.

erichocean 3 hours ago 0 replies      
Isn't recursion, even if it's indirect, disallowed completely when doing embedded C programming for safety-critical devices?

UPDATE: Yup, #70 on the MISRA C rules: http://home.sogang.ac.kr/sites/gsinfotech/study/study021/Lis...

pwg 7 hours ago 0 replies      
Example 7 on page 18 of "UNIQUE ETHICAL PROBLEMS IN INFORMATION TECHNOLOGY" by Walter Maner seems quite appropriate here:


"A program is, as a mechanism, totally different from all the familiar analogue devices we grew up with. Like all digitally encoded information, it has, unavoidably, the uncomfortable property that the smallest possible perturbations -- i.e., changes of a single bit -- can have the most drastic consequences."


noelwelsh 10 hours ago 3 replies      
It's sad that recursion is considered dangerous. Tail calls have been known about for a very long time, and the duality between stack and heap for just about as long.
xerophtye 12 hours ago 2 replies      
So what's the catch? We have been developing memory architectures, embedded systems, and OSes for decades now. So if the solution is as simple as this post says, why hasn't it ever been implemented before?

I am hoping there are experts here that can shed some light on this

raverbashing 7 hours ago 1 reply      
The less specialized in software a company is, the worse its software is.

What we are accustomed to discussing on HN, for example, does not exist in these worlds. Continuous integration? Unit tests? Even complexity analysis.

And very very old code that's patched over and over and shipped "when it works"

It's usually people who have had an academic contact with programming languages and embedded development and don't know anything about code quality. But you can bet their bosses incentivize CMMI and other BS like that. (Yes, complete and utter BS)

Not to mention ClearCase, which seems to be a constant: the worse the company, the more they love this completely useless piece of crap.

Fasebook 1 hour ago 0 replies      
tl;dr: make the stack bigger! (Then is it really a stack overflow? Oh, and by the way, this won't work in most systems, since virtualized stacks on top of physical memory make concepts such as the order of memory meaningless... but never mind that.)

The obvious solution to stack overflows is to make the stack bigger. The obvious problem with this solution is that it just kicks the can down the road.

pjmlp 12 hours ago 0 replies      
Another example of C's impact on our daily life.
robryk 12 hours ago 2 replies      
Would it be considerably expensive to check at runtime that the SP is in an expected range every time it gets moved? This'd work with multiple stacks, too.
laichzeit0 9 hours ago 0 replies      
I'm always skeptical about non-trivial recursive calls and generally pass a "depth" variable in as the first param, increasing it each time I do another call, with some sane cut-off point where it just returns.
jmnicolas 11 hours ago 2 replies      
Although I use managed languages, I wouldn't want my code audited by NASA.

When 180+ IQ brains analyze your work they're bound to find "horrible defects" that no "competent" programmer would ever make.

pasbesoin 2 hours ago 0 replies      
I haven't waded into all this, and it's been years -- and years -- since my education touched upon systems that physically separate operating instructions from data memory.

But... sooner or later, it seems, we are going to go (back) there.

Instructions will become truly privileged, physically-controlled access. Data may go screwy -- or be screwed with -- but this will not directly affect the operating instructions.

Inconvenient? As development becomes more mature, instructions will become more debugged and "proven in the field". Stability and safety will outweigh ease and frequency of updates.

My 30+ year old microwave chugs along just fine. It doesn't have a turntable nor 1000 W, but I know exactly what it will do, how long to run it for various tasks, and how to rotate the food halfway through to provide even heating.

My 34 year old, pilot-light ignited furnace worked like a champ, aside from yet another blower motor going bad. I listened to the service tech when he strongly suggested replacing it before facing a more severe, "winter crisis" problem.

The new, micro-processor based model is better in theory (multi-stage speeds, and longer run times for more even heating). In practice, it's been a misery. The first, from-the-factory blower motor was defective. When that was replaced, the unit started making loud air-flow noises periodically.

Seeing the blower assembly removed, it's clearly constructed of sheet metal. The old furnace, by contrast, had a substantial metal construction that was not going to hum and vibrate if not positioned absolutely perfectly and with brand-new, optimized duct work.

Past a point, reliability starts to -- far -- outweigh some other optimizations.

This is going to become true in our field, as well.

wirrbel 12 hours ago 0 replies      
First I was annoyed at yet another upvote-fishing blog post on Stack Overflow. Then I read it, while I was annoyed at getting caught by the catchy headline that I consciously despised. Then I saw that it was not at all about some forum on the web, and now I cannot stop smiling.
gkoberger 12 hours ago 0 replies      
Completely unrelated to yesterday's "I No Longer Need StackOverflow" https://news.ycombinator.com/item?id=7251169

I was all excited to defend StackOverflow.com.

jtokoph 13 hours ago 5 replies      
My first thought was: How could stackoverflow.com be responsible for car crashes?
skrebbel 12 hours ago 0 replies      
Could you at least try to read an article before you comment? Like, at least the first 5 words?
Show HN: Massren multi-rename tool using your text editor github.com
38 points by laurent123456  5 hours ago   32 comments top 15
peterjmag 1 hour ago 1 reply      
This is awesome! Really great work. I can think of about five different instances in the last couple weeks where this could've really helped me out. Oftentimes, I ended up just using something like NameMangler[1] instead and pining for the flexibility of my editor.

For the other commenters in this thread that don't see the appeal or keep comparing it to other alternatives, here's what's so compelling to me:

- Editor agnostic. This isn't just for vim, people. ST2 is awesome for this kind of thing.

- Undo. Easy undo. That's a killer feature, and I wouldn't be surprised if it's unique to this tool.

Effusive praise aside, I ran into a couple small issues on OS X:

    $ massren --config editor 'subl'
    massren: Config has been changed: "editor" = "subl"
    $ massren
    massren: exec: "subl": executable file not found in $PATH
subl is indeed in my $PATH, but it's actually just a symlink to the ST2 application directory, as the ST2 docs suggest [2]. I solved this by just adding the app directory to my $PATH, but it'd be nice to keep it out of there if possible.

Also, I'd like to be able to pass switches along with my editor command, like git config's core.editor [3]. However, this doesn't seem to work:

    massren: exec: "subl -wn": executable file not found in $PATH
Anyway, great work once again, and thanks for releasing such a cool tool!

[1] http://manytricks.com/namemangler/

[2] http://www.sublimetext.com/docs/2/osx_command_line.html

[3] https://help.github.com/articles/using-sublime-text-2-as-you...

xbryanx 11 minutes ago 0 replies      
NameChanger is another great tool that helps with this family of tasks on OS X. http://www.mrrsoftware.com/MRRSoftware/NameChanger.html
limmeau 4 hours ago 2 replies      
Emacs users can use wdired in a dired buffer instead.

M-x wdired-change-to-wdired-mode

(not to spoil the fun of creating a useful command-line tool with Issue9 ;)

atmosx 5 hours ago 1 reply      
Smart :-)

Since "wget https://raw.github.com/laurent22/massren/master/install/inst... comes out with a certificate error, because wget doesn't know GitHub's certificate, you need to either add an ignore-cert option or you might wanna change that command to 'curl -O https://raw.github.com/laurent22/massren/master/install/inst... which will not come out with an error. Also, curl is installed by default on Mac OS X while wget is not :-)


jweir 1 hour ago 0 replies      
Great tool.

This wasn't clear from the README, but this will work with files across directories (which is both useful and confusing)

massren /*foo.rb

Will rename matching files in different directories, but there is no indication of what directories those are in the editor.

<snark>Also, how could you build something so useful without generics!?</snark>

dewey 2 hours ago 1 reply      
Would be great if someone could add it to homebrew. [0]

[0] http://brew.sh/

seivan 1 hour ago 0 replies      
Hmm should I look into getting this on homebrew? Would anyone other than me be interested?
felixr 3 hours ago 0 replies      
You should also have a look at 'vidir' from Joey Hess' moreutils [1]. I think it is very similar.

Moreutils also includes 'vipe' (edit a pipe in a text editor) and other useful utilities.

[1] https://joeyh.name/code/moreutils/

bhousel 4 hours ago 2 replies      
holy crap, it has an --undo switch!

Why don't more commands have this?

blueblob 4 hours ago 1 reply      
This is cool. There is already a command called rename[1] that can do some of this but this is much more interactive (and probably more intuitive for vi users). Is this scriptable?

[1] http://linux.die.net/man/1/rename

dnr 2 hours ago 0 replies      
It looks like there are lots of implementations of this idea or there. Here's mine in 30 lines of bash:


p0ckets 3 hours ago 1 reply      
grimgrin 2 hours ago 0 replies      
It seems like there is a neat thing that could be done with this sort of thing + id3 tags.
b6fan 3 hours ago 2 replies      
There is a vim plugin, rename.vim [1], which basically does the same job but without the Go dependency.

[1] http://www.vim.org/scripts/script.php?script_id=1721

aashishkoirala 4 hours ago 1 reply      
Neat! Were you inspired somewhat by Git's interactive rebase or something similar?
First animals may have lived with almost no oxygen newscientist.com
4 points by prateekj  26 minutes ago   discuss
Making Angular.js real-time with Websockets pusher.com
3 points by knes  10 minutes ago   1 comment top
benarent 0 minutes ago 0 replies      
This is a great introduction, definitely something we're going to experiment with.
Cluster Level Container Deployment coreos.com
4 points by robszumski  13 minutes ago   discuss
Viber Replaces MongoDB with Couchbase couchbase.com
47 points by naryad  5 hours ago   15 comments top 4
rubiquity 2 hours ago 3 replies      
I don't entirely know what Viber does, but given how very different MongoDB and Couchbase are, I think they made a terrible choice of using MongoDB in the first place. You can't fault MongoDB for that. Viber announcing this change is more of an admission that their architects/engineers made a bad choice than it is a slight against MongoDB.

Also, I don't like MongoDB very much and almost always find another more suitable database (both SQL and NoSQL) for the projects that I have worked on.

manishsharan 4 hours ago 1 reply      
My eyes hurt from reading so much PRese. Look, every organization moves from Solution A to Solution B in due course -- and that is a PR story. But for it to be useful to programmers, there has to be some amount of quantitative data to support the decision; a video embed doesn't count. Surely the Couchbase folks can do better than this to catch our attention.
vosper 4 hours ago 3 replies      
Is anyone else using Couchbase? I'm evaluating it for a project with a mobile component and Couchbase Mobile with its automatic syncing seems like a great solution. Would love to hear peoples thoughts.
ninv 4 hours ago 0 replies      
They picked the wrong product in the first place. MongoDB and Couchbase are two different databases.
       cached 18 February 2014 20:02:01 GMT