If a game does well, it's time to lay off half (or more) of the team. The same thing happens if a game does poorly, of course. It seems the only way to 'win' is to be at the top, or simply not to play.
I've seen this now with everything from Harmonix to Irrational Games. There seems to be a huge amount of money made with these blockbuster games, but vanishingly few companies seem able to manage their game development cycles efficiently enough to always need a full staff. It always comes off as terrible management/project management.
For example, Harmonix's Rock Band was huge. Around $299 million in bonuses was paid to people at the top. Yet I had friends who worked there get laid off repeatedly (once right before Christmas), sometimes shortly after the people at the top got their bonuses. Why in the world didn't they think to diversify a bit, run a few concurrent development cycles, etc.?
The most sane way to do game development seems to be to start your own indie studio and keep your expenses very low. Everything else seems... irrational.
Irrational Games will enter the waters of baptism, and a new studio will be born. An infinite number of Irrational Games studios are opening and closing at this moment, like lighthouses on an ever-expanding ocean. The only difference between past and present is semantics.
If what I'm saying sounds crazy, you owe it to yourself to play Bioshock Infinite. It's without a doubt one of the most beautiful and surreal games ever created.
* SWAT 4: How cool is a multiplayer shooter where you actually have to breach a room from multiple sides to pressure the enemy into _not shooting_? And hold your fire until you saw any indication they would shoot? We played that game for nights in one room for better communication.
* System Shock 2: Deeply flawed in some regards, but also the first game that creeped me out in a _perfectly well lit and bright environment_. Shodan, as always, was a great enemy.
* Freedom Force Series: A comic strategy game. It wasn't that hard (it wasn't easy, either), but had "comic" written all over the place. The description if you hovered the cursor over a mere building was "A proud participant of the Patriot City skyline." Someone put an ironic joke on the patriotic theme of the game in the description of a boring apartment block... How fun is that?
BioShock was a culmination of all that. Would you kindly pay them your respects?
Possible explanations include: 1) there is not as much success going on at Irrational as implied; 2) Ken Levine is just really attached to the name, and so wouldn't let it continue in present form while he leaves to do something else under a new name; 3) ...?
> we will focus exclusively on content delivered digitally.
> To make narrative-driven games for the core gamer that are highly replayable.
And BioShock Infinite was treading on too-safe ground. I really hope that his new studio will have bursts of creativity and success, and that the laid-off employees find better jobs soon.
I just read that as: we're gonna make an even more narrative-based System Shock 2 equivalent.
I wonder what happened. It sounds like Bioshock Infinite didn't bring in the cash they thought it would? Reminds me of Ensemble Studios closing after AoE:III and Halo Wars.
It's sad to see the name being retired, but it's better than seeing the name ruined by a flop or diluted by endless sequels.
The announcement is pretty opaque; I expect the rumor mill to churn for a while.
On the other hand, this stream isn't random. If it were truly random, the player would just move pointlessly in a horrible Brownian motion. It's nonsensical, to be sure, but in some weird way it encapsulates knowledge about the game, and as a result the game makes progress.
It sheds some light on other places where true randomness is required, and where the presence of any information or understanding radically changes the behavior of a system. In cryptography, even the slightest weakness in the probabilistic underpinnings of a cryptosystem can render it useless. In finance, even the slightest edge over the market can be leveraged to produce gains.
Since twitch.tv recently added up to 30 seconds of stream lag, you want to spam start during the trickiest movement sections to minimize latency between the stream and the gamestate. This is most important on ledges and in mazes!
After the start-spam delay, more of the chat catches up to the current position and starts putting in the right input, which then has a higher chance of being accepted.
Such a shame that twitch changed streaming technologies recently. It used to be easy to get as low as 2-3 seconds of latency. A world of difference in something like this.
For example, on that ledge it was easy to get enough 'right' movement to overpower malicious 'down' commands, because the first input in a direction only turns the character. But 'down' commands were needed to get to the ledge, and the lag caused them to keep pouring in after they were no longer needed.
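The stale-input effect is easy to see in a toy model (purely illustrative: the numbers and the crowd behavior are made-up assumptions, not Twitch's actual pipeline):

```python
import random
from collections import deque

random.seed(7)

LAG = 5        # ticks between the live game and what viewers see
TICKS = 200
GOAL = 10      # position the crowd is trying to reach ("the ledge")

position = 0
falls = 0
history = deque([0] * LAG, maxlen=LAG)  # stale positions the crowd reacts to

for _ in range(TICKS):
    seen = history[0]  # oldest snapshot = what the lagged stream shows
    # Viewers press 'right' while the (stale) view says we still need to
    # move; once the stale view shows the goal, leftover presses from the
    # earlier section keep trickling in, including harmful 'down's.
    cmd = 'right' if seen < GOAL else random.choice(['right', 'down'])
    if cmd == 'right':
        position = min(position + 1, GOAL)
    else:
        position = 0   # a stale 'down' knocks us off the ledge
        falls += 1
    history.append(position)

print(f"falls caused by lagged input: {falls}")
```

With a larger LAG, stale snapshots linger longer after each fall, so falls chain together; cutting the effective latency (the point of the start spam) shrinks that window.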
This article is a nice overview of the spectacle but its premise is fundamentally flawed.
But the idea that the employee gets the money immediately/overnight upon approval at the end of the trip, or maybe weekly (for long trips), instead of waiting weeks for a paper check, is huge. As is offloading a lot of the accounting 'grunt work' of it to another company. And snapping photos of receipts with location and time data and categories directly attached, and not having to worry about where they're stored (it's all in the app and on servers) is a huge convenience too.
Sounds like an exciting idea.
This just isn't true. You can approve expenses through Concur's app: https://www.concur.com/en-us/mobile
One feature I would like to see is better integration with my calendar. Nexonia will figure out my mileage based on the addresses I enter, but it would be nice if it would simply pull this information in from my calendar, and perhaps take a guess at what meeting I just attended based on my current location.
Xero has an app which scans receipts but as far as I can tell doesn't do expenses.
Shoeboxed does tonnes of stuff with receipts (including parsing them and integrating with Xero) but doesn't seem to do expenses (is that right?)
All I want is one app that integrates with Xero, scans a receipt, parses it and lets me classify it as a business receipt or a personal one (to go on an expense form). I know that's not what this app does (it's expenses only) but it feels like it should already be out there - am I missing something?
I co-manage a consultancy. We operate in the Valley. We're in a very specialized niche that is especially demanding of software development skills. Our skill needs also track the market, because we have to play on our clients' turf. Consultancies running in steady state have an especially direct relationship between recruiting and revenue.
A few years ago, we found ourselves crunched. We turned a lot of different knobs to try to solve the problem. For a while, Hacker News was our #1 recruiting vehicle. We ran ads. We went to events at schools. We shook down our networks and those of our team (by offering larger and larger recruiting bonuses, among other things).
We have since resolved this problem. My current perspective is that we have little trouble filling slots as we add them, in any market --- we operate in Chicago (where it is trivially easy to recruit), SFBA (harder), and NYC (hardest). We've been in a comfortable place with recruiting for almost a year now (ie, about half the lifetime of a typical startup).
I attribute our success to just a few things:
* We created long-running outreach events (the Watsi-pledging crypto challenges, the joint Square MSP CTF) that are graded so that large numbers of people can engage and get value from them, but people who are especially interested in them can self-select their way to talking to us about a job. Worth mentioning: the crypto challenges, which are currently by far our most successful recruiting vehicle (followed by Stripe's CTF #2) are just a series of emails we send; they're essentially a blog post that we weaponized instead of wasting on a blog.
* We totally overhauled our interview process, with three main goals: (1) we over-communicate and sell our roles before we ever get selective with candidates, (2) we use quantifiable work-sample tests as the most important weighted component in selecting candidates, and (3) we standardize interviews so we can track what is and isn't predictive of success.
Both of these approaches have paid off, but improving interviews has been the more important of the two. Compare the first 2/3rds of Matasano's lifetime to the last 1/3rd. The typical candidate we've hired lately would never have gotten hired at early Matasano, because (a) they wouldn't have had the resume for it, and (b) we over-weighted intangibles like how convincing candidates were in face-to-face interviews. But the candidates we've hired lately compare extremely well to our earlier teams! It's actually kind of magical: we interview people whose only prior work experience is "Line of Business .NET Developer", and they end up showing us how to write exploits for elliptic curve partial nonce bias attacks that involve Fourier transforms and BKZ lattice reduction steps that take 6 hours to run.
How? By running an outreach program that attracts people who are interested in crypto, and building an interview process that doesn't care what your resume says or how slick you are in an interview.
Call it the "Moneyball" strategy.
Later: if I've hijacked the thread here, let me know; I've said all this before and am happy to delete the comment.
I have friends who are extremely good engineers (i.e., a mix of: contributors to major open source projects used by a lot of SV startups, have given talks at large conferences, published papers at ACM conferences, great portfolio of side/student projects, have worked at great companies previously, frequently write high quality tech articles on their blog, have high reputations on sites like Stack Overflow, etc.) and who have been rejected at interviews from those same companies who say that they can't find talent. (it also certainly doesn't help that the standard answer is "we're sorry, we feel like there isn't a match right now" rather than something constructive. "No match" can mean anything on the spectrum that starts at "you're a terrible engineer and we don't want you" and ends at "one of our interviewers felt threatened by you because you're more knowledgeable so he veto'd you").
Seriously, if you're really desperate for engineering talent, I can give you contact info for a dozen or so friends who are ready to work for you RIGHT NOW (provided your startup isn't an awful place with awful people, of course), and probably another dozen or two who would work for you given enough convincing.
I'm honestly starting to believe that it isn't hard to hire, but that there's some psychological effect at play that leads companies to make it harder on themselves out of misplaced pride or sense of elitism.
Unless everyone wants to hire Guido van Rossum or Donald Knuth, but then (a) statistically speaking, you're just setting yourself up for failure, and (b) you need to realize that those kinds of people wouldn't want to do the glorified web dev/sysadmin work that a lot of SV jobs are.
Same here. I always advise startups to err on the side of generosity with equity.
First, SF and the Valley simply don't pay engineers well enough. This is the second most expensive housing market in the United States, and it is striving to become the first. $150k sounds great here until you look at it as a fraction of your housing cost and compare to anywhere else in the country, including Manhattan (because unlike here, NYC isn't run by morons, so they have functioning transportation systems). I don't want to just quote myself, but all this still applies: https://news.ycombinator.com/item?id=7195118
Second, immigration is a crutch to get around paying domestic employees enough. I see net emigration from the Valley among experienced engineers in their 30s who start having families and can find better financial lives elsewhere. If companies paid well enough that moving to the Bay Area wasn't financially horrid, they'd find plenty of software engineering talent already in the United States. But consider my friend above: $165k total income in the Midwest is (compared solely to housing cost) equivalent to approximately $450k here, holding (housing costs / post-tax income) constant.
edit: not to mention, companies still don't want flexible employment arrangements or remote work. I'm a data scientist and I'm good at my job (proof: employment history, employers haven't wanted me to leave, track record of accomplishments). I'd rather live elsewhere. There are 66 data scientist posts on Craigslist (obviously with some duplication, but just a quick count); jobs that mention machine learning fill search results with over 100 answers. Now check either of the above for telecommute or part-time work: zero postings for remote or part-time workers. So again, employers want their perfect employee: skilled at his or her job, willing to move to the Valley despite a big hit to net living standards, doesn't have kids, and doesn't want them (because daycare, a nanny, or an SO who doesn't work is all very expensive).
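The cost-of-living equivalence above is just arithmetic; here is a sketch with illustrative numbers (the rents and flat tax rate are assumptions I made up for the example, not figures from the comment):

```python
def equivalent_salary(income, rent, target_rent, tax=0.30):
    """Salary in the expensive market that leaves the same
    rent-to-take-home-pay ratio as (income, rent) in the cheap one."""
    ratio = rent / (income * (1 - tax))   # rent as a share of post-tax income
    return target_rent / (ratio * (1 - tax))

# $165k in the Midwest at $18k/yr rent vs. an assumed Bay Area rent of $48k/yr:
bay_equiv = equivalent_salary(income=165_000, rent=18_000, target_rent=48_000)
print(f"Bay Area equivalent: ${bay_equiv:,.0f}")  # → Bay Area equivalent: $440,000
```

That lands in the same ballpark as the ~$450k figure in the comment; the exact number obviously moves with the assumed rents and tax rate.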
I often see a disconnect between perceptions of expected success of founders and engineers. I've observed this is particularly pointed for non-technical founders. To generalize, a young entrepreneur with some success under his belt is starting a company. As far as he's concerned his company is all but guaranteed to succeed: he's got the experience and sophistication necessary to make this happen, the team he's hired to his point is top-notch, he's got the attention of some investors, the product is well thought out, etc. He approaches an exceptional engineer and extends an impassioned invitation and... the engineer balks.
What happened? Is he delusional about the company's prospects, thinking he's got a sure fire hit when he's actually in for a nasty surprise once his hubris collides with reality? Is the engineer a square who would rather work a boring job at a big company than live his life, and wouldn't be a good fit for the team anyway?
I propose a different resolution: our confident businessman is certain about the success of the company, not the success of the engineer as part of the company. He knows the company's success is going to rocket him into an elite circle of Startup Entrepreneurs. The engineer, on the other hand, doesn't see the correlation between the company's success and his own: even if the company takes off to the tune of eight to nine digits, his little dribble of equity is just barely breaking even over the comfortable stable position he's in now.
"Sometimes this difficulty is self-inflicted."
I want to emphasize how strong this point is. In most ways, the computer programming industry is a shrinking industry in the USA. There are fewer computer programming jobs in the USA than there were 20 years ago.
Stats from the Bureau of Labor Statistics (USA):
1990: 565,000 jobs
2010: 363,100 jobs
2012: 343,700 jobs
There is a tiny subset of the industry that is growing, and we associate these with the startups in San Francisco and New York. But so far these startups have not created enough jobs to offset the jobs lost due to other factors.
This suggests that there must be a vast reservoir of programmers who would like programming jobs, but they can't work as programmers because the jobs have disappeared.
If the numbers were smaller, you could argue that the loss of jobs was due to inaccuracies in the way Bureau of Labor gathers statistics. But the drop from 565,000 jobs to 343,700 is too large to be a spurious blip.
This is a shrinking industry. Computer programming jobs are tied to manufacturing, so as manufacturing leaves the USA, so too do the computer programming jobs. Don't get caught up in the hype about startups: look at the actual numbers. The government tracks these jobs. The numbers are shrinking.
Especially worth a look:
"In its 1990 Occupational Outlook Handbook, the U.S. Department of Labor was especially bullish: The need for programmers will increase as businesses, government, schools and scientific organizations seek new applications for computers and improvements to the software already in use [and] further automation . . . will drive the growth of programmer employment. The report predicted that the greatest demand would be for programmers with four years of college who would earn above-average salaries.
When Labor made these projections in 1990, there were 565,000 computer programmers. With computer usage expanding, the department predicted that 'employment of programmers is expected to grow much faster than the average for all occupations through the year 2005' . . .
It didn't. Employment fluctuated in the years following the report, then settled into a slow downward pattern after 2000. By 2002, the number of programmers had slipped to 499,000. That was down 12 percent (not up) from 1990. Nonetheless, the Labor Department was still optimistic that the field would create jobs -- not at the robust rate the agency had predicted, but at least at the same rate as the economy as a whole.
Wrong again. By 2006, with the actual number of programming jobs continuing to decline, even that illusion couldn't be maintained. With the number of jobs falling to 435,000, or 130,000 fewer than in 1990, Labor finally acknowledged that jobs in computer programming were 'expected to decline slowly.'"
Conversely, and I know this is pretty out there, this is what I think will be the killer app of virtual reality. If I can ship a $5K "pod" to a developer somewhere in the world which allows us to work together 90% as well as we can in person, then you're damn right I'm going to do that.
I believe VR tech will get good enough (3-5 years) before immigration issues will be sorted out (10-20).
Does the founder really want to get greedy and keep that extra few percent when so much depends upon solid engineering execution? Also, don't forget that 4 year vesting with a 1 year cliff is standard, so it's not as if the worth of a meaningful equity offer isn't fully obvious before the shares are "spent" on a key hire (also, before vesting, the risk is totally borne by the employee).
I think the ideal situation for engineers would be to earn a solid equity offer and then have a secondary market to use to trade some of it (once it's vested) for fractional ISOs of other promising startups.
For all the "we work remote" stuff that is flying around, this seems to be a direct contradiction. Is moving to the Valley really necessary? I can see coming in for a face-to-face interview, but I would never want to move to California.
They're looking for someone to work on a Rails app, but they won't hire them unless they have demonstrated Linus Torvalds-like ability and knowledge. But the question is, why would someone with that kind of skill level want to work for you?
What if you were able to grab smart engineers on their way to becoming engineering stars? Why not aim for getting a solid lead/architect and adding midlevel guys who you know are going to turn into superstars? Why not develop talent instead of competing all the way at the top of the market for the most expensive ones? Why not figure out an interview technique that can let you identify exactly these kinds of people?
It's all about being resourceful and nimble enough to adjust. After all, isn't that what a startup is all about?
In actuality, the truth is great people aren't found, they're made. The role of a good leader isn't to squeeze great work out of his employees, but rather to develop within them the capability to do great work. Applied to hiring, this means having an understanding of the support and growth capabilities within your organization, and finding candidates who have the most potential to gain from it, rather than hiring those who are already well-developed. Applied to hiring rockstars, this makes them even more valuable: not only would they be producing outstanding work on their own, they would actually be improving the quality of the work their peers produce.
"You get what you pay for"
This is very important. It needs to percolate into their immediate reports too. I've seen high-tech companies lose great candidates because the first-line managers were too busy to interview them right away. If the right talent is available, you have to make the time.
I am not sure what immigration has to do with this; we produce plenty of STEM graduates each year, and we'd produce more if the professions didn't look like they were under attack by every employer and politician. The smart kids you want to hire are smart enough to go into more protected professions. If they knew their jobs wouldn't be shipped overseas or their market flooded with foreign competition, then maybe we'd be able to attract and keep them.
I worry that focusing on equity will just exacerbate the problem, because I think a lot of people are becoming wise to the equity lottery and just don't see a difference between 0.1% and 5% of nothing. The problem will most certainly be solved by $$, but no doubt it is a tough pill to swallow for a business to pay $150k now for what was $100k a few years ago...
Keep in mind I am a software developer and self interested.
In doing that, it became clear that not everybody even wants more equity. That was a little hard for us as founders to take, because we of course thought the equity was awesome, and wanted engineers to feel a real sense of ownership. But from the numbers, it was clear that some people would rather we sold more equity to investors and just gave them the cash.
I get that. If you've been around the industry for a while, you can accumulate quite a collection of expired startup lottery tickets. Landlords, mortgage-holders, and kids' orthodontists don't take options; they take cash.
If nobody in your network has a track record of results, become a sycophant? What if all the hackers in your network are US-based as well? You've got less than a 5% chance of a hope of doing anything, according to Altman.
> I frequently hear startups say [...] can't find a single great candidate for an engineering role no matter how hard they look.
Offering more money might fix hiring problems for one company, stopping one person complaining, but to stop all people complaining the only solution is to increase the supply or reduce the demand.
(Increasing the supply doesn't have to mean immigration reform - it could mean training or lowering hiring standards or a bunch of other things)
Personally I really like the interview process a previous employer used: shortly before the interview starts, we have the candidate look over a roughly CS 102-level programming project; then we ask them to design the architecture on a whiteboard while the interviewers ask questions and give guidance. What we're really looking for here is:
a) How they handle the social aspect of working with a superior who will often (gently) criticize their work and/or ask them to thoroughly explain why they're doing what they're doing. b) Whether they have enough chops to architect a simple program.
If they can pass those tests, I'm confident they'll be an effective team member, because at the end of the day all you really want is someone who is competent enough to be useful and fits into your culture/team. Everything else will shake itself out. You don't need some "superstar/rockstar/ninja" (unless you're solving a particularly hard domain-specific problem), so stop looking for them and excluding everyone else.
Instead start building an effective team.
Startups clearly need to be basing more of their decisions on unfounded conjectures. I have to say that startups seem to have unreasonable expectations of what kinds of programmers they can hire. We have plenty of viable hackers in the US, but startups don't want to hire them because they're not the next Knuth or they're "not a good cultural fit".
UPVOTE!! I'm shocked at how many companies are unwilling to pay for a $500 Southwest ticket to fly someone in for a day to interview from Texas or Georgia... relocation costs are easily offset by a slightly lower salary, and the person you're interviewing is unlikely to have 4 or 5 paper offers in hand.
This argument seems flawed. If I think someone is going to double the value of my company, should I be comfortable giving them up to 100% equity? Put another way: percentage growth of your company from an early stage to some point in the future most often exceeds 100%.
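The arithmetic behind that point can be made concrete (the company value is a hypothetical number): a hire's break-even equity grant is the share of the post-hire value equal to the value they created, which caps out well below 100% even for someone who doubles the company.

```python
pre_value = 10_000_000        # company value without the hire (assumed)
multiplier = 2.0              # the hire doubles the company's value

post_value = pre_value * multiplier
value_created = post_value - pre_value

# The founder breaks even when grant * post_value == value_created,
# i.e. the hire's stake is worth exactly the value they added.
break_even_grant = value_created / post_value
print(f"break-even equity grant: {break_even_grant:.0%}")  # → break-even equity grant: 50%
```

So even under the generous "doubles the company" assumption, the ceiling is 50% of equity, and any sane offer sits far below that break-even point.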
Often, the design goals might exist on an alternate axis.
Random people offering random opinions amounts to random noise. However, the random noise will not appear neutral; it will appear as information.
If we consider the many different things designed for particular audiences, such as jet cockpits, medical tools, or racing automobiles, we will see traits that may seem nonsensical when divorced from their designed contexts.
Bill Buxton covers this in Sketching User Experiences when he describes Inuit coastal maps: "The Inuit have used [...] tactile maps of the coastline, carved out of wood. They can be carried inside your mittens, so your hands stay warm. They have infinite battery life, and can be read even in the six months of the year that it is dark. And, if they are accidentally dropped into the water, they float. What you and I might see as a stick, for the Inuit can be an elegant design solution that is appropriate for their particular environment."
Focusing on complaints of the above design in all likelihood would, given the mad rabble of audiences online, result in discarding a solid bit of design.
-- Bill Buxton, Sketching User Experiences, p. 37
It really does not matter what methodology or tools you use if, at the end, it does not pass the user acceptance test.
So why consider it a failure if you have users telling you what they want and how off you are? Better to embrace it and make a better product
Complainers have to be cultivated. Complainers can be your most valuable asset.
For example, I have friends that released a rather ho-hum mobile app. They quickly garnered something like a 1.5 star rating, scathing reviews, and almost no conversions. The business cycle on this was a year, and they are still trying to claw back reputation and win users. It's a debacle. (the problems weren't their fault, but that is irrelevant to this point).
Then you have companies with secrecy, like Apple. I think this advice would be terrible for them (I have never worked there, and am open to correction). They can't dog food it widely due to the internal silos, and they certainly cannot test it with the public.
Then there are electronic systems: iterating on software is easy; iterating on hardware is expensive and hard, even with simulations, mock-ups, and what have you. I worked on an augmented-reality hardware thingy several years ago; we went from foam cutouts to a couple of very expensive prototypes, and that was it.
It is awesome when we can completely sidestep a problem, and this process lets you sometimes sidestep the serious difficulty of UI design. I worry when it gets bandied about as a truism, or The One True Way (not saying Jeff is doing that, I'm remarking on the wider industry). Yes, Agile lets you sidestep the problem of scheduling and estimation - sometimes. Try that when you are making a new airliner, building a cloverleaf interchange, making a car entertainment system, and so on.
edit: the converse problem is equally large. Someone below mentioned the 'planning mania' of companies. I don't mean to downplay that problem, just to point out the need to evaluate each situation on its particular needs, as opposed to an unthinking 'best practices' (oh, how I hate that term) approach.
I've had the unfortunate experience of building a product for someone else where the process was driven by a combination of Complain-Driven Development and Upper-Management Wish-lists. This alone might have been fine but at the same time anything positive like the real analytics about how successful the product was or any non-complaint communication coming from the users was hidden from me for fear, I suppose, that I might try to use that information get my company more money.
This became incredibly depressing. Every day you show up to work, putting more and more hours into something that comes back with more and more complaints. It was hell, and I did everything I could to end Complaint-Driven Development, to no avail, because that's how the customer liked to work. Eventually I just gave up and left to find something more rewarding and less soul-crushing.
We tried to add tags to support emails (via helpscout), but it's also hard to remember to tag things, and it's easy to use different tags for similar complaints.
I wonder about the best strategy to quantify complaints / suggestions and 'bucket' them correctly, so you can really choose the top ones.
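One low-tech way to do that bucketing (a sketch only; the stop-word list and overlap threshold are arbitrary choices for the example) is to group complaints whose informative words mostly overlap, so differently-worded tags for the same issue collapse into one bucket:

```python
STOP = {"the", "a", "is", "it", "to", "my", "i", "on", "in", "and"}

def keywords(text):
    """Informative words of a complaint, order-insensitive."""
    return frozenset(w for w in text.lower().split() if w not in STOP)

def bucket(complaints, threshold=0.5):
    """Greedily group complaints by Jaccard overlap of keyword sets."""
    buckets = []  # list of (keyword set, [complaints in this bucket])
    for c in complaints:
        kw = keywords(c)
        for key, items in buckets:
            overlap = len(kw & key) / max(len(kw | key), 1)
            if overlap >= threshold:
                items.append(c)
                break
        else:
            buckets.append((kw, [c]))
    return buckets

complaints = [
    "login button broken on mobile",
    "broken login button mobile",
    "export to csv fails",
]
for key, items in bucket(complaints):
    print(len(items), "x:", items[0])
```

For real volumes you'd probably reach for proper text clustering, but even something this crude beats eyeballing a pile of inconsistent tags.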
So it's useful to make your MVP as good as possible so that your users aren't forced to complain about the results of fundamental design shortcomings. That can result in more and more complex fixes, none of which should be necessary.
This is simple but practical advice I think far too many people ignore. You've inspired me with this article. Glad to see you've been successful from it!
Contact me if you need help. I'm a good UXpert.
Disclaimer: not a RoR developer.
Hopefully no Bitcoin apps use the currency helper. But I imagine in the context of an exchange the numbers come from the blockchain or a wallet, and aren't user controlled in the way that could be exploited.
The "choice" that the author made wouldn't really crop up unless he had help paying the bills.
The numbers make the problem clear. In 2007, the year before CERN first powered up the LHC, the lab produced 142 master's and Ph.D. theses, according to the lab's document server. Last year it produced 327. (Fermilab chipped in 54.) That abundance seems unlikely to vanish anytime soon, as last year ATLAS had 1000 grad students and CMS had 900.
In contrast, the INSPIRE Web site, a database for particle physics, currently lists 124 postdocs worldwide in experimental high-energy physics, the sort of work LHC grads have trained for.
Let's not confuse students and fellows with missing staff. [...] Potential missing staff in some areas is a separate issue, and educational programmes are not designed to make up for it. On-the-job learning and training are not separate but dynamically linked, benefiting both parties. In my three years of operation, I have unfortunately witnessed cases where CERN duties and educational training became contradictory and even conflicting.
An unsatisfactory contract policy
This will be difficult for LD staff to cope with. Indeed, even while giving complete satisfaction, they have no forward visibility on the possibility of pursuing a career.
Pensions applicable to new recruits as of 1 January 2012: without any consultation, the Management and CERN Council decided in June 2011 to adopt very unfavourable measures for new recruits.
And a warning to non-western members:
"The cost [...] has been evaluated, taking into account realistic labor prices in different countries. The total cost is X (with a western equivalent value of Y) [where Y>X]
source: LHCb calorimeters : Technical Design Report
ISBN: 9290831693 cdsweb.cern.ch/record/494264
- most work is done by untrained and inexperienced graduate students, good luck understanding/reproducing the process
- most faculty are little more than grant submitting machines trying to land a grant at all costs regardless of what actually interests them
- most research review processes are incredibly biased, with countless people doing terrible jobs (the reviled "reviewer number 3"); a single negative review can sink a grant or paper acceptance
- most institutions are grossly monolithic and the rules and regulations are such that incompetent individuals can never be removed from any given position.
- most institutions are run as medieval lordships, with many smaller decision makers like deans, head of departments that have incredible influence on someone's career. It is great when the dictator is benevolent and unbearable if not.
Note how, instead of paying a good salary, the University chooses to give out handouts (lower childcare fees, lower rents), because those in turn are paid via taxpayer grants. It hides the fact that they pay so little that the people would qualify for food stamps.
Like the linked article suggests, it would be nice to have access to an official (and even simple) API, to dig up some interesting statistics or just have some fun playing with the data.
There most definitely is. It's just not free.
Interesting data, though!
So, things I would want to see in order to switch from EB:
- Better ticket sales at the door (like Square integration)
- Better check-in/scanning systems
- More flexible ticket types
Hmm, I could go on; I just need more coffee right now.
sidenote: I generate hundreds of dollars a month for eventbrite but their support is abysmal.
I'd argue with the rest of the post. The problem is that Phusion makes the common mistake of thinking of containers as faster VMs. That's fine; it's where almost everyone starts when first looking at the Docker paradigm.
A good rule of thumb is: If you feel like your container should have Cron or SSH, you are trying to build a VM not a container.
VMs are something you run a few of on a particular computer. Containers are something you run thousands, or tens of thousands, of on a single server. They are a lot more lightweight, and loading them up with VM cruft doesn't help there.
Cron: use the cron of the outer machine with `docker run`.
SSH: use `lxc-attach`.
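As a sketch of what that looks like from the host (the image, script, and container names here are hypothetical, not from the comment):

```shell
# Host crontab entry: run the periodic job in a throwaway container
# instead of shipping cron inside the image (names are made up).
0 * * * * docker run --rm example/myapp /usr/local/bin/cleanup.sh

# Ad-hoc shell access without running sshd in the container:
sudo lxc-attach -n <full-container-id> -- /bin/bash
```

lxc-attach needs the full container ID, which `docker ps --no-trunc` will show.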
I even asked on ServerFault (ie, StackOverflow for servers) about it and was told, quite aggressively, that running a full OS is wrong:
1. Reaping orphans inside the container.
Yup. If your app's parent process crashes, its child processes may now be orphans. However, in this case your monitoring should also restart the entire container.
Assuming you run your Docker image in a .service file (which is what CoreOS uses as standard), systemd-journald on the host will log everything as coming from whatever your unit (.service) name is. So if you `systemctl start myapp`, output and errors will show up in `journalctl -u myapp` in the parent OS.
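For illustration, a minimal unit of that shape might look like the following (the unit name, image name, and paths are assumptions, not from the comment):

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=My application container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container, then run attached so stdout/stderr
# flow into systemd-journald under the unit's name.
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp example/myapp
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
```

With this in place, the container's output lands in the journal under the unit name, and `Restart=always` gives you the "host init restarts the container" behavior for free.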
3. Scheduled tasks.
For things like logrotate, it really depends whether you're handling logs inside or outside the container. Again, I'd use systemd-journald in CoreOS, rather than individual containers, for logs, so they'd be rotated in CoreOS. For other scheduled tasks it depends.
It depends. SSH isn't the only way to access a container, you can run `lxc-attach` or similar from the host to go directly to a container.
I do mention CoreOS here because that's what I use, but RHEL 7 beta, recent Fedoras, and upcoming Debian/Ubuntus would all operate similarly.
Edit: It seems this comment has been voted down. I think perhaps this is seen as irrelevant, but I would disagree, because Docker uses LXC and masks its function in much the same way as LXC uses cgroups and masks their function. cgroups can be used to achieve similar goals without these many layers of abstraction. In this way, I believe this comment to be relevant to the discussion of full vs. application containers on Linux. There are certainly many reasons for using containers, but one of the leading reasons is process limits (e.g. RAM, network namespace). Limiting process usage of those resources using only cgroups is quite easy compared to everything Phusion has gone through here to achieve something with similar (though admittedly different) aims. Example: http://www.andrewklau.com//controlling-glusterfsd-cpu-outbre...
Edit 2: I would also appreciate constructive criticism. That is, I've been downvoted without useful feedback. Specific feedback as to what is wrong with my comment would enable me to contribute more constructively to this discussion. Without such feedback, I believe the downvote can be seen as a simple and tribal "go away".
Let's take an example. You have Nginx, a web app, and a database. You can put everything in the same container or not. If you choose to put everything in different containers, you will be able to use tools at the Docker level to manage them (e.g. replace one of those processes).
And the fundamental idea is that we expect to have plenty of Docker images around that you can pick and play with, and those Docker-level tools will be able to manage all those things.
Now if you put everything in the same container, you're back to square one, reinventing the tools to manage those individual processes. You can say that you don't need to re-invent anything, because you're used to full-fledged operating systems. Still, if you have a nice story to deploy containers on multiple hosts, to send logs across those hosts, and so on, the road will be more straightforward when you decide to use multiple hosts.
This is about uniformity. I want processes (and containers around them), and hosts, that's it. I don't want additional levels. I don't want processes, arbitrarily grouped inside some VMs (or containers), and hosts. Two levels instead of three.
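The three-container version of that example can be sketched with plain docker commands; the image names and link aliases below are assumptions, and `--link` was the contemporary way to wire containers together:

```shell
# One process per container, wired together with links (names are made up).
docker run -d --name db postgres
docker run -d --name app --link db:db example/webapp
docker run -d --name web --link app:app -p 80:80 nginx

# Replacing just the web app now touches one container, not the whole stack:
docker stop app && docker rm app
docker run -d --name app --link db:db example/webapp:v2
```

This is exactly the Docker-level management the comment describes: each process is addressable and replaceable on its own.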
- nsenter lets you pick and choose which namespaces you enter. Say the host OS has tcpdump but your container doesn't. Then you can use nsenter to enter the network namespace but not the mount namespace:

  sudo nsenter -t 772 -n tcpdump -i lo

- lxc-attach will let you run a command inside an existing container. This is LXC-specific, I believe, and probably not a great long-term solution, but most people have it installed.
I suppose part of the problem is that the two benefits of Docker and containerization are frequently confused. Docker provides portability and build bundling, but ALSO provides loose process isolation. You should want to take advantage of that process isolation, and by doing so, should want to run SSH or cron in their own containers, not in a single container with your application process. If your application has multiple processes, each should have its own container. These containers can be linked and can share volumes, devices, namespaces, etc. Granted, some of the functionality one might desire for this model is still missing or in development, but much of it is there already, and that's the model I hope Docker will follow.
It might also be to some degree a matter of legacy versus green-field applications. For instance, I've been deploying OpenStack's 'devstack' developer environment (which forks dozens of binaries) inside of a single Docker container. In this case, the Phusion base-image might make sense. However, the proper way of using Docker would be to run dozens of containers, each running a single service.
The reason I don't do this is because the OpenStack development/testing tools provide this forking and enforce this model, using 'screen' as a pseudo-init process. From the Docker perspective, this is a legacy application. I could and probably will change those development tools to create multiple containers, but until then, it's easiest to stick to a single container.
Like some other posters here I view containers less as a lightweight VM, and more as a process sandbox. In the context of a scalable architecture I would like a container to represent a single abstract component, which can be spun up (perhaps in response to autoscaling events), grabs its config, connects to the appropriate resources, streams its logs/events out to sinks, reads and writes files from external volumes, and runs until it faults or you shut it down.
Ideally there would be nothing inside the container at shutdown that you care about. After shutdown the container, and potentially the instance it was running on, disappear. Spinning up another one is a matter of launching a new container from a reference image.
So far, in cases where I have needed daemons running in the container, I have pointed my CMD at a launch script that starts the appropriate services, and then launches the application components, typically using supervisord. That has worked fine, but I admit to not understanding the PID1 issue well-enough up to this point.
Under this model I'd expect that systemd's pgroup support should help with zombie processes and generally take over many of the services that baseimage-docker is suggesting here. As others have mentioned in this thread, there's a fairly large difference of opinion between running containers like fast VMs and like thin layers around single processes; does baseimage-docker make sense only in the latter?
I guess we borrowed the same template?
I find Docker to be horribly counter-intuitive and ass-backwards anyway, so not much harm done there as people are in general better off with something else entirely (plain lxc, libvirt, virtualbox, xen, openvz...). I recommend to steer away from it at least until 1.0 is out.
EDIT: I put it in my .plan to build a better BusyBox image aimed at running statically compiled programs with minimal baggage, but I'm not sure when I'll get a round tuit*
In fact, I understand none of these points. This seems all very hard to relate to. These are containers and not VMs. Most of that stuff should run in a separate container.
I also sort of suspect that the closer you are to running a full distribution in your containers, the less benefit you're getting from the containers.
syslog: each container now has its own logs to handle. If you want them to be persistent/forwarded, it might be better if all containers could share the /dev/log device of the host (not sure of the implications, though).
ssh: lxc-attach. Docker should expose that.
zombies: it's a bug in the program not to wait(2) on child processes.
cron: make a separate container that runs cron.
init crashes: a bug in the program again. It's possible to use the host's init system to restart a container if necessary.
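The separate cron container from that list could be as small as this; a hypothetical Dockerfile, where the base image and the job file `mycron` are assumptions:

```dockerfile
FROM debian:wheezy
RUN apt-get update && apt-get install -y cron
# Drop the schedule in as a system crontab fragment.
COPY mycron /etc/cron.d/mycron
# Run cron in the foreground so it serves as the container's main process.
CMD ["cron", "-f"]
```

The container does one thing, and the jobs it runs can reach other containers through shared volumes or links rather than living inside them.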
This seems like the old "I have problems with managing everything I need for my app so I'll just run docker containers. Now I have 2 problems"
And start your app through an init.d script?
The article says "upstart" is designed to be run on real hardware and not a virtualised system. If that is true, then perhaps there is value in baseimage-docker, but details are lacking.
Does this script sort out those kind of things?
fghack is an anti-backgrounding tool: http://cr.yp.to/daemontools/fghack.html
Rich get richer, I guess. If I was in a different mood I would write a blog post in outrage.
> the Identity team started laying cookies on www.theguardian.com in advance. This was a nice touch because it meant that visitors would still be logged into the site when we eventually changed domain.
Everything else? Yeah, um, not very interesting. As they wrote themselves, there's a thing called a 301 Moved Permanently redirect.
Reworded: "Google don't have a phone".
(note to confused and/or non-UK people: look up the magazine Private Eye)
So is the consensus that .mobi was one of the worst ideas in existence?
Also, many requests on the page.
I'm still a little mystified by complex setups. Installing stuff as the correct user, adding a big group of database users, stuff like that seems pretty tedious in shell. I guess that's more of a provisioning issue though.
Articles like this make me realize I don't care much what the environment actually is; I care about getting that environment configured correctly with as little effort as possible.
You could chain all the sed changes together into one command (tested on Linux, OSX's sed might need some tweaks):
  sed -i.bak 's/^#\?\(\(ChallengeResponse\|Password\)Authentication\).*$/\1 no/' /etc/ssh/sshd_config
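To sanity-check the pattern without touching a real config, here it is run against a scratch file (the file and its path are illustrative only):

```shell
# Build a two-line sample resembling the relevant sshd_config entries.
printf '#PasswordAuthentication yes\nChallengeResponseAuthentication yes\n' > /tmp/sshd_demo
# Same sed as above: strip a leading "#" if present and force both options to "no".
sed -i.bak 's/^#\?\(\(ChallengeResponse\|Password\)Authentication\).*$/\1 no/' /tmp/sshd_demo
cat /tmp/sshd_demo
# PasswordAuthentication no
# ChallengeResponseAuthentication no
```

The `\?` and `\|` operators are GNU sed extensions, which is why the original comment warns that OS X's sed may need tweaks.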
Seajure, the Seattle-area Clojure user group, uses Syme, and it seems pretty effective.
I was way too lazy to figure this out on my own.
One major competitor (well known to anyone who's looked into this stuff) is Alchemy. I tried a New York Times link on Aylien and Alchemy, and Alchemy performed much better -- in fact, Aylien didn't even successfully find the article body. I'm sure you guys will be iterating on improving the algorithms, but just wanted to flag that as a potential turnoff for anyone comparing your website demo with Alchemy.
Best of luck!
It would be nice to offer a library with a bootstrapped training set.
edit I see from another response that the server room is on meltdown, I'll wait for a bit.
Just tried a few links:
Am I missing something here? It seems like it's just parsing text; I'm not seeing any context (keywords, categories, summaries).
edit: It's giving fantastic results when pasting the raw text! :)
Are you guys using DBpedia? It's giving very similar results to a system I was working on in the past: http://www.zachvanness.com/nanobird_relevancy_engine.pdf
What is special about your project?
Is something broken? Maybe you could cache some recurring analyses.
Classification: arts, culture and entertainment - architecture .(WTF?)
Polarity: positive. (Nope)
Polarity confidence: 0.9994709276706056. (Well...)
Looks pretty rough to me.
We don't rely on CoreNLP or NLTK; we have our own sentence disambiguation and our own part-of-speech tools, so we are a lot faster.
Our other APIs let you piece together a lot of cool NLP projects with very little code.
This is a very interesting area... Good to see something new apart from Alchemy and OpenCalais!
I'm frequently surprised at the really rough and inconvenient bits in Java. Weird inconsistencies in the libraries; not having a really convenient set of file read/write methods, so you have to cobble together bits and pieces of an I/O system to get a directory listing or read in a file; or how variable performance is between two similar-looking pieces of code. (If you haven't done lots of benchmarking on Java standard library containers, I urge you to do it, and make careful selection of containers based on that; it's frequently surprising how differently otherwise identical-looking code runs.)
Considering there are what, 3 complete GUI toolkits built in, why isn't there a built in CSV parsing library, or a "read file to string" static method somewhere? Why do I have to put so much effort into basic tasks? It's such a weird and uneven and sloppy feeling thing.
There are all these weird aggregated ways to do the same thing, each built to fix some problem with an older solution, but the older, broken ways were never really deprecated out, for compatibility. Unless you run across some guide that says explicitly "use this instead of that because of <reasons>", you might never know the newer version exists. Yet the new releases include so many compatibility-breaking syntax changes that it doesn't practically matter.
There's bits and pieces of related but complete solutions piled all over the library as well. e.g. Regex bits are in String, java.util.regex.* (Pattern and Matcher) and probably elsewhere...and don't get me started if you're moving back and forth between arrays and the various containers that make arrays more usable, and then all the utilities to help with that which are scattered all over the place. I spend half my time writing code to abstract all that nonsense away so I can write the main code logic in peace.
And then over the years the concepts about how to design an API have changed or something, because you can feel different stylistic concepts in different places. Here you instantiate an object, then set it up, then build another object of this type to catch the results and do some other magic. Here you instantiate the object with all the important bits and manipulate the object with local methods. It's like each class requires its own style guide. I can understand that with 3rd-party work, but it doesn't make sense for the batteries that come with the language to be so uneven.
It was probably 10 years ago that I last tried Java, and it sucked back then, with all the verbosity etc. But with modern IDE support I actually kind of like the flow and style of it to some extent. It's a beautifully simple syntax to use at its core. But then again I don't care about all the FactoryFactory nonsense. And I'm avoiding lots of the new stuff that doesn't really fit into the language.
I've actually started to become convinced that it's getting to the point that Java 10 or whatever should be a single minded house cleaning. Jettison all the broken old shit, clean up the style and usage, build decent syntax into the language for doing common tasks so the coder doesn't have to boilerplate themselves to death. Take 5 years to do it, the enterprise will survive that long.
edit I wonder if the idea of an "API editor" to vet the interfaces for consistency and style in these large standard libraries makes sense?
The tzdata updates usually 10+ times a year, and a company actually operating in most of those zones needs a straightforward way to push out updates not only across all production systems, but across all production languages. This usually makes solutions that "transpile" the binary data into native source, or otherwise embed the data into resources somewhere, much more operationally expensive. Native solutions (e.g., pytz in Python) that can be pointed to a directory to pick up the tzdata binary files are a good middle ground, as they decouple the logic from the data.
Operationally, you need to be able to deploy the updates very quickly. Every year there are data updates that occur a few days before the DST change. If you have a lengthy dev/beta/prod rollout process to a very large number of machines, this can bite you. Just this week, Chile and Turkey are making changes, so prepare to update :)
WAT. Even if you assume that your user has exactly one TZ, how does this work with DST / summer time?
Is there any kind of talk of adding immutability to the core language? The "final" keyword doesn't really cut it for objects, and immutability by convention is not easy to enforce on large teams.
Ubuntu 14.04 LTS solves the problem by adding a new source of entropy. It adds an early-boot (before sshd) service that fetches data from an external server. In short: `curl http://some-server > /dev/urandom`.
EDIT: Looking for the default server but launchpad seems to be down. Ideally it would be a trusted source like the cloud provider themselves.
EDIT2: https://entropy.ubuntu.com/ and the public cert is provided with the package.
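On 14.04, which boots with upstart, such a job could be sketched like this; the job name below is made up (the actual packaged tool is pollinate). Note that writing to /dev/urandom mixes the data into the pool but does not increase the kernel's entropy estimate:

```conf
# /etc/init/entropy-seed.conf (hypothetical upstart job)
description "seed the kernel RNG from a remote server before sshd starts"
start on starting ssh
task
exec sh -c 'curl -s https://entropy.ubuntu.com/ > /dev/urandom'
```

Declaring it a `task` that starts on `starting ssh` makes upstart hold sshd until the seeding has run.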
Bitcoin handles 7 transactions a second on a good day, has no reliable institutional actors, and I can neither pay taxes nor satisfy court judgments with it. It is an impressive proof-of-concept for decentralized trust in cryptosystems, but it is hardly a currency.
You can't buy something online by just anonymously transmitting credits into an account -- who gave us the money? what do they want? You fill in an order form, a bunch of stuff happens, you get a receipt and a product and money gets transferred at some point. Very little of that is the currency transaction. You needed to generate a letter that said I want X and I'll pay you Y, then you paid Y and sent confirmation of the money transfer, and so on.
The bitcoin bulls think that Credit Cards are a terrible way to buy things because they charge too much for what they do (and their security sucks). This may be true, but bitcoin is only solving the easiest part of the problem. Banks can already transfer money around cheaply and securely. Indeed even market transactions are so fast and inexpensive that high speed trading is now a huge economic force.
The fact is that credit cards (a) let you buy stuff on credit, (b) provide transactional support allowing commerce to proceed smoothly, and (c) already work. A credit card is merely automated checking with overdrafts, which is a direct descendant of the letter of credit (from which cash evolved), which is in fact a more fundamental method of trading than barter. (Tracking people's accounts is the reason writing was invented.)
And I only use Wells Fargo ATMs, because they have a nice green glowing card slot, so you know nobody put a malicious card scanner on the ATM.
It's a complete reversal from a few years ago where I wouldn't carry cash and wouldn't go anywhere that was cash only. Going cash only also reduces the fees for stores where you buy things.
Cash is the way to go, despite all of our technology.
This is more commonly known as "marketing".
We make products and services with what we have available at the time, that doesn't necessarily mean that they are an indefinite solution, nor does it mean that we are going to hold our archaic solutions to the same standard we do with our new and improved solutions.
Something else that wasn't covered is that cash is also dirty and carries grime and infections. Ideally you should wash your hands before you eat if you handle any cash. (this is less of a problem with laminated/plastic notes like the CAD or EUR)
Imagine a super virus propagating and killing people through cash. Ouch!
The more policy-oriented economic points aren't nearly as strong, but overall, I thoroughly enjoyed it.
Just to be clear: I like Bitcoin a lot, but I don't think it will ever replace national currencies. I just enjoy pointing out the hypocrisy of a lot of bitcoin detractors.
Bank notes are a very simple system, so simple that they have been used for over a thousand years.
Satire is an interesting way to analyze things, especially in this case.
To be honest, I've been in the Nordic countries for a year (where all transactions are card-based, and there is no minimum amount for payment), and that situation rang surprisingly true, especially after yesterday, when I was facing a coat-check attendant who was expecting ten crowns in cash.
Money is good, in any form. Banks are good, they help allocate resources in a capitalist economy. Central banks are good, they can help other banks in need.
Fractional reserve banking is robbery. Inflation is robbery. Printing money is robbery.
Government, the Fed, banksters: they are a mafia, a band of thugs writ large.
They were not harebrained ideas established by anonymous individuals engaged in private debate outside the forums of democracy.
Look, I know all of you are very excited about these experimental economic and political systems, but please realize that the existing world that you are "suffering" under was mainly developed as a slow process of evolution.
Most extreme revolutionary ideas take a long time to work their way in to the existing societal structures and way of life. When they're quickly forced on to people, all fucking hell breaks loose.
So for fuck's sake, get some balance and control your zealotry, people!
And read some history books! The future is built from the past no matter how long or hard of a process that seems to be. Go with the flow!
Engage with other people. Engage with the existing institutions. Even if it is a better idea, there are billions of people who rely on the current system in ways that you can't predict.
A bunch of my friends on IRC were very keen on the idea, although I suspect they don't know what burden it would entail.
Yes, we'd be able to set up our own privacy policies; and yes, we could have super-fast internet, lay our own fiber, and create infrastructure of that nature.
However, agriculture, the bureaucratic hoops we'd have to jump through to successfully secede, and the general hard work and labour that would have to go in are, I believe, unaccounted for.
This is a cool concept, and micronations are a nice idea. I just wish I could find a plot of land that's not owned; I'd definitely put a lot of hard work into getting out of my country.
The whole secession would be somewhat more believable if the police protection of Vilks didn't cost Swedish taxpayers about a million USD per year.
Is there some other quality I am missing that separates it from any other piece of property someone owns, creates a website for, and calls a micronation?
Among all free-diving warm-blooded animals, they go the deepest. They dive to depths 25 times greater than their other equally famous and endangered cousin, the blue whale, which is the largest known animal to have ever inhabited the earth.
To give an idea of how deep they dive, here is a picture: http://i.imgur.com/ESp2j.jpg It needs to be magnified, both for perspective and for the little surprise at the bottom.
It is interesting how they manage to hold their breath for so long and yet manage to survive the bends (decompression sickness).
The whales are seriously challenging our assumptions about animal intelligence, empathy, society, culture and language. For a long time we believed that the primates were at the top. Search TED talks and YouTube for dolphin intelligence; don't miss the Attenborough ones. For lack of a better word, they are just amazing.
Dolphins, for example, are known to build difficult-to-make toys (air-bubble vortex rings) just to entertain themselves.
They have to discover how to make them. Sometimes they can be quite possessive: they will break the toy if someone not so knowledgeable wants to play with it. Once a dolphin figures out how to make one, his or her peers eventually figure it out too, so it spreads within a group like fashion. This behavior has been observed both in captivity and in the wild.
Dolphins in captivity try to imitate us and seem to have no trouble mapping our body parts to theirs. A story goes that a scientist observing a young dolphin through an underwater portal blew a cloud of cigarette smoke at it. The dolphin promptly went to its mother and did the same to the scientist with milk! It is now strongly believed that they call each other by name. They try to imitate human speech, which takes enormous effort on their part because, unlike parrots for example, their vocal tract is not conducive to this at all. People believe this to be an indication of their strong desire to communicate with us.
And they originated from ungulates: hoofed, warm-blooded animals. It came as a surprise to me that there were hoofed carnivorous animals.
Given the declining costs of DNA sequencing, all kinds of research that used to be prohibitively expensive even a few years ago is now becoming possible. For example, we recently awarded a $10,000 research grant to Dr. Joao Pedro de Magalhaes at the University of Liverpool to sequence the genome of the bowhead whale in order to study mechanisms for longevity in this warm-blooded mammal whose lifespan is estimated at over 200 years.
Not only are bowhead whales far longer-lived than humans, but their massive size means that they are likely to possess unique tumor-suppression mechanisms. "These mechanisms for the longevity and resistance to aging-related diseases of bowhead whales are unknown," says Dr. de Magalhaes, "but it is clear that in order to live so long, these animals must possess aging prevention mechanisms related to cancer, immunosenescence, neurodegenerative diseases, and cardiovascular and metabolic diseases."
The bowhead whale study will be conducted at the state-of-the-art Liverpool Centre for Genomic Research and results will be made available to the research community.
Whales, along with many other mammalian species (including humans) exhibit a perplexing divergence of somatic and reproductive senescence. Female whales hit menopause long before their lives are over, in some cases spending the majority of their lives in a non-reproductive state, which prima facie seems rather maladaptive.
A number of hypotheses have been proposed to explain what seems like widespread evolutionary selection for menopause, and none of them are completely satisfactory. The "grandmother hypothesis", for example, posits that experienced grandmothers assist in the care of their grandchildren, increasing their odds of survival.
Certain species of whales, including Orcinus orca, the killer whale, exhibit early-life menopause, and form stable matrilineal groups, making them ideal candidates for testing the grandmother hypothesis. Interestingly, studies on killer whales observe no significant correlation between living grandmothers and grandoffspring survival rates, though there are plenty of unaddressed confounding factors.
Humans are the only species where the grandmother hypothesis is supported by data, but the dearth of corresponding data in whales suggests the dramatic disparity in our somatic-reproductive senescence might be more strongly selected for by factors we are not yet aware of.
There are trees alive today which sprouted ten thousand years ago. Hell, Pando (albeit a clonal colony) could be 1,000,000 years old. http://en.wikipedia.org/wiki/Pando_(tree)
Astounding depths of time for a single organism to persist over - but ultimately dependent on a very sedate pace of life.
It so chanced that almost upon first cutting into [a whale, not Moby Dick] with the spade, the entire length of a corroded harpoon was found imbedded in his flesh, on the lower part of the bunch before described. But as the stumps of harpoons are frequently found in the dead bodies of captured whales, with the flesh perfectly healed around them, and no prominence of any kind to denote their place; therefore, there must needs have been some other unknown reason in the present case fully to account for the ulceration alluded to. But still more curious was the fact of a lance-head of stone being found in him, not far from the buried iron, the flesh perfectly firm about it. Who had darted that stone lance? And when? It might have been darted by some Nor' West Indian long before America was discovered.
gutenberg.org full text:
Seriously though, I find it remarkable that bowhead populations have come back so fast considering how long they live. The oceans must have been absolutely teeming with them back in the day, if they reproduce that fast and live that long.
"Planet of The Cetaceans"
But that's a far cry from the stack overflow actually causing any cases of unintended acceleration.
The crucial aspect of the failure scenario described by Michael is that the stack overflow did not cause an immediate system failure. In fact, an immediate system failure followed by a reset would have saved lives, because Michael explains that even at 60 mph, a complete CPU reset would have occurred within just 11 feet of vehicle travel.
We have seen this scenario played out a million times. Some system designers believe it is acceptable to keep the system running after (unexpected) errors occur: "Brush it under the rug, keep going, and hope for the best." Never ever do that. Fail fast, fail early. If something unexpected happens, the system must immediately stop.
I guess I was thrown off by the shoot-yourself-in-the-foot scenario, where the stack grows toward fixed data structures. If the heap and stack grow towards each other, you have quite a bit of flexibility (though with some danger of collision). If the stack grows towards fixed data structures, its size is fixed and it can cause a dangerous overflow. The only disadvantage of the safe layout is less flexibility, but for a critical embedded system, that is fine.
Toyota should not have been using recursion in the first place, and it seems they were too cheap to invest in analysis tools like Coverity.
The solution was to pop open the bonnet and swap in a replacement cable, which probably cost a couple of quid.
This recollection combined with the Toyota story merely convinces me that automobile automation has got completely out of control.
UPDATE: Yup, #70 on the MISRA C rules: http://home.sogang.ac.kr/sites/gsinfotech/study/study021/Lis...
"A program is, as a mechanism, totally different from all the familiar analogue devices we grew up with. Like all digitally encoded information, it has, unavoidably, the uncomfortable property that the smallest possible perturbations -- i.e., changes of a single bit -- can have the most drastic consequences."
I am hoping there are experts here who can shed some light on this.
What we are accustomed to discussing on HN, for example, does not exist in these worlds. Continuous integration? Unit tests? Even complexity analysis.
And very, very old code that's patched over and over and shipped "when it works".
It's usually people who have had an academic contact with programming languages and embedded development and don't know anything about code quality. But you can bet their bosses incentivize CMMI and other BS like that. (Yes, complete and utter BS.)
Not to mention ClearCase, which seems to be a constant: the worse the company, the more they love this completely useless piece of crap.
The obvious solution to stack overflows is to make the stack bigger. The obvious problem with this solution is that it just kicks the can down the road.
When 180+ IQ brains analyze your work they're bound to find "horrible defects" that no "competent" programmer would ever make.
But... sooner or later, it seems, we are going to go (back) there.
Instructions will become truly privileged, physically-controlled access. Data may go screwy -- or be screwed with -- but this will not directly affect the operating instructions.
Inconvenient? As development becomes more mature, instructions will become more debugged and "proven in the field". Stability and safety will outweigh ease and frequency of updates.
My 30+ year old microwave chugs along just fine. It doesn't have a turntable nor 1000 W, but I know exactly what it will do, how long to run it for various tasks, and how to rotate the food halfway through to provide even heating.
My 34 year old, pilot-light ignited furnace worked like a champ, aside from yet another blower motor going bad. I listened to the service tech when he strongly suggested replacing it before facing a more severe, "winter crisis" problem.
The new, micro-processor based model is better in theory (multi-stage speeds, and longer run times for more even heating). In practice, it's been a misery. The first, from-the-factory blower motor was defective. When that was replaced, the unit started making loud air-flow noises periodically.
Seeing the blower assembly removed, it's constructed of sheet metal. The old furnace, by contrast, had a substantial metal construction that was not going to hum and vibrate if not positioned absolutely perfectly and with brand new, optimized ductwork.
Past a point, reliability starts to -- far -- outweigh some other optimizations.
This is going to become true in our field, as well.
I was all excited to defend StackOverflow.com.
For the other commenters in this thread that don't see the appeal or keep comparing it to other alternatives, here's what's so compelling to me:
- Editor agnostic. This isn't just for vim, people. ST2 is awesome for this kind of thing.
- Undo. Easy undo. That's a killer feature, and I wouldn't be surprised if it's unique to this tool.
Effusive praise aside, I ran into a couple small issues on OS X:
    $ massren --config editor 'subl'
    massren: Config has been changed: "editor" = "subl"
    $ massren
    massren: exec: "subl": executable file not found in $PATH
Also, I'd like to be able to pass switches along with my editor command, like git config's core.editor. However, this doesn't seem to work:
    massren: exec: "subl -wn": executable file not found in $PATH
(not to spoil the fun of creating a useful command-line tool with Issue9 ;)
Since "wget https://raw.github.com/laurent22/massren/master/install/inst... fails with a certificate error (wget doesn't recognize GitHub's certificate), you need to either add an ignore-cert option or change that command to 'curl -O https://raw.github.com/laurent22/massren/master/install/inst... which completes without an error. Also, curl is installed by default on Mac OS X while wget is not :-)
This wasn't clear from the README, but massren will work with files across directories, which is both useful and confusing: it will rename matching files in different directories, but there is no indication in the editor of which directories those files are in.
<snark>Also, how could you build something so useful without generics!?</snark>
moreutils also includes 'vipe' (edit a pipe in your text editor) and other useful utilities.
Why don't more commands have this?
Also, I don't like MongoDB very much and almost always find another, more suitable database (SQL or NoSQL) for the projects I work on.