hacker news with inline top comments    31 Jan 2014
Ask HN: Why is ExtJS not getting any traction?
18 points by porker  9 hours ago   18 comments top 10
x86_64Ubuntu 8 hours ago 2 replies      
Let me preface this with the fact that I am a Flex dev who ventured into the HTML/JS/CSS stack during the "Flexodus". Clearly, ExtJS seemed to suit my fancy more than the other libraries in the JS world.

As far as why it doesn't have traction, I would attribute it to two things. The first is that the JS community isn't really into app development. What I mean is that when you look at JS next to languages like Flex and Silverlight, you notice a massive disconnect in ideas. In the JS world, visual components are rarely if ever first-class citizens, simple layout tasks are far more difficult than they need to be, and SPA paradigms are also second-class citizens. After noticing this and poking around, you find out that the mainstream JS community either doesn't want the solution to begin with, or seems to be re-inventing the wheel.

The second strike against ExtJS is the cost. I got a license for it a few years ago when I was delusional and thought I could use it to replace Flex. Dropping $1,000 on something when virtually every other tech out there is free is hard to choke down.

Also, it seemed like a major step down for those of us coming from the Flex community. Like I said, in JS app paradigms seem like afterthoughts, and each peculiarity wore me down further and further. For instance, if you have a button and you declare a listener function but misspell its name, you don't get an error complaining that the listener doesn't exist; you get a blank white screen. Another issue was marshalling data into objects. Whenever I brought back a Classroom JSON representation, the Student objects in its students collection remained unmarshalled; I had to use ANOTHER store to get those to behave as objects. All of these "why on earth is this acceptable" moments led me to believe such features weren't really in demand in the JS community, if not resented altogether.
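The misspelled-listener complaint above is about silent name resolution. Here is a minimal fail-fast sketch in Python, with invented class names (this is not ExtJS's API): resolve the handler eagerly when the component is wired, so a typo raises at once instead of yielding a blank screen.

```python
# A fail-fast sketch with invented class names: resolve the listener
# eagerly at wiring time, so a misspelled handler name raises immediately.

class Form:
    def save(self):
        return "saved"

class Button:
    def __init__(self, owner, handler_name):
        # getattr with no default raises AttributeError the moment the
        # name is misspelled, rather than failing silently later.
        self.on_click = getattr(owner, handler_name)

form = Form()
ok = Button(form, "save")        # wires cleanly
try:
    Button(form, "svae")         # typo fails loudly here, at wiring time
except AttributeError as err:
    print("caught:", err)
```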

lightblade 6 hours ago 1 reply      
Have you checked out KendoUI? It seems to sit somewhere on the spectrum between ExtJS and AngularJS.

I've used Sencha's frameworks on 3 different projects now, and I still hate it. For many different reasons:

1. The learning curve: This is probably something you already know, but what you may not be aware of is the other effects it can cause. If your team consists of members with different levels of experience (from zero up), code quality is going to suffer a lot from the knowledge gap.

2. ExtJS and Sencha Touch are two different products. Today's web apps are expected to run on many different platforms. With ExtJS you get point and click. With Sencha Touch you get swipe and pinch. But what if you're running on a touch laptop that needs both point-and-click and swipe-and-pinch? I'm working on a product right now that integrates the two. While we succeeded at reaching the goal, the result is not something I'm proud of.

3. Sencha is a walled garden that makes it hard to integrate other libraries. For the most part, Sencha already has a lot of what you need (data grid, combo box, etc.). But what if you want to do complex data visualization with d3? Complex interactive behavior with RxJS? Or realtime updates with socket.io?

Eventually, I found out that no matter which framework you choose to use, you'll end up needing to read its source code to understand what's underneath. If you have to do so anyway, why not pick a framework that's simple to understand?

kmnc 7 hours ago 1 reply      
Most ExtJS apps are probably internal admin apps, but I imagine it has decent usage throughout the corporate world. Five years ago, if you were looking to create a SPA business-oriented web app, ExtJS was really the only option, and at the time it also looked damn nice (compared to Java-generated web apps).

It seems the ExtJS community has fallen off a bit as the company has (in my opinion) put it in the rear-view as they focus on Sencha and mobile. I also never really felt that I have seen what a true ExtJS app could be. The examples are the same as they were 5 years ago. Only the theme has been changed. You get a lot for free with the datagrid and charts but they aren't the only game in town for this.

No one wanted to build customer-facing ExtJS apps: it had performance issues, it was hard to style so it didn't look like some dull enterprise thing, and at the time Rails was just getting popular. I haven't touched ExtJS in a while, but building an admin CRUD in Rails would be much easier. The extra features of ExtJS can be gotten elsewhere and easily integrated.

Angular just seems like a breath of fresh air when comparing it to ExtJS, and again most of the components can be found elsewhere.

PopsiclePete 6 hours ago 0 replies      
Last time I tried ExtJS, it struck me as a framework made by people who suffer from an extreme case of NIH syndrome. Personally, I prefer small, light frameworks that play with other frameworks well. I don't like big "uber" frameworks that re-implement everything. That's not value, in fact, it's negative value in my mind.

If I want to manipulate the DOM or do AJAX I'll use jQuery; if I want to do data binding I'll use knockout.js; etc.

But these are minor annoyances compared to the huge ExtJS no-no for me, which is that ExtJS seems to actually hate JavaScript. Or rather, they targeted the framework at people who'd rather write C++ in the browser: giant "class" hierarchies everywhere. Ugh.

Blinkky 6 hours ago 0 replies      
Cost. In my day job we use ExtJS, so I know it pretty well. I would have been the first to use it in my side project, but because pretty much every other JavaScript framework is free and the ExtJS licensing is terrible, it wasn't even in the running.
bayesianhorse 7 hours ago 0 replies      
The licensing is bad, the control over the design is terrible, and the experience developing and learning it is painful.

Never ever use it for a client who has any ambitions on GUI design. Figuring out how to put an extra pixel here or there can take hours...

cjbprime 8 hours ago 1 reply      
It's pretty restrictively licensed.
ciokan 8 hours ago 0 replies      
I mostly use ExtJS for admins and such. I would say that it offers little control over the design, structure and HTML compared to something like Angular, which allows full control over the presentation. Room for creativity is important.

The licensing is pretty restrictive too, and the prices are steep.

I wouldn't say Ext has no traction. I see Sencha doing pretty well, and I've worked with ExtJS since the early days. It just looks more "business oriented".

elwell 3 hours ago 0 replies      
Sencha Touch plays rather well with CoffeeScript
Ask HN: What is needed for a successful Series A?
8 points by SkyMarshal  6 hours ago   1 comment top
mchannon 34 minutes ago 0 replies      
Traction (customers, preferably paying) is the primary thing investors look for with existing businesses.

The less you need that series A, the easier it will be to obtain.

Beyond that, spend the next few months focusing on your product and market, then if you really want that series A, meet with as many potential investors as you can, but not to pitch; you want a nice competitive round when the time comes. The worst series A failure is a completed one with bad terms.

Mailgun IMAP/POP Mailbox EOL
3 points by mcnully  3 hours ago   3 comments top
ferrantim 2 hours ago 1 reply      
Mailgunner here. That's actually not the reason we end-of-lifed the product. It just wasn't very popular, unfortunately. If it had been, we probably would have kept it. This was actually what we thought would be one of the coolest features of Mailgun, but it never took off. Sorry you won't be able to use it anymore. Obviously you were a happy user. I wish there had been more! Btw, Rackspace email is $2/box, not $5. And they have reseller plans that cut costs even more. Not free, but still really cheap. Good luck with your search!
Ask HN: Why are "save password" checkboxes ever defaulted to "checked"?
2 points by RankingMember  2 hours ago   2 comments top 2
anigbrowl 1 hour ago 0 replies      
The vast majority of people are using their own personal computer and want the convenience more than the security.
0x420 2 hours ago 0 replies      
It is safer to assume the user is at a public computer.
Ask HN: Is there a way to generate text editor color schemes?
6 points by pspeter3  6 hours ago   2 comments top 2
Reassign Your Bluetooth "Phone" Button to Open Google Now
2 points by IWasteI  4 hours ago   discuss
Ask HN: Why don't major browsers ship with jQuery?
4 points by GigabyteCoin  7 hours ago   3 comments top 3
alt_ 6 hours ago 0 replies      
There is just no need to.

It's small, so the overhead of fetching it is minimal. Once fetched, it can be cached practically indefinitely and re-used by any site that uses the same CDN, while still making live updates possible.

Bundling it would only provide a stale version, add bloat and open up a floodgate for other inclusions.

anauleau 6 hours ago 0 replies      
According to Wikipedia: "Used by over 80% of the 10,000 most visited websites, jQuery is the most popular JavaScript library in use today."

Despite it being ubiquitous, it is still an open-source JavaScript library. Just a guess here, but different applications use different versions, and I'd imagine relying on one version supplied by the browser could cause issues for developers using jQuery, especially if their application depends on an older version of the library.

daveslash 6 hours ago 0 replies      
fwiw: This is purely an opinion/speculation.

I would imagine that it has to do with JavaScript being a language that conforms to a standard (the ECMA-262 specification and ISO/IEC 16262). jQuery is a library that sits on top of JavaScript - that is, it's just more JavaScript that makes things already available in JavaScript a little easier for developers to work with. jQuery also, to my knowledge, does not adhere to any standard.

jQuery is also updated periodically, which can potentially introduce different behaviors. If jQuery shipped with the browser, then a website might behave differently based on what version of jQuery was coupled with the browser. I know that standards, like JavaScript, are also updated and can introduce discrepancies, but with a standard you have a little more concreteness.

Ask HN: What's the worst you've ever screwed up at work?
196 points by kadabra9  1 day ago   298 comments top 120
patio11 1 day ago 6 replies      
I've only cried literal tears once in the last ten years, over business. Due to inattention while coding during an apartment move, I pushed a change to Appointment Reminder which was poorly considered. It didn't cause any immediate problems and passed my test suites, but the upshot is it was a time bomb that would inevitably bring down the site's queue worker processes and keep them down.

Lesson #1: Don't code when you're distracted.

Some hours later, the problem manifested. The queue workers came down, and AR (which is totally dependent on them for its core functionality) immediately stopped doing the thing customers pay me money to do. My monitoring system picked up on this and attempted to call me -- which would have worked great, except my cell phone was in a box that wasn't unpacked yet.

Lesson #2a: If you're running something mission critical, and your only way to recover from failure means you have to wake up when the phone rings, make sure that phone stays on and by you.

Later that evening I felt a feeling of vague unease about my change earlier and checked my email from my iPad. My inbox was full of furious customers who were observing, correctly, that I was 8 hours into an outage. Oh dear. I ssh'ed in from the iPad, reverted my last commit, and restarted the queue workers. Queues quickly went down to zero. Problem solved right?

Lesson #3: If at all possible, avoid having to resolve problems when exhausted/distracted. If you absolutely must do it, spend ten extra minutes to make sure you actually understand what went wrong, what your recovery plan is, and how that recovery plan will interact with what went wrong first.

AR didn't use idempotent queues (Lesson #4: Always use idempotent queues), so during the outage, every 5 minutes on a cron job every person who was supposed to be contacted that day got one reminder added to the queue. Fortuitously, AR didn't have all that many customers at the time, so only 15 or so people were affected. Less than fortuitously, those 15 folks had 10 to 100 messages queued, each. As soon as I pressed queues.restart() AR delivered all of those phone calls, text messages, and emails. At once.

Very few residential phone systems or cell phones respond in a customer-pleasing manner to 40 simultaneous telephone calls. It was a total DDOS on my customers' customers.

I got that news at 3 AM Japan time, at my new apartment, which didn't have Internet sufficient to run my laptop and development environment to see e.g. whose phones I had just blown up. Ogaki has neither Internet cafes nor taxis available at 3 AM. As a result, I had to put my laptop in a bag and walk across town, in the freezing rain, to get back to my old apartment, which still had a working Internet connection.

By the time I had completed the walk of shame I was drenched, miserable, and had magnified the likely impact that this had on customers' customers in my own mind. Then I got to my old apartment and checked email. The first one was, as you might expect, rather irate. And I just lost it. Broke down in tears. Cried for a good ten minutes. Called my father to explain what had happened, because I knew that I had to start making apology calls and wasn't sure prior to talking to him that I'd be able to do it without my voice breaking.

The end result? Lost two customers, regained one because he was impressed by my apology. The end users were mostly satisfied with my apologies. (It took me about two hours on the phone, as many of them had turned off their phones when they blew up.)

You'd need a magnifying glass to detect it ever happened, looking on any chart of interest to me. The software got modestly better after I spent a solid two weeks on improved fault tolerance and monitoring.

Lesson the last: It's just a job/business. The bad days are usually a lot less important in hindsight than they seem in the moment.
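Lesson #4 above (idempotent queues) fits in a few lines. This is a hedged sketch, not Appointment Reminder's actual code, and the dedup key and payload shapes are invented: tie each task to a deduplication key, so the 5-minute cron job can re-enqueue the same reminder all day and it still collapses to a single delivery.

```python
# A hedged sketch of an idempotent queue: enqueueing the same dedup key
# twice is a no-op, so repeated cron ticks cannot stack up duplicates.

class IdempotentQueue:
    def __init__(self):
        self._seen = set()     # dedup keys already accepted
        self.tasks = []

    def enqueue(self, dedup_key, payload):
        if dedup_key in self._seen:
            return False       # duplicate: drop silently
        self._seen.add(dedup_key)
        self.tasks.append(payload)
        return True

q = IdempotentQueue()
# Twelve cron ticks (an hour of outage) re-discover the same due reminder:
for _ in range(12):
    q.enqueue("appt-42:2014-01-31", {"kind": "sms", "msg": "reminder"})
print(len(q.tasks))  # 1
```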

yan 1 day ago 2 replies      
Not the worst at all, but probably the one I found most amusing. One of my jobs included some sysadmin tasks (this wasn't the position, but we all did dev ops), among my other responsibilities. I spent half a day going through everything with the person responsible for most of the admin tasks at the time. She was an extremely diligent and competent admin, did absolutely everything through configuration management, and kept very thorough personal logs and documentation on the entire network. One of my first tasks was to change backup frequency (or some other singular change) and, going by how I usually did things at the time, I just sudo'd a vi session, changed the frequency, and restarted the service.

She found out about it pretty quickly due to having syslog be a constant presence in one of her gnu screen windows and gave me a look. She quickly reverted what I did, updated our config management tool, tested it, then deployed it, while explaining why this was the right way to do things. I slowly came around to doing things the right way and haven't thought much about the initial incident until we found her personal logs that she archived and left on our public network share for future reference.

In the entries for the day that I started, we saw the following two lines:

    [*] 2007/09/09 09:58 - yan started. gave sudo privs and initial hire forms.
    [*] 2007/09/09 10:45 - revoked yan's sudo privs.

ggreer 1 day ago 4 replies      
One summer in college, I got an internship at a company that made health information systems. After fixing bugs in PHP scripts for a couple weeks, I was granted access to their production DB. (Hey, they were short on talent.) This database stored all kinds of stuff, including the operating room schedules for various hospitals. It included who was being operated on, when, what operation they were scheduled for, and important information such as patient allergies, malignant hyperthermia, etc.

I was a little sleepy one morning and accidentally connected to prod instead of testing. I thought, "That's weird, this UPDATE shouldn't have taken so long-oh shit." I'd managed to clear all allergy and malignant hyperthermia fields. For all I knew, some anesthesiologist would kill a patient because of my mistake. I was shaking. I immediately found the technical lead, pulled him from a meeting, and told him what happened. He'd been smart enough to set up hourly DB snapshots and query logs. It only took five minutes to restore from a snapshot and replay all the logs, not including my UPDATE.

Afterwards, my access to prod was not revoked. We both agreed I'd learned a valuable lesson, and that I was unlikely to repeat that mistake. The tech lead explained the incident to the higher-ups, who decided to avoid mentioning anything to the affected hospitals.

If it's any consolation, the company is no longer in business.

Just remember when you screw things up: Your mistake probably won't get anyone killed, so don't panic too much.
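The prod-instead-of-test slip above is usually mitigated with a guard between the operator and the connection. A generic sketch (the DSN names and the naive "prod" substring check are invented): destructive SQL against a production DSN must be explicitly unlocked, so a sleepy UPDATE against the wrong host dies before it runs.

```python
# A guard sketch with invented DSNs: destructive statements on anything
# that looks like production require an explicit opt-in flag.

DESTRUCTIVE = ("update", "delete", "drop", "truncate")

def execute(dsn, sql, allow_prod_writes=False):
    if "prod" in dsn and not allow_prod_writes \
            and sql.strip().lower().startswith(DESTRUCTIVE):
        raise RuntimeError("refusing destructive statement on " + dsn)
    return "ok"                # stand-in for actually running the query

execute("postgres://test-db/ors", "UPDATE schedules SET allergies = ''")  # fine
try:
    execute("postgres://prod-db/ors", "UPDATE schedules SET allergies = ''")
except RuntimeError as err:
    print(err)
```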

hluska 1 day ago 4 replies      
A local Subway franchise was the very first company that hired me. I was extremely young, shy, and intensely socially awkward, yet excited to join the workforce (as I had my eyes set on a Pentium processor).

When I worked at Subway, the bread dough came frozen, but you would put loaves in a proofer, proof it for a certain amount of time, and then bake it. My first shift, however, got busy and I left several trays in the proofer for a very, very long time. Consequently, they rose to roughly the size of loaves of bread, as opposed to the usual buns.

It was my very first shift alone at any job in my life, so I did the most logical thing I could think of and put the massive buns in the oven. They cooked up nicely enough and I thought I was saved. Until I tried to cut into one.

Back in that day, Subway used to cut those silly u-shaped gouges out of their buns. In retrospect, I think this was most likely a bizarre HR technique designed to weed out the real dummies, but at the time I was oblivious (likely because I was one of the dummies they should have weeded out). When I ran out of the normal bread, I grabbed one of my monstrosities, tried to cut into it, and discovered that it was not only rock hard, but the loaf broke apart as I tried to cut it.

That night, my severe shyness and social awkwardness had their first run-in with the beasts known as angry customers. I was scared I would get fired, so I promptly made new buns, but spent the rest of my shift trying to get rid of my blunder. I discovered some really interesting things about people that night. First, you'd be surprised how incredibly nice customers are if you are straight up with them. Some customers I'd never met before treated the big, crumbly buns as an adventure and, in doing so, helped me sell all the ruined ones.

In the end, I came clean (and didn't get fired). That horrible night was a huge event in the dismantling of my shell. It taught me an awful lot about ethics. And frankly, that brief experience in food service forever changed how I deal with staff in similar types of jobs.

Smerity 1 day ago 1 reply      
I was testing disaster recovery for the database cluster I was managing. Spun up new instances on AWS, pulled down production data, created various disasters, tested recovery.

Surprisingly it all seemed to work well. These disaster recovery steps weren't heavily tested before. Brilliant! I went to shut down the AWS instances. Kill DB group. Wait. Wait... The DB group? Wasn't it DB-test group...

I'd just killed all the production databases. And the streaming replicas. And... everything... All at the busiest time of day for our site.

Panic arose in my chest. Eyes glazed over. It's one thing to test disaster recovery when it doesn't matter, but when it suddenly does matter... I turned to the disaster recovery code I'd just been testing. I was reasonably sure it all worked... Reasonably...

Less than five minutes later, I'd spun up a brand new database cluster. The only loss was a minute or two of user transactions, which for our site wasn't too problematic.

My friends joked later that at least we now knew for sure that disaster recovery worked in production...

Lesson: When testing disaster recovery, ensure you're not actually creating a disaster in production.
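That lesson can be turned into a teardown guard: build the kill list from the group name, then cross-check every instance against its environment tag before terminating. This is a generic sketch with invented instance records, not a real AWS call.

```python
# A teardown-guard sketch: refuse to terminate anything tagged production,
# even when the kill list was derived from a (mistyped) group name.

def group_ids(fleet, group):
    return [i["id"] for i in fleet if i["group"] == group]

def safe_terminate(kill_ids, fleet):
    env = {i["id"]: i["env"] for i in fleet}
    prod = [i for i in kill_ids if env.get(i) == "production"]
    if prod:
        raise RuntimeError("refusing to terminate production instances: %r" % prod)
    return kill_ids            # stand-in for the real terminate call

fleet = [
    {"id": "i-001", "group": "DB",      "env": "production"},
    {"id": "i-002", "group": "DB-test", "env": "test"},
    {"id": "i-003", "group": "DB-test", "env": "test"},
]

safe_terminate(group_ids(fleet, "DB-test"), fleet)   # ok: test instances only
try:
    safe_terminate(group_ids(fleet, "DB"), fleet)    # the fatal typo, caught
except RuntimeError as err:
    print(err)
```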

jawns 1 day ago 2 replies      
I run Correlated.org, which is the basis for the upcoming book "Correlated: Surprising Connections Between Seemingly Unrelated Things" (July 2014, Perigee).

I had had some test tables sitting around in the database for a while and decided to clean them up. I stupidly forgot to check the status of my backups; because of an earlier error, they were not being correctly saved.

So, I had a bunch of tables with similar names:

    users_1024
    users_1025
    users_1026
I decided to delete them all in one big swoop.

Guess what got deleted along with them? The actual users table (which I've since renamed to something that does not even contain "users" in it).

So, how do you recover a users table when you've just deleted it and your backup has failed?

Well, I happened to have all of my users' email addresses stored in a separate mailing list table, but that table did not store their associated user IDs.

So I sent them all an email, prompting them to visit a password reset page.

When they visited the page, if their user ID was stored in a cookie -- and for most of them, it was -- I was able to re-associate their user ID with their email address, prompt them to select a new password, and essentially restore their account activity.

There was a small subset of users who did not have their user IDs stored in a cookie, though.

Here's how I tackled that problem:

Because the bulk of a user's activity on the site involves answering poll questions, I prompted them to select some poll questions that they had answered previously, and that they were certain they could answer again in the same way. I was then able to compare their answers to the list of previous responses and narrow down the possibilities. Once I had narrowed it down to a single user, I prompted them to answer a few more "challenge" questions from that user's history, to make sure that the match was correct. (Of course, that type of strategy would not work for a website where you have to be 100% sure, rather than, say, 98% sure, that you've matched the correct person to the account.)
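The narrowing step described above can be sketched as a filter over stored answer histories (the user IDs, questions, and answers below are all invented): keep only the accounts whose history agrees with every re-entered answer, then keep asking until one candidate remains.

```python
# A sketch of narrowing candidate accounts by re-entered poll answers.

def candidates(history, answers):
    # keep users whose stored answers agree with every re-entered answer
    return [uid for uid, past in history.items()
            if all(past.get(q) == a for q, a in answers.items())]

history = {
    101: {"cats>dogs": "yes", "tabs>spaces": "no"},
    102: {"cats>dogs": "yes", "tabs>spaces": "yes"},
}

print(candidates(history, {"cats>dogs": "yes"}))
# [101, 102]  -- still ambiguous, ask another question
print(candidates(history, {"cats>dogs": "yes", "tabs>spaces": "yes"}))
# [102]       -- narrowed to one user; now issue challenge questions
```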

gmays 1 day ago 1 reply      
In late 2008 when I was in the Marines and deployed to Iraq I was following too closely behind the vehicle in front while crossing a wadi and we hit an IED (the first of 3 that day).

Nobody was killed, but we had a few injured. Thankfully the brunt of it hit the MRAP in front of us. If it hit my vehicle (HMMWV, flat bottom) instead I probably wouldn't be here.

That was the first major operation on my first deployment, too. Hello, world!

My takeaway? Shit just got real.

We ended up stranded that night after the 3rd IED strike (our "rescuers" said it was too dangerous to get us). It was the scariest day of my life, but in similar future situations it was different. I still felt fear and the reality of the existential threat, but I accepted it. It was almost liberating. Strange.

I deployed for another year after that (to Afghanistan that time). After Afghanistan I left the Corps and started my company. Because if it fails, what's the worst that can happen? Lulz.

wpietri 1 day ago 2 replies      
Long ago when I was, I think, a sophomore in college and worked for the university IT group, I was trying to add an external drive to an early NeXT machine [1]. I wanted to try out their fancy GUI development stuff, you see. I was at best a modestly competent Unix admin, and this was circa NextStep 1.0, so the OS was... rough. It was in the dark days of SCSI terminators, so just telling if the drive was properly connected and, if so, how to address it was challenging.

After a couple hours of swearing, instead of working from a root shell in my own account, I just logged into the GUI as root. And there was a pretty interface showing the disks. I could just click on one and format it. Hooray!

Well either the GUI was buggy or I clicked on the wrong disk, because as the format was going, I realized the external drive wasn't doing anything. I was formatting the internal boot hard drive. And since nobody but me gave a crap about this weird free box somebody had given them, they had repurposed it. As a file server. For the home directories of a bunch of my colleagues. Who were now collecting around me wondering what was going on. Oops.

No problem, says I. I'll just restore from backups. But this thing used a weird magneto-optical drive [2]. The only boot media we had was on an MO disk. The backups were on another. And there was only one of these drives, probably only one in the whole state. The drives were, of course, incredibly slow, especially if you needed to swap disks. Which, I eventually discovered, I would have to do about a million times to have a hope of recovery.

Long story short, I spent 28 hours in a row in that chair. It was my immersion baptism [3] in the ways of being a sysadmin. The things I learned:

Fear the root shell. It should be treated with as much caution as a live snake.

Have backups. People will do dumb things; be ready.

A backup plan where you have never tried restoring anything may lead to more excitement than you want.

Be suspicious of GUI admin tools. Avoid new GUI admin tools if at all possible. Let somebody else be the one to discover the dangerous flaws.

If you were smart enough to break something, you're smart enough to fix it. Don't give up.

When some young idiot fucks up, check to make sure that they are sufficiently freaked out. If they are, no need to yell at them. Instead support them in solving the problem.

Seriously, my colleagues were awesome about this. I went on to become an actual paid sysadmin, and spent many years enjoying the work. The experience taught me fear, and a level of care that sticks with me today. I'm sure at the time I was wishing somebody would wave a magic wand and make the problems go away, but working through it gave me a level of comfort in apparent disasters that has been helpful many times since.

[1] http://en.wikipedia.org/wiki/NeXTcube
[2] http://en.wikipedia.org/wiki/Magneto-optical_drive
[3] http://en.wikipedia.org/wiki/Immersion_baptism

nostromo 1 day ago 0 replies      
I was once in charge of running an A/B test at my work. Part of the test involved driving people to a new site using AdWords.

After the test was complete, I forgot to turn off the AdWords. (Such a silly mistake...) Nobody noticed until our bill arrived from Google, substantially higher than normal. When my coworker came to ask me about it ("are these your campaigns?!?"), I just sank in my chair.

I think it cost the company $30k. I suppose it's not that much money in the grand scheme of things, but I felt very bad.

jboggan 1 day ago 1 reply      
I love these topics.

~ 2007, working in a large bioinformatics group with our own very powerful cluster, mainly used for protein folding. Example job: fold every protein from a predicted coding region in a given genome. I was mostly doing graph analysis on metabolic and genetic networks though, and writing everything in Perl.

I had a research deadline coming up in a month, but I was also about to go on a hunting trip and be incommunicado for two weeks. I had to kick off a large job (about 75,000 total tasks) but I figured spread over our 8,000 node cluster it would be okay (GPFS storage, set up for us by IBM). I kicked off the jobs as I walked out the door for the woods.

Except I had been doing all my testing of those jobs locally, and my Perl environment was configured slightly differently on the cluster, so while I was running through billions of iterations on each node I was writing the same warning to STDOUT, over and over. It filled up the disks everywhere and caused an epic I/O traffic jam that crashed every single long-running protein folding job. The disk space issues caused some interesting edge cases and it was basically a few days before the cluster would function properly and not lose data or crash jobs. The best part was that I was totally unreachable and thus no one could vent their ire, causing me to return happy and well-rested to an overworked office brimming with fermented ill-will. And I didn't get my own calculations done either, causing me to miss a deadline.

Lessons learned:

1) PRODUCTION != DEVELOPMENT, ever ever ever ever

2) Big jobs should be preceded by small but qualitatively identical test jobs

3) Don't launch any multi-day builds on a Friday

4) Know what your resource consumption will mean for your colleagues in the best and worst cases

5) Make sure any bad code you've written has been aired out before you go on vacation

6) Don't use Perl when what you really needed was Hadoop
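One way to blunt the STDOUT warning flood described above is to deduplicate repeated log messages at the handler, so a warning emitted millions of times is written once. The sketch below uses Python's standard logging (the original jobs were Perl; this only illustrates the idea, and the once-only policy is our own).

```python
# A dedup filter sketch: each distinct (level, message) pair passes
# through once; the millions of repeats are dropped before hitting disk.
import logging

class OnceFilter(logging.Filter):
    def __init__(self):
        super().__init__()
        self._seen = set()

    def filter(self, record):
        key = (record.levelno, record.getMessage())
        if key in self._seen:
            return False       # already logged once: drop
        self._seen.add(key)
        return True

log = logging.getLogger("cluster-job")
handler = logging.StreamHandler()
handler.addFilter(OnceFilter())
log.addHandler(handler)

for _ in range(10_000):        # the warning loop from the story, shrunk
    log.warning("Use of uninitialized value in addition")
# the handler emits that line exactly once
```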

leothekim 1 day ago 1 reply      
Not the worst, but certainly most infamous thing I've done: I was testing a condition in a frontend template which, if met, left a <!-- leo loves you --> comment in the header HTML of all the sites we served. Unfortunately the condition was always met and I pushed the change without thinking. This was back in the day when bandwidth was precious and extraneous HTML was seriously frowned upon. We didn't realize it was in production for a week, at which point several engineers actually decided to leave it in as a joke. Then someone higher up found out and browbeat me into removing it, citing bandwidth and disk space costs.

Now, if you go to a CNET site and view source, there's a <!-- Chewie loves you --> comment. I like to think of that as an homage to my original fuckup.

michh 1 day ago 13 replies      
Classic: forgetting the WHERE clause of a manual UPDATE query on a production system. The worst part is you know you fucked up the nanosecond you hit enter, but it's already too late. Lesson learned? Avoid doing things manually, even if a non-technical co-worker insists something needs to be changed right away. And if you must: wrap it in a transaction so you can roll back, and leave in a syntax error that you only remove when you're done typing the query.
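The "wrap it in a transaction" advice, sketched with Python's sqlite3 (the table and data are invented): run the UPDATE, inspect the affected row count, and roll back when it touched more rows than intended, instead of noticing the missing WHERE after commit.

```python
# Transaction-guard sketch: a WHERE-less UPDATE is caught by its rowcount
# and rolled back before it can be committed.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, email TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, "a@x"), (2, "b@x"), (3, "c@x")])
con.commit()

cur = con.execute("UPDATE users SET email = 'new@x'")   # forgot the WHERE
if cur.rowcount != 1:          # we meant to change exactly one row
    con.rollback()
    print("rolled back: %d rows would have changed" % cur.rowcount)
else:
    con.commit()
```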
yen223 1 day ago 2 replies      
I wrote a piece of code controlling an assembly line machine. These machines require manual operation and come with a light curtain, which detects when someone places their hand near the moving parts and temporarily stops the machine.

A relatively minor bug in the software that I wrote caused the safety curtain to stop triggering when a certain condition was met. We discovered this bug after an operator was injured by one of these machines. Her hand needed something like 14 stitches.

Lessons learnt:

1. Event-driven code is hard.

2. There's no difference between a 'relatively minor' bug and a major one. The damage is still the same.

preinheimer 1 day ago 1 reply      
I ended up as the architect for a new live show we were putting on. You could either pre-purchase some number of minutes, or pay per minute, it was like $4.99/minute or something insane.

The billing specs kept changing, as did the specs for the show itself. New price points, more plans, change the show interface, add another option here, etc. The plan had been to do a free preview show the day before to work out the kinks. That didn't happen.

The time leading up to show start was pretty tense, lots of updates, even a few last minute changes! Then the show actually started, brief relief. The built-in chat system started deleting messages; one of those last-minute feature changes had screwed up automatic old-message deletion. We had a fix though: update the JS, and bounce everyone out of the show and back in so the JS updates. Fixed!

Then the CEO pointed out that the quality just kept getting worse. Turns out that while the video player had both a numeric value and a string description for the different quality levels, it assumed they were in ascending order. So once it confirmed it could stream well at a given level, it automatically tried the next, which worked! Poor quality for everyone. Fixed, and another bounce.

Then it was over, time to go home. Back in the next day to finish off the billing code. I decided to approach it like a time card system. Traverse the logs in order, recording punch in time, when someone punches out, look up their punch-in times and set that user's time spent to the difference. Remove punch-in and out from the current record so they're not used again.

Now two facts from above added up to a pretty serious bug:

1) I _set_ the time spent to the difference between the two times. Not added, set.

2) We bounced everyone from the show twice to update their JS and video player. So everyone had multiple join/parts.

I under-billed customers by tens of thousands of dollars.
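The set-versus-add bug is easy to reproduce. A sketch (the event format and the numbers are invented for illustration):

```python
from collections import defaultdict

def bill_minutes(events, buggy=False):
    """Compute minutes watched from (user, action, minute) punch events."""
    punched_in = {}
    totals = defaultdict(int)
    for user, action, minute in events:
        if action == "in":
            punched_in[user] = minute
        elif action == "out" and user in punched_in:
            span = minute - punched_in.pop(user)
            if buggy:
                totals[user] = span   # the mistake: SET the time spent
            else:
                totals[user] += span  # correct: ADD each session
    return dict(totals)

# Two mid-show bounces mean every viewer has multiple join/part pairs.
events = [("alice", "in", 0), ("alice", "out", 40),   # 40 min before bounce
          ("alice", "in", 41), ("alice", "out", 90)]  # 49 min after
print(bill_minutes(events, buggy=True))   # {'alice': 49}, under-billed
print(bill_minutes(events, buggy=False))  # {'alice': 89}
```

With the bounces, only the final session survives the buggy path, which is exactly how the under-billing happened.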

Things I learned:

- Don't just argue that you need a trial run, make sure management understands the benefits. Why, not What.

- Duplicate billing code. After that, a co-worker and I wrote two separate billing parsers for things, one deliberately designed to be different rather than efficient, so the two results could be checked against each other.

- Give yourself ways to fix problems after they crop up. The bounce killed my billing code, but not doing it would have damaged the actual product (which later became a regular feature). Wish that thing had been my idea.

alexmarcy 1 day ago 1 reply      
My worst would have been catastrophic if I had waited one minute to make my mistake.

I was commissioning a new control system at a power plant's water treatment facility. I was fairly new to the industry, and my on-the-job training had mostly consisted of looking over the shoulder of the guy who did the bulk of the work.

This particular day the guy was out sick and we had to finalize a couple of things before we ran through the final tests.

There was an instruction to open a valve to fill a tank and it had the wrong variable linked to it. The problem was that to maintain the naming standards, I had to do a download to the processor to make the change. When I had been doing work in the office this was not a big deal: download the program to the processor, it stops running for a moment while it loads the new logic into memory and starts back up.

Not thinking through the implications of the processor shutting down while the process was up and running I made the code changes, hit download and about 30 seconds later an operator came running over looking like he had seen a ghost and he was pissed.

While I was making my code changes the operator was hooking up a hose to drain a rail car of some chemicals. The way the valves were configured before I made my changes was correct and would have had no consequence if I hadn't touched anything. The way the valves were configured when the processor restarted would have routed the rail car's contents to the wrong tank, resulting in a reaction which would have created a huge plume of highly toxic gas. The way the wind was blowing, this plume would have blown directly toward the largest town in the area and could have killed a ton of people.

The operator heard the valves in question changing position before he opened the valve on his hose to empty the rail car and figured something was up. When he saw the whole process had shut down he got really angry because I had ignored the protocol in place to avoid such a disaster.

I got chewed out and kicked off the site. My boss attributed my mistake to inexperience and I had to give a safety presentation on what I did wrong.

Lessons learned:

Be sure you are aware of any implications your actions have. If you are unsure or guessing about something, stop what you are doing and go ask someone first.

Don't give people mission critical work on their first project and have them work unsupervised. Training is important.

Always be aware of safety requirements, especially when you are working with machinery, automated processes, chemicals or anything else that can hurt, maim or kill you.

discardorama 1 day ago 1 reply      
I bet > 66% of these are something to do with databases. :-)

My story (though I wasn't directly responsible): we were delivering our software to an obscure government agency. Based on our recommendation, they had ordered a couple of SGI boxes. I wrote the installation script, which copied stuff off the CD, etc. Being a tcsh aficionado, I decided to write it in tcsh, with the shebang line #!/usr/local/bin/tcsh

Anyways: we send them the CD. Some dude on the other side logs in as root, mounts the CD, and tries to run "installme.csh". "command not found" comes the response. So he peeks at the script, and sees that it's a shell script. He knows enough of unix that "shell == bash". So he runs "bash installme.csh". A few minutes go by, and lots of errors. So he reboots; now the system won't come up. The genius that he is, he decides to try the CD on the second SGI box. Same results.

In the script, the first few lines were something like:

    set HOME = "/some/location"
    /bin/rm -rf $HOME/*
Hint: IRIX didn't ship with /usr/local/bin/tcsh. And guess what's the value of "HOME" in bash?
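The punchline is reproducible. A small Python driver (assuming /bin/bash is present) showing what bash actually does with the csh-style assignment:

```python
import subprocess

# In bash, `set HOME = "/some/location"` does NOT assign HOME: `set` just
# replaces the positional parameters, and $HOME keeps its inherited value.
script = (
    'set HOME = "/some/location"\n'
    'echo "args: $1 $2 $3"\n'
    'echo "HOME is still: $HOME"\n'
)
out = subprocess.run(["/bin/bash", "-c", script], capture_output=True,
                     text=True, env={"HOME": "/root"}).stdout
print(out)
# args: HOME = /some/location
# HOME is still: /root
```

So the rm -rf $HOME/* that follows expands under the inherited HOME (for a root login, /root), not /some/location.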

tptacek 1 day ago 0 replies      
I once accidentally ruined the Internet.


rfreytag 1 day ago 0 replies      
About 30 years ago I deleted the JOBCONTROL process on an old VAX 11/780 thinking it might be the reason why someone's process was stuck.

It wasn't but an hour before I lost sysadmin privileges.

Never "experiment" with a production system - ever.

benched 1 day ago 0 replies      
I once cared about a job to the point of damaging my mental health. I haven't made that mistake since. I did, however, rather stupidly accomplish the same thing, years later, by caring too much about an entrepreneurial venture.
Beltiras 1 day ago 1 reply      
I work at a newspaper as a programmer for the website. Mostly my job is backend programming, some HTML and CSS work (mostly left to designers). I run our local computer infrastructure as well as manage a cluster for our online presence and assist in technology related journalism as well as assisting our CEO in managing the IT budget.

I inherited a mess of an architecture and am finally getting around to rewriting our deployment process. We buy VM services from a local outfit and the prices are basically an arm and a leg for rather small machines. Due to this my predecessor put in place an insane deployment script. It pulls the new version from github then reloads code on the running dynos, one after another. Reverting is out of the question with our current approach to VCS (something I am also fixing). Most of the time this is no problem, all we are changing really is some template code, or introducing new models and their views.

Thinking back I am quite happy we don't run into more problems than we do, but also happy that this type of insanity is soon in the rearview mirror.

The worst mistake was recently, cost us about 4 hours of downtime during the busiest time of the day.

A big feature on all news sites are lists of stories to present to the user to look at after they have read what you put in front of them at the moment. They may take the form of most viral, most read, most commented, sliced by time or category or many other factors. My predecessor had written all those lists statically, which made maintenance a nightmare and extension very fragile.

I made a function that was a generic list of items. You supply basic parameters, amongst them a QuerySet for what would construct the list and my function would check to see if it was cached and if it wasn't, generate it and cache it.

The framework I use (Django) generally uses lazy evaluation for all QuerySets and I rarely have to think about the size of the list I generate, I just take care to limit the query before I list() it. During development nothing showed up as a problem and I deployed this and all seemed to be good with the world.

A week passes by where I made at least 2 minor deploys (small changes to templates, minor tweaks to list filters) and all seemed to be good with the world.

Designer sends me a pull request, I look over the code, just some garden-variety template changes, nothing that should raise an eyebrow. Make the merge, plan to deploy and then go to lunch. Deployment done, all seems well for 2 minutes but then suddenly servers lit on fire. Pages spewed 404's and 500's like there was no tomorrow.

For 4 hours I tear my hair out, examine every piece of code I was deploying that day, call in the big gun support (the kind that costs more money than I care to think about). Everything I was looking at pointed to the caching agent not working. Too many pageviews requesting the database, too much load on the servers, reboots made them work fine for about a minute but then everything became bogged down.

The big gun support pointed something out finally that I had missed: Traffic from the database to the dynos was abnormally high. Made me take a look at code that had been there for a while and lo and behold: For some reason when you pass a QuerySet as a parameter, it seems to be evaluated for the receiving function! 2 lines of code added, one deploy, problem fixed.
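A hedged sketch of the shape of such a fix; `cached_list` and its dict cache are hypothetical stand-ins, not Django's actual API, and `range` stands in for a lazy QuerySet. The idea is to pass something still lazy and slice it before forcing evaluation:

```python
cache = {}

def cached_list(key, make_queryset, limit=10):
    """Build, slice, and cache a list; the full result set is never forced."""
    if key not in cache:
        qs = make_queryset()           # construct lazily, inside the helper
        cache[key] = list(qs[:limit])  # evaluate only the slice we need
    return cache[key]

# Passing a callable instead of a live queryset means the query cannot be
# accidentally evaluated in full at the call site or in transit.
most_read = cached_list("most_read", lambda: range(1000), limit=5)
print(most_read)  # [0, 1, 2, 3, 4]
```

On a cache hit the callable is never even invoked, so repeat pageviews put no load on the database at all.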

I have no idea to this day how this code could be live for a week without causing problems until an unrelated change triggered the bad behavior. This is not the first time I've seen strange behavior from code; I once ran into a Heisenbug in Java code too.

There's a happy ending to this. I made a big mea culpa slideshow where I pointed out all the flaws and what we needed to do to prevent a recurrence. I got support to make the changes needed and my new cluster goes live day after tomorrow. Now I can carefully change NEW dynos for a deployment, keeping the old ones around if the shit hits the fan. I got some changes instituted in how we approach VC, something that's hampered work for a while. And we save money in the long run because we will no longer be paying an arm and a leg for the VMs (AND I got to learn about clustering machines with HA, goodstuff with gravy).

snikch 1 day ago 0 replies      
Sigh, I cringe even remembering this one.

We were storing payment details sent from a PHP system into a Ruby system, I was responsible for the sending and receiving endpoints. Everything was heavily tested on the Ruby end but the PHP end was a legacy system with no testing framework. Since the details were encrypted on the Ruby end, I didn't do a full test from end to end AND unencrypt the stored results.

Turns out for two months we were storing the string '[Array]' as peoples payment details.

Takeaway: If you're doing an end to end test, make sure you go all the way to the end.

m3mnoch 1 day ago 2 replies      
way back in the late 90s stone age of interactive ad agencies, we were doing our first really big gig for hp. it was a demo shipping out to retail stores showcasing one of their products -- a run of 30,000 stamped cd roms.

i was the one developing the macromedia director app running on the cd.

we were on-time.

we were ready to send them out the door.

it was awesome.

and then we tested the rom outside of our network...

in some far-off corner of code, i had baked in a hard reference to one of our file servers on our network for some streaming assets. the cd failed as soon as you put it in the drive due to that reference to the missing file.

by the time we discovered this, we'd already glass-mastered and stamped 30,000 discs to the tune of $40k or so. or, about $6k per employee. in a company that booked about $50k the previous year. where i worked for free for 9 months.

so, my line of code cost our little company the equivalent of almost all of our previous year's revenue -- not profit, but revenue.

we, of course, had to make the run again -- only this time at the emergency rush prices. and this time, we were running late.

we managed to book some time in the middle of the night at the stamping plant. it was 4am. i had a courier standing over my shoulder watching me run the final build again, this time without the dreaded line of code -- which broke other things i had to fix when i removed it -- before he could take it.

i finished testing. ejected the disc. handed it to the courier, who started running as he was placing it into its case. he drove like hell to make it to the airport where we counter-to-countered it on a 2-hour, 6am flight to vegas for stamping.

oh, and it almost got even worse from there. almost.

we didn't know if they would be able to stuff the cds into the packaging because this was an emergency run and they didn't have the people available.


we were actually on our way to rent a uhaul which we calculated we could drive to vegas just in time for the stamping run to finish. from there, we would load the discs on their spindles, and 4 of us were going to sit in the back of the van, stuffing 30,000 discs while we drove the uhaul to palo alto. from vegas. yes, stuffing discs in the back of a traveling uhaul.

we even had the patio furniture from one of the employees yards already picked out to sit in while we were in the back of the truck.

luckily, the plant managed to squeeze in our packaging (at rush pricing, of course) and all we needed to do was have one of our guys take them as luggage on a later flight that day to the bay area instead.

as to a couple, big lessons learned?

1) i can honestly tell you, i've never, ever had a hard-coded, local network link in anything i've shipped since and never will again. always test off-network. especially these days with mobile apps and their on-off-network states.

2) a strong, non-finger-pointing team is where you need to be. i felt appropriately awful, but we handled it as a team and proceeded to grow that little company to about $40 million a year before a merger.

p.s. oh, and next time, remind me to tell you about the time i ran a database query on production that nuked the entire website for the publicly-traded software company which relied on -- wait for it -- the website to do all its commerce.

trustfundbaby 1 day ago 0 replies      
rm -rf .

yup. that really happened. it was 4-5am and I'd been working all night. I was on the server trying to set something up and was trying to blow away a folder ... I did a normal rm and that didn't work (obviously) because there was crap in the folder. So I pulled out my nuclear weapon to nuke the folder but left off the preceding ./ (which still wasn't that smart anyway) ... I sat there for a second wondering why the deletion was taking so long ... then another 30 seconds ... then a minute ... then I looked at what I'd just typed again ... then I realized what had happened.

ctrl-c'ed (or d, can't remember now) out of it. then tried to find root folders

cd /etc => folder not found

cd /var => folder not found

I'm from a third world country where we laugh at Americans (sorry) for throwing up when they're nervous or having panic attacks, but at that moment, I had a full blown panic attack. I'll never forget it.

The work was a subcontract for a client who was doing work for Nike, and it was a decently sized project that was critical to the success of the firm, and I'd just blown away their live production server ...

After freaking out and almost crying for 5 minutes, I decided to call Media Temple support (we were using one of their vps servers) ... and by the biggest absolute stroke of luck they'd just backed up the entire server ... not even 2 hours prior to my madness. $100 for a full restore (I don't recall why), and would I like to do that?


so they restored the server for me. I wrote an email to the head of the small company I was doing all the work for, explaining what had happened and telling him I'd sent over a check for $100 to cover the backup because it was my fault. He was obviously very relieved and never cashed the check I sent.

I still get chills thinking about that exact moment when I thought I'd fucked up my career and reputation for good.

cgh 1 day ago 1 reply      
I was in a remote meeting and failed to realise my laptop's camera was broadcasting. A roomful of people saw me, clad in horrid workout clothes, jam my finger up my itchy nose and scratch my balls.

Key takeaway: always check the cam.

admiraltbags 1 day ago 0 replies      
Lurker turned member to post this.

Second web related job at an insurance company, I was 20 years old at the time. We were heavy into online advertising, mostly banners at the time (this was right around when adwords started to get big). The company just bought out all of the MSN finance section of their site for the day-- it was a pretty big campaign ($100,000). We drove all the traffic to a landing page I had created with a short form to "Get a quote".

IT had given me permissions to push things live for quick fixes and such, I made a last minute design tweak and, you guessed it, broke something. I was checking click traffic and inbound leads and realized traffic was through the roof but leads were non-existent. This was about 45 minutes after the campaign was turned on. I jumped on the page and tested it out and got an error on submit. FUCK. I literally started to perspire INSTANTLY.

Jumped into my form and quickly found the bug, can't recall what it was but something small and stupid, then pushed it live without telling a soul. Tested, worked, re-tested, worked. Ran some quick numbers to get a ballpark estimate on the damage I caused... several thousand.

Stood up and walked over to the two IT guys, mentioned I borked things and that I had fixed it... what should I do? I can still see the look on their faces. Shock, then smiles. Walked back to my desk and about 10 minutes later my two bosses show up (I worked for both dev & marketing managers).

They said thanks for catching the problem, not to worry. I did good for finding it myself, fixing it, and pushing it live. I was still sweating and shaking. They walk off and later that day marketing manager informs me MSN will refund us for the 45 minutes of clicks.

It took about a month before I felt competent enough to touch our forms again.

zimpenfish 1 day ago 1 reply      
Many years ago, when I was but a fresh faced idiot, the partition that contained the mSQL database which had All The Data filled up. I moved it into /tmp because there was plenty of space.

On a Solaris box.

Hilarity ensued when we next rebooted it.

byoung2 1 day ago 0 replies      
When I worked at ClearChannel back in 2010, we rebuilt Rush Limbaugh's site. When migrating over the billing system, I realized a flaw that granted at least 20,000 people free access to the audio archive ($7.95/month).

The billing provider processed the subscriptions, but their system would only sync with our authentication database once a week with a diff of accounts added or removed in the past 7 days. You got the first 7 days free for this reason. If this process failed (e.g. due to a connectivity issue, timeout, or SQL error), all accounts after the error would not be updated. Anyone with a free trial or people who cancelled during a week with an error would get a permanent free trial.

I rewrote the code to handle errors and retry on failure so that errors wouldn't happen in the future, but my downfall was running a script that updated all accounts to the correct status. Imagine angry Rush Limbaugh fans used to getting something for free now getting cut off (even though it shouldn't have been free). Management quickly made the decision to give them free access anyway, so I rolled back the change.
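The retry-on-failure rewrite described above might be shaped like this; `apply_change` and the error type are placeholders, since the actual sync code isn't shown:

```python
import time

def sync_accounts(accounts, apply_change, retries=3):
    """Apply each account diff, retrying transient failures instead of
    silently dropping every account that comes after the first error."""
    failed = []
    for acct in accounts:
        for attempt in range(retries):
            try:
                apply_change(acct)
                break
            except ConnectionError:
                time.sleep(0.01 * 2 ** attempt)  # tiny exponential backoff
        else:
            failed.append(acct)  # surfaced to the operator, never swallowed
    return failed
```

A transient error on one account no longer poisons every account after it in the weekly diff; only accounts that fail every retry get reported.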
ufmace 1 day ago 0 replies      
When I first started my professional career, I was a field engineer in the oilfield, working on drilling rigs around Texas. There was some amount of computer stuff, but a lot of hardware work too. One of the things that we had to do was install a pressure sensor on the drilling mud line, which is normally pressurized to around 2k psi with water or oil-based drilling fluid.

This sounds like a simple task, but it gets complicated by the variety of pipe fittings and adapters available. Our sensors are a particular thread type, and we have to find a free slot to install them, and come up with any pipe connection converters necessary to install them there. Another tricky part is that the rig workers who actually know about all of this stuff are often not particularly eager to help out.

So on one particular job, the only free slot to install the sensor is a male pipe fitting, capped with some sort of female plug. Our sensors are male in that pipe size, so I need a female-female adapter to install it. I go looking around and come up with one, not paying too much attention to it. I install it, and everything seems to go more or less smoothly. We go on drilling with this installed for like a week or two.

One day, the rig manager comes to find me and ask me about this adapter that I used. He tells me that it is meant for drinking water lines, and is only rated to 200 psi. And had been installed on a 2000 psi line for weeks. My jaw dropped in shock - I have no idea how that adapter didn't fail, and it's entirely possible it could have hurt or killed somebody if it did.

They sent one of their guys to find an adapter that was actually rated for the pressure and replace it, and never said much else of it. No telling how much trouble I could have been in there if anything else had happened. It did make me a lot more safety-conscious.

frogpelt 1 day ago 0 replies      

I was doing HVAC work while I was in college and we were removing an old air handler from underneath a house. Just inside the crawl space, under the access door, was a water pipe. My boss told me to make sure I held it down while we slid the air handler out through the hole. I lost my grip on the pipe and the air handler snapped it in two, at which point gallons of water began to gush into the crawl space.

I ran for all I was worth to the road, which in this case was about 600 feet away, to turn off the water at the water meter. I ran up and down the road in front of the house and never found the water meter. So I ran back to the house and inside and told the homeowner who promptly informed me that they used well water. She called her husband and he told us where to turn off the well pump.

It wasn't really that bad in the grand scheme of things but letting the homeowner's water gush under the house for about 15 minutes does not bode well when you are supposed to be there to fix problems not create them.

itwasme 1 day ago 1 reply      
I once worked for a company that schedules advertising before films. This wasn't in the US and the company had a monopoly over all of the ads shown across the country. It was my first programming job and done during university holidays, so I was there for a couple of months and then back to university. Toward the end of the following year I get a phone call: something was wrong with the system, it was allowing agents to overbook advertising slots. I diagnosed the problem over the phone and they put a fix in but management decided it was too late for the company to go back and cancel all of the ads that were already booked. This was not surprising as it was the most money they'd ever made. Conveniently, the parent company owned the cinemas so they did a deal where they just showed all of the ads that were booked.

Because of me, one December, everyone in the country who went to the cinema got to watch anywhere between 30 and 45 minutes of ads before the main presentation started.

Lesson learned: write more tests, monitor everything.

vacri 1 day ago 0 replies      
Still feeling my way around in the new job, I was fiddling with a backup script, got distracted, turned back, and dropped the production database. Two minutes later: "Hey, is the website down?" Then I look at the prompt...

I run around like a headless chicken trying to find whoever knows the right backup to use and so forth, and I can't figure out why everyone is so calm and collected about it. Production was down; shit, I hope I still have a job. Turns out we had no active clients at the time: no-one was accessing the site. We'd finished one run and were in 'dead time' before the next. My next project involved implementing coloured prompts, and I no longer leave production ssh sessions lying around when I've finished with them.

My CTO still has me listed as "database [vacri]" in his phone...

donretag 1 day ago 0 replies      
A long time ago while working on a *nix box logged in as root, I executed a simple "!find". Basically execute the last find. In root's history, the last find command was something like "find ... -exec rm ...". The command was run at the root of the content directory of a CMS, deleting all the content (major media website). CMS was down while backups were restored.

I now never execute ! commands as root. Actually, nowadays I simply use CTRL-r.

lancepantz 1 day ago 1 reply      
I worked for a very hot start-up in San Francisco some years ago. I took an Ambien after working for about 40 hours on Adderall, blacked out, then transferred $165 million worth of virtual goods to my father's best friend. I didn't remember doing it, but as an investigation went on the next day, different details slowly came back to me.

I immediately admitted it and showed everyone the bash history, I was suspended, then fired.

saganus 8 hours ago 0 replies      
So the post is not on the front page anymore, but I guess confessing feels good, judging by all the people that contributed.

My screwup was at my first "real" job, fresh out of college. I was asked to free up some space on the production server at $BIGCOMPANY, because it was already at 99% capacity (it managed to get to 100% for a few minutes before I "solved" the problem). The thing is, at this $BIGCOMPANY, for some reason the budget for disk drives was non-existent, and this meant that whenever the disk usage was at or below 95%, we were happy because we still had free space... figure that.

So here I come, armed with the most dangerous tool a newbie can wield... root access and the drive to impress your boss. I said to myself, "I've used root at my home machines plenty of times and nothing bad happened because I've been using Linux for several years by now and I know I need to be careful... so I don't get why everyone says you should never log in as root". Oh boy, how I learned the hard way.

To continue my story, it turns out that the easiest/fastest way to free up some space was to delete the log files for pretty much everything (except the last 5 or 10 logs... because we were "careful", in case we ever needed them). We usually deleted things under certain directories known to hold "useless" logs. So here comes Mr. Newbie-guy-with-the-need-to-shine, and I thought to myself, "why keep deleting the logs from the same directories over and over, if that only buys us about 1 or 2 percentage points, instead of clearing as many logs as possible across the whole system and freeing up a lot more space?"

After thinking about it for like 10 seconds, the most genius thought of my career materializes: do an rm -rf *.log on the topmost level directory of where we used to store everything (webserver, webservices, databases, etc). I happily pressed enter, and a couple of minutes later, hooray! I got the disk usage down to a whopping 90%! I was a hero! That meant we had bought enough time to keep on working without worrying about disk space for at least another one to one and a half months. This was a clear victory and a testament to my superb sysadmin skills.

Fast forward 4 hours, and the phone starts ringing like crazy as every other employee (the non-IT ones) started wondering, and then calling us, to try to figure out why their data was gone. They did not understand how they could have been working A-OK so far and then suddenly ALL data from the sales team, admin team, the bosses, etc was gone. And then a few minutes later... the whole intranet came down, crashing and burning... then a full stop... nothing was working.

So we went to the logs directory... oops... no logs there! Ok, let's try to ping the DB. Dead. It wasn't running, and was responding with an unknown error. When I tried to connect, it would do so, but then some cryptic ORA-xxxxx error came up. No problem, says I, I'll just google it and fix it.

Not so fast, young grasshopper. That error meant that the DB was out of sync with its own files used for, ironically, data corruption prevention and rollback (or something like that... to this day I still don't fully understand what those files were used for).

As far as I can remember those logs were a sort of pre-commit area, where all changes would be stored on those files and every X amount of hours the changes would get committed to the actual DB tables. It was some functionality that supposedly was used to correct corrupted entries and to recover (figure that...) and roll back data when lost, or something like that. And unfortunately bringing the system back in sync was way out of my league (did I fail to mention that I was by no means a DBA?).

However, a stroke of good luck came down on me, as the company had a support contract with Oracle and it was the Platinum-covered-diamonds level or something. That meant that after creating a support ticket at like 1AM, I got a call from one of the support guys less than 20-30 minutes later. This guy seems calm and tells me I should not panic, it was just as easy as doing $crypticOracleStep1, $crypticOracleStep2, $crypticOracleStep3 and voilà! all would be good again. Except for the fact that I had NO IDEA what those steps actually required me to do. Almost in tears, I asked the rep to pretty please SPELL every command I needed to execute, letter by letter. I did not want to screw up again.

So there I was, at close to 2AM, with my boss breathing down my neck asking me what every frigging letter of the command I was typing did (which I had no idea...), all the while trying to keep up with this super friendly guy who was patient enough to spell everything two times.

After a couple more commands, behold! The DB could be brought up again! Oh boy, did I feel relieved. I was jumping up and down because I had fixed my stupid mistake... or so I thought. After almost causing the support guy to go deaf due to my loud cheering, he says "however...". Wait... what? There's a "however"?!? Then he continues: "since you deleted the pre-commit file of the last day, the DB is back in sync... up to yesterday". My jaw dropped to the floor. That meant that the ENTIRE previous day was utterly lost... sales data, contracts, customers' info, etc.

I thanked the guy for his help, hung up the phone and turned to my boss, telling him that I was ready to turn in my resignation letter just after helping capture whatever data was actually available (on paper, by calling customers and asking them again, etc).

My boss then turns to me and says, don't worry. We've all been through this at least once in our careers. Even I made a mistake that is terribly similar... however when I brought down the database, it took us one full week instead of one day... and rest assured that as I learned my lesson, you did as well. And I need guys like you, that have the initiative to solve things... and the ability to learn from mistakes. So don't worry, you are not losing your job. However you can't go home until you help everyone get as much data back as you can.

Aw shoot... well... I guess it could've been worse. So after having lunch with my boss and the other teammates at like 6-7 AM, I went to the sales dept and started asking around how I could help them get their data back.

Those were the longest 38 continuous work hours I've ever had to endure. I did not go back home until more than a full day and a half later. I was tired as hell, to say the least... but to this day I think it was a blessing that I got to learn such a hard lesson while being backed up by a boss who was very cool and progressive about it.

Lessons learned:

0) Never ever ever ever use root, especially for deleting files and ESPECIALLY with the -f flag.

1) Do not assume that something you know will hold. Confirm it on the particular system you are going to be working with. (i.e. do not assume .log files are always log files just because that holds true on your laptop)

2) Be ready and willing to assume the consequences of your actions. Most of the time, if you assume responsibility for your mistakes, people will forgive you and even give you a piece of advice.

3) Never ever ever ever use root.

hcarvalhoalves 1 day ago 2 replies      
Happened to a colleague: it was the end of the day, and we were packing up to leave. He used Ubuntu on his notebook, so he typed "shutdown -h now" in his shell prior to closing the lid. Seconds later he's groaning, having noticed it was an SSH session to the production server...

It wouldn't have been a big deal, if it weren't for the fact that it was an EC2 instance, and back then halting the instance was equivalent to deleting it permanently. We then spent the night at the office recovering and testing the server. I think we left at 3:00 AM that day.

Lesson #1: it's never a good idea to "shutdown -h now" on a shell. any shell.

Lesson #2: have the process to spin up a new production server fully automated and tested
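A cheap guard against this class of accident (the Debian molly-guard package is the polished version of the idea) is a wrapper that refuses to halt when the shell looks remote. A minimal sketch; the function name `safe_shutdown` is my own invention:

```shell
# Hypothetical wrapper: refuse to halt when invoked over SSH.
# sshd sets SSH_CONNECTION (and usually SSH_TTY) in remote sessions.
safe_shutdown() {
  if [ -n "${SSH_CONNECTION:-}" ]; then
    echo "refusing shutdown: this looks like an SSH session" >&2
    return 1
  fi
  # On a real machine this line would be: sudo shutdown -h now
  echo "local session: shutting down"
}
```

molly-guard goes further and makes you type the hostname before letting a shutdown through, which also catches local terminals attached to the wrong box.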

tsaoutourpants 1 day ago 1 reply      
Back in my younger days, I once had a project manager who was asking me to make a significant network infrastructure change but refused to tell me why the change was necessary and basically told me to do as I was told. I messaged a coworker to see if he knew what was going on, and dropped in that the PM was being a "fucking cunt." I was unaware, however, that the co-worker and the PM were troubleshooting an issue together and the PM was staring at his screen as my message came through.

The PM brought the issue to the CTO, but somehow I didn't get fired. Ended up apologizing (obviously a poor choice of words :)) and moved on. Never made that infrastructure change.

Key takeaway: if you're going to talk shit, don't do so in writing. ;)

joncooper 1 day ago 0 replies      
There's a saying in the rates market: "don't counter-trend trade the front end".

I lost $7 million in minutes by being short $700 million of US 2yr notes when the levees failed during the Hurricane Katrina disaster.

Although my bet that the 2y point would be under pressure in the intermediate term turned out to be true, I got carried out by fund flows as folks spazzed out to cut risk by rolling into short duration high quality paper.

To his credit, my boss, who sat across from me, said only: "wouldn't want to be short 2 years." He let me make the call, which I did, and I covered my position. (Ouch.)

My book was up considerably on the year already, but this was a huge hit, and nearing year-end. I dialed back the risk of my portfolio and traded mostly convex instruments (options) for the remainder of the year.

Tloewald 1 day ago 0 replies      
In terms of feeling bad, I once had a client who wanted to demo a multimedia project that we currently had in alpha on his Windows 3.11 laptop, but the sound drivers weren't working properly (everything else was fine). He had about an hour before he had to leave for the airport. I started monkeying with the four horsemen of the apocalypse (Windows.ini, System.ini, Autoexec.bat, and Config.sys) as I had many times before, but I screwed up saving backups, bricked his machine, and couldn't fix it. In the end it was more embarrassing than anything else, but it was a facepalm stupid mistake.

The lesson from this is pretty obvious. Backup. Make sure your backup is good and safe.

My worst work-related mistake was getting into business with a friend. It cost me the friendship, a very valuable client, and a good portion of my retirement savings. I'm not sure how related it was, but a few years later my (former) friend killed himself.

And the lesson here is not to go into business with friends. Or at least to set up the business as if you're not friends.

Ecio78 1 day ago 0 replies      
I can't decide between these two:

1) After a few months working in a bank, I was doing a simple admin check task via RDP to a Windows 2003 (no, maybe 2000) server, when I right-clicked the network icon and instead of clicking the properties option I clicked "disable". Just enough time to say "oh sh!t" and to realise that it was the production Trading On Line machine, in a remote datacenter, during market hours, and to discover a couple of minutes later that the KVM over IP was crappy and not working. We had to call the datacenter operators to go back to the local KVM and re-enable the NIC.

Lesson 1: Better to move slowly when you're on a production machine (and having plans B and C to reach your machines is also a good idea)

2) Same bank, one or two years later, I was doing some testing on a new mail system that also integrated VoIP (SIP). Mail/SIP system running in a VM (I think VMware Server at that time) in the same remote datacenter as above. So, I enable the SIP feature and after a few seconds, bum, we lose the whole (production) datacenter and the connection between the local server room and the datacenter. Panic, I look at my colleague, WTF in stereo, everything comes back for a few sec, bum, down again. Long story short, the issue was that that version of Netscreen firewall ScreenOS had a buggy ALG implementation for SIP that led to core dumps. The fun thing is that we had two of those in HA, same version of course, so they were bouncing between core dumping, rebooting, the slave becoming master and then core dumping again, etc. We had to ask a datacenter operator to reach the rack, disconnect one of the cables from the firewall (the one that was managing the traffic of the DMZ where that machine was hosted) and then reach the virtual host to kill the machine.

Lesson 2: you can segment your network but if everything is connected through the same device(s), sh!t can still hit the fan...

BjoernKW 1 day ago 0 replies      
Around 2000 my team was responsible for installing and maintaining a large number of servers in 19" racks in a data centre.

Most servers had those hot swap drive bays for convenient access from the front while the server was running. You only had to make sure no write operation occurred while you pulled the drive out of the bay.

So, I had to exchange a backup disk on a database server running quite a few rather large forums. The server had two disk bays: One for the live hard disk and one for the backup disk. I was absolutely sure at that time which one was the backup disk so I didn't bother to shut down the database server and incur a minimal downtime. Of course, I was wrong and blithely yanked the live disk from the drive bay.

I spent the rest of the night and most of the following day running various MySQL database table repair magic. It worked out surprisingly well but having to admit this error to our forum users was embarrassing, nonetheless.

Lesson: Appropriately label your servers and devices.

JasonFruit 1 day ago 0 replies      
I sent an email to three thousand insurance agents informing them of the cancellation of policy number 123456789, made out to Someone Funky. I learned to appreciate Microsoft Outlook's message-recall function, which got most of them. I also learned that just because you're using the test database instance doesn't mean nothing can go wrong.
tilt_error 1 day ago 2 replies      

  # cd /etc
  # emacs inetd.conf
  # ls
  ...
  ... inetd.conf
  ... inetd.conf~
  ...
  # rm * ~
  # ls
  # ls
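For anyone who hasn't been bitten: the intent was `rm *~` (delete emacs backup files), but the stray space in `rm * ~` makes the shell expand `*` to every file in /etc. A reproduction you can run safely in a scratch directory:

```shell
# Demonstrating the fatal space, in a throwaway directory (NOT /etc).
tmp=$(mktemp -d)
cd "$tmp"
touch inetd.conf inetd.conf~

rm *~                  # intended command: only inetd.conf~ matches
ls                     # prints: inetd.conf

touch inetd.conf~
rm -f * ~ 2>/dev/null  # the typo: '*' now matches every file here
ls                     # prints nothing
```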

grecy 1 day ago 1 reply      
I added some products to a system on a Thursday, not remembering we added some new columns to the product definitions, and the columns were nullable.

I was off Friday, so I come in Monday morning to see that ~20k customers have been getting free stuff since Thursday lunchtime.

Lost something like $200k because of two nullable columns :(

killertypo 1 day ago 0 replies      
During a server migration for our web based file sharing system our lead engineer (at the time) forgot to ensure that all cron jobs (for cleaning up files and sending out automated emails) had been turned back on.

Cue me, 7 months later, reviewing the system. Realizing that critical jobs were no longer running and that our users were all essentially receiving 100% free hosting for however much storage they wanted. SOOOO I turned the jobs back on.

The lead engineer before me left no documentation of what the jobs did, other than that they should be run. In my stupor I did not review the code. The jobs sent out a blast of emails warning that files would be deleted if not cleaned up or maintained. Then, seconds later, deleted said files...

We nuked around 70GB worth of files before we realized what happened. WELL GET THE TAPES! Turns out our lead engineer ALSO forgot to follow up w/ system engineers and the backups were pointed at the wrong storage.

No jobs lost; thankfully the manager at the time was a wordsmith of the highest degree and can play political baseball like a GOD.

rmc 1 day ago 0 replies      
When trying to put our webserver-cum-database-server onto Nagios, I tried to apt-get install nagios-plugins. For some reason when installing that, apt wanted to remove mysql-server. I just pressed "Y" without thinking (because, hey, it's like 99.9999999% the right thing to do). So apt dutifully stopped and uninstalled MySQL in the middle of the day.

Within about 2 minutes CTO strolls in asking about the flood of exception emails due to each request being unable to connect to the database.

Thankfully, I was able to apt-get install mysql-server, all the data was still there, and things were back to normal within 5 minutes.

edw519 1 day ago 2 replies      
Boss: We have thousands of bad orders that must be fixed now!

Me: No we don't. We have 121 bad orders.

Boss: There are thousands of them!

Me: No there aren't. There are exactly 121 of them. I'm sure.

Boss: I'm not going to argue with you!

Me: Good. Because you'd lose.

I fixed 121 orders that night. The next day my login & password wouldn't work.

riquito 1 day ago 0 replies      
Last day of work before moving to the new job: I do some cleanup and rm -fr my home directory. Seconds pass. Minutes pass. I start to wonder how it can take so long.

I list the contents of my home directory, trying to understand which folder was so big. Then I see it. A folder that is usually empty. Empty because I use it as a generic mount point. A mount point that, the day before, was attached via sshfs to the production server...

I had a strange feeling, as if I were seeing myself from behind, something crumbling inside me. And at that moment someone starts to ask "what's happened to <hostname>?"

I gather my courage and I say "I know it"...

That was really hard. The worst day at work in years, and on my last day too. Luckily we had a good enough backup strategy and the damage was mostly repaired in a couple of hours.

There I realized how much of an idiot I was to have mounted the production server in my home directory, and I grew a little.
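A habit that would have caught this (the helper name is mine; GNU rm's --one-file-system flag covers the simple case) is to check for foreign mounts under a directory before recursively deleting it:

```shell
# Hypothetical pre-flight check before 'rm -rf': refuse if any
# subdirectory lives on a different device than the target itself,
# i.e. is a mount point (such as an sshfs mount to production).
# Uses GNU stat; GNU rm also offers --one-file-system.
safe_rm_rf() {
  target=$1
  root_dev=$(stat -c %d "$target") || return 1
  if find "$target" -mindepth 1 -type d -exec stat -c %d {} + 2>/dev/null |
       grep -qv "^${root_dev}\$"; then
    echo "refusing: $target contains a foreign mount" >&2
    return 1
  fi
  rm -rf -- "$target"
}
```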

quackerhacker 1 day ago 1 reply      
I messed up epically on an interview. It was a 3 part interview for a JS/RoR coder.

1. I passed the resume and chat portion

2. I passed the telephone questionnaire and got along great with the interviewer

3. (Fail) I scheduled my interview on a Friday at 4:30pm with a 30 min travel time. I left 1hr early... still, it was Memorial Day weekend, so I thought the streets would be quicker than the freeway, which was at a standstill. I was so stressed that I literally had an anxiety attack and couldn't even find the address. That had never happened to me before, so I'll never forget it.

joshbaptiste 1 day ago 1 reply      
In 2001, at my first IT tech job as a help desk analyst, I heard beeping in the server room on one of the Solaris/Oracle machines and pressed the power off/power on button on the chassis. The DBA came running in and I promptly left, saying "oh, I think it rebooted itself". The company went bankrupt shortly after so no huge lashing came my way, but all my more experienced friends were like "wtf never do that again!"
PakG1 1 day ago 0 replies      
My first real summer job was working for a computer store that also did tech support contracts with local businesses. I'll preface that the boss should never have given me the responsibilities he gave me, or should have gotten me to job shadow more experienced people, but the shop was tiny and I was actually the only full-time employee.

We had the tech support contract for the city's Mexican consulate. One of the things we were doing was patching and updating their server and installing a tape drive backup system. Server was NT4.

I'm in there doing work after 5pm, and wrongly assume that everyone's gone home for the day. Install some patches and the server asks me if I want to reboot. I say yes. Few moments later, a guy sticks his head into the server room and asks if I'd shut down or rebooted the server. Oh, whoops, someone's here. Yeah, I just installed some patches. Oh, OK, see ya.

Next day? Turns out he had been doing some work in their database where they track and manage visa applications. That database got corrupted when I did the server reboot while he was doing his work. That night, the backup process then overwrote the previous good copy database on the tape drive with the newly corrupted database. We had not yet started rolling over multiple tapes to prevent backups of corrupt data, though we were going to purchase some tapes for that purpose shortly.

Summer was ending, and I quit a week later to return to school. Horrible timing in terms of quitting! No idea what happened after that, as I was spending the summer in a city that was not my own. I do know that the original database developer contractor was on vacation at the time and so they couldn't reach him. I think the consulate was SOL. To this day I regret rebooting that server without checking if anyone was working.

Lesson learned? Don't assume anything when doing anything. Carried that lesson with me for the rest of my life. And find a boss who knows how to guide you if you don't have much experience in your area. I guess for founding startups, at least get an advisor.

edit: spelling

Debugreality 1 day ago 0 replies      
This one is really embarrassing. I started a new job for a small company as the only developer, with the aim of creating a new site for them. So they gave me full access to their very small technology stack, which included one MSSQL server.

So one of the first things I wanted to do was set up a development DB, for which I exported the structure from their prod DB. I then proceeded to change the name in the CREATE DATABASE statement at the top to the new dev DB I wanted, and ran the script.

Unfortunately the prod DB name was still prepended to every DROP and CREATE TABLE command in the script, so I had just replaced their whole prod DB with an empty one.

Owning up to that was one of the most embarrassing moments of my career. It was such a rookie mistake I just wanted to die. Luckily they had daily backups, so I only cost their 4-man business about half a day of work, but... it was enough to make me a much more careful developer from that day forward!

earino 1 day ago 3 replies      

me: "unix definitely won't just let me cat /dev/urandom > /dev/sda"

other: "sure it will"

me: <presses enter>

what I learned? unix will absolutely let you hang yourself. 1998, production server for a Fortune 5 company.

reppic 1 day ago 1 reply      
One time I tried to change a column name in a production database. I learned that when you change a column name, MySQL doesn't just change a string somewhere: it creates a new table, copies all the values from the old table into the new one, and when that table has millions of rows in it, it really slows down your production server.
bengarvey 1 day ago 1 reply      
I poured gasoline into the tractor's radiator instead of the gas tank.

Thankfully, someone stopped me before I turned it on.

rosser 1 day ago 0 replies      
An UPDATE statement without a WHERE clause.

In production.

I'm the DBA.
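The classic seatbelt here is the MySQL client's safe-updates mode (also spelled --i-am-a-dummy on the command line). A config sketch, assuming the stock mysql command-line client:

```ini
# ~/.my.cnf: with safe-updates on, the mysql client rejects UPDATE and
# DELETE statements that have neither a key-based WHERE clause nor a
# LIMIT, so a bare "UPDATE orders SET status = 'x'" errors out instead
# of rewriting every row.
[mysql]
safe-updates
```

It only protects interactive sessions, not application code, so the other half of the habit is an explicit transaction: run the UPDATE, eyeball the affected-row count, then COMMIT or ROLLBACK.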

aryastark 1 day ago 0 replies      
This wasn't me, but a coworker.

We were rearranging the layout of the office. Coworker was moving in to his new space, setting up his desk. He boots up his computer, wonders why he has no network. Looks around, discovers the ethernet cable isn't plugged in. Plugs it in to the wall, still has no network.

A few minutes pass, and the entire office is running around wondering why the hell the network isn't working. Maybe an hour passes, the network guys are losing their shit trying to hunt down what is wrong. I'll give you a hint: the router was lit up like a Christmas tree, and the aforementioned coworker had both ends of his ethernet cable plugged in--but neither end was attached to his computer.

chrislomax 1 day ago 0 replies      
A simple one really, and probably the most common. Realise there's a data integrity issue in the DB, try to load from backups and notice that the backups have the same integrity issue. Find a backup from about 2 weeks previous where the data is intact and piece together the good pages from the daily backups and the 2-week-old one.

All in, it took 4 days and a new server, as the hard drive had stored bad pages in the DB. We lost 2 days of orders (they were processed through to the internal systems though, so not really lost)

Lesson learned, validate backups and check page integrity when backing up

schmichael 1 day ago 1 reply      
I unknowingly reset serial number counters in a bicycle parts database, so now there are a few hundred people in the world with high-end bike hubs whose serial numbers overlap.

Lesson: Keep the code that touches production databases as simple as possible so it's easy to verify exactly what it does. I was using a framework's database tooling incorrectly, because I never dreamed what I used would touch the database's counters.

(Not my worst mistake in terms of people affected, but it's the only mistake that was literally laser etched in metal forever.)

camperman 1 day ago 1 reply      
dd if=outfile of=infile

Raw unadulterated fear followed by panic.

A full reinstall.

Triple checked dd params ever since.

jamesbrownuhh 1 day ago 1 reply      
Demonstrated SQL injection to a colleague on the live website. Bringing a sample URL up into the address bar, I explain, "You see, that ASP script takes the value of ?urlparameter and updates the record - but what if I modify urlparameter so that instead of 1, it is... (types) semicolon dash dash DROP TABLE usermaster (presses enter)"

"Shit. Well, as I have just demonstrated, it becomes possible to wipe out a million user login credentials at the touch of a button. So now we'll be needing to restore that from the backups which we don't have." Luckily, and ONLY BY CHANCE, I happened to have a copy of that table exported for other reasons from a few days back.

Lessons learned: Never press enter.

glazskunrukitis 1 day ago 0 replies      
Two screw ups come to mind.

1. First day at a job. I need to get familiar with a legacy system and get a SQL dump from it to create a local copy of the database. After some SSHing and MySQLing, I confuse my two split terminal panes and end up importing my local dump to the production server. Of course the database names and users were the same, so I end up dropping the database. No biggie. Backups were available from the previous day.

2. Similar story to the first one. I got a shiny new Zend Studio IDE. I want to set up sync with a remote server (just a static company website with no version control). I fill in all the settings, press the sync button, and what happens? Zend Studio somehow figured that I wanted to force-sync my local folder, which was empty, to the remote site, and it just deleted everything in the web root and uploaded my empty folder. Wat. Should have read the settings twice.

alok-g 1 day ago 1 reply      
The following was not actually me, but worth sharing.

They had ASIC design runs for research purposes once every three months, yielding your design on silicon as ten 6" wafers. That gives enough parts for testing the first revision of your design. The person was carrying the wafers to a vendor to have them cut into separate ICs and packaged, or something. He gets to the parking lot and... where are the keys? Puts the wafers on top of the car, finds the keys in his pockets and starts driving. Boom: the box of wafers, still on top of the car, is now on the ground. All broken. Some $100K in wafers + three months lost + egg on the face in front of the customer + ... Lesson: Don't put stuff on top of the car!

SDGT 1 day ago 0 replies      
Ticked a debug output flag on prod for a specific IP (Proprietary CMS, couldn't replicate the issue on test even with a full codebase and db sync), brought down the entire server for an hour.

edit: This was after I asked for permission to do this.

Lesson learned: Don't EVER use ColdFusion as a web server.

peterwwillis 1 day ago 1 reply      
"Let go" a few hours into the first day on the job.

A friend had referred me for a sysadmin job opening at a web hosting company in Florida. After a brief interview I got the job for a pretty decent salary and was told when I could start. What they hadn't told me was that my schedule would be Tuesday to Saturday. I had informed the hiring manager of my preferred schedule (Monday-Friday), but I guess nobody mentioned it to the manager of the group.

When I got there they told me my schedule and I immediately told them that's not what I signed up for. So they asked me to sit for a while so they could figure out what to do next. I took a tour of the NOC, and saw one of their tier 1 technicians was chatting and watching a movie. I walked up and asked him "Heyya! Workin' hard, or hardly workin'?" and smiled. He did not smile back. So I went back to the desk I was assigned to, which was already logged in - with the credentials of the previous admin.

While I waited I decided to see what other trouble I could get into. Sure enough, all the old passwords were saved in the old admin's browser with no master password. I couldn't copy-paste the list, so I took a screenshot and began to find a way to print the list out to post on my cube wall. Before I could finish I was asked to leave for the day while they figured out my schedule changes. I should have gotten the hint when they asked me to leave the badge there.

Later I got a voicemail telling me they'd pay me for the time I spent there (about three hours) and they'd no longer require my services. Luckily I got hired soon after to a different company, which was also hiring away all the talented people from the place that had let me go, and the web hosting company eventually went under. So it turned out to be a good thing in the end.

onyxraven 1 day ago 0 replies      
My first deploy at a once-top-10 photo hosting site as a developer was a change to how the DNS silo resolution worked.

Users were mapped into specific silos to separate out each level of the stack from CDN to storage to db. There was a bit of code executed at the beginning of each request that figured out if a request was on the proper subdomain for the resource being requested.

This was a feature that was always tricky to test, and when I joined the codebase didn't have any real automated tests at all. We were on a deploy schedule of every morning, first thing (or earlier, sometimes as early as 4am local time).

By the time the code made it out to all the servers, the ops team was calling frantically saying the power load on the strips and at the distribution point was near critical.

What happened: the code caused every user (well upwards of millions daily) to enter an infinite redirect, very quickly DoSing our servers. It took a second to realize where the problem was, but I quickly committed the fix and the issue was resolved.

Why it happened: a pretty simple string comparison was being done improperly, the fix was at most 1 line (I can't remember the exact fix). There was no automation, and testing it was difficult enough that we just didn't test it.

What I learned: If it's complicated enough that you don't want to test it in a browser, at least build automation to test your assumptions. Or have some damn tests, period. We also built a procedure for testing those silos with a real browser.

I got a good bit of teasing for nearly burning down the datacenter on my very first code deploy, but ever since, it's been assumed that if it's your first deploy, you're going to break something. It's a rite of passage.

maxaf 1 day ago 0 replies      
I spoiled business users by saying "yes" way too often.
famousactress 1 day ago 0 replies      
Accidental sudo chown www-data:www-data /. on the production server.

Thoughtful pause "Why is this taking so long!?"


kisamoto 1 day ago 0 replies      
Introducing a master/minion update system at work, I ran a batch update to take a certain percentage of machines out of the cluster.

Unfortunately I got my selection criteria wrong and pulled out all of one cluster and half of a second, halting a few thousand operations.

Luckily the monitoring system was very quick to alert me of this and using the same (wrong) selection criteria it was a fairly simple process to stop the update and put them all back in the cluster.

Takeaways? The age-old cliché of "with great power comes great responsibility". Oh, and have good monitoring!

drdeadringer 1 day ago 1 reply      
I dropped two units of equipment, ~$1.5Mil apiece. Each unit was dropped in a separate incident. No damage at all, but management didn't care. I blamed myself despite mitigating factors such as impossible schedules, vicious multi-tasking "to compensate", and less-than-ideal support equipment. At the time I didn't handle it very well, but I ended up living through it -- first job/assignment ever, in the worst environment I've ever had before or since, with the worst coworker I've ever had before or since, and I mess up in the millions of dollars. "Lasting Impressions", tonight at 8/7 Central.

I left that job about 3 years later when the metaphorical train stopped at a nicer place. My name is still known in certain circles for this ["Oh bah, how could I forget?" one former manager recently stated], but I don't plan to go back there at this time.

I learned that life's too short for assholes and working in an environment you don't like. If you don't screw up, your soul will die and you'll become that former coworker you hated so much and who hated you in return. It's worth picking and choosing where you work.

TheCapn 1 day ago 0 replies      
I may, and/or may not have caused a production site's PLC to go into STOP mode during daily operations while making network updates remotely.

Possible outcomes of unplanned system halts include plugged machinery that would need to be manually cleared, mixed products that would become immediate net losses for the company, and damaged motors.

Thankfully no product was being run at the time. I have since implemented changes across the board at our client sites that prevent this type of shit from ever happening again. You know when you look at a system and go "this is going to bite us in the ass eventually"? This was one of those systems; they just needed a new hire to give them the push.

dougbarrett 1 day ago 0 replies      
I used to work at Fry's Electronics right before iPhones were released, when MP3 players had seen better days. Creative had come out with a nice $300 MP3 player, and I was in charge of creating the sign tags in my department because I could get them done the quickest. I would do hundreds a day, and sometimes there would be slight slip-ups; in this case I forgot a 0, so there was a lucky customer that got a $300 MP3 player for $30 that day.

Luckily, there was no slap on the wrist or anything; the store manager knew that after doing thousands of these cards this was only one of a few slip-ups I'd made, so they just brushed it off and moved on.

ozten 1 day ago 0 replies      
Many years ago I was being shown the server room for the first time. They asked me to unplug a certain box. I unplugged everything on one power strip. Panicking at the drop in ambient noise in the room, I quickly plugged it back in, but...

I have no idea why they didn't use UPS, but it took many critical servers offline and caused a few hours of headaches for everyone.

Come to think of it, that was the last time I was allowed in the server room.

Lessons learned - don't let developers in the server room.

taf2 1 day ago 0 replies      
I hacked our development machines using a rooted RPM. We only had access to the sudo rpm command, but I wanted to deploy our Rails app using Capistrano, so to work around the sudo-rpm-only access I added some install scripts to the RPM, because these run as root. That allowed me to re-configure sshd, making it possible to do a local Capistrano deploy. I was smart about it, reverting the ssh changes after the deploy completed (bash has a kind of ensure that lets you roll things back like a transaction). The cool thing was that our ops team was on the ball and detected the changes to the sshd configuration even though I had restored them. Mind you, this was all in a staging/development environment. The issue was just how immature it was of me to go this far to cap deploy instead of rpm installing our Rails app. At the time I looked at it as a good learning experience in hacking RPMs and in security: when you run sudo rpm -Uhv package.rpm, you'd better trust package.rpm, because it can execute any shell scripts it wants as root. Also, in the future I would walk away from a company like this much sooner. I enjoyed everyone I worked with there and would work with any of them again, but I just would not want to work in such a stress-filled environment for so long again.
highace 1 day ago 0 replies      
Changed the default RDP port on a remote Windows box, but didn't open the port on the firewall and couldn't get back in. Whoopsie.
arethuza 1 day ago 0 replies      
I led an engineering team that almost sent out a demo on tens of thousands of IBM CDs (this was 1998) containing test data, some of which had been sourced from the worst possible alt.* newsgroup.

As it turned out the only data that did go out was the single word "sheep" in the search index.

kirkthejerk 1 day ago 0 replies      
I mixed up the meanings of "debit" and "credit", and wrote a credit card processing app that ended up PAYING $75K to our customers instead of charging them.

I'm still not sure how this bug slipped past the bank's tough app certification process, though.

jmspring 1 day ago 0 replies      
Early on in the implementation of one of the PKCS "standards", while at a browser company many years ago, I misinterpreted a spec that was still in flux. There wasn't enough testing, and "release bits" went live.

I had to quickly get a patch in for the improper code and then maintain that buggy implementation. In addition, the "standard" itself got a rather scathing write-up from Peter Gutmann, which is completely valid:


This is a critique of the "standard" itself; the process was just as ugly.

a3n 1 day ago 0 replies      
Connected leads on an expensive piece of equipment, power live, being very careful with a pair of needle-nose pliers. The power switch was way off in the other room, the tag-out procedure took time, and I was late.

Poof. Equipment electronics fried and useless.

I was chewed out. Could have been way worse.

Follow your safety procedures.

embarrassed99 1 day ago 0 replies      
Leading a group working in an underground bunker on a live military radar site in the Australian outback, where it rains every few years. We had to open a rooftop cable duct and when the job ran overtime we closed it up with some rags that were to hand. That night it rained.

The next morning, the bunker was full to ground level and the automatic power cutoff had failed: the float switch was directly under the cable duct, and the water pressure of the deluge had kept the float depressed. By the time the water stopped flowing, the float was under a foot of mud. The powered circuits were undergoing electrolysis and eating themselves away, made worse by the site managers refusing to drain the bunker or turn off the power until a week-long arse-covering evaluation had been completed.

A few hundred million dollars of front line radar was out of action for several months.

Being a naive newly graduated engineer, I wrote a completely honest report and analysis. My boss said it was one of the best reports he had read and there was no impact on my career (if anything it got me noticed by the upper echelons of the organisation).


1. If you tell the truth you will be respected, even if it is incriminating.

2. If there is a way for something to go wrong it can do so (slight variation of Murphy's Law). Even if it's judged to be uneconomic to take preventative action, be aware of the possibilities, so you can make a conscious decision about the risk.

anilshanbhag 1 day ago 0 replies      
This is about https://dictanote.co. I changed the login flow to use a different package. After pulling the latest changes on the server, I restarted Apache, opened the website, saw everything working smoothly, and went to sleep.

9 hours later I woke up to find 800+ emails in my inbox. Django by default sends out an email when an error occurs, and the tiny mistake of not installing a package led to a lot of frustrated customers and, well, a huge pile of email in my inbox!

Moral of the story: Put that pip freeze > requirements.txt and pip install -r requirements.txt into your deployment flow.

adw 1 day ago 1 reply      
I've screwed up countless things, many much more expensive than this, but those stories aren't entirely mine to tell.

But this was one of my first. Years ago, making boot floppies for a physics lab where I was reinstalling all the servers:

I meant: dd if=/dev/zero of=/dev/fd0

I did: dd if=/dev/zero of=/dev/hda

Oops. Bye, partition table.

(Always double-check everything you type as root.)

CurtMonash 1 day ago 0 replies      
Short version:

I was a stock analyst, for a firm with dozens of institutional salesmen and thousands of retail brokers. Some of my recommendations were very, very wrong.

The right thing to do is stand up, take the heat, and explain what you now know as best you can. I learned that watching a colleague who I thought was otherwise an unserious ass.

Davertron 1 day ago 0 replies      
I wrote an update script for a database table not realizing I had the key wrong (I'm kind of fuzzy on the details, but essentially it was a composite key and I was only using one of the columns in my WHERE clause...) and accidentally updated every customer's address in our database to the address of one account.

Luckily we had backups from that morning, so we only lost whatever address updates people had made that day, but it made for some interesting customer service calls for a while...

jonathanjaeger 1 day ago 0 replies      
I often turn on my performance-based ad campaigns before going into the office, as they are very predictable at the beginning (slow ramp up to spend). However this time the CTR was through the roof for something new and spent $15,000 by the time I could turn them off when I got to the office. It only brought in about $5,000 in revenue. Not the end of the world in the grand scheme of the monthly P&L, but still not something to replicate.
rokhayakebe 1 day ago 0 replies      
I built a content site, worked on it for two years and a few months. While updating the entire codebase to make the site faster and easier to work on for future updates, I accidentally deleted my database. 2 years gone, SEO traffic gone.

Takeaway: Sometimes, it takes a disaster to realize you were in another disaster anyways.

unfunco 1 day ago 0 replies      
I think everybody has done this at some point, and I'm sure I won't be the last person to do it: leaving the WHERE clause off DELETE and UPDATE statements when writing SQL. The last time I did it I caused about 45 minutes of downtime on our RDS instance, but since we had a multi-AZ setup, no data was lost. I also frequently get mixed up between development and production environments.

Every database alias I have now has the MySQL --i-am-a-dummy flag appended. This has been a career-saver in my eyes.
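For the same seatbelt without shell aliases: --i-am-a-dummy is a synonym for MySQL's --safe-updates, and the long option name can be set permanently in the client config (a sketch; the file location may vary by setup):

```ini
# ~/.my.cnf
[mysql]
# same effect as --i-am-a-dummy: refuse UPDATE/DELETE statements that
# lack a key-based WHERE clause, and limit runaway SELECTs
safe-updates
```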

jason_slack 1 day ago 2 replies      
I once revoked my boss's e-mail and VPN access because his password was 'password123'. It was my job to keep things safe, after all, and I had asked him nicely a few times.

EDIT: I proposed a new password of: @$tevezA$$ignedPwD@# (Steve's Assigned Password)

He said no to that one.

webstonne 1 day ago 0 replies      
I asked them for a job in the first place.
jjindev 1 day ago 0 replies      
Once, as a relative UNIX newbie, I "cleaned up" a Sun box, until I had moved things I needed for boot off the boot partition. I got it all back, manually mounting partitions, and etc. but I was certainly in a cold sweat for about 15 minutes.

Perhaps the only lesson is "slow down."

ClayFerguson 1 day ago 0 replies      
I didn't do this, but I was the one who figured out what happened. A guy wrote an installation utility for internal use to automate certain software setups. Part of the program had to clear out a directory whose name you had to enter. Problem was, if you left that field blank (the default), it was converted into c:\, and when people ran it, it would wipe out their hard drive. After finding the problem, I told only the guy who did it, and no one else. I didn't have the heart to destroy his reputation by telling everyone who had done it. I SHOULD HAVE let the chips fall where they may, because I needed to be sure NO ONE ever ran that EXE "utility" again. They figured it out pretty quick, but nobody really knew the true problem except me and the guy who wrote the bug!
krishnasrinivas 1 day ago 0 replies      
I rm'd a big log file to free up space on a customer's server, but our process kept writing to it. I assumed the disk now had enough free space and got busy with something else. The space was never actually freed, because the process kept the file descriptor open. Eventually the still-open log file filled the entire disk and their server came to a grinding halt. I think the customer stopped using our product after that, because we never heard from them again.

Lesson from this experience: never "rm" a live log file; do "truncate -s 0" on it instead.
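A minimal illustration of why that works (the path is a throwaway stand-in for a real log): truncating keeps the same inode that the writing process holds open, so the space is actually reclaimed:

```shell
# Create a stand-in for a busy log file (illustrative path)
LOG=$(mktemp)
printf 'line1\nline2\n' > "$LOG"

# Reclaim the space without unlinking the file a process still has open
truncate -s 0 "$LOG"

wc -c < "$LOG"   # prints 0: same file, zero bytes
```

With rm, the directory entry disappears but the open file descriptor keeps the blocks allocated until the process exits; truncate avoids that trap entirely.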

blueskin_ 18 hours ago 0 replies      
Meant to reboot my desktop...

  [root@importantServer]# reboot
"Hmm... This is taking a while..."

it200219 1 day ago 0 replies      
I had installed osCommerce (an open-source e-commerce platform, like Magento) for one of our clients, who had > 500 transactions a day.

Somehow in the settings, the "Store Credit Card Info" flag had been set to "Plain Text". The client's admin/staff could have used this information to make transactions (the backend would show full CC info in the order details).

We didn't realize until we worked on it again for some bug fixes and new features.

Lesson learned: when transitioning from a DEV to a PROD environment, make sure to check all these critical flags and set them correctly.

Luckily, the client had no idea what was wrong in the backend.

erobbins 1 day ago 0 replies      
My first day ever using Unix. Left alone with a root shell. Trying things out, learning, I made a few junk files somewhere or other. When I was done with that I decided to delete them... "delete everything from that directory", I think: rm * /path/towhateveritwas

now on to my tasks.. had some files to print out. Where did they g...... FUCK.

I found a box of tapes and some sunos manuals. Spent the next several hours figuring out how tar and tape drives worked. Got everything back. Never told a soul.

1992. I've never done anything so careless since.

double051 1 day ago 0 replies      
We shipped an Android app that didn't like the way we had our HTTPS certs configured, so I had logic in there to accept the connection if the cert matched the one we had.

Two months later, the certs were expiring soon and we changed our configuration to something Android liked by default. The bad news was that our production Android app rejected the new configuration and only wanted to accept the current certs.

We ended up quickly shipping a hotfix that accepted the current and upcoming configuration a few days before the certs expired. There technically wasn't any 'downtime' as long as users updated the app, but this all took place right before 'holiday vacations', and the QA team had to test the fix while all the devs were away.

peg_leg 1 day ago 1 reply      
This was some time ago; I've learned a lot since. I mkfs'd the main disk on our email server. There was no redundancy. There was a new volume that needed to be formatted, and my superior told me to do it. I protested that I didn't quite know how. He pushed it. So I wound up wiping the wrong disk by mistake. Since then I've made it a mission to make the entire stack at that place resilient and redundant. Now it's virtualized: failover file and DB systems, NLB web servers, redundant storage, proper backups. It would take a hell of a lot more than what I did to make the same mistake again.
gbasin 1 day ago 0 replies      
Added some additional logging for an edge case, rolled it out to production and then went camping in the remote wilderness for a week. Two days in, the edge case got hit and the logging wasn't sufficiently tested. It logged as intended... and kept logging and logging... until out of disk space :(

Oh yea, I run a proprietary trading firm (still at the same spot), as a result of that bug we went down and lost about $250k over the next few hours. Testing is important in automated trading :)

michaelochurch 1 day ago 1 reply      
Tried to prevent a massive product failure.

It failed anyway, but I wasn't around when it did and there would have been no "I told you so" credit even if I were.

One of those "big company" lessons, but probably applicable to startups (which have an even higher ego density).

deanly 1 day ago 0 replies      
Not me, but a co-worker at an internship I held:

Said person entered the number of metric tons of concrete three orders of magnitude higher than it should have been. Imagine the cost difference between 1.0 * 10^6 and 1.0 * 10^9 metric tons... Our boss was not pleased, to say the least.

But imagine how easy it is to enter a few extra zeros in an excel data cell. Yikes!

contingencies 1 day ago 1 reply      
First job, circa 2000, at an ISP that was run very clearly as a business and cut corners. Not only was it critically understaffed, but management was more interested in laughing their way to the bank than in managing. They had me - with literally no routing-protocol experience - manage a live route-advertisement transition between two peering providers. Result: all customers offline for ~24 hours.

Reaction was standard: mostly to point out I did my best in unfamiliar territory and things should be sorted soon.

Takeaways: (1) fewer support calls than expected - users put up with things; (2) you learn when you fail; (3) always have a backup.

They kept me on at that job but I left pretty soon anyway as I got a 'real' (as in creative) job hacking perl-powered VPN modules for those Cobalt Raq/Qube devices, and building a Linux-related online retail venture for the same employer ... that worked great, but failed commercially.

_mikelcelestial 1 day ago 0 replies      
I accidentally deleted all data from the live database, thinking it was our beta database server. Good thing it was synchronized with our beta servers, so I was able to bring it back in no time. The moment I clicked that delete button I was face-palming myself all over. I learned from then on to double-check every time, especially when working between production and test servers.
alexmarcy 1 day ago 0 replies      
The worst one I ever heard about was while I was at a potato processing plant in Idaho where they make McDonald's hashbrowns.

After the potatoes are peeled and washed they are run through a pipe with blades to slice the potatoes into french fries. These blades are sharpened with lasers and are insanely sharp because they need to cut a lot of potatoes before being changed.

One day they were shutdown and it was time to change the blades. The lady doing the change placed the new blades on the table and bumped the table when she turned to grab a wrench from her toolbox. The new blades started to fall and she instinctively reached out to grab them to prevent them from falling to the floor.

She ended up not grabbing anything, because the blades sliced her fingers clean off. They took her to the hospital, and thanks to the blades' extreme sharpness the cut was so clean that reattachment was a pretty easy procedure. I don't know if she had any long-term negative effects from the incident.

Safety is important, be aware of your surroundings and don't instinctively grab things you shouldn't be touching in the first place.

krak3n_ 1 day ago 0 replies      
The worst thing I have done is terminate a running production instance with no database backups.

Client, not happy.

derwiki 1 day ago 0 replies      
I accidentally brought down yelp.fr by typoing the timezone field in the database.
andy_thorburn 1 day ago 0 replies      
My worst screw up was causing a fire that destroyed one of the two prototype 3D printers my company had built.

I was working at a startup that was trying to create an affordable 3D printer. We had two working prototypes that were used for everything - demos, print testing, software testing, PR shoots, everything. Each prototype had cost hundreds of man hours to build and debug and quite a bit of cash as well.

Among other things, I had done all the work on the thermal control system for the printer; it kept the print heads and build chamber at the correct temperature. One night while working on one of the printers I hit an edge case that my control code didn't handle well, and the printer turned all of the heaters on full-bore. Half an hour later, all the plastics in the prototype had either melted or burned, and I was left with a room full of smoke and a pile of scrap aluminum.

enthdegree 1 day ago 1 reply      
I was this close to setting the asset management server's hostname to `ASSMAN'
mattwritescode 19 hours ago 0 replies      
Deleted a real database table instead of the temporary table I was working on.
seanhandley 1 day ago 1 reply      
Wrote an article on the company blog and linked it on HN. Traffic brought down the server -_-
teamcoltra_ 1 day ago 0 replies      
I deleted the entire sales team's sales database (for Canada's second largest cable company) because I was making a minor change and was too lazy to back it up first.
slipangel 1 day ago 1 reply      
sudo chown -R myname:myname /

Learned: Learning on the job as you hack away at problems is great, but recognize that it's one part enthusiasm and one part risk management. Also learned to never try anything on the command line that I wouldn't want to see pulled from my bash history and stuck on the breakroom fridge. Also learned to cope with humiliation well.

findjashua 1 day ago 0 replies      
In my newbie days of event-driven programming, I forgot to add 'if (err) {...}' in an express application and crashed the server.
coherentpony 1 day ago 0 replies      
I shouted at someone.
slowmover 1 day ago 1 reply      
system("tar --delete-files -czf archive.tar.gz $datadir/");

What could possibly go wrong?
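(For reference, GNU tar's actual spelling of that flag is --remove-files, and it does exactly what it says: each file is deleted as soon as it has been archived. A safe demo in a throwaway directory:)

```shell
# Demonstrate GNU tar's --remove-files in a scratch directory
d=$(mktemp -d)
echo hello > "$d/data.txt"

# Archive data.txt, then delete the original as part of the same command
tar --remove-files -czf "$d/archive.tar.gz" -C "$d" data.txt

ls "$d"   # archive.tar.gz remains; data.txt is gone
```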

pasbesoin 1 day ago 0 replies      
Believing the CFO when he made a point of telling me, "If you ever need anything, let me know."

Gratitude is demonstrated through actions, not vague verbal commitments.

typicat 1 day ago 1 reply      
mysql> drop database PRODUCTION
Do you really want to drop the 'PRODUCTION' database [y/N] y^Hn
Database "PRODUCTION" dropped
smalu 1 day ago 1 reply      
chmod -R 0777 /* instead of chmod -R 0777
kentwistle 1 day ago 0 replies      
git push -f
cdelsolar 1 day ago 0 replies      
sudo reboot
elf25 1 day ago 0 replies      
Working Christmas Day at the Liquor store (before cameras were everywhere) and drinking Tanqueray and Mt Dew ALL day. WHEEE!!
failsrails 1 day ago 1 reply      
This one time we used Ruby on Fails. That was the worst screw up ever!
Ask HN: Will efforts toward efficient Bitcoin mining advance computing?
2 points by anauleau  7 hours ago   2 comments top 2
asperous 6 hours ago 0 replies      
Well, it's only speculation, but Bitcoin mining is a very specific activity. It's true there's probably plenty of money going into making the fastest double-SHA256 + nonce chip; that activity alone isn't likely to advance computing.
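For the curious, the inner loop being raced on is tiny: Bitcoin's proof-of-work repeatedly double-SHA256es a block header while incrementing a nonce until the digest falls below a target. A toy shell sketch of that loop (it hashes hex text rather than raw digest bytes, the header string is invented, and the "target" here - a leading zero nibble - is absurdly easy compared to the real network's):

```shell
# Brute-force a nonce until the double hash starts with a zero nibble
nonce=0
while :; do
  h1=$(printf 'block-header-%d' "$nonce" | sha256sum | cut -d' ' -f1)
  h2=$(printf '%s' "$h1" | sha256sum | cut -d' ' -f1)
  case $h2 in 0*) break ;; esac
  nonce=$((nonce + 1))
done
echo "nonce=$nonce hash=$h2"
```

Mining ASICs do nothing but this, billions of times per second, which is why the hardware is so specialized and so hard to repurpose.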

There's some money going into making better Litecoin/scrypt miners, which currently run only on GPUs, so that's positive.

The question you should be asking is how will Bitcoin advance computing. That question I think will get you a lot more answers:

- Security is essential. People require secure computing to safely operate in Bitcoin.

- etc.

maxerickson 5 hours ago 0 replies      
At the moment it is doubtful that they are spending anywhere near what Intel and its suppliers spend on process development (Intel spends billions of dollars on R&D every year; they probably spent more last year than the total number of dollars that have ever touched Bitcoin).
Ask HN: How do you architect software that needs to be maintained for 30-years?
2 points by spiffytech  8 hours ago   6 comments top 3
pedalpete 7 hours ago 2 replies      
First off, I think you might not be getting responses to this because you labeled it a 30-year software project, which isn't really what you're asking. You're asking how to architect a product that will be maintained for 30 years.

Those are two VERY different questions. I came here expecting to answer "don't expect to develop for 30 years."

Ok, now for my answer. There are a few things you'll want to look at.

First, I'd argue you build very modularly. You don't know what the product is going to need to do in 5 years, let alone 30 years. Building a product from independent modules will allow future developers to add, remove or update modules to reflect the times.

As far as languages go, pick either what is popular today or what you have EVIDENCE will be popular over the next 10 years. I say evidence because if you just hop on the next big thing (Go, Julia, etc.), you don't know how difficult it will be to hire developers familiar with those languages in 10 years.

The libraries and dependencies answer is the same as the previous. Don't pick something obscure that you think 'might' be popular in the long-term, go with what's popular now or has evidence of popularity later. jQuery, Bootstrap, Rails, Node.js all had massive amounts of interest right from the start. However, if you can get away with not relying on a library, you should probably do it. If you need something now to ship quickly, but don't want it populating your project long-term, use it only in the modules that need it, and that way it can be easily replaced later (I'm doing this with jQuery now as Angular DOM traversal isn't great.. yet).

bennyg 3 hours ago 0 replies      
Most people have hit on the big ones, but I think the most underrated answer is: Comments.

Seriously, comment the hell out of that code so that anybody coming in at any time can understand what's happening. Make it so that a technical director can read only the comments and understand the entire system and its dependencies.

brudgers 5 hours ago 0 replies      
Emacs and Autocad both use a similar model based on a core plus a powerful command language...ie, the same architecture as LISP. That's not to say that either was initially envisioned as a 30 or 40 year project. It's just that their architecture has allowed them not only to survive but evolve continuously alongside expectations and needs release by release.
AWS billing, RI utilization, growing complexities in the cloud
2 points by dtseng123  9 hours ago   discuss
How to securely distribute student grades?
6 points by plg  13 hours ago   23 comments top 13
mindslight 11 hours ago 0 replies      
Think low tech: you don't need or want computer software. In class, ask every student to submit a 'handle' (which can be any identifier they want) and a 'grade offset', which is just a random number in the range of grades (say 0-100 for numeric grades). Then you publish an email/webpage/office-door posting with a list of handles and the corresponding ((grade - offset) mod 101). Figuring out a scheme for letter grades is a little harder on specifics, but still straightforward. This also gives students an explicit opportunity to not opt in and arrange other methods instead.
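The arithmetic above can be sketched for one hypothetical student (the numbers are invented; the +101 keeps shell modulo non-negative):

```shell
grade=85     # the real grade (0-100)
offset=37    # the random offset the student chose

# What the instructor publishes next to the student's handle
published=$(( (grade - offset + 101) % 101 ))
echo "published=$published"                      # published=48

# The student privately recovers: (published + offset) mod 101
echo "grade=$(( (published + offset) % 101 ))"   # grade=85
```

Nobody else knows the offset, so the posted number reveals nothing about the actual grade.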
patmcc 11 hours ago 2 replies      
Use the university provided method - students have to deal with it anyway, most likely, so you're not saving them any hassle. And however slow and ugly it is, it's already built and (presumably) works.
brucehart 11 hours ago 0 replies      
If it's only 20 students, maybe just ask them to come by your office during office hours. Most of my professors would just post grades in the hallway using student IDs, but a few did it this way. It gives you a chance to connect with the students and give some personal feedback. At the graduate level this can be helpful not only for them, but also for you since these students are people you will likely work with later in industry and academia.

If you are not in your office much, then I would just offer the GPG option. Sending 20 e-mails will not take very long. Out of the 20 students, I bet only half of them get their act together and e-mail you a key, so it's really more like 10 emails that you would need to send.

jordsmi 56 minutes ago 0 replies      
Regardless of whether you encrypt it, the school is still going to consider it against the rules. You are still sending it to their email, which someone else may have access to. If they have access to the computer, they may also have access to the encryption keys, etc.

Even though it is terrible, I would just stick with the school's system.

brudgers 3 hours ago 0 replies      
SASE over USPS meets all privacy requirements, allows communications to be tailored for each student and uses a proven technology stack to provide robust and reliable delivery regardless of the student's computing platform or internet bandwidth.
mchannon 12 hours ago 0 replies      
If you set things up properly, you could be given 20 public keys and send one single boilerplate e-mail containing everybody's grades, each encrypted with the appropriate key.

Everybody would then attempt to decrypt each ciphertext, with only one working for any individual private key.

This isn't all that different from your original posting, except that you now only need to send one unique e-mail.

(For Beavis, who's getting an F because he never showed up to class, you might get in trouble with the administration because a simpleton couldn't decode their grade through this or other sophisticated means.)

asdf3 12 hours ago 0 replies      
Use the website the college has provided. This isn't about your convenience or technical judgement (about a system you don't maintain). It's about the students and the rest of the college.

That said, encrypt each grade with a key derived from the student's ID (which is privileged information) and make a webpage to do the decryption for the students. SHA256(ID + salt) == key for symmetric encryption.

--former IT college staffer

spurgu 12 hours ago 1 reply      
You could generate personal URLs for each student, pointing to whatever site/service you choose (for example Pastebin, with an expiry date), then generate QR codes of those URLs and hand them out on paper notes.
maibaum 13 hours ago 1 reply      
Make an Excel doc with the log of each student's ID # in one column and their grade in the next. Include a line at the top instructing students to take the log of their student ID # to find their grade. Label the column headings appropriately.
plg 12 hours ago 1 reply      
I think what I'll do is send a list of all grades listed next to the SHA256 hash of their student IDs

They have access to SHA256, so they can privately compute their own student ID hash and then look up their grade.
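For what it's worth, that lookup is a one-liner with standard tools. One caveat: student IDs are short and enumerable, so bare hashes can be brute-forced by classmates; mixing a per-student secret into the hash input helps. A sketch with an invented ID:

```shell
# Hypothetical student ID; each student recomputes this hash locally
# and finds it in the published list
id="s1234567"
printf '%s' "$id" | sha256sum | cut -d' ' -f1
```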

stumpyfr 13 hours ago 0 replies      
Sounds a little "too much" for the privacy laws, but if you really need it: https://bitmessage.org/wiki/Main_Page
studentthrow 4 hours ago 0 replies      
I've had professors do many different things. Blackboard is actually okay for checking your grades as a student, though it does suck to put grades in.

Another option is to set up a website with a login ID (student ID or something), have students submit a PIN (4-12 chars), and let them use that to log in and see their grades (probably should be SSL).

As for your question, I don't see how you could send them encrypted. You could make up a random ID for each student, give each student only their own ID, and then send grades out corresponding to the random IDs, but that may still violate privacy rules.

percomis 13 hours ago 1 reply      
How about this: you ask every student to send you a password. You hash these passwords, give them the algorithm, and send out a list of the hashes alongside the grades.
Ask HN: MAMP-like tool for Ubuntu/Linux?
2 points by uptownhr  10 hours ago   8 comments top 7
ereckers 4 hours ago 0 replies      
Isn't MAMP/WAMP's purpose to emulate a Linux environment? Basically, a way of running a LAMP-stack application on a Mac or Windows machine.

I could be out of my league here and you may have other reasons, but you could just run VM's. Vagrant/Virtualbox takes all the pain away.

LarryMade2 4 hours ago 0 replies      
The nice thing about MAMP and WAMP is the control GUI: start/stop services, specify the current web root, etc. I would like to see a nice local webhost control GUI on Linux for my web development.
phantom_oracle 7 hours ago 0 replies      
There is a product called XAMPP for Linux, which is the Linux version of XAMPP/WAMP or whatever the name of it is now.
uptownhr 7 hours ago 1 reply      
I'm looking for something more like

cmd 'app add site Test' - this would create the vhost file, put it in sites-enabled with the doc root.

Piskvorrr 10 hours ago 0 replies      
a2ensite + a2dissite for enabling/disabling a vhost; for adding one, I have a template in /etc/apache2/sites-available, copy that and then edit
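That template-then-edit step is easy to script; below is a sketch of the generation half of the 'app add site Test' command uptownhr described (the function name is invented and the paths are illustrative, assuming the Debian/Ubuntu Apache layout):

```shell
# Emit a minimal vhost for a given site name and document root
make_vhost() {
  printf '<VirtualHost *:80>\n  ServerName %s.localhost\n  DocumentRoot %s\n</VirtualHost>\n' \
    "$1" "$2"
}

make_vhost Test /var/www/Test
```

Piping the output into /etc/apache2/sites-available/Test.conf, then running a2ensite Test.conf and reloading Apache, would complete the flow.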
johnatwork 10 hours ago 0 replies      
Could this be what you are looking for? https://help.ubuntu.com/community/ApacheMySQLPHP
helpful 9 hours ago 0 replies      
Ask HN: how do blackhats meet?
3 points by bachback  11 hours ago   6 comments top 4
phaus 9 hours ago 0 replies      
If you are going to be doing sysadmin work, and you want to get a feel for the attacker mentality, there are a few things you could do.

If you have the money, know at least one scripting language, and have an aptitude for technology, the OSCP certification course is pretty good.

If you want to go the cheaper route, there are lots of books. One introductory text a lot of people like is Hacking: the Art of Exploitation.

If you want to learn about web security, the Web Application Hacker's Handbook is a great book. For something less intensive, The Tangled Web would suffice.

If you want to learn to harden Linux servers, reddit.com/r/linuxadmin, /r/linux and /r/linux4noobs are great resources. Before you post questions, however, I suggest using the search function, because lots of people ask for hardening guides.

runjake 9 hours ago 0 replies      
Max Butler's forums got infiltrated by a task force composed of FBI and Secret Service personnel, not the CIA.

Black hats generally network on IRC. You sit on some public IRC channel, build rapport [1][2], and eventually get invited to private channels.

There are plenty of resources out there on how to harden your server and reduce attack surfaces. You just need to spend more time familiarizing yourself with the landscape and quantify your actual goals.

1. http://guerrillamerica.com/2013/12/source-recruitment/

2. http://guerrillamerica.com/2014/01/source-handling-part-one/

spoiler 10 hours ago 1 reply      
Blackhats are just people who abuse their Whitehat knowledge.

There is a plethora of IRC channels, forums, mailing lists and whatnot where people share that kind of stuff. Frankly, a bug report is something like sharing it; before it's fixed, it is a zero-day exploit.

thelogos 4 hours ago 0 replies      
A lot of them meet in private invite-only forums. Krebs had some success infiltrating those forums but eventually got discovered.
Ask HN: Should we apply to YC, if we are a competitor of a YC Alum?
3 points by rrpadhy  11 hours ago   2 comments top 2
ig1 11 hours ago 0 replies      
Yes, a number of times. Rapportive, Etacts and Xobni were all YC companies and competitors. YC has also funded a number of competing developer-recruitment startups.
TheMakeA 10 hours ago 0 replies      
YC funded a startup (Double Robotics) that competed with a YC partner's co (Anybots). Just apply.
Ask HN: What do you call someone who has 20 apps released but can't code?
7 points by WesleyThurner  1 day ago   4 comments top 4
kitcar 1 day ago 0 replies      
Mockups w/photoshop = information architecture / user interface designer

Managing workers and segmenting tasks = project management

Both are available as fields of study and are defined roles in most businesses.

msteigerwalt 1 day ago 0 replies      
Seems like you're playing a few roles, but the chief ones would probably be:

Product Manager: You take business goals and turn them into technical requirements, then ensure the product gets built.

Project Manager: You take technical requirements and ensure that your team delivers on those requirements.

User Experience Designer: You create mockups for products which can be implemented by the technical team.

I'd put together a portfolio of your work and send it off to a few companies, as that might be good enough to land you a job.

dear 13 hours ago 0 replies      
Entrepreneur. ;)
thatthatis 1 day ago 0 replies      
Product guy.
Where should I start learning Assembly?
60 points by shinvou  1 day ago   53 comments top 38
revelation 1 day ago 1 reply      
Reverse engineering is quite a different skill set from assembly. Unless you are reverse engineering malware, whatever you are analyzing is unlikely to have been written in assembly or to be heavily obfuscated. Then it's more about knowing how certain high-level programming constructs (think virtual function calls in C++) will be translated into assembly by a compiler, what residual information there might be left in the binary or what all that noise is you are seeing (think C++ templates, destructors called for stack-allocated variables..).

For many reverse engineering projects, assembly might be a wholly useless skill, since whatever you are looking at is actually MSIL or running on Python with its own embedded interpreter. Here assembly only serves to quickly tell you you'd be wasting your time :)

gaius 1 day ago 3 replies      
Which assembly? x86, PowerPC, ARM, MIPS?

Personally my favourites are 6502 (http://skilldrick.github.io/easy6502/) and 68k (http://www.easy68k.com/) tho' neither of these are realistically of any commercial use.

csmithuk 1 day ago 1 reply      
I started with the following book:


Wonderful book from which a lot of knowledge is applicable to other architectures straight away. It teaches you about planning, control structure implementation and the maths behind it all as well.

ChuckMcM 1 day ago 0 replies      
Start with a computer architecture introduction. The McGraw Hill Computer Science series book "Computer Architecture" did a good job of creating a fictional processor and then designing the machine code for it. "Assembly" is just a way to represent machine code in text files.

That way you will learn what it is the computer is trying to do, and how constraints on how it is built change that.

Then I'd suggest some cheap 8-bit microcontrollers like the AVR series and the PIC series, from Atmel and Microchip respectively (the AVR has solid C support so it's probably a better single choice, but the PIC has weirdness associated with architecture constraints which is good to understand as well).

Once you are a pro at writing AVR assembly code, grab an x86 assembly book and a description of the Pentium architecture. To do it proper justice, start with an 8086 assembly book, then a 286 book, then a 386 one, and finally a Pentium one. That will let you see how the architecture evolved to deal with the availability of transistors.

minikomi 1 day ago 1 reply      
Although I cannot claim to know a lot, http://microcorruption.com was a very nice "fun" way to at least start with a small, easy to grasp instruction set.
forgottenpaswrd 1 day ago 0 replies      
Get IDA pro and start reversing things with some clear objective. I learned a lot having friends that knew and competing with them to remove limits on commercial software when I was a teenager.

Making trial versions complete and so on. Sometimes it was really easy (just finding a jmp and changing it); other times we had to compare with the complete program, find code blocks, patch the trial, and make all the checksums and such work.

None of the software that we cracked was released to the public, it was just for fun.

At the time there were little exercises called "crackmes" for exercising your abilities.

It takes at least a year of work to start being really good at this, and it is not like Obj-C, Java or Python, or even C, but way more tedious. Without friends doing this, and clear objectives, I would have found it boring.

It would probably be a better idea to buy a microprocessor and code simple things in assembly, like blinking LEDs.

penberg 1 day ago 0 replies      
If you already know C, you can start out by looking at the machine code generated by your compiler with "objdump -d" on Linux and "otool -tV" on Mac. Start experimenting by writing out C constructs like functions, loops, switch statements, etc., and just looking at what the generated code looks like.

Of course, to do that, you need to find the manual for your machine architecture. The x86 manuals are, for example, available here:


You also then start to notice things like the operating system specific application binary interfaces (ABI):


and object file formats such as ELF that's used in Linux:


or Mach-O used in Mac OS X:


You can also do the same thing with the JVM and look at its JIT-generated machine code with the '-XX:+PrintCompilation' option:


maggit 1 day ago 0 replies      
I'm writing a tutorial in x86-64 assembly on OS X that you might enjoy: https://plus.google.com/+MagnusHoff/posts/9gxSUZMJUF2

Its focus is actually writing assembly on an actual computer, with the goal of implementing a snake game.

znowi 1 day ago 0 replies      
I can suggest this free book called "PC Assembly Language" by Dr Paul Carter.


The tutorial has extensive coverage of interfacing assembly and C code and so might be of interest to C programmers who want to learn about how C works under the hood. All the examples use the free NASM (Netwide) assembler. The tutorial only covers programming under 32-bit protected mode and requires a 32-bit protected mode compiler.

traviscj 1 day ago 1 reply      
Code by Charles Petzold [1] is a fantastic introduction. It isn't so much the nitty gritty "this opcode performs this operation, and these are all the tricks to making it do things, edge cases and things you should worry about" and more along the lines of "what opcodes should a CPU have, and how do those translate into electricity flowing through physical wires?" I feel like really thinking through that book made MIPS and x86 assembly much easier for me.

1 - http://www.charlespetzold.com/code/

brudgers 1 day ago 0 replies      
As an alternative to jumping into real-world assembly language, there is Knuth's MMIX (and MIX). It provides access to the underlying concepts alongside structured exercises. One might say it's an "onramp to the foundations of computer science." I prefer "gateway drug to TAoCP", however.


The first fascicle is a free download and the place to start.

noonespecial 1 day ago 0 replies      
I'd second what others have said and go with a micro like an AVR or a PIC. Tons of open source support and a small system you can totally "own" will help you understand not just the code but how computers execute code at the lowest human-legible level.
zaptheimpaler 1 day ago 0 replies      
Check out the bomb lab from CMU's systems course. It's an assignment specifically designed to teach you assembly and gdb via reverse engineering a binary "bomb". There are 6 levels, and you need to figure out the right password for each level by reading the assembly and inspecting the program via gdb.


csmatt 1 day ago 0 replies      
For MIPS (recommended for starting out), check out my post. It walks you through creating the initial program in C all the way through finding its vulnerability and exploiting it. The buffer overflow building is done in Python through Bowcaster. http://csmatt.com/notes/?p=96 (also check out the links at the end). Good luck!
svantana 1 day ago 0 replies      
If you're on a Mac, Xcode has a really nice feature: using the Assistant Editor (press the "bowtie icon"), you can get (dis-)assembly parallel to your source code and step through it with the debugger. A really convenient way of learning what's going on, and also of understanding potential inefficiencies!
psuter 1 day ago 0 replies      
As an intermediate step, you could also study LLVM bitcode. It should give you a good idea of what assembly languages "feel" like without tying you to a particular architecture. It is easy enough to write smallish programs in the ASCII format and assemble them with llvm-as.
erbdex 1 day ago 0 replies      
1. I suggest diving a little into a processor architecture first. The Z-80 and 8085 are almost the same, conceptually. Once you grasp the fundamentals, you can move onto x86; it too builds upon the architectures mentioned previously, with added concepts like pipelining, segmentation, etc. One of the best sources for me has been http://www.amazon.com/Microprocessors-Principles-Application...

2. Knowing how the microprocessor works comes really handy while coding assembly as you can't 'catch exceptions' out there. It is like treading a land-mined area and nothing can replace the knowledge of the fundamental terrain- the architecture.

3. Since you know C, you can start with some serious gdb usage, as mentioned by @penberg.

4. Then find your sweet spot between these two ends. You could start with embedded robotics; another viable hobby could be IoT applications. Two added advantages of these over 'theoretical' assembly language learning are:

a) You are doing something with a real-scenario implementation, so you're surely hooked.

b) You can eventually mold a business model around it if you end up with something really innovative.

Adrock 1 day ago 2 replies      
stcredzero 1 day ago 1 reply      
First, find Core Wars and play it until you can beat the "tutorial" programs. Hell, I should reimplement Core Wars as a JavaScript app doing CodeCombat style instruction for assembly.
syncopate 1 day ago 0 replies      
A good way to learn asm is through books but there are not many for current architectures (especially x64, except the official Intel manuals which are quite good but also hard to read). Nevertheless, there are some on ARM which I can recommend, namely: ARM System Developer's Guide by Sloss, Symes and Wright. ARM Assembly Language by Hohl. ARM SoC Architecture by Furber.

IDA Pro is the industry standard for reverse engineering, but it is also expensive (around USD $2k). There is a free version, but it doesn't offer 64-bit, so it's not really an option for modern ObjC or Intel computers. As you've mentioned ObjC, chances are you work on OS X. IDA Pro does not work well on OS X (the recommended way is to use the Windows version via VirtualBox rather than the OS X version). Still, Hopper.app is a great alternative on OS X. Not as good as IDA, but it has a Python interface, GDB support, and decompile support for ARM and Intel (and some knowledge of ObjC). And it's only ~USD $100. [There is also a Windows version of Hopper.app, but it seems not yet ready to use, as I've only heard bad things about it so far.]

khetarpal 1 day ago 0 replies      
I would recommend picking a project that you can do only in Assembly. For me, this was creating a special waveform on a microchip controller. I had to create a custom 800kHz signal using a 16MHz clock, so there was no way other than to respect each and every clock cycle, and make the most of it.

The key is to choose a project that you are excited about. If you pick another blah assembly tutorial, without the excitement of a project pushing you, your enthusiasm will evaporate sooner or later.

bobowzki 1 day ago 0 replies      
A good place to start programming assembly is on microcontrollers (Arduino etc.). They have a more limited set of instructions, registers, etc., and an easy-to-grasp memory layout. The development environments also often come with a pretty good debugger/simulator so you can step through your code and see how it works.

Good luck!

eru 1 day ago 0 replies      
Try having some fun with Core War (https://en.wikipedia.org/wiki/Core_War).
golem_de 20 hours ago 0 replies      
As always, learning by doing is best; look at this old-school website: http://www.japheth.de/index.html
Aside from its manual, he also recommends the (partially free) book http://www.phatcode.net/res/223/files/html/toc.html
eximius 1 day ago 0 replies      
Well, that depends how comfortable you are thinking in terms of machine code. It takes a completely different mindset because you're now literally dealing with blocks of memory -- even more so than C.

It also depends how steep of a learning curve you want to encounter. I, personally, have not yet played with x86 assembly because the documentation for them is so unfriendly for beginners. To that end, when I want to play around in Assembly and learn techniques for that level of programming, I usually play with the DCPU (http://dcpu.com/dcpu-16/). It's fake and was designed for a (sadly) not-to-be-made game. But it is an absolute joy to program in.

Play around with that until you're comfortable and THEN tackle x86.

aosmith 1 day ago 0 replies      
I found this on HN a while back... This is a fun way to get your feet wet:


I would also grab a copy of Art of Assembly Language.

en4bz 1 day ago 0 replies      
I'd start with ARM first. It's a lot easier to pick up than x86. Also take a look at the C++ Itanium ABI; it can be found on the GCC website. It explains the rules of going from C++ to assembly.
neals 1 day ago 0 replies      
Because Transport Tycoon is written in Assembly by Chris Sawyer. (I know, pretty amazing right?)
nedzadk 1 day ago 0 replies      
http://flatassembler.net/ is a very good assembler (Linux, Windows, DOS).
http://flatassembler.net/docs.php is a good place to start, and http://board.flatassembler.net/ is a very good place to explore.
mpl 1 day ago 0 replies      
This isn't the most aesthetic site, but the content really is top-notch. If you really want to learn assembly (MIPS, in particular), I can't recommend this enough:


yomritoyj 1 day ago 0 replies      
I found it very useful to read the Intel software developer's manual to get an understanding of the instruction set. If doing this for the x86 architecture seems too daunting at first, a fun alternative is to read the manual for the AVR microcontroller which powers the Arduino and then program an Arduino in assembly.
fromdoon 1 day ago 0 replies      
I highly recommend Computer Systems: A Programmer's Perspective


castor_pilot 1 day ago 2 replies      
I enjoyed Jeff Duntemann's "Assembly Language Step-by-Step". I see there is a 3rd edition. Nice writing style and an overall fun read.
fuj 1 day ago 0 replies      
x86? This should get you started: http://www.asmcommunity.net/
duffdevice 1 day ago 0 replies      
Ask HN: Has Twitter given N his username back yet?
5 points by scotthtaylor  14 hours ago   4 comments top 3
kohanz 14 hours ago 1 reply      
Apparently not yet [0][1]. The new account holder is titling the account "Badal_NEWS" and has 189 followers with 0 tweets.

[0] http://twitter.com/N_is_stolen
[1] http://twitter.com/N

socrates1998 12 hours ago 0 replies      
It's amazing how bad Twitter, Paypal and especially GoDaddy look throughout this whole thing.

Don't they have people who monitor this stuff?

notwedtm 9 hours ago 0 replies      
I'm confused by this. Surely with all of the negative PR any use of @n by a company/person would point to the person who was responsible?
Ask HN: Has anyone ever clicked on a tag cloud?
8 points by tomkin  1 day ago   11 comments top 9
Fzznik 2 hours ago 0 replies      
Glancing at a tag cloud is a great way to figure out what someone's blog is mostly about (the biggest tags) and what it covers least (the smallest tags). I've been known occasionally to arrive at someone's blog to read some article, glance at their tag cloud, and see some other interesting topics they've written about (especially the bigger ones), which I do click on, sometimes discovering more interesting content on their blog as a result.

So yes, I've clicked on them. But not very often. But certainly "look at" more than I actually "click on", so there is still some value in these to some degree even if people don't click on them.

ggchappell 1 day ago 0 replies      
> I have no idea why anyone would just click on a word because it's bigger than other words.

The point is that the size of a word means something. A large word is typically intended to indicate, "Here is something that this person is talking about, that a lot of other people are also talking about. Click here to see what others have to say."

> Can someone explain why these became popular?

I suppose this happened, in part, because they seemed to have some reasonable-sounding theory behind them (see above).

The cool factor was probably more important, though. UI trends often ignore usability issues, after all, and tag clouds are an automatically generated example of the kind of "messy" art that became popular a decade or two ago.

codegeek 1 day ago 1 reply      
Funny, because every time I see one, it seems dirty, spammy, and like UI gone wrong. I never click on those. It just doesn't feel right. But it could just be me. Not to mention that I almost feel I have dyslexia [0]

[0] http://en.wikipedia.org/wiki/Dyslexia

ctb_mg 1 day ago 1 reply      
Yes, I have, but only because I was looking for items under a certain category. I would still have gotten the same usefulness if it were just a regular list of tags as opposed to a cloud.

My opinion is that tag clouds are better served as art than as functional UI elements...

angdis 1 day ago 0 replies      
FWIW, I thought they were cool when they came out, and I used them a lot in the original Delicious. I still like them! I do click!
rartichoke 1 day ago 0 replies      
I never click on them either. Come to think of it I never click on a typical archive either. I'm interested in the content, not that you have 17 posts back in October 2008.
mulligan 1 day ago 0 replies      
I've clicked on the one they used to have at theeconomist.com
Meltdown 1 day ago 0 replies      
Sometimes... when it pokes me in the eye.
by_Seeing 1 day ago 0 replies      
Not once
Ask HN: Have you found your passion yet?
9 points by FindingPassion  1 day ago   11 comments top 9
debt 1 day ago 0 replies      
Finding your passion takes time. However, being broke really sucks. It really, really, really sucks. It's super stressful and distracting, and if you're trying to figure out what your passion is, it helps not to be broke at the same time.

To put it another way, if you don't know what you really want to do with your life, it's generally a good idea to have as high-paying a job as possible. This way, you can get paid to figure out your life. Also it helps to have some money in the bank when and if you do figure out what you want to do.

Wanna be an actor? Well, now you can afford an agent. Wanna be a writer? Well, now you can afford to go to conferences and fly around the country. Wanna be a musician? Well, now you have some cushion to tour the country.

Basically, if you don't know what your passion is, then just keep working at your high-paying job until you figure it out. It's not worth just sitting around doing basically nothing.

The things you take for granted, like being able to pay for shit, may no longer be the case once you figure out and pursue your passion. You may want to purchase a membership to some exclusive writers' club (if you want to become a writer) but find that you no longer have the funds because you quit your high-paying job. I would say figure out a way to pursue your passion smartly so that you're not left completely broke.

Also, if you're not already doing it on the side in some capacity, then I would definitely recommend NOT quitting your day job. It's a passion. If you're truly passionate about it, then it should be something you're already doing.

pilom 1 day ago 1 reply      
Yes, but it doesn't pay well enough until I can get rid of my student loans. I worked as a whitewater rafting guide for summers while in college and have never felt so alive and happy in my life. Unfortunately, I had 60k in student loans (and my wife had more). My current plan is to work a job which pays way more than I deserve in tech while keeping my living expenses as low as possible.

About 4 years from now my wife and I are leaving the corporate world forever, with all of the loans paid off and enough in the 401(k)s to never worry about retirement again. We will work at what we love (outdoor guiding and photography) and will be significantly happier for it.

Engineering and tech are great for some people. I'm glad I looked elsewhere to find a passion though.

talmir 16 hours ago 0 replies      
If I had a billion dollars to my name tomorrow, I would probably not change a thing. My passion is coding. I would still find a job where I would feel like a valuable member of the team. I would still try my best to be better tomorrow than I am today. I would still live in my apartment and drive my old, beat-up car.

My computer would probably get an upgrade or two tho.

All I need from life from here on out is to be able to code, and enough income to sustain me. I have no dreams of owning my own company or getting rich. So I am pretty close to my ideal existence :)

gesman 9 hours ago 0 replies      
In my view the passion could be something that you like today, but it could be something totally different tomorrow.

The secret of success is an ability to recognize and follow the passion moment to moment.

That is the big part of the secret of "living in the now".

sdegutis 1 day ago 0 replies      
My passion changes from week to week it seems.

But the common thread between all the passions I've had is that they involve solving problems creatively and aiming for beauty in the end result.

tpae 1 day ago 1 reply      
I think being good at something does not mean that's your passion. It makes more sense to find a passion first and then work towards being good at it. That's been the case for me.

I was exposed to web programming at an early age; I built my first Geocities website, which served pirated movies, at the age of 11. After finishing the website, I noticed a lot of people actually came to my site and wrote thank-you notes in my shitty Geocities guestbook.

Being 11 years old, seeing those thank-you notes really encouraged me to move forward, and I bought my first PHP book.

Ever since, I've been building websites and products to reach out to users. There were lots of failures (actually, most were failures), but that didn't stop me from moving forward.

Passion is something that you find value in doing. For me, it's not about the money, but it's about seeing those thank you notes for providing value to my customers.

bluejellybean 1 day ago 0 replies      
Computers and more recently programming.

In 2001 I got my first taste of the internet when my mom brought home a crappy laptop and connected it via dial up. I was 6 or 7 and got hooked playing chess on yahoo games.

Ever since, I've been a power user, spending at least 8+ hours online a day and loving it!

Started teaching myself to code in high school (4-5 hours a day; easy throwaway online classes == a great way to spend senior year). After I graduated last year, my average time spent in front of a computer climbed to summer-vacation levels of around 16+ hours a day.

Little bit of addiction, whole lot of passion.

_random_ 1 day ago 0 replies      
What would you do if you were moderately rich (apart from partying, traveling etc.)? I would still code, maybe hire a team to help me.
wglb 1 day ago 0 replies      
Yes, when I was about 8.
Do you want to learn some abstract math? Will you give me your opinion?
22 points by ColinWright  2 days ago   3 comments top 2
ColinWright 2 days ago 0 replies      
I should learn that before posting something like this I need to set up an auto-responder. It's late here and I have an early start, but if you email me, I will reply.

Thanks for all the emails so far - I hope you get something interesting out of it, and I look forward to your comments.

meerita 1 day ago 1 reply      
I would love a simple math blog. You know, everything from the silliest things up to higher levels, with real-life applications or software. Sometimes easy, plain language works better than a complex formula.
Ask HN: What services are you paying for every month and how much?
4 points by uladzislau  1 day ago   7 comments top 7
sejje 5 hours ago 0 replies      
Heroku ~$40/month; Twilio ~$100/month; shared hosting ~$15/month
sandrae 7 hours ago 0 replies      
- 30-80 Euro a month for a German telephone answering service. Best decision ever.
- a bookkeeping service; the price depends on the hours worked in that month, a couple of hundred Euro
Nicholas_C 5 hours ago 0 replies      
PythonAnywhere: $5, Twilio: ~$3
heldrida 18 hours ago 0 replies      
- Digital Ocean, around $10 a month
- Last.fm, around 4 pounds a month
chisto 1 day ago 0 replies      
AWS EC2 for personal website, 20-30 USD; AWS S3, ~3 USD; Google Apps (1 user), 5 USD
mindcrime 1 day ago 0 replies      
Rackspace - a couple of VPS's, different sizes, about $100.00 / month

Github - $50.00 / month

Hoovers.com - $99.00 / month

Random AWS EC2 time here and there for demos or experimenting - varies, usually about $0.00 / month, but has been $40-50 a couple of times.

Co-working at Underground @ Main (Durham) - $199.00 / month

Mixergy subscription - I forget.. $20-25 / month or thereabouts, I think.

viame 1 day ago 0 replies      
vps $80.00, internet $45.00, phone $80.00
juanre 17 days ago 12 replies      
Bash, running in your terminal, understands both the Emacs and the Vi commands. The default is Emacs, so you can press C-a (control-a) for beginning of line, C-p to go back in command-line history, or C-r to search it.

I prefer the Vi mode, though. Add to your .bashrc

set -o vi

Then you can press escape to go from input mode to normal mode; there, k will take you to the previous line in command-line history, j to the next line, ^ and $ to the beginning and end of the line, and /something will search backwards for something.

Editing is really fast; move by words with w (forward) and b (backward), do cw to replace a word, r to replace a letter, and i to go back to input mode. It will remember the last editing command, just as Vi does, and repeat it when you press . in normal mode.
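Putting the pieces above together, a minimal config sketch (the `.bashrc` line is the one quoted above; the `.inputrc` alternative is standard readline and applies to every readline-based program, not just bash):

```shell
# In ~/.bashrc - vi editing mode for bash only:
set -o vi

# Or in ~/.inputrc - vi editing mode for everything
# that uses readline (bash, psql, python REPL, ...):
#   set editing-mode vi
```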

guelo 17 days ago  replies      
After making the switch to OS X in the last couple years after living in Linux and Windows before that, I think it's objective to say that keyboard shortcuts in OS X are much worse in both ease of use and consistency across applications.
Ask HN: How do you create diagrams in documentation?
6 points by ahtomski  1 day ago   11 comments top 10
alok-g 1 day ago 0 replies      
Microsoft Word and PowerPoint.

Sometimes Excel, when the diagram can be made by drawing borders around cells and resizing rows and columns as needed. This helps when unplanned non-linear horizontal and vertical scaling is required while making the diagram.

The SmartArt concept they introduced was quite promising, though I find the current state of it lagging. I am sad that it never picked up.

radq 1 day ago 0 replies      
fromdoon 1 day ago 1 reply      
I started with MS Visio at my job. Though I had absolutely no experience in making diagrams, Visio was quite easy to get started with and become comfortable with in a short time.

The downside of course is that it is not free.

I would also like to know open source alternatives to MS Visio. I can see that Ubuntu 12.04 comes with Libre Draw, but I haven't tried it yet.

phantom_oracle 1 day ago 0 replies      
LibreOffice Draw is reasonable if you are coming from Linux.

Otherwise there is GIMP too.

+1 to all the other open source solutions mentioned here.

hnjake 18 hours ago 0 replies      
Give www.draw.io a go. I have used it a few times at work.
ahtomski 1 day ago 0 replies      
Yeah I quite like Gliffy. More often than not, we take photos of whiteboards and email them round the team.
atsaloli 21 hours ago 0 replies      
rsmaniak 1 day ago 0 replies      
Gliffy(www.gliffy.com) works great for me.
bsaunder 1 day ago 0 replies      
chatmasta 1 day ago 0 replies      
Ask HN: Alternatives to OpenAFS?
3 points by baileyb  1 day ago   3 comments top
joncfoo 20 hours ago 1 reply      
Does OpenAFS not solve your problem in some way? A little more context would be helpful.
Ask HN: You're building a new house. What tech features would you want in it?
7 points by mannylee1  2 days ago   8 comments top 8
xauronx 1 day ago 0 replies      
Comprehensive security cameras outside

Integrated speakers (mostly in the bathroom, but throughout the house and independently controllable would be nice).

This one just came to mind today, but having a weatherproof mic outside would be amazing, especially in conjunction with the built-in speakers. I would love to have natural rain/thunderstorm/bird sounds played throughout my house (echoed from outside). Sure, windows are great, but how often can you actually have them open?

dalke 2 days ago 0 replies      
Why not look to the Passivhaus standards to improve energy efficiency and reduce the house's environmental impact? Then the house would be a tech feature.
Peroni 1 day ago 0 replies      
A tablet in each major room with a customisable UI that controls the following in each room:

* Lighting

* Heating

* Locks (windows & doors)

* TV

* Music

The one other primary feature I would want is an instant facetime type setup between each tablet so that you can communicate with others in the house without having to shout or get up off my lazy arse.

davismwfl 2 days ago 0 replies      
Depends on the square footage of the house but in general.

I'd want a single closet/small room where I could put all the equipment for the media/entertainment, cable boxes etc. Then use IR extenders or better yet, one of those wireless remote control systems.

I'd do speakers in the ceilings/walls of most every room/area with zones and volume controls in them. Depending on the square footage, you may need multiple receivers to make it work nicely where everyone can listen to different tunes. Also, outdoor tunes have to be available too.

Wire the house for both wired network and of course wifi. Depending on budget and size of the house, fiber would be nice for at least interconnecting sections of the house.

Along the idea of the wireless remote system, turn an iPad into the house controller. Make life as easy as possible, something you could hand to your grand/parents and they would be able to push buttons and make it work. I have seen systems like this and drool at how nice it is, and it isn't like it is crazy expensive. No more 4 remotes or a "single" remote that works 95% of the way but takes a small training session to even turn on the TV.

lsiunsuex 2 days ago 0 replies      
I've thought about this often. I have the added experience that I've slowly gutted and remodeled every room of my house for the last 4 years so I've gotten an idea of what I want and don't want.

Network drops everywhere. Even in the ceilings of major rooms. 802.11N is great, but nothing trumps Cat5E over fiber.

In-wall (or ceiling) speakers. At least 1 in every room, 1 in the 2nd-floor hallways. All with volume controls. All wired to a central network closet with multiple AirPort Express inputs so the wife can stream one station to the bedroom when she's dressing, and I can stream another station to the family room while I'm waiting.

Network closet should span 2 floors with future pipes into the attic and into the basement for new drops. Network closet is preferably close to the main family room TV for major components. Switches, routers, firewalls (i was a sys admin in a past life) can all go in here. Money willing, put network equipment in 2nd floor closet, tv equipment in first floor closet.

sfrechtling 2 days ago 0 replies      
Do not underestimate the power of just simply wiring every room in the house with cat6. Instead of planning for every eventuality, you can just extend your house when you need to. Need a server rack - put it anywhere! Bought a new smart tv - just connect it to the open port!
sdegutis 2 days ago 0 replies      
A standing desk. That is all.
vientspam 2 days ago 0 replies      
Not really techy, but inspired by http://www.youtube.com/watch?v=f-iFJ3ncIDo, I've always thought it would be interesting to suspend beds, tables and storage from the ceiling and then have a (rope + pulley) system to only put the furniture in a room that you are actually using, freeing up the rest of the space to walk/work/lie down. Plus it's probably really easy to keep clean.
Ask HN: What have you recently learned about yourself?
15 points by namenotrequired  3 days ago   35 comments top 15
chewxy 3 days ago 0 replies      
That I should actually give myself some credit and that I'm not that bad. I gave a talk on Javascript[0] recently when I went to Malaysia. I thought I did an extremely terrible hatchet job. The audience thought otherwise, and I took quite a long time (like a few days) to adjust to the fact that I did okay.


[0] slides if you're interested: https://speakerdeck.com/chewxy/underhanded-javascript-how-to...

pmiller2 3 days ago 0 replies      
I turn into a bumbling idiot when asked to code in front of anyone. Since I'm out of work, this is a bit of a problem. :P
tempBadPerson 3 days ago 3 replies      
I am capable of domestic violence, despite a lot of evidence in my life that suggested I would never do such a thing or even need to worry about that side of myself.
yarou 3 days ago 0 replies      
That I spend an unhealthy amount of time on HN. In my defense, my day job is as dead end as it gets.
innertracks 3 days ago 0 replies      
I've not been valuing myself. Being good with people and social situations as well as engineering and problem solving is valuable. At least that's what I'm having to remind myself. And I'm starting to believe it!

The only real problem is when I'm programming or furthering my tech skills I feel like I'm short changing the social side. Same thing going the other direction. As I'm getting more comfortable with the tension between the two modes I'm feeling good about my potential as a consultant.

namenotrequired 3 days ago 0 replies      
- I find it very hard to relax over the weekend. But when I manage to do so, it helps my productivity for the week a lot.

- I can't stop something I don't like about myself merely by being aware of it.

ZenoArrow 3 days ago 1 reply      
Plenty, but picking one lesson... aside from death, guilt is the biggest block to building positive momentum.
bradleysmith 3 days ago 0 replies      
That I am not aware of some expectations I have of others until after they have been broken.
NAFV_P 3 days ago 2 replies      
I worry a lot about others thinking I'm a retard.

I have practically no chance of getting a job in software development or something similar.

I'm prone to being misquoted, which is probably worse than being misunderstood (I learnt I was prone to being misunderstood when I was about six).

I really enjoy hideous data structures.

meerita 3 days ago 0 replies      
That, every year, I'm becoming more and more cynical.
nicholas73 3 days ago 0 replies      
It all goes back to childhood.
mcintyre1994 3 days ago 1 reply      
That while I can really easily and quickly get myself engaged with an idea I come up with, I can't come up with an idea that doesn't suck...yet
2014OC00000XXX 3 days ago 0 replies      
That I thrive in relatively structured work environments (as frustrating as they can be), but fall into apathy and depression in more horizontal/adhoc/Valve style places. This doesn't bode well for my potential move to SF/SV...
mattwritescode 2 days ago 0 replies      
I am not very good at badminton.
dfraser992 2 days ago 2 replies      
That I absolutely have to get out of IT, because it has become a soulless, mind-breaking, and exhausting way to be exploited by idiots. I suppose any job is like that, given how Western society is organized these days. But I have no other useful skills, and though I have a work ethic, my CV is such that if I even applied for a bar job, or bussing tables, ... "why the hell are you applying for this job?" I'm stuck and can't get over this reluctance to burn all my savings trying to change my direction in life.
Ask HN: Mastermind group for 9-5'ers looking to escape
17 points by makerops  3 days ago   10 comments top 7
dzink 1 day ago 0 replies      
Check out the open projects on DoerHub.com. You can see if you like one you want to join, or post this as a project others can join. It works like a GitHub for non-hackers, so you can find subject-matter experts who have solid material for a passive-income project (a doctor with algorithms for a medical iPhone app, for example, though that exact one found a teammate already) and are looking for someone with your skill-set to split the project with.
phantom_oracle 3 days ago 0 replies      
You should look at a case study like WooThemes.

They worked remotely (and still do) and have found success.

The only problem with your idea and the rest of the bootstrapped remote successful companies is that they were addressing a real customer need/problem and you are simply trying to address your situation of hating your 9-5.

Come up with a great idea, open your world to devs from everywhere, work your ass off initially, and then build the company in such a way that the 9-5 doesn't feel like work anymore and can be done at 10-6 or 8-12 + 2-6.

minimaxir 3 days ago 2 replies      
"9-5'ers looking to escape" is an odd demographic to target, since if your group launched a product, you'd be working much longer than 9-5.
cybernomad99 2 days ago 0 replies      
I am working on something similar. The project management tool is based on Bugzilla, and I have built real-time chat software for Android, iOS, and a web app. I organize everything around a "project" and each group has a strong focus on getting the product to market. If you are looking for a general socializing place, HN works well in that regard.

I am involved with a couple of active projects. They are games targeting the Asia Pacific market. You can take a look.

It is not so much a way to escape 9-5, more a place where like-minded people get together and build something interesting. If it pays off financially, even better.

charlieirish 3 days ago 0 replies      
Great idea. You might want people to include their timezone so that you can segment groups by when people are available.
AznHisoka 3 days ago 0 replies      
I have a 9-5 and am bootstrapping a startup. Unfortunately my startup may compete with yours :)
westonplatter31 3 days ago 1 reply      
http://jfdi.bz/ - $20/month. I'm giving it a try.
Ask HN: What do you think about our last HN Search update?
23 points by jlemoine  4 days ago   26 comments top 14
gruseom 4 days ago 1 reply      
Impressive work in so short a time. I'm relieved that you guys are this responsive to the community. You've already addressed one of my personal concerns and hopefully the other (sort by date) soon. Here's some more feedback.

I think the defaults should be "all" (not story) and "forever" (not past week). I searched for something that I knew existed and when it didn't come up, my first thought was that the search was broken. Better to show everything at first and let the user decide how to narrow it down if they want to.

Second, I think you'd be better off if the design looked like HN itself, especially the comment rendering. Not that what you've got is bad, but the user is almost always going to go from HN to search and back to HN. This context switch is already a speed bump. Visual changes, which take time and mental energy to process, add to that jolt. Anything you can do to minimize these cognitive hurdles will serve the #1 goal, which is to get the reader the info they're looking for with minimum overhead. In this respect, the old HN Search is more usable, precisely for being unoriginal.

Thus, if I were you, I would drop the thumbnail pictures of the stories (it's cool that you can do it, but they don't really add anything and are distracting); would not include the story info with each comment (rather, I would do just like HN does and have a bit of text that says "| on: the-story-title-linked-here"), would make the text rendering look much like HN, and would follow HN's lead in having a text- and information-density-centric design. I'm not saying that HN's design is the global optimum (though I think it is better than most attempts to improve on it), but rather that HN search is an extension of HN and therefore not the place to innovate on its design. You're in a counterintuitive position for a startup with this project, since calling too much attention to yourself in this context is bad. You want to be unobtrusive and have the thing just work; the HN community is smart enough to figure out who you are from that and like you better for it. (That said, you shouldn't obliterate yourselves. For example, I like the visual cues that say "Page 1 of 10, got 237 results in 3 ms" and "1,244,896 stories and 5,289,181 comments indexed". They are unobtrusive and impressive.)

Lastly, a bug in Chrome: if I search for something, scroll to the bottom, click on "about", then hit the back button, the original results page freezes (i.e. refuses to scroll).

mjn 4 days ago 2 replies      
Main two bits:

1. Is there a way to force off the typo correction? I tried "smartos" both with and without quotes, and either way I get results for the word "smartest", which aren't about SmartOS (an operating system derived from OpenSolaris).

2. In terms of CSS/font/layout, honestly I like hnsearch.com's results better. I think part of it is that I use hnsearch.com as an alternative HN interface, not just a search engine, and it's usable for that purpose, in part because it looks more like regular HN.

The issue for me more generally is that hnsearch.com is almost perfect as an HN search engine, as far as I'm concerned. It does what I want, does it fast, with good coverage and a usable interface. So my advice for alternatives would tend towards just "yeah, make it more like that", which is maybe not the most useful commentary.

Trufa 4 days ago 1 reply      
Very nice, but it seems to break the back button. If I type and search something, it seems to query on every letter, so when I go back I have to do it once for each letter I typed. I think the queries could be grouped smarter.


Good work!
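One way the per-letter queries could be grouped is sketched below. This is purely illustrative and not Algolia's actual implementation; the `#q` element id and `/search` endpoint are made-up names. The idea is to debounce keystrokes and use `history.replaceState` so each letter doesn't add a history entry:

```javascript
// Sketch only: fire one query per pause in typing, and replace (not push)
// the history entry so Back skips all the intermediate per-letter states.
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

const runSearch = debounce((query) => {
  // Hypothetical endpoint; the real search API may differ.
  history.replaceState(null, '', '?q=' + encodeURIComponent(query));
  fetch('/search?q=' + encodeURIComponent(query))
    .then((r) => r.json())
    .then((hits) => console.log(hits));
}, 150);

// Wire up only in a browser; the helpers above can run anywhere.
if (typeof document !== 'undefined') {
  document.querySelector('#q')
    .addEventListener('input', (e) => runSearch(e.target.value));
}
```

With this pattern, typing "search" produces at most one request and one history entry instead of six of each.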

neverland 4 days ago 0 replies      
1. Negative keyword search is not available. Can we please add that back in?

2. Having a date filter is better than none, but I STRONGLY preferred the old one, where you could sort in descending order from most recent to oldest. The current date filter still leaves things out of order even though it restricts the time frame while trying to mimic Google's filter.

3. Exact match search doesn't work either. It currently works like phrase match: as long as the keywords exist, results show up, rather than only matches in the exact order specified. Perhaps create two separate ways to do this kind of search.

petercooper 4 days ago 0 replies      
Had an interesting match for a vanity search: "peter cooper" matched "et cetera" .. combining both a first character typo and prefixing :-)

Also, one bug. If text in a URL gets highlighted and it's linked, the link URL itself picks up the EM tags.

hardxxxtarget 3 days ago 0 replies      
The user experience is not as good as the previous HN Search. I couldn't figure out initially that the filters were located underneath the search bar. Also, the thumbnails don't look good to me, as I'd prefer the compact view where I could see more search results on my screen. Lastly, I'd prefer a contrasting highlighted background color for the search text instead of having it bold, as it would stand out more and draw the user's eye when searching.
thrush 4 days ago 1 reply      
There are already a lot of comments and feedback on this previous thread: https://news.ycombinator.com/item?id=7118496
dclara 4 days ago 0 replies      
Good job! I've learned a lot from it.

If I understand correctly, it is for enterprise search, not for web search, correct?

The response time is really impressive, especially with the new sorting-by-date function. To me, sorting is hard, if not impossible, with a NoSQL database. That's probably why it wasn't available in the initial release.

This implementation reminded me of the CTF algorithm, which needs to match the input query against a file. The reason I thought it's not for large volumes of queries is:

1. Each keystroke is an AJAX call, which could be lightning fast when the query volume is low. But JavaScript can run slowly on mobile phones. In this case it's probably not too bad, because the AJAX call is very simple: fetch the results from the server.

2. After receiving the request, on the server side there are only three steps to go before the results are returned:

- Initialize a new index (actually it's not a new index, just a new search)

- Set criteria for the order of the attribute sets

- Call search and return

Of course, there are more detailed steps under the search API. The majority of the work is done on the search engine server side to keep crawling and updating the indexes.

When the request volume becomes huge, the real-time response may slow down due to the file size.

I appreciate this tool for helping me find things quickly on HN.
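The three server-side steps described above (start a search, set ranking criteria, run it and return) can be illustrated with a toy in-memory index. This is a sketch under made-up data and function names, not Algolia's actual API:

```javascript
// Illustrative only: a tiny in-memory "index" of stories.
const index = [
  { title: 'Show HN: Fast search', points: 120, created_at: 3 },
  { title: 'Ask HN: Search tips', points: 45, created_at: 1 },
  { title: 'Fast search engines compared', points: 80, created_at: 2 },
];

// Step 1: a new search over the existing index (not a new index).
// Step 2: a criterion controls which attribute orders the results.
// Step 3: run the match-and-sort and return the hits.
function search(query, { sortBy = 'points' } = {}) {
  const terms = query.toLowerCase().split(/\s+/);
  return index
    .filter((doc) => terms.every((t) => doc.title.toLowerCase().includes(t)))
    .sort((a, b) => b[sortBy] - a[sortBy]);
}

const hits = search('fast search', { sortBy: 'created_at' });
console.log(hits.map((h) => h.title));
// → ['Show HN: Fast search', 'Fast search engines compared']
```

The heavy lifting in a real engine is, as the comment says, in building and refreshing the index; the per-query work is just filtering and ranking over structures prepared ahead of time.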

mindprince 2 days ago 0 replies      
Couple of things:

1) www.hnsearch.com had three sort options: relevance | date | points. It would be great if the new search also had all three options.

2) Please make your legacy style exactly like the old one. That style matched the HN style perfectly. Right now there is an extra line which links to the HN thread (we are used to clicking the comments link for that), and the way comments are displayed doesn't feel right.

read 4 days ago 1 reply      
I wonder how much http://hn.algolia.com will be used if it continues not to be linked from http://news.ycombinator.com
deanclatworthy 4 days ago 0 replies      
Looks great. Now if only HN had a responsive design too, so I could read it properly on my phone.
negrit 4 days ago 0 replies      
So far so good: it's fast, accurate, and readable.

The response time is actually really impressive!

tiboll 4 days ago 0 replies      
Great work Algolia!
fulmicoton 4 days ago 1 reply      
Awesome! Could we get indexing of URL chunks?
Ask HN: Why are domain registrars offering pre-registrations for the new gTLDs?
2 points by dylanlacom  1 day ago   4 comments top 2
ohashi 15 hours ago 1 reply      
Probably depends on the registry. But two scenarios I see likely as happening:

1. Registry has a deal with registrars to sell premium domains at inflated premium rates (eg .tv domains)

2. Once the sunrise period for TM holders is over, they are opening the registry, and all the registrars offering pre-registration are treating it like the expired-domain drops: hammering the registry to try to secure domains the millisecond it opens up. People bid/pay registrars to do this on their behalf.

cstrat 1 day ago 0 replies      
I also don't understand how this will work...

Multiple sites offer pre-registration for domains. Is it a first-in-best-dressed situation? Is it a gamble? Should you pre-register with multiple registrars?

       cached 31 January 2014 05:05:01 GMT