This has worked for me with tens of thousands of customers and millions in billings, and I've never had to worry about being locked into some payment company's proprietary system. Whatever pricing scheme you initially come up with for your app probably won't be right. You might have 6 plans when you only need 2. You might find out you were charging a flat monthly rate when you really need to be charging per widget or per user or per server. The more you rely on someone else running your billing, the harder it will be to experiment and find the right way to do billing for your customers.
You can avoid being locked in to a payment processor for storing and charging payment info too. I use Spreedly (https://www.spreedly.com) which provides payment card tokenization and a single unified API for over 100 payment gateways. I can use Braintree today, Stripe tomorrow and ShinyPaymentStartup next year without changing any code or re-collecting billing info from customers.
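The practical upshot is keeping your billing code vendor-agnostic behind a thin interface, so the gateway can change without touching the call sites. A minimal sketch (the adapter names and return values here are hypothetical, not Spreedly's actual API):

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ChargeResult:
    success: bool
    transaction_id: str


class PaymentGateway(Protocol):
    """Any gateway adapter only needs to charge a stored card token."""
    def charge(self, card_token: str, amount_cents: int) -> ChargeResult: ...


class BraintreeAdapter:
    def charge(self, card_token: str, amount_cents: int) -> ChargeResult:
        # Call Braintree's API here; details omitted in this sketch.
        return ChargeResult(True, "bt-123")


class StripeAdapter:
    def charge(self, card_token: str, amount_cents: int) -> ChargeResult:
        # Call Stripe's API here; details omitted in this sketch.
        return ChargeResult(True, "st-456")


def bill_customer(gateway: PaymentGateway, card_token: str,
                  amount_cents: int) -> ChargeResult:
    # Billing logic depends only on the interface, never on a vendor.
    return gateway.charge(card_token, amount_cents)
```

Swapping Braintree for Stripe (or ShinyPaymentStartup) then means adding one adapter, not rewriting billing.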
Connections to Facebook, Google, Amazon, etc. should go un-tunneled. It's good to feed the beast with data it already owns anyway.
Connection to porn websites (say by your 16-year-old cousin who came to stay at your place for the weekend) and other ethically debatable content should be routed via Tor. Connections to torrents should be routed via VPN.
I understand that some people here prefer their personal VPN over a VPN provider like TorGuard, etc. There's no universally good or bad solution here; everything depends on the use case. A VPN provider handles thousands of encrypted connections and gives you dozens of exit nodes, each routing thousands of different connections. That makes it much harder to target and isolate a single user, even for a mid-tier state-level actor.
Conversely, if you route all your connections through, say, a DO droplet, you control the droplet, but you have a single exit point for all your connections... It's extremely easy for a state-level actor to target your connections.
Of course there are thousands of schemes one can choose. Everything depends on the use case.
The bill was actually enacted to prevent privacy rules which hadn't even gone into effect yet, which means that technically, ISPs would have already been able to sell such data. However, the consensus seems to be that ISPs were only selling "anonymized" data, and this move will embolden them to push further into invasive practices.
You should remove all CA certificates installed by software that appear as if they were "installed by you".
Some AV software does MITM, sending you a "trusted" certificate signed by its own CA while acting as a proxy between you and the actual site.
Theoretically anybody could do the same on the network side transparently.
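One way to notice this kind of interception is to look at who actually signed the certificate your machine receives; an AV or network middlebox will present its own CA as the issuer instead of a public one. A rough standard-library Python sketch (a quick inspection helper, not a complete MITM detector):

```python
import socket
import ssl


def flatten_name(rdns) -> dict:
    # getpeercert() encodes X.509 names as a tuple of RDN tuples, e.g.
    # ((("organizationName", "Example CA"),), (("commonName", "Root"),))
    return {key: value for rdn in rdns for (key, value) in rdn}


def issuer_of(hostname: str, port: int = 443) -> dict:
    """Return the issuer fields of the certificate a host presents to us."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return flatten_name(tls.getpeercert()["issuer"])
```

If `issuer_of("example.com")` shows your AV vendor rather than a public CA, something on the path is re-signing traffic.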
Also if you don't trust your ISP, you shouldn't use their DNS servers. I don't know about commercial integration between DHCP and DNS requests to track people but it is feasible with some work.
For DNS, just grab a Raspberry Pi and set up a DNS resolver. You only need the right root zone seed file. Just don't make it available to the whole internet.
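For the curious, a minimal unbound configuration along those lines might look like this (the LAN address, subnet, and root-hints path are assumptions for illustration):

```
# /etc/unbound/unbound.conf -- minimal recursive resolver sketch
server:
    interface: 192.168.1.2                     # the Pi's LAN address (assumption)
    access-control: 192.168.1.0/24 allow       # answer queries from the LAN only
    access-control: 0.0.0.0/0 refuse           # refuse the rest of the internet
    root-hints: "/var/lib/unbound/root.hints"  # the root zone seed file
    hide-identity: yes
    hide-version: yes
```

Point your router's DHCP at the Pi and clients resolve against the roots directly, bypassing the ISP's resolvers.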
Also important to note -- existing regulations still in effect prevent selling un-anonymized data. Selling someone the browsing habits of a particular identifiable customer is not allowed, never has been.
Will they actually do it? Well, can they make money from it? Do you trust Comcast to be a good steward of your privacy in the absence of a legal requirement to do so? Comcast did a hard pull on my credit when switching my account to a new address because they were too incompetent to update it and ended up creating a second account as a new customer for me. Comcast is my only option of ISP, as it is for many many other apartment dwellers and many single-family homes as well.
I trust ISP companies more than I trust VPN companies because ISPs are in the USA and are much larger (so they engage in less risky behavior); at the very least, they have to sell data in aggregate and scrub PII.
In my opinion, VPNs help more than hurt privacy, assuming you choose a reputable one to use.
If a person wants anonymity, then go for Tor or Freenet.
The VPN comparison chart is the best reference I've seen on the dozens of factors one might care about.
For myself, the consistency was awesome. I applied it to work, leisure, studying, socializing - everything. Before this, I would binge. Go out for hours at a time, play games for hours at a time...I didn't track/measure anything.
I started having more time to accomplish my goals (because I was deliberately making time to do so) and I also got my hobbies/leisure activities under control.
But about a year ago, I would get frustrated when the timer went off just as I was getting into my state of flow. I decided I wanted more time in my flow state, so I decided not to follow the technique when I was doing work-related things.
Now, I only use it when I know I'm going to do something leisurely - mostly video games. It now serves as a way for me to avoid getting into the "flow" for things that I really should be cognizant of, while freeing up time to be in the flow of things I'm passionate about.
I've always found the estimation bit to be tricky, because its viability depends hugely on 1) the kind of work you're doing and 2) whether you're running an ongoing planning deficit or not. To point 1, the less well-defined your work is, the harder pom-level estimation is. For example, consider a task that boils down to "learn how to apply a new, complex set of APIs to solve problem X", but might just be written as "implement wireflow 2a". At some point, despite planning effort, you end up with tasks that are indivisible atoms with high variability. I don't necessarily feel it's worth putting a huge amount of time into learning to precisely estimate those, if it's even possible. (I'd love to hear counter-examples from folks, though.) My personal approach is really to try to bubble up overall variability/uncertainty to a higher level than counting-the-poms, then mostly use poms to maintain focus/velocity.
As to point 2, part of the Pomodoro Technique is supposed to be doing a planning pom at the start of the day, and that's really the minimum. Sometimes that's not sufficient (e.g. you have higher-level planning/workflow problems), or sometimes your planning skills just need work. If you're at least doing your planning pom, that gives you time to reflect upon and begin to address these higher level issues.
1) I have a maximum sustainable rate of about 8 pomodoros (4 hours) high focus work per day. This can be temporarily overridden, but I work much less the following days.
2) Bimodal days seem to work best for me. One big block in the morning, followed by a long-ish lunch, and then another big block in the afternoon. Similar to PG's essay on maker vs. manager schedule.
3) Having many small unrelated tasks is inversely correlated with number of pomodoros completed. Usually I get the most pomodoros in when I have 2 big tasks for the day.
On a productive day, I can get about five 50-min pomodoros.
The technique helps me A LOT psychologically. Once that timer starts, I do not do anything else from what I have named my pomodoro. It helps me focus my attention and keeps me from drifting off to HN or reddit or whatever.
That said, the standard pomodoro time of 25 minutes is WAY too short for programming tasks. The technique itself is solid though.
I have lots of fires to put out at a moment's notice. I think it would work well for people who are able to focus on one project at a time.
That's basically it. I'm counting the number of reasonably solid starts into "flow" I make each day. That seems to be enough to make me a lot more productive.
It's incredibly easy to procrastinate when we have Facebook, HN, WhatsApp, email - whatever, but I know if I start a pomodoro it's only ever 25, 22, 17, 9, 3 minutes until I can 'reward' myself with the aforementioned for a few minutes. It's easy to push through knowing I only _have_ to do (at most) 25 minutes more work. And once I'm rolling, it's a lot easier to continue.
I use Harvest throughout the day to check how much _actual_ work I'm getting done. This helps me make better estimates of times and costs (as well as see the days I'm most and least productive).
Aside from that, I don't follow anything about Pomodoro specifically.
One issue with Pomodoro is taking one session, then one break. In my opinion, one should flex this "rule" when in flow. It's more valuable to do, e.g., 3 uninterrupted sessions followed by a break of 1-3x your normal break length than 3 * (work, break).
I also prefer 15-minute blocks.
B.F. Skinner, the esteemed psychologist celebrated for his work in behaviorism, reinforcement, and conditioning, is known for using a similar approach.
Nowadays I use an app called Forest; it provides an element of encouragement by showing how many trees I have planted and how I can earn more :-D
- Interruptions definitely affect my feeling of accomplishment and may affect my results. I find even the smallest external interruption or moment of weakness triggers my internal critic, resulting in an arguably more detrimental cascade of self-criticism. An avalanche of blog posts argue that these minor interruptions dramatically impact my productivity for other reasons. I totally buy this anecdotally but won't attempt to justify it since I suspect most people here agree anyway.
- Sometimes 25 minutes just doesn't cut it. I especially chafe at the forced 5-minute break during my 1.5-hour period pre-standup, where I haven't eaten at all and am caffeinated. I know the creator of the technique and blog authors like Martin claim that I should be able to slice all of my tasks such that I don't need longer than 25 minutes, but I disagree. While I enjoy holding the state of a program in my head and occasionally finding the zone, I'm willing to acknowledge Martin's overstated but partly true point that the zone can induce tunnel vision and the downsides that go along with that. However, I've also observed that 3 break-interleaved chunks of work can zap my energy more than one large block of work would have.
- How do people deal with waiting for things that take longer than a minute? I've recently been working with jobs that take multiple hours to run. It's difficult to schedule my Pomodoros such that I have a free one to check the result of the job. Even worse, validating the job can take anywhere between a minute, if it succeeded, and hours, if it failed. This makes budgeting hard.
- Should I budget Pomodoros for checking email and Slack or include that in my breaks? Ideally, I'd use breaks to recharge and not context-switch between communication platforms. But while I'm not an always-on, ten-minutes-to-respond-to-any-email guy, a consistent multi-hour time-to-respond to any communication is a recipe for face-to-face interruptions in the age of the open office.
- Should I include lower level planning in my break or 25-minute chunk? I often find going from high-level task statement to knowing exactly what I need to do requires a few minutes to orient myself. I'd be fine including this in my Pomodoros except this orienting can involve firing off a quick question to a colleague or searching through my emails / Slack messages. Maybe I just need to get better at gathering requirements beforehand...
To be clear, I'm not putting down the technique. I suspect any time management strategy would reveal the issues I described above and that we simply don't hear about the pains of actually implementing a system beyond a week of casual usage (see any blog post with a just-so title like "I Adopted <> and It Changed My Productivity Forever" as an example).
As I look back at my bullets, I've realized I'm mostly looking for wisdom from some seasoned Pomodoro veterans. I see one or two people on this thread who fit this description, but overall I'm disappointed with the ratio of people who want to sell the technique or have tried it to people who have used it consistently for months or years. This seems to be a common problem among productivity techniques.
Meta-comment: I recognize this comment could be condensed, and "if I had time I would have written a shorter letter" (http://quoteinvestigator.com/2012/04/28/shorter-letter/).
Quick note, it's "solutions architect" not "system architect".
AWS certs are very focused on AWS products, so they may not be helpful for Google Cloud. The developer/sysops -> devops path may be the better of the two, as it focuses on how to get stuff done, while solutions architect is more about higher-level knowledge of putting AWS products together.
It's for when a question comes up like "Hmm. How can we send a message that allows a yet-to-be-decided number of subscribers to act on that message? I know! Azure Service Bus!".
It's not that expensive to get the certifications so I usually say go for it. But it can be really dry and not as "real worldy" as you might expect.
In terms of what to shoot for, a) recurring revenue (BCC didn't have it and _believe me_ did that radically raise the savviness bar required on the customer acquisition front), b) B2B where something is important enough to need but not enough to require a long sales cycle or urgent support if the thing hiccups, c) a well-understood marketing and sales model that you can semi-automate.
The third thing is probably hardest to build, and as your time scale gets longer, it is the most likely part to require your sustained attention to improve. (I have no information how BCC is doing these days but I rather suspect the original organic SEO strategy which served me well for 5+ years will not continue operating unaltered for 20.)
In terms of where these folks hang out: business owners who have priorities in their life other than the business are still business owners. I think the great mistake in the "passive income" community is failure to treat running a business like running a business; it becomes aspirational for lots of folks who have neither the skills nor the inclination to run a business nor, unfortunately, the desire to change either of those two things.
This makes "passive income" spaces into a whirlwind of depression and hucksterism. Meanwhile, if you ask around the table at MicroConf, you'll find some folks who had a really good year and worked really hard for it and you'll find some folks who phoned it in while taking care of parents, getting married, throwing themselves into a home-building project, starting a new business, etc.
MicroConf, BaconBiz, and DCBKK are three conferences which all had folks who were at many points along the spectrum here. All have online ambits to them, too. (I suppose one could run a not-awful conference about software businesses in maintenance mode but if you have one then flying out to a conference would absorb a few weeks of maintenance mode and be probably a lot more boring than going to MicroConf.)
Q: Why are you bothering with BCC when consulting is so much more lucrative? (I can think of a few good reasons, but I'd like to hear what your reasons are.)
A: I really enjoy being a product guy. BCC has a very desirable property in that it mostly works in my sleep. Consulting is quite lucrative and intellectually engaging, but it often disrupts my life in ways that BCC does not: for example, flying off to $BIG_CITY_ACROSS_OCEAN for a few weeks is wonderful once or twice a year but would get tiresome if I were doing it every month. I very rarely get tired of BCC, and with the exception of a trivial amount of support all of my work for it is at my absolute discretion to schedule. I mean, my little brother is graduating college this spring and, without even looking at the calendar, I can say "Sure, no problem, I'll be there. Tell me the day sometime."
Money is also not a huge motivator for me. I like it, don't get me wrong, but after I've got the rent and necessities covered (oh look, bingo) money generally has to be the icing on the cake to motivate me to do something. (Shh, no telling the consulting clients.)
I've personally always failed at building anything that people would like to use enough to pay for it, and I've seen many people waste (or invest) about 10x what they would put into consulting to make less than minimum wage (or even 0).
Yet, I'm still trying.
For instance, a good deal in business is one where you sell a good for more than it costs.
If you were a freelance developer, that might mean charging a large amount for a feature of high business value because you know the client will pay, even though it doesn't cost you much to do.
Another aspect is information asymmetry. In the above instance, lying about how many hours the feature would take to implement would be unethical for sure. But not saying how long it would take - charging for features and keeping the cost to implement to yourself - probably not.
Then we get to practices like subscriptions you expect people will forget to cancel (see every consumer-facing SaaS), or building a platform with the intention that it will be hard to leave (no export or API = a moat = Facebook).
Are these unethical though? I think mostly not. You need a business model, and business models mean not running things like you're doing a favour for your uncle. I think the line is where you mislead or get your customer to do things potentially against their interests without telling them they are taking a risk.
I would argue that you don't ever need to compromise your own ethics to be successful.
To have a career and your basic needs (plus some) met? No.
To be an industry leader/CEO of a wealthy company? Evidence suggests it wouldn't hurt to be a bit "flexible" (http://www.cnbc.com/2017/03/21/apparently-psychopaths-make-g...)
How you deal with this reality is up to you.
Moreover, reading code is a skill, and much of being a good developer involves working with other people's code, which means being able to parse it efficiently.
There is an analogy to writing in a human language. If all you do is read for 10 years, and then start writing, you'll almost certainly be a bad writer. You learned to read and extract information (and get enjoyment), but you almost certainly didn't understand how writers achieved those goals. This is even more sure for programming. Writing code over and over again helps teach you when and why to apply certain rules, because code has to work, not just look pretty.
But unlike writing human languages, there is far more to learn about computer programming - it's not just grammar and style, there's all levels of design, and architecture. So you read articles and books, but those are usually very hand-wavy. By reading the code for very large programs and then trying to copy what you see, you learn.
Also unlike writing human languages, you start by writing code. Reading code before you've written any is nearly pointless - you just won't understand what you are looking at.
It also matters what code you are looking at, just like it matters what you are reading - but again, only once you get good. So until you think you are at the median level for software engineering, don't worry too much about what you read.
The cool thing is that there's so much code available to read now, as compared to 40 years ago when programming first started becoming a real thing. And especially with Google and a few others not just employing hordes of great engineers but releasing a lot of their source as open-source. Read the Chrome source, for example. Or the Linux source.
-It lets you work effectively as part of a larger team or project
-Gives you a much larger surface area of material to learn from: you can learn new patterns and libraries by seeing how other people use them instead of having to find documentation or tutorials
-Sometimes there are bugs in libraries (open source or otherwise) that you depend on. You will find these much quicker if you are good at reading code.
If you are inclined towards self-motivated improvement and if you have a good internal monologue where you are able to be real honest with yourself and your own failings and limitations then reading good code opens up the possibility that you can apply the things you take away from it to your own code and become better.
Good code isn't a golden ticket; it's always on you.
I see one interpretation of better as "can copy, repeat, fix, comprehend, maintain." And the other as "comprehends and exceeds - often without explicitly 'reading' that which is comprehended in the first place"
The first group will argue that you have to read. The second will argue that it's optional.
I am in the second group but would argue in favor of the first. It never hurts to stand upon the shoulders of the giants who came before you.
That being said, I never read others' code unless it's to fix it.
The way I like to do this is to think through how I would implement something. Then, reference code that does something similar to the problem I was trying to solve. Then, I compare their solution to mine.
What does it mean to be a good developer? Do you simply want to write your own black-box undocumented software? Then there is much less advantage to reading other projects' code. On the other hand, if you want to manipulate another codebase, or use another library, reading code is a necessity.
That leaves one final question: Is it beneficial to your own development skill to read others' code? Yes. Proficiently reading others' code is a very beneficial skill, even if you do not intend to write code to be read by someone else. Not only will you get better at reading your own code, you will learn idioms and practices that will improve your comprehension, and writing skill.
So reading other code can surely help. I have seen some Github projects where I would have loved to have used the project for personal use, but I just couldn't understand the coding style, or have had to incorporate my own coding style because I could not adopt the code as it was in the program.
You don't read it like a book, you read it to modify it and therefore you just understand it.
For a new codebase, I start off with a high-level design and one feature which touches the important parts. Maybe there is someone more knowledgeable about the codebase around to ask, and also decent documentation.
If I am in a Lisp-like live-editing environment, this tracer is fantastic. Otherwise, in non-live languages, I am a dead duck in the circus of breakpoints and debuggers, stepping through the code.
You may also have a look at this: Peter Seibel, the author of Coders at Work, talking about decoding code; and, from the same book, a very interesting discussion on reading code, which I have reproduced here.
Seibel: I'm still curious about this split between what people say and what they actually do. Everyone says, "People should read code," but few people seem to actually do it. I'd be surprised if I interviewed a novelist and asked them what the last novel they had read was, and they said, "Oh, I haven't really read a novel since I was in grad school." Writers actually read other writers, but it doesn't seem that programmers really do, even though we say we should.

Abelson: Yeah. You're right. But remember, a lot of times you crud up a program to make it finally work and do all of the things that you need it to do, so there's a lot of extraneous stuff around there that isn't the core idea.

Seibel: So basically you're saying that in the end, most code isn't worth reading?

Abelson: Or it's built from an initial plan or some kind of pseudocode. A lot of the code in books, they have some very cleaned-up version that doesn't do all the stuff it needs to make it work.

Seibel: I'm thinking of the preface to SICP, where it says, "programs must be written for people to read and only incidentally for machines to execute." But it seems the reality you just described is that in fact, most programs are written for machines to execute and only incidentally, if at all, for people to read.

Abelson: Well, I think they start out for people to read, because there's some idea there. You explain stuff. That's a little bit of what we have in the book. There are some fairly significant programs in the book, like the compiler. And that's partly because we think the easiest way to explain what it's doing is to express that in code.
Overall, I try to read in small pieces with some clear goal. Interestingly enough, I don't know if we can read code from start to end like a book; maybe literate programming makes that possible. But yes, reading code is hard, and that is simply because we don't have good code-reading tools yet, even in the 21st century!
EDIT: the link is to the "Code is not literature" article.
Seeking Alpha is a great site you might look at. They also have mobile apps and user-generated content. seekingalpha.com
We just received a new mainframe from IBM. Big beast, big power consumption.
My primary task was to be Sysadmin of LPAR/instances of Linux inside the new IBM.
The new mainframe was unpacked, and the power connectors had to be "modified" to the local standard. You know.. You ask your local contractor to read the manual in English and hope for the best.
There were two people at the data center that day: me and the IBM tech representative.
Well, I was checking some blade servers, looking at the robotic library, when I saw him plugging it in 5 meters from me.
I just heard a BANG. And for the first time, I saw an electrical fire, like a dragon spitting green fire. I shouted for him to stop and move away. By instinct, he unplugged it (I had grabbed a chair to throw at him in case he got stuck to the electricity).
It stopped. And everything went pitch black. Lights on. He looked at me. I looked behind, and there they were: 200 servers down. All down. It had even broken the APS system.
I walk to the extension. Dial 28 to my co-worker and say:
"P..., come here. Serious! Get everybody here... Big problem... BiiiiiiiiiiiiiG." We had to start everything in the right order (SAN storage, ADs, servers, SQL), but we knew it.
Eight minutes later, the electricity company appeared. The IBM tech had to go to the hospital with a cardiac arrest brought on by the stress.
The IBM tech guy got lucky and is alive. I got a good recommendation for keeping cool in emergency situations.
To help us troubleshoot this, my boss asked me to program the unit to give a missed call to the server every hour. If we got a missed call, we knew that unit was still working. In countries like India, giving a missed call is a zero cost way to communicate. For example: You would pull up in front of a friend's place and give them a "missed call" to let them know that you are waiting outside etc.
Anyway, I implemented the logic and we sent off our field techs to intercept trucks at highways and update the firmware.
The way I implemented the logic was that the unit would call our server's modem number every hour, at the top of the hour. No random delay, nothing. So, soon after that, around 50 units tried to call our server at the same time. Remember, the clocks in the units are run off GPS and are super accurate. This caused our telecom company's cell tower BTS to crash. Cell service in my office area, a busy part of Bangalore, was down for a whole 2 hours.
I was called into the telecom company's head office for their postmortem. They didn't yell at me or anything. They were super nice. In fact, when I finished explaining my side of the story, one of their engineers opened his wallet and gave a hundred rupees to another guy. Guess they were betting on the root cause. From what I understand, they escalated the bug to Ericsson who manufactured the BTS and got it fixed. For my part, I added a random delay and eventually removed that feature.
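The fix described here, adding a random delay so GPS-synchronized units don't all dial at :00 sharp, can be sketched like this (a hypothetical helper, seeded on a unit ID so each unit keeps a stable slot across reboots):

```python
import random


def next_call_offset(unit_id: str, max_jitter: int = 300) -> int:
    """Seconds past the top of the hour at which a unit should dial in.

    A per-unit random delay (jitter) spreads the calls over a window so
    thousands of GPS-synchronized clocks don't all hit the tower at once.
    Seeding on the unit ID makes the offset deterministic per unit.
    """
    rng = random.Random(unit_id)
    return rng.randrange(0, max_jitter)
```

Each unit then fires at `hh:00 + next_call_offset(its_id)`, and the load on the BTS is smeared across five minutes instead of one second.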
The owner was a very impatient, youngish founder who knew just enough about HTML/CSS to have the dangerous notion that he knew something about programming. Additionally, he was obsessive-compulsive to the point where, when he saw that different browsers didn't render the HTML/CSS EXACTLY identically in all cases, he had me redo ALL text on the site as IMAGES!... because those would look the same regardless of what the browser supported.
Now, the payment processing part. Since he was a cheap bastard and didn't want me spending any time on actual versioning, managing code, doing deployments, testing, etc., we only had one development/test environment: PRODUCTION.
Yup, I'd connect my trusty VisualStudio IDE directly to the file system on the production IIS webserver and code away. Whatever I had coded when I hit save...was live.
No issue. Since we had no monitoring, logs, analytics or anything else unnecessary like that, he never could tell how many live transactions were lost because I had forgotten to close some tag, looped once too many times, mistakenly truncated some part of a card number, swapped the first name and the last name field accidentally or mistakenly told the payment gateway to CREDIT rather than DEBIT the charity's account (yes, that one did happen...and he did notice).
I would come home a nervous wreck every day just wondering what kind of pissed off customer calls I'd be hearing about the next day for something I had done that day.
Turns out that the only difference between testing and opening night was that the front doors of the theater were open, and it was a windy night. The projector has a "wind vane" style airflow sensor in its exhaust vent to check and double-check that the fans are running correctly. The sudden change in airflow when the control room door was open was enough for the airflow sensor to drop and trigger a panic shutdown. Since the projector also had fan sensors and temperature sensors, the manufacturer okayed us to bypass the airflow sensor.
Early in my career, a fellow team member working with me at a fairly well-known Fortune 500 company was testing out a process where, using Microsoft Forefront Identity Manager, we would clean up inactive accounts and shuffle things around between various auth systems. Since this service was used to sync our prod AD and test AD, there was a "connector" into prod. From this single FIM instance you could hit the dev, test, and prod ADs. Sadly, there had never been any rules put in place to prevent a push from test -> prod.
On the day that this co-worker was doing some testing, he somehow managed to push a change that he thought was going to our test AD but instead went to the prod AD. This change ended up wiping out quite a few prod AD accounts. As in totally deleting them. All of our systems, including the phone system (not sure why), were tied into AD. All of a sudden, people on our floor were saying they couldn't log in to anything or send e-mail. Soon we found out that the CEO of the company was feeling the same pain and, on top of that, was not able to receive or make phone calls. My co-worker took a look at the process he was running and realized he had screwed up big time. He killed the process, but not before about half of our production AD had been wiped out.
Like most backup systems, restoring our AD from a backup had not been tested in a while. Between figuring that out, since it naturally didn't work as designed, and having to get the backups from our off-site backup company, most of the company was unable to do anything for about 8-10 hours. This included remote sites, field techs, customer support agents, etc.
What sucked is that this co-worker was one of the top members of our team and had been handed this FIM environment that somebody no longer with the company had built. On top of that he was not provided any sort of formal training and was really learning on the job. They let him hang around for another week or so and then let him go.
The project was late and there was a daily-charge penalty clause in the contract with the customer, a very large company. A long enough delay could wipe out all the profit from the project. So engineering management told the programmers to suppress all signs of runtime bugs, no error messages, no halts, just slog on, bugs and all.
I objected; nobody paid attention. For my sensor, I had it scream bloody murder (on the diagnostic console) for every runtime problem it found, so I could fix it. The rest of the team followed instructions.
My unit was debugged, up and running, a year before everybody else's. If the whole project had been ready then, the profit would have been reasonable. In a whole-team meeting near the end, I asked the testing team if they had found any bugs in my unit. They asked, "What's that?" They didn't even know its name. Suddenly, I was a hero.
Spent some time trying to figure out why. No luck.
Spent some more time.
Eventually I realized that the log timestamps were weird - it looked like the query had been sent a response, but the log message appeared 30 seconds later.
I instrumented the servers to measure disk latency. I noticed massive spikes in latency every few hours. Couldn't figure out why. Then someone told me the servers were running on virtual machines with a shared NetApp for storage... and it all came together.
Every few hours a multi-gigabyte file was delivered to each machine. This was a design that had originally been done for physical machines. With virtual machines, 30 copies of a multi-gigabyte file were being dumped to a single NetApp, filling up the file server's memory buffer and making disk latency spike since it was waiting for physical writes.
Meanwhile, the server I was debugging was doing log writes in the main I/O thread, so it blocked on handling requests when this happened.
I went and talked to the team lead for the server. "Oh yeah, we fixed that recently; logging will be in its own thread as of the next release."
Moral of the story:
1. Talk to the people maintaining the software before you spend too much time debugging.
2. Disks do block, don't assume they won't.
3. Changes to operational setups can have significant, hard to predict impacts.
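The fix the team lead describes, moving log writes off the request path, can be sketched with Python's stdlib queue-based logging handlers (file name and logger name here are illustrative, not from the original system):

```python
import logging
import logging.handlers
import queue

# Requests log into an in-memory queue; a background thread drains the
# queue and does the actual (blocking) disk writes. A disk-latency
# spike then stalls only the listener thread, not request handling.
log_queue = queue.Queue(-1)                       # unbounded buffer
queue_handler = logging.handlers.QueueHandler(log_queue)

file_handler = logging.FileHandler("server.log")  # the slow, blocking part
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

logger = logging.getLogger("server")
logger.addHandler(queue_handler)
logger.setLevel(logging.INFO)

logger.info("handled request in 2ms")  # returns immediately

listener.stop()       # drains the queue on shutdown
file_handler.close()  # flush buffered writes
```

Note the trade-off: an unbounded queue means a stuck disk grows memory instead of blocking requests, which is usually the right failure mode for a request-serving process.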
If you want to hear more stories, I'm writing a weekly email with one of my programming or career mistakes: https://softwareclown.com
The datacenter had ~300 servers in it. Not huge, but not small either. The lynchpin in the system is this: neither the battery supplies nor the generator can run the AC or air handler, so when the power is out, everything non-essential needs to come down to maintain sane temps in the DC.
Anyway, my pager goes off in the middle of the night: power outage. The DC is running on battery backup. I hurry into the office to start powering things down as temps are climbing. I start shutting down VMs, blades, and 1/2U servers. About halfway through, the power comes back on, but the AC isn't kicking on (red flag). The air handler will function though, so let's run with that until the AC guy comes out.
I start powering everything back up. At this point, a few co-workers trickle in to help. After about 2 minutes the fire suppression alarm triggers -- 30 sec to evacuate the DC. I glance over at the air handler vent, and it's SHOOTING flames into the DC. We oh-crap the heck out of there just in time to see the suppression system trigger and the door lock closed. I run to the electrical panel and kill the power to the AC and air handler, knowing they were possible sources of the fire. The fire dept. arrives and forces us out of the building. At this point, nearly the entire DC is cranking on sustained power with 0 cooling. It's effectively a locked box. We watch our notification system slowly alert us to servers going down hard from the heat, one by one. VM hosts -- boom. Network switches -- Boom. SAN -- BOOM.
Long story short, we lost a number of servers and restored a lot of data from backup once things were back online. The cause was traced back to the wiring of the air handler motor. When the power came back on, only 2 of the 3 phases came back online. This was enough for the UPS system to operate, but not enough for the AC (wired correctly). The motor on the air handler was 3 phase but installed incorrectly (or something to that effect, it's been years and I'm not an electrician) allowing it to run, but turning it into a ticking time bomb of an electrical fire.
The distribution service used the same reporting system as our ad hoc notification service. Records for scheduled distributions had a field that stored the ID of the scheduled event that prompted the report generation. In this way, the distributions could grab all the output by event ID without needing to know anything about the reports themselves.
The problem: The ad hoc system relied on the same behavior, but since it didn't have actual scheduled events, the programmer who implemented it halfheartedly spoofed an ID based on the current time when the ad hoc notification created report requests. Over time, these spoofed IDs collided with the real event IDs used by the scheduled distributions.
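A toy illustration of why a clock-derived ID is collision-prone (the exact spoofing scheme isn't shown above, so the shape of `spoofed_event_id` is an assumption), plus a collision-resistant alternative:

```python
import time
import uuid

def spoofed_event_id():
    # What the ad hoc path effectively did: derive an "event ID" from
    # the current time. Any two requests in the same tick get the same
    # ID, and nothing keeps the value out of the numeric range used by
    # real scheduled-event IDs -- hence the eventual collisions.
    return int(time.time())

a = spoofed_event_id()
b = spoofed_event_id()
# a == b whenever both calls land in the same second.

# A collision-resistant alternative (not what the original code did):
ad_hoc_id = uuid.uuid4()
another_id = uuid.uuid4()
assert ad_hoc_id != another_id  # 122 random bits; collisions are negligible
```

Even keeping integer IDs, drawing ad hoc IDs from a separate sequence (or a reserved range) would have avoided colliding with the scheduler's real event IDs.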
We entered a bug report. The bug report got closed, because the developers said the module was being re-implemented in a different language, so the functionality would likely be different (read as "broken in different ways").
Time passes. On a lark, the colleague I originally investigated the issue with suggests we do a code review to see whether the bug did indeed get fixed.
The new programmer copied & pasted the original Visual Basic code responsible for the bug into the new Visual Basic .NET project, comments and all.
In the end, of course, a true war story is never about war. It's about the special way that dawn spreads out on a river when you know you must cross the river and march into the mountains and do things you are afraid to do. It's about love and memory. It's about sorrow. It's about sisters who never write back and people who never listen.
As I hinted earlier, the AC system was epic - literally cold enough to hang meat. It originally used several chillers, but even after turning off all but one, it was still FREEZING (well, nearly) in there - there was literally a rack of parkas hanging at the entrance, and you put one on if you were staying more than a couple of minutes.
Not long after the system went in, we got a call from their admins saying there was something wrong with the network, as they couldn't get the Storage Arrays back up - sometimes. Eventually, a pattern emerged: the Arrays refused to restart after they had been shut down for more than an hour or two, but when we took them back to our office, they worked just fine. Turns out the problem was thermal: The room was so cold that the new state-of-the-art HDDs literally didn't have the torque to spin the platters up again against the cold-shrunk tolerances. We checked with Seagate and they said NASA was operating the disks well below their design temperature - they had never expected anyone to do that! The fix: 1) Don't shut the array down for too long, and, 2) if you really have to shut it down longer you'll have to wrap the whole array in plastic to prevent condensation, then take it outside to let the disks warm up enough before bringing it back into the DC and spinning up before the disks almost literally froze up again! This was, of course, duly written up in an official NASA policy and procedures manual. I suppose SSDs were a big win for NASA, as I expect the problem only got worse with succeeding generations of spinning drives....
One of the biggest banks in Europe.
They had just bought (aka "rescued") another bank somewhere in Europe, and they wanted to lay dark fiber for datacenter synchronization.
And they wanted us to manage that network.
After several weeks of laying fiber and setting up all the connections, we started the network manager in the operations center... and couldn't see any device, except the "secondary" network manager on the customer premises, which could see everything.
Turns out the "security" people at the bank had simply put a firewall between the NOC and the network and didn't allow any traffic to go from the NOC to the fiber optic devices.
We asked them to give us access, but they didn't have IPv4 addressing compatible with ours. Why? Because the bank used the entire IPv4 address space internally. All private IP addressing was already allocated, and the NAT ("public") ranges they could use collided with other customers' equipment.
Finally I had to build an ugly "semi-isolated" network island between the bank, the NOC, and some virtualized workstations that could "see" (NATted) the customer devices (which we were paid to manage) and the NOC workstations.
On the plus side, that junk made me write the best documentation I've ever produced, just in case something broke in the middle of the night.
I looked at him and said the installer is ~200MB in size, so 20 seconds is more than reasonable. He started arguing that the network connections had to be at least 100meg links, so it shouldn't take more than 2 seconds. It went round and round until I realized he didn't understand that network links are measured in bits/sec. At this point, he was refusing to listen and started disparaging everyone 'against' him. I gave it one more go and showed him the unit conversion and basic math on file size, rate, and time.
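The unit conversion in question: network links are rated in bits per second, not bytes, so a 200 MB file over a 100 Mbit/s link takes far longer than 2 seconds.

```python
size_megabytes = 200
size_megabits = size_megabytes * 8      # 1 byte = 8 bits
link_megabits_per_second = 100          # the "100meg" link

transfer_seconds = size_megabits / link_megabits_per_second
print(transfer_seconds)  # 16.0 -- so ~20s with protocol overhead is reasonable
```

The architect's 2-second figure comes from forgetting the factor of 8 (treating 100 Mbit/s as 100 MB/s).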
For a while, it looked like he was trying to get everyone arguing against him released since he was an 'architect' and everyone else was engineers and I was just the ops guy. Too bad for him, he didn't realize in the land of inflated titles, I was the security, storage, and infrastructure architect. I just felt it was presumptuous and relabeled myself the ops guy.
Sometime in the middle of the night, I received an SMS: "Datacenter temperature is high: 24C" (yes, Celsius: about 75F).
I pulled up the graphs and could see the temperature spike: in the last hour the datacenter had gone from 16C to 24C, and it was still going up.
Three of the four industrial HVAC machines had failed that night. Nobody knew why. I called the HVAC technician (he would come around 9am), and meanwhile I opened all the windows and doors on the datacenter floor to stabilize the temperature: 26C inside the DC, -6C outside.
Turns out the HVAC systems used water... and someone had broken the pipe insulation, so the water froze inside.
It was fixed in the morning by applying a blowtorch to the three frozen pipes; once we had water in the circuit instead of ice, the HVAC machines made the DC cold again.
Temperature peaked at 27C in the morning (-1C outside). At 28C (82F), the VP of Operations would have called everybody in for a controlled datacenter shutdown, which fortunately didn't happen.
6 months later I had a similar problem with the HVAC. This time water had evaporated inside the pipes, and we had a "normal" panic because both the inside and the outside were hot.
That was my last on-call duty.
Top-level execs had made the decision not to get a backup generator. The one compensation was that we got a manual transfer switch, so we could easily truck in a generator and cooling in case of a planned outage. There was the possibility that we'd be moving at some point, so self-containment was a big thing.
Taking that into account, I suggested getting an Eaton 9390 UPS, with two IBC-L battery cabinets and an IDC breaker/distribution cabinet. (http://lit.powerware.com/ll_download.asp?file=Eaton9390UPSBr...) The distribution cabinet outputs went to in-rack Eaton RPMs (http://powerquality.eaton.com/Products-services/Power-Distri...), and from there to PDUs.
This setup gave us ~45 minutes runtime at normal load, and more if we shut down non-prod. The one time we had an outage (during my tenure there), shutting down non-prod allowed us to ride the outage. I also liked this setup because our only connections to the outside (power-wise) were from the fused disconnect input, and the EPO. In the end, the "single-line drawing" looked like this:
Building fused disconnect --> Manual Transfer --> IDC cabinet breakers --> UPS --> IDC cabinet --> Rack --> PDUs
Outside Generator Hookup ---> Switch              (input/bypass)                  dist. panel      RPMs
Unfortunately, the electrical engineer hadn't seen such a thing before. In the past, the 480/208 transformer was external to the UPS, and that is what the electrical engineer was used to. So the engineer wrote up plans to run an electrical duct from the UPS to the Manual Transfer Switch, and then on to the transformer (in other words, back to the UPS).
I totally missed this mistake on the plans. It was actually caught by the construction crew, who was laying out the ductwork and realized that something looked weird.
In the end, one of the conduits was used, and the other one was just left in place. Luckily our connections from the IDC distribution panel to the RPMs were flexible, because that second conduit got in the way of pretty much everything.
I read a lot of articles about AI in medicine, pretty much anything I can get my hands on. I also read generic tech articles related to everything from Nintendo Switch, Tesla, Brain-Computer interfaces, and other popular media articles.
-How many articles do you read each day? Likely 10+. These aren't high-brow articles, just random blog posts and pop culture tech. I read about 2-3 research abstracts per day in medicine and maybe skim the text of 1-2 articles.
-They're usually related to your job or to some side projects? Usually they are related to my interest in medicine or technology. Sometimes they are related to my job (I work as a part-time developer / data scientist). I also run a small website (https://www.cronote.com). I encountered a number of issues with time-zone switching and the daylight savings change on March 12th. Read about 20 articles having to do with correctly implementing timezones in Python.
-Do you usually read about a variety of topics or it's focused in 2 or 3 topics only? Topics cover a vast span of medicine and computer science. I enjoy computer science more than medicine so it's a 20:80 split.
-Do you usually read during some time of the day or it's usually random? I read whenever I'm behind my computer, usually alternating between work and browsing the Internet. This amounts to ~5 hours per day.
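The DST pitfall mentioned above (the March 12th change) is easy to reproduce; a minimal sketch using Python's stdlib zoneinfo (available since Python 3.9), with the US Eastern zone chosen purely as an example:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

ny = ZoneInfo("America/New_York")

# US DST started 2017-03-12: local clocks jumped from 02:00 to 03:00.
before = datetime(2017, 3, 12, 6, 0, tzinfo=timezone.utc)  # 01:00 EST
after = datetime(2017, 3, 12, 8, 0, tzinfo=timezone.utc)   # 04:00 EDT

# Only two hours of real time elapse, but local wall clocks jump three
# hours -- which is why naive "local time + timedelta" scheduling code
# breaks on this date. Do arithmetic in UTC, convert at the edges.
print(before.astimezone(ny).strftime("%H:%M %Z"))  # 01:00 EST
print(after.astimezone(ny).strftime("%H:%M %Z"))   # 04:00 EDT
```

The usual rule of thumb this demonstrates: store and compute in UTC, and only render in a local zone for display.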
I usually clear my Pocket reading list each weekend, even if there's something I didn't finish reading (which used to happen a lot). I just flush it out, because that helps me determine my bandwidth for reading over a fixed time period.
Though I'm mostly interested in the comment sections of tech/startup-related topics, I also use Feedly's reader count to decide whether or not to read articles on other topics.
The subject doesn't matter. I don't read the "new" stories though, otherwise it would be an even bigger time sink ;-).
Always interested in hearing other people's thoughts; HN has some good reasoning in the comments. I prefer it over watching the daily news at noon :)
I save interesting stories on my side project http://tagly.azurewebsites.net/, which can also show HN comments when you add the tag commentsbyhackernews (it's currently a bookmarking service mostly for myself, but it can do a lot more under the hood).
Eg. : http://tagly.azurewebsites.net/Item/Details?id=49b1ed7e-5d35...
Edit: Example feature: add an article from wsj.com (paywalled) and it will automatically create a link through Facebook, so you can read it (I hate paywalled articles).
If it is the former, I middle-click 3-4 articles a day, and if they are also juicy topics, I middle-click the comments links as well.
If it is the latter, I read tons of articles a day (avg 20), some related to tech, but mostly not. I read in the morning, at lunch (very productive time to read), and after dinner.
Offline: I have subscriptions to dead-tree versions of Time, Harvard Business Review, and Foreign Affairs. I also have 4-5 books on the go at any given time, mostly nonfiction. I go through phases, and my last major one was statistics and category theory.
Online: Slashdot, Reddit, HN, Marginal Revolution, John D Cook, Farnam Street, Quora, and a bunch of data science related blogs. I also read articles on the getpocket.com recommended list, and I find myself drawn to reading articles on The Atlantic.
If there's something I want to read later I send it to Pocket which my ereader supports, so I can read them on my nice portable eink device whenever I have a spare moment stuck in a waiting room or on a bus or whatever.
According to my stats, I've averaged 690 articles read that way in each of the last two years.
To track my article-reading habits, and to follow up on articles in related forums such as Hacker News after I'd read them, I wrote a little PHP browser-based application that interfaces with the Pocket API to help me manage all that.
Naturally I called it Pocket Lint.
I use https://bazqux.com as a RSS reader to keep up with the stuff I actually want to follow. Some gaming sites, LWN, EFF's deeplinks and the blogs of various products my company or I use.
(BTW, I highly recommend bazqux. UI very close to Google Reader, very cheap and with a lifetime subscription option)
If I'm too busy and can't read HN one night, I read the next day starting from "?p=10"; if I miss two days, I start from "?p=15", and so on. That query has a varying limit though: going past the limit gives no results. In the past I've gotten a successful request up to "?p=25", but today the limit seems to be just "?p=10"; most times I've seen "?p=15" work.
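The catch-up scheme above (roughly five extra front pages per missed day, capped by whatever limit the site currently enforces) can be sketched as a small URL builder; the 5-pages-per-day figure is my reading of the comment, not anything official:

```python
def catchup_pages(days_missed, pages_per_day=5, limit=10):
    # Miss one day -> start from p=10, two days -> p=15, etc.,
    # but never go past the (varying) limit the site enforces.
    start = min(pages_per_day * (days_missed + 1), limit)
    return [f"https://news.ycombinator.com/news?p={p}"
            for p in range(start, 0, -1)]

urls = catchup_pages(1)
print(urls[0])  # https://news.ycombinator.com/news?p=10
```

Reading from the deepest page back toward p=1 roughly replays the missed days in order.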
I don't want to miss new tools or discussions so I always try to keep a maximum of 2 days of not reading HN.
also fwiw this is by nature a broken census since the people that will click this link are already gonna be the people that like the comment threads (since it's only a comment thread) and the people that respond are the people that post comments. so basically your feedback about how people behave based on comments is already going to select down to people that post comments on HN, which is likely a single-digit percentage of people that visit HN. asking users how they use a website on that website will always be subject to extreme sampling bias. so... this is fun by all means but let's not look too far into it ;)
I also try to read a book or article on something new I want to learn. The most recent book I started is The Mom Test. It's about doing customer development, and it touches on what type of questions you should be asking.
0 - 1 Articles
100+ Comments on 20 - 25 articles.
I use the comments as a curation tool, to decide if the article is really worth reading, or click-bait. Sometimes the comments also do a TL;DR; summary of the original article, so that saves me time (esp. on rambling articles that write 1000 words to prove a couple of points or make a statement / take a stance on something).
> They're usually related to your job or to some side projects?
Job, side-project and technology related. I'm here only for the comments as I see gems from software industry veterans and experts whose knowledge on various tech topics far exceeds mine.
> Do you usually read about a variety of topics or it's focused in 2 or 3 topics only?
Usually 2 to 3 - I mostly come here for "Show HN", "Ask HN" and technology related announcements / findings. I come here to find inspiration and motivation to ship my side-projects.
> Do you usually read during some time of the day or it's usually random?
Random, throughout the day. It's gone up more ever since I gave up reading mainstream news after the elections. ( Nov 10th 2016 to be precise). I try to avoid political news on HN also. The mods have done a great job of flagging and removing them, so I am very grateful for that.
Related Reading: http://joel.is/the-power-of-ignoring-mainstream-news/
P.S. I also use https://hckrnews.com/ It loads super fast , has a very clean pleasing UI and helps me quickly scan the top stories on HN and decide which ones to come and peruse.
Depending on my energy and/or how long my build is taking, sometimes I just skim articles headings and throw them to Pocket. Then when I have medium energy and more time, I open up my Pocket, filter aggressively, and read the rest. Really long articles get tagged with #someday and go to the weekend.
I've been trying to focus on C & C++ related articles, as that's what I want to and will be doing more. But I also find articles about Functional Programming very interesting.
I couldn't care less about start-ups or the culture. I can't even open most policy or political posts now because it's just a punch to the gut every day. I read less than 1 comment on average per article.
On average I read 2-3 articles fully, but it also depends on what they're about. On breaks I always try to read the top 10 or 15, and sometimes comment. I find this community and its commenters quite a pleasure to read, because many of us possess something unique, or at least it seems so.
As to the type of article, I'm all over the place. Sometimes it's work related, sometimes a side project, sometimes just something I've got a passing interest in.
I've been on a reading diet for the last few weeks, I plan to kick back into high gear soon, with a project I'm building to ingest all my reading materials and present them to me in bite-sized formats. I used to be satisfied with Pocket, but my reading workload is too heavy to comfortably shoulder, so I need my own power tools.
What would be great is if I could break books up by chapter and feed them into the system, so that way they don't feel so heavy. I'll find a way to do that eventually, probably based on some ugly hack of converting Kindle books to EPUB or something ungodly like that.
Grab a copy at your local indie bookstore!
During the workday, I check various sources of information about once an hour, unless I'm working on something that requires either research or flow.
I run an RSS collector to manage repeating sources of information and categorize them for me. I add sources as I come across them and clean it out about once every six months.
Everything I read during the workday is related to work, but that's about fifteen different topics.
That's why I'm on a diet of not skimming: if I start an article, I finish it no matter how boring it is. But it's very hard; I'm an old surfer and I struggle with deep concentration.
I read about 15 articles on average, plus all the comments on about 10 of them, and scan some comments for the rest. My reading is batched around morning, lunch, and evening. I download a few articles to Pocket for offline reading during my subway commute.
For the others, I would usually skim through the article and also read the comments.
I'm finding that there's a lot of value in reading the comments, as some folks have deep-seated knowledge and provide relevant links that help you further grasp what's in the article.
I still have a list of ~40 articles to clear out...
Either about iOS development, design, or (lately) Thai language learning material.
2) Usually I open the interesting topics in other tabs and have a quick scan of the passage/website
3) If that's interesting, I will add it to my reading list
4) I go over the reading list after dinner when I have free time
Fun stuff : xkcd and the like, 4 sites
News: Chinese edition NYT and the like, 11 sites
Technical: Hacker News, VentureBeat, Ars Technica, etc. 12 sites
Daily I maybe read / peruse ~100 articles out of what is summarized in the RSS feeds. Meaning, I see an article headline, it interests me enough to actually click to open the underlying website article. Maybe half of what I open I spend 10 seconds looking at only to immediately close. Half of what remains gets a speed read scan through. Maybe 5-10 articles a day get a thorough slow read. I try not to comment as much as humanly possible. I need to do other stuff in life youknowwhatimean....
Probably split equally between tech things I think might be helpful ("Python, Bash, SQL how tos" etc.) and non-tech things which are novel.
Like "Guy frozen in ice brought back to life after 600 years" (which wasn't a real article but if it had been you bet I would have read it).
I avoid most articles from major news sources (I keep up with the news anyway) and most Medium stories, plus anything with a social justice slant (nothing wrong with that, it's just not of interest and not why I'm here). I also skip most "Our startup is doing XX or shutting down or whatever".
Skim comments for many more articles (~20) and if they look interesting read more in depth.
Back in the day I worked at Franklin Lakes, the IBM Office Products Division headquarters, as a VM/360 systems programmer. I printed out a "core dump" of memory, and since the printer was just down the hall, I walked out of my office, picked it up, and returned. Someone saw me and called my manager, who passed it up the hierarchy as an "infraction"... and I ended up getting chewed out by the Director (3 levels up the management chain).
Office Products sold typewriters, paper, and punched cards. Our typewriter repair persons (all male) wore suits and wing-tip shoes to service your typewriter.
Times have changed.
Of course that doesn't mean that people who use this text are wishing suffering on anyone.
Anyway, it's better to send suggestions to the mods by email (email@example.com), because these threads usually go unnoticed.
Seems dead to me, and to be honest, it always kind of sucked and was slow.
If your problem is more complicated and you want to use some unique architecture, you'll have to use one of the more low-level frameworks. I would recommend Tensorflow just on the basis of its popularity (you're more likely to find people who have run into the same problems as you). But Theano, Torch, and MXNet are probably pretty much equivalent in terms of speed and ease of use. I hear Caffe has a steeper learning curve.
If you're really doing something fancy, then you'll have to look into more detail. Torch and MXNet have the advantage that you can adaptively change your computation graph based on the data, but you'd probably have to be pretty far into deep learning research before something like that is useful. Tensorflow Fold does something similar, but I'm not sure how well integrated it is with the rest of Tensorflow (I've never used it).
You might also take a look at this:
It's a little out of date now, but it'll get you started.
Some of these frameworks are more general than others (e.g., Tensorflow is more general than Keras), so you can specify architectures in some that you can't in others. But as long as you can specify the architecture in a particular framework, you'll be able to get a working model. Your choice of framework just comes down to whatever one is easiest to work with for the problem at hand.
I started off using Caffe/Torch and currently use mostly Keras for most of my deep learning related experiments. With a more base level framework, I actually could tinker with different moving components to understand why they are used as they are, while with a higher level abstraction, I can concentrate on the problem at hand, knowing that most basic abstractions (or building blocks) are well developed already and have more or less been battle tested by people far smarter than me.
And of course, when it comes to pure speed numbers and architecture for scaling/deployment, these frameworks do vary among themselves: https://github.com/zer0n/deepframeworks/blob/master/README.m...
That is about right provided that 1) you use the same initial values and hyper-parameters, and 2) you can implement the same network with all frameworks. Issue 2) is complicated. Some networks are easy to implement in one framework can be hard or even impossible in another framework. Here "hard" can mean two opposite things: lack of flexibility (which disallows you to construct a certain topology) or excessive flexibility in the framework (which takes too many steps and care to construct a topology). Which framework to use depends on your goal and skill level. For starters, keras is usually easier.
What surprises me the most is that tf, at least, is almost declarative as a framework.
I needed to add some random noise to a point in a multidimensional space to generate n other points close to the first one.
In Python I would loop through n; each time I would add some noise to the initial point and push it into a list or whatever structure (a list comprehension).
In tf I "stack" the original point n times to obtain n copies of it, then generate n random noise vectors, and finally add the two.
The second solution is more elegant in my opinion, but it requires an important mental shift.
If the other frameworks are at all similar to tf, your biggest hurdle will be this kind of mental shift; just pick one.
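The loop-vs-stack contrast above, sketched in plain Python standing in for the tensor ops (in tf, the stack/tile, noise-generation, and add steps would each be a single graph op; the sigma of 0.1 is an arbitrary choice for illustration):

```python
import random

point = [1.0, 2.0, 3.0]   # the original point
n = 5

# Loop version (the "Python way"): build one noisy copy at a time.
looped = [[x + random.gauss(0, 0.1) for x in point] for _ in range(n)]

# Tensor-style version: stack n copies of the point, build an (n x dim)
# block of noise, then add the two whole blocks elementwise.
stacked = [list(point) for _ in range(n)]                           # like tf.stack / tf.tile
noise = [[random.gauss(0, 0.1) for _ in point] for _ in range(n)]   # like tf.random noise
jittered = [[p + e for p, e in zip(prow, erow)]
            for prow, erow in zip(stacked, noise)]                  # elementwise add

# Both yield n points near the original; only the shape of the
# computation differs.
```

The payoff for the mental shift is that the second form expresses the whole batch as three array operations, which is exactly what a framework can parallelize.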
I ended up bumping into the edges of the Keras API too much, and coming up with hacky type solutions to do things that are actually quite simple if you just do them in TensorFlow yourself.
Theano and Torch are also great options, but I think I will be sticking with TensorFlow, simply because I trust that Google will be putting solid effort behind it for years to come.
First is language: you need to choose a familiar language.
Second is feature set: they don't implement the same set of operators, but if you only want the common ones, most frameworks have them.
Third is their ability to train in parallel. For example, does a framework support multiple machines, or just a single machine with multiple GPUs? Performance is also a factor: do they support SIMD/GPU? Do they generate intermediate code and compile it into C++/CUDA, or just call into GPU libraries? Do you want to support mobile devices?
Fourth is the level of abstraction. If a framework is very low level, users need to understand many fundamentals of deep learning; on the other hand, if you want to extend the framework to add new operators, a low-level framework is easier to hack. A high-level framework lets you write less code, but it hides details and makes it harder to hack.
The last thing to consider is the difference between dynamic and static frameworks. DyNet, Chainer, and Tensorflow with something called "Fold" are dynamic frameworks. I was told they are more flexible, but I don't understand the details.
The math involved is pretty simple, in terms of the calculations that have to be performed.
Where frameworks differ is in things like speed and ease of use. Use the one that is the easiest for you. Tensorflow is certainly going to be the most popular for the foreseeable future.
Our own work calls cudnn/cublas directly because we're C++ programmers and it's just more convenient for our use case.
For testing the scripts, I've used VirtualBox: install the latest Ubuntu Server LTS into a VM, install your ssh keys, dotfiles, etc., but leave it otherwise bare-bones. Then clone it (this takes just a few seconds) and do your testing inside the clone. When you need a clean environment, delete the clone and create a new one... makes for fast iteration on testing that install scripts always work. Don't configure anything by hand; learn enough sed/awk/grep/etc. to modify whatever configs you need without invoking an editor.
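For the "no editor" config edits, something along these lines works (the file name and setting here are purely illustrative; sed's `-i.bak` keeps a backup of the original):

```shell
# Create a sample config, then flip a commented-out setting in place.
printf '%s\n' '#PasswordAuthentication yes' 'Port 22' > sshd_config.sample

# -E: extended regex; -i.bak: edit in place, saving the original as .bak
sed -E -i.bak 's/^#?PasswordAuthentication.*/PasswordAuthentication no/' sshd_config.sample

grep '^PasswordAuthentication' sshd_config.sample   # PasswordAuthentication no
```

Because the whole change is one idempotent command, it drops straight into an install script and behaves the same on every fresh clone.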
If you need to scale this up to something real and in production on multiple systems -- then start learning Ansible / Salt / etc. Doing in those systems what you now have documented in bash scripts will be some work, but doable.
Not an insane amount of money, of course, but enough that you can consider the project successful if you're getting a solid amount of views.
My largest project, http://sleepyti.me, gets about 1.5 million unique views per month. The revenue Google Adsense brings in is not nearly enough to support myself, but it's enough to make the effort feel solidly "worth it" in terms of development time and hosting costs (which are very low at this point).
How (or if) you should be monetizing depends on the nature of your side project. If your "side project" is a business -- say, designing WordPress themes -- then you should sell your product! If it's something that gets 50 views per month, maybe it's not the best candidate for monetization (and is instead a portfolio/resume builder). Either way, gaining experience building things is almost always a good thing.
One important idea from the book is the distinction between a side project and a product:
> A project is a software application that you build as a fun side project. The code is fun to write because you're not concerned about quality and performance, and the end result is a neat little application that likely isn't of use to many people.
> A product is a project that people will pay money for. In other words, it's a project that has a market (a group of people who want to buy it). Without a market, a software application is just a project.
I think it's important to start in the right place here. Both approaches are fun but they have opposing goals. If you want to build a product that makes money, start with the market. If you want to build a side project... that's great, just keep in mind that when a side project tries to tack on "and make money" later, it mostly doesn't work.
2) I put side project technologies on my resume.
3) I put side project link on my resume.
4) I put my resume on LinkedIn.
5) I get a raise during my next performance review (or at my next job).
I write side projects when I want to try a new technology in order to integrate it into my flow. I don't make money from putting together a couple of JS libraries and generating an automatic ping pong game from Bitcoin transactions (https://writecodeeveryday.github.io/projects/bitpong/), but I do get experience with WebSockets for client work.
I charge a subscription fee - either $5.99 or $9.99 a month. One of the hardest things for me to learn is that as developers, we tend to price things too low and don't really value our work enough.
60 Seconds Everyday is currently trending on Product Hunt too!
It aggregates tech events (mostly meetups, conferences, workshops, etc) across ~50 US cities and tweets them out and broadcasts a weekly mailing list. Hashtags, time of day of messages, including/filtering submissions, etc are driven by some simple machine learning. It's grown from basically nothing to ~13M+ impressions last year and is on track to generate ~30M this year.
The business model is affiliate links to the conferences and workshops. It turns out when you find 5-10k tech people in a given geography who are trying to improve their skills and network, event organizers come to you.
I do not include jobs, job fairs, etc., though I know that would make more money.
Of course I'd love to hit on something that meant I could quit the day job, but I think that unlikely.
In 2011 I built www.illustrators.co, a multi-vendor marketplace for artists to sell their work. I met some cool people and learned web development and UX in the process, completely changing my career trajectory. It just about covers costs, despite languishing for the last few years. I'd love to work on it full time.
In addition, I make and sell prints of public domain images -- a chance to experiment further with online marketing and selling, and with building sites using static generators. I also make and sell cyanotype prints of my photographs, mostly to experiment with photography processes.
I'm currently building a compendium of UX concepts, methods, tools, books and events. Mostly to help me better understand the subject, but it may also be useful to others.
If you gain some traction, there is ad revenue to be gained. I have found even 100 to 300 page views a day is enough to start seeing a few dollars each day. B2B ads pay out more. I've seen single clicks bring in $5+. Once built, not usually much you have to do after that, and they will typically grow slowly over time. And I feel good about them, because I know businesses are getting actual value out of them.
I have not struck the right chord with affiliate revenue yet. But I know there is money to be made with the right niche. I think you need to be a little more invested in affiliate sites, and have a real interest. These sites seem to require a steady flow of new content, though you may be able to automate some of it.
My latest side project is https://www.smsinbox.net, which provides a drop-in chat interface for Twilio apps. It's targeted at developers who use Twilio in their apps, and want to easily expose a two-way messaging interface to their users. It doesn't make a ton of money right now, but definitely covers costs.
Google Adsense + Amazon Associates can bring in quite a good bit if done correctly.
Sidenote: If you guys are interested in passive income discussions, you should probably join the live chat on IRC.
It's averaged US$340 gross per month since last May but it's been hard to grow it. I think in some ways it's quite a technical tool and you need to be interested in actually debugging issues on device, but there are a lot of people using Unity who aren't super technical. Also it doesn't lend itself to sexy screenshots. I've noticed some of the successful plugins are those which are about creating things, and they get a lot of people sharing screenshots of things they've created on the forums.
We've just started selling customised modular staves to people who use them as props, novelties or promo items. We've become good at 3D printing via much learning at our makerspace (http://sparkcc.org) and so with a couple of printers we can basically run our own small-scale manufacturing business.
Currently we just take orders via email and word of mouth but we're building a website that allows people to customise their own staff (like in a video game).
Also while being fully employed as an engineer, I would take side projects, contracts etc that I would work on during the evenings and weekends. Eventually that became my full time job. I now run a small consultancy.
I definitely learned a lot during last year while emailing with people interested in the product. Thanks to that I improved my tool iteratively while the early adopters discovered new areas or edge cases about testing I didn't even think about.
This week I published a success story about running tests across 100 parallel CI nodes with my tool: http://docs.knapsackpro.com/2017/auto-balancing-7-hours-test...
> I made a stamp calculator for any postage for my own use then shared it online. It's all organic traffic via google searches and lots of repeat users. One google ad that pays for my server and a few meals a month. A Pennsylvania post office uses it to help the Amish! I love that. http://fancyham.com/stamp_calculator
> Just released a music notation iMessage sticker set for a music teacher friend. My goal for this one: long tail. We'll see! https://itunes.apple.com/us/app/music-notation-sticker-pack/... http://fancyham.com/#notation
> Helvetica shirts for font nerds like myself: Lots of sales at first through Twitter, but nothing now that the fad has passed. Other folks copied the idea, too. http://fancyham.com/shirts
> Geiger counter app, secretly controlled by pressure on the screen, that always drops jaws but just a few sales a month. http://fancyham.com/#detecto
I do UX design for a living and while these are fun creative outlets and an opportunity to try some programming, I think of these as play, not work.
Though I'd like to say these projects bring in money via boosting my day job, I don't know if that's true. These are such quick and dirty projects that I haven't mentioned them on my resume.
Though, I have been inspired recently by Nadja Buttendorf's 'brutalist' HTML site: http://nadjabuttendorf.com/ I'm going to embrace the ugly.
If you're curious about the details, I wrote a post recently about getting to $100/month with them: https://www.simonmweber.com/2017/01/09/side-project-income-2....
Not a huge money maker (>300 copies sold), but I also filed a patent on it in 2015, so hopefully a larger VR company will see our locomotion technique as an essential step (no pun intended) toward bringing VR to the masses, since it reduces cybersickness.
To monetize it, I use Amazon affiliate links. When Uncover tells you about new books, it presents links to buy those books on Amazon, for which I earn a commission.
So far, it has not been a huge moneymaker; I have made exactly $2.10 USD.
Still, if it can make ~$10 a month, it covers its cost, which is good enough for me.
Getting projects from personal connections is pretty easy (e.g. websites, shops, ...); I've made some money on a web app (not much, though) and on hosting, of course.
I'm working on a second SaaS product at the moment.
Here's a list of some common revenue models: https://taprun.com/revenue/
The guy approached my friend letting him know that he needed to update it. My friend said it would be a lot of work and he didn't really feel like working on it. The guy said it was urgent and this was his livelihood. My friend came back at the guy with a price because the whole system basically needed to be re-developed and he was going to probably make it better than it was before -- having almost a decade now of experience. His quote: $25k.
The guy refused. As I said before: A guy who was making hundreds of thousands of dollars off this app refused to pay my friend to update it. Of course, he could take his business elsewhere, but my friend knew the app well because he designed it.
Another story to go with this, in a way: I purposely did not update a client's website for almost 5-6 months. Guess what happened? Exactly what I thought would happen: things started breaking. My client contacted me letting me know that things were broken, thus solidifying my justification for why I charge for monthly updates. Clients think that they can pay once and that's it. Sure, go ahead. But the web is changing so fast, especially for things that rely on APIs, that you just need to stay on top of it.
Here is the problem with not doing that: if you go in months later, you have to figure out what you did, remember what is going on, try to fix it, etc. If you are in there monthly just to maintain and update it, you have a constant reminder of the work you have done, and the general maintenance helps you keep in contact with the client.
I have had it happen: I built a website/app for a guy and he was busy doing something else... I kept the web app on a private server and it was nearly done. He contacts me over 2 years later telling me he wants it to go live, but not before having a whole bunch of changes. I had to go in there, understand where I left off, and pick up where I left off to finish for him.
Anyways, my point of these stories is this: Know that once you develop it, you cannot guarantee what it will need in the future, or any bugs that might occur, but whether it needs monthly or yearly updates, you should incorporate that into your initial price, or monthly/yearly invoices, just so you can keep checking up on it, make sure everything is working, etc.
If they need things fixed during the course of the year, you can charge them by the hour, or just come up with a fair monthly price, even if you do no work on it -- just helps you keep everything in check.
You could also come up with an affiliate plan -- pay them a percentage for referring you to other potential clients -- on work completion, of course.
Yes, you can use many tricks, and they work. But at the end of the day, you are no different than a casino, a drug dealer, or a tabloid magazine when you use them.
It is more economically efficient to trick the user into staying in the short run. But in the long run, we use Google and Stack Overflow because they are the most useful, not because of tricks.
Among tricks, some useful ones are also often badly implemented and become harmful.
Ex: notifications are only useful if they result in more productivity than distraction. I hate most notifications, they disturb my flow, but I like requesting them for specific cases to save me time.
A light gamification can be fun, but if it's at the price of my main usage of the site I'll leave.
Basically, give me what I want, quickly and in a useful way, and I'll stay. Save me time and energy and I may even pay.
Unfortunately, HN readers are not a good sample for user behavior. We all have ad blockers, we boycott sites with behavior we dislike, and we know how to bypass stupid tech decisions.
But if you manage to engage HN users with the same content and design as regular users, your model has a good base to be sustainable because it has credibility.
Credibility is harder to build than addiction. It's not as quickly rewarding, but it's more reliable. And more satisfying for you.
Figure out how the app can trigger or remind them of these moments. Notifications are the obvious one, but you want to be careful not to train the user to tune out your app if you send too many notifs or notifs they are uninterested in. The best triggers are ones the user has asked for themselves.
For example, when they get a new book you can prompt them to set a goal to finish reading it in a week or two, or whatever is comfortable, and then ask them if they want you to check up on them (notification) when that time is up. If they set the goal, they will probably welcome it.
Additionally, you want to try and build habits that help the user improve his or her life. So if you build a habit of recording their daily reading log, and reward the user for reading every day without missing one, then they are happy because they are reading more books and they are also connecting their new reading habit with the action of opening and using your app.
I would say look at what Facebook does, they are like digital crack.
Your site should be very useful to users and you should try to gamify things a little bit as others have said.
I would recommend building out a segmented newsletter for people. Say for instance, you get a bunch of HN users that signup. If they are interested in startups, you have a weekly or monthly email that goes out that lists books on startups, customer development, lean startups etc.
You give a small summary on each book in the newsletter and the link brings them back to the site.
I have also seen email notifications for responses to forum posts that have worked really well.
One other thing, I tried your site on a Nexus 5, and the images of the books are a little shifted and oversized. I would suggest tweaking this a little as the majority of people browse the web on their phones these days.
For example, if I use the app for X minutes in a day, the next day I will get a summary as a notification.
It also adds light questions. To give an example, it takes a random word and asks something like "how do you pronounce this?" or "what does X mean?". Tapping it takes you to the answer. They are made to require as little actual app interaction as possible, making the learning (or rather, retention) almost passive to a degree.
I found the success of this minor so far, but I think it can work fairly well. It just needs a good balance.
* Rewards (variable rewards)
Ref book on Amazon: http://amzn.to/2ngLIuz
I try to wear low-power reading glasses while at the computer, on the advice of my (retired) optometrist friend, to avoid nearpoint stress, despite having 20/20 vision.
She bought me a pretty simple-looking variety, with amber lenses. I don't really care about the tinting, I primarily want to get the assistance for my eyes.
They fit well and I can wear them comfortably all day.
FWIW I am a remote developer (at home).
I read some bad reviews on them, but nothing that affects their usage as I've stated it above. It's possible they claim more benefits in their marketing, I don't really know.
This sounds silly, but when we are focused and in the zone, we often blink far less than normal, which contributes to eye discomfort.
I wear prescription lenses, so I would need to swap out glasses while I'm working, or buy some clip ons.
To be honest with you, DBAs no longer exist at many companies. Most modern databases are easy enough to use that all you need is developers. My current company has a single DBA for 7 development teams.
If you want to break into software dev the low hanging fruit is usually web development with something like PHP.
You're going to be competing with a ton of people that have degrees and experience so try not to get disappointed about your search. If you can't land a developer job straight away, I know many who have gotten into dev by starting as testers and building their knowledge laterally within the company.
It takes the average candidate with experience and a degree maybe 4 interviews to get an offer; you will probably have to do at least triple that.
If you don't have those skills, get them. Consider an associate degree at a community college or something, because those F500 companies don't give a crap about GitHub portfolios nearly as much as they do about a piece of paper that says you can show up on time enough to get the piece of paper.
These companies also use contracting and consulting firms to staff three- and six-month projects, often for data migration, decommissioning legacy systems, and other data-heavy tasks. Ask around in the tech community in your area about which firms treat people fairly, help with professional development, etc., and make contact with some of them. Many sponsor "boot camps" or other training activities.
After you have that actual job under your belt (and the 40-hours-a-week of real-life experience with the systems) it will be much easier to pivot into something that adds up to more than just monkeying around with data loads and report generation.
The best way to comply if your app is used in Europe is to 1) start writing a .doc document detailing which data you want to collect, where you store it, when you use encryption (suggestion: both in the application and on the data volumes -- but be careful choosing the ciphers for volume and in-app encryption), and why you allow people to see the data.
Look at what the EU is requiring for this - it used to be called Safe Harbor.
A few things I remember about those requirements:
- data encryption at rest and in transit
- no onward transfer to third parties
- opt-out methods for users to not allow you to capture the data
You may want to look into any restrictions on using a cloud provider or specific configurations you may need (i.e. no failover to a non-AU AWS farm).
"Termination by You. Unless you have signed a minimum term addendum, you may terminate this Agreement for any reason at any time by notifying Comcast in one of three ways: (1) send a written notice to the postal address of your local Comcast business office; (2) send an electronic notice to the e-mail address specified on www.comcast.com; or (3) call our customer service line during normal business hours. Prior to affecting such termination, or any other change to your account, Comcast may undertake actions to verify your identity and confirm your election. Subject to applicable law or the terms of any agreements with governmental authorities, all applicable fees and charges for the Service(s) will accrue until this Agreement has terminated, the Service(s) have been disconnected, and all XFINITY Equipment has been returned"
Seems fairly clear to me, except for the "Prior to affecting such termination, or any other change to your account, Comcast may undertake actions to verify your identity and confirm your election" part. I guess a weasel could easily take 90 days to do so.
I was furious at the thought of paying for a service where I wasn't even living (To add insult to injury they threatened to fine me if I cancelled anyway), so I simply kept calling and escalating. They'd assign me a ticket, and if they EVER slipped their 48 hour SLAs, I'd call again and escalate again. (document everything) Luckily for me their ticket handling was so shoddy a higher manager eventually saw the churn on the tracker and handled me himself, he seemed both competent and sympathetic to the BS I had to put up with and both cancelled and credited my account.
To answer your core question with a ramble: in this situation the squeaky wheel really does get the grease. I'm sorry you have to go through their shit, "not comcast" was frankly a large motivator in choosing my house where I did.
1. What is your operator ID and/or first name and last initial?
2. Can I please speak with your manager?
They either fix the issue or get the manager on the phone.
Manager gets on the phone and usually resolves the issue. If not, then I repeat step #2.
On sound quality, there is no doubt about AKG; in my opinion, one of the best brands out there.
As for the price: yes, they are expensive compared to other headphones, but for someone who uses them more than 5 hours a day, I think they pay off.
I am glad I found these headphones. You should definitely try them without worrying about the price; it will look small once you start using them. If they allow you to get into Deep Work and focus on whatever you are doing, then the price pays off!
In-ear headphones don't provide enough comfort for extended listening day in day out, so over-ear headphones are the best choice IMO.
Anything of good build quality in the $150+ price range is likely worthwhile, assuming you don't go for overmarketed brands like Beats. I chose the DT 770 Pro because they are studio quality and meant to last through heavy use; Beyerdynamic even supplies repair parts.
I couldn't switch back from Bluetooth headphones. Once you get used to the wire not getting in the way and being able to walk around without taking your headphones off, wired headphones just feel awkward. When you can get ones with 40 hours of battery life, charging isn't an issue either.
I like IEMs for the size, but in terms of comfort, not so much.
For those on a tight budget, the best-value headphones I've used are the SoundMAGIC E10 IEMs and the Superlux HD681 EVO closed-back headphones; at about 25 each, they are great.
Some examples. Sitting outside in a noisy cafe, with NC and white noise playing: zero background noise.
Same setup without a track playing, only NC: I can hear everything, but it's just quieter. Voices, cars still there. Not acceptable.
The way I interpreted the promise of NC was it would actually play the inverse wave and cancel everything. I don't understand why some sound gets through. And I feel cheated.
I don't think my expectation was unrealistic because it was based on the following episode. I was on a plane and the Qatari American guy next to me chatted to me about films. When our conversation died he started watching movies, with some huge black headphones. I asked him about them. And he told me they were NC and asked if I'd like to try. I put them on, then he pushed one button on the side and whsp! Every noise disappeared! The plane engine was gone. He kept talking but it was gone too. It was literally a religious moment for me. I glimpsed another world I didn't know existed. I never knew I could end all the noise. So naturally I had to get myself a pair. I asked what they were and he said Bose QC. They were from a few years ago. So you see I thought I'd found something I could trust.
At Yodobashi Camera I was so excited to buy my QC 35. But when I used them, I could not rationalise away my disappointment. It was qualitatively different. On the plane those headphones had clearly put my ears in a pressurized bubble. Of total silence. But the QC 35 was just like God had turned the world volume down a third of the way. Really not good enough.
So now I still wear them, but I'm always playing tracks. At least I've discovered Spotify. But I still think I'd much prefer the Total Noise Cancellation my first experience promised. Now I sometimes even question the trustworthiness of the guy who introduced me to NC, such is the magnitude of the difference between my expectation and the reality. Did that Qatari American guy trick me? Did he just start mouthing silently as soon as he pressed the NC button? Or did that NC tech really cut everything? And if the tech was legit, did Qatar, or 2014, get better NC tech than Japan, or 2017? Why has Bose forsaken me?
Pushing for this shouldn't be underrated.
Although ACS no longer makes custom eartips, there are other places that do, such as Snugz. Without doubt, custom tips are the single best upgrade you can buy for your sound.
I simply use some cheap Skullcandy earbuds. The important thing to me is earbuds: they almost never leak sound, and can easily drown out surrounding noise at mid-volume.
The HD600s are brilliant for gaming, music and movies, but obviously are open backed so are no good for a busy office. I find in games, people often complain they couldn't hear me coming, but I could hear them! The sound quality is amazing and I don't think I'd ever replace them, unless they broke - in which case I'd either buy more or try HD650s.
The Momentums are great for travelling, and this morning I actually bought my other half a pair of the folding on-ear versions for travelling and being away from home.
Also have Sennheiser 380 Pro but am looking for something that blocks a lot more.
Also if your media playback hardware is Sony, you can use their proprietary Bluetooth audio codec (less compression) for superior sound... (assuming your source media is high enough quality)
Build quality is phenomenal and they're surprisingly low profile. My first pair actually had a problem with the touch interface and B&O not only replaced them, they expedited shipment so I'd get them in time when I explained I had a long trip coming up - so great support.
The only drawbacks are:
- they're on-ear which takes some getting used to, especially early on before the band takes shape to your head and can press to your ears. However, the padding is very soft and even replaceable! I now believe on ear produces the best accuracy as there's no acoustic reverberations and feedback that you can sometimes get with over ear. However, over ear is more comfortable and even after breaking these in, you'll still get ear fatigue after some hours of use. That being said, I've fallen asleep for hours on flights wearing these and not even playing music as the noise cancellation completely eliminates engine noise and most external sound in general
- they cannot be charged and used at the same time, even if using the wired lead. This is probably the biggest drawback though at 14 hrs play time and having a replaceable battery, this can be mitigated
- the previously mentioned drawback of not having noise cancellation when using them wired is annoying because the noise cancellation is exceptionally good
- they don't fold in any way, so you need to consider how to carry them if not in use. I usually just extend the headphones and keep them around my neck. In general though, you'll find the higher quality headphones won't be foldable as that's an easy point of failure
They're pricey, but I've tried a number of other brands and the features/sound quality of these are far and away the best to be had. Not to mention, they look great when worn, unlike most others that look cheap or goofy in their bulk. Highly recommend.
Amazon has them on Prime: https://www.amazon.com/Bang-Olufsen-Wireless-Headphone-Cance...