Hopefully this can help you.
(Disclosure: I work at Microsoft but not on PhotoDNA.)
Google "safe image search" has the additional help of searching the content of the page the image is used. You might be able to do the same, up to some limit, by checking the http referer header field to know where requests are coming from. You could scan the referer's page for some keywords. This might give you a better idea of the context where the image is used. Note that this might be tricky, since you probably don't want traffic coming out of your server to some child porn site.
That said, those are just some ideas. YouTube has a good community that flags videos, but also an army of reviewers who look at the flagged content.
Another way to look at it would be to try to manually select some images as "front page worthy", instead of trying to filter the bad stuff.
Is there any incentive to participate in your community?
With moderators you feed the "power tripper".
With karma you feed people obsessed with points.
This is a bit complicated: what if you had some sort of captcha that required users to classify images as nsfw/sfw/illegal?
One way is to make contact with the police and get permission to list the names of the police agencies that are allowed to inspect the site via backdoor, etc. Of course, this might enrage some users, so some sort of middle ground might be to quietly approach the police for advice.
The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed: http://www.wired.com/2014/10/content-moderation/
For the future: Asking legal questions without stating your jurisdiction is... not helpful. :)
Yes, big sites employ a lot of people to clean content. I remember reading an article about poor people in $third_world_country that do this all day long.
Someone brought it to my attention that Bing's cache is full of CP; after the offending websites are taken down, Bing keeps the images for a long time. The Rapidshare sites are also full of it, and they password protect RAR files so admins cannot peek into them. It is a major problem that has no solution yet. People run Wordpress blogs and spambots leave comments that link to CP sites.
This has become a hot-topic issue because that Jared guy from Subway had a foundation manager who was found with CP, and then they raided Jared's computers and found more evidence.
My ethics and morals won't allow me to look at porn, but it is a big industry. There are all kinds of porn out there. CP is the worst of it, and a lot of children are trafficked as sex slaves for it. They grow up with a criminal record and a sex offender record, and by the time they expunge the record they are in their 40s and can't find work. I was contacted on Github, during the Opal CoC debates, by a woman who was in that situation. She is trying to get out of her situation by programming and cannot find work because of it.
This CP stuff ruins the lives of the children who suffer abuses for it. Once they grow up they have a hard time in life trying to make ends meet. Some have serious psychological problems that are hard to treat and deal with.
I remember that in some cases the website is found responsible for the content that users post on it. Laws in your nation may vary on that. If you find illegal content you should remove it, lest you be found liable for it. Make sure to report the IP address of the poster to the government or a non-government agency that handles it.
If you start one I'll use it though. :)
It's really difficult to just have your post get to the front page organically. It's possible (and probable) that when you submit there just aren't enough people looking at the /new page, so a lot of it comes down to sheer luck. A lot of good/great posts go by unnoticed. Getting 1 or 2 friends to upvote would probably increase your likelihood of being on the front page by 90%.
I've been gravitating towards things like http://lpushx.com, both because they're smaller, and because everything has more of a chance to "survive."
Ironically, this post just exposes how helpful it is to do the behavior it discourages. When you're trying to kill a lucrative behavior (especially something that exposes a loophole), saying, "Please don't do this behavior" is probably the worst way to do so. Now several people just figured out that you can game the system. I would recommend that dang kill this.
10+ points chart: http://i.imgur.com/MdUvMB9.png
Median chart: http://i.imgur.com/SN5BuAJ.png
GitHub Repository: https://github.com/minimaxir/hn-heatmaps
Why does it break the integrity of the service? Sockpuppets, sure, but why does asking friends/acquaintances with established HN accounts (presumably the HN software can distinguish these cases) for upvotes work contrary to the goals?
Presumably if you have an account and you "vote well" -- that is, your upvotes are well-correlated with what other people upvote and don't downvote -- you know what people like to see. If a friend asked me to upvote their spam listicle, or a question they could just ask me and get an answer, or some politics story irrelevant to hackerdom, I'd say no. If I "manipulate" a friend's post, I'm making as much of an endorsement of the content as HN-appropriate as if I upvoted something I saw on the front page. (And if I abuse that, my upvotes should get disregarded.)
To be clear, I'm not advocating that bugging your friends for upvotes is a good system. I am advocating that posters asking people who care about HN for endorsement is explicitly desirable, and the software and community norms should be designed to reflect this, instead of the community norm deciding that vote manipulation is an acceptable means to an otherwise-technically-unsupported end.
Might lead to upvote inflation but I think extra positivity isn't a terrible problem to have.
The traction you get on a site like HN is useful for feedback but it probably won't convert to sales unless you're building something for other startups. It's not real take-it-to-the-investors traction.
I believe the crux of this issue is not enough people frequent the new page (https://news.ycombinator.com/newest). If that page got enough traffic, the legitimate votes would outweigh all but the most systematic attempts to 'game' the system.
Is this manual intervention, or is HN testing out some new heuristic?
> is never good because it breaks the integrity of the service.
Vote manipulation is called hustling these days. It tends to be okay if you're in the blessed class allowed to manipulate people or are doing it for a good cause (a private for-profit startup needing "exposure").
They delight in breaking rules, but not rules that matter. http://www.paulgraham.com/founders.html
There's always lobste.rs for when you really want to share something with your peers.
As I was recently told...
> No acerbic swipes on Hacker News, please.
Let's try to avoid double standards, OK?
So you are trying to imply that HN (or any other link aggregator) has integrity?
We started building web-based analysis tools for regulated energy companies (helping them keep on top of their regulated rates of return, regulatory news, etc.) and it turns out potential acquirers also want these tools.
We pretty much provide an automated regulatory analyst via a web-app.
Pitched to 1st-degree consulting contacts.
1. You apply, you do okay. Let's call you applicant A.
2. Company has found someone else they want more, so they start moving forward with that person (applicant B).
3. Recruiter knows that applicant B might reject their offer or take a different gig, so he doesn't tell you (applicant A) any of this. He waits until things are 100% finalized between the company and applicant B before he gets back to you at all.
4. In some cases, after a month of silence, he comes back and says "they're ready to hire you" (because Applicant B turned them down), or he'll say "they just hired someone else", or he'll forget to respond to you entirely.
Moral of the story: Just keep moving, keep interviewing elsewhere. If they get back to you someday, great. If they don't, no sweat.
On the one hand, we hire because we desperately need someone to fill a role we're now doing in addition to our current work.
On the other hand, we are so miserably overwhelmed with work that going through the mess of things with HR to get everything figured out takes way longer than it should. In the middle of that might also be something happening to the position itself -- it was open and available when we listed it, but now the company's financials came out and we don't know if the position is still available. So we sit on it, for weeks, waiting to hear back from a VP. It gets approved but, nuts, the candidate's found another job already.
But within your question you seem to be asking for help in getting the job. You think you've interviewed well so you might have been within the top few candidates and just didn't get selected. The person who got selected, obviously, was notified. You were not and I guarantee that this will happen most of the time. As an interviewer, we didn't handle any of that communication -- you came to me with a resume attached in an e-mail from our hiring team. I don't even know how to get in touch with you. It sucks and I'm very sorry about that, but at a big corporation, it's pretty typical, unfortunately (and that speaks to a lot of other processes that tend towards being terribly impersonal).
I'd also hate to say it but too often I'd be stuck between 4 adequate candidates and the decision came down to superficial things. The best advice I can give you there is: add some superficial things. Get the work mailing address of the person you interviewed with -- the one who is going to make the decision. Write a hand-written Thank You letter expressing your desire for the position. I've gotten one of those in my life, though I've written one every time. My boss was so impressed by that extra step that the person I wanted for the job lost out to the other gal. She turned out to be a fantastic hire, so no hard feelings, but she literally won out because of a thank-you note.
Edit: To clarify I'm referring to a large corporation, not a startup. Can't recommend working for a good startup enough, it's been a way better situation for me.
If I took the time to contact all the people I passed on to give them an explanation, it would consume more time than I can afford. The people making the decision to hire are - usually - the ones with the most to do.
The safest bet is to consider any lack of communication a "no." Personally, if I want to hire someone, I make the offer on the same day I interview them. Any company worth your time should do the same.
On a side note, you should make a habit of giving business cards to, and getting them from, your interviewers - particularly the decision maker. Also, turn the interview around if you can. Your talents deserve a good company. Make THEM SELL YOU on their job. Don't take a stance of hope. Make sure they leave the interview knowing you are the right person, and that the question is "will YOU accept their offer?"
Sometimes you'll feel the most hopeless right before a breakthrough. This has happened with 4 companies, but the 5th might be a perfect fit. Be persistent.
If you do get those companies on the phone, be brave enough to ask them why they passed. If they don't know, ask if they can put you in touch with the interviewer. You can't work on your weaknesses if you don't know what they are (and people rarely know for sure).
Second, why take the chance of getting back to you to tell you it wouldn't work out? You're gonna ask why, and what are they gonna say? "Not a good fit"? Even this made-up reason doesn't work anymore, since people have started suing for discrimination.
I've found that this behavior is very rare at small shops and startups since they care about their reputation and try to not alienate people in their area.
What you're looking for is therapy. It takes a long time. There are little things you can do, but generally motivation and happiness are deeply rooted in things like your personality, income level, and a myriad of external factors that can't just be "hacked".
You can, however, do the following, which are known to be done by happy people (causal relationship is not necessarily established):
1. Exercise daily
2. Eat food in moderation, and mostly healthy food
3. Maintain close relationships with friends and family (loneliness is very bad for your health)
4. Write down 5 things you're thankful for every day
5. Meditate, practice mindfulness, or otherwise clear and focus your mind for a non-trivial amount of time every day
6. Similar to #5: take time to be bored
- If I'm stuck I take a walk.
- Tackle the hard/unknown/risky stuff first.
- Do the boring/easy stuff after lunch when I'm sleepy.
- The hardest thing is getting started.
- Let the little things build up for a while so you can do a bunch at once.
- Sometimes ignoring a problem works really well! I'd never have accepted that when I was younger, but really, sometimes problems are not urgent and ignoring them makes them go away. But sometimes not.
- If I hit my revenue goal for the month I buy myself a bottle of scotch. Maybe this habit isn't so healthy. :-)
Writing todo lists helps too, but not immediately. It forces you into thinking mode and you may end up doing a lot of the work the next day.
But there is still a limit. You cannot be productive 40 hours per week.
Ruby, Python - general purpose programming languages. Often used in web development.
Rails, Django - server-side web development frameworks. These are sets of related code libraries that make it easy to develop web applications using Ruby and Python respectively.
MySQL, Mongo - These are databases. MySQL is a relational database and uses SQL as the language to interact with data. Mongo is an example of a NoSQL database. NoSQL is a catchall term for databases that aren't relational and don't use SQL.
HTML5/CSS3 - HTML and CSS are the building blocks of web pages. The numbers indicate the latest versions, which included a number of new UI capabilities.
If you're using Codecademy you're off to a good start. The question is what do you want to build. A blog? An iOS app? solve some computational questions?
Once you can code, you can work with just about anything because most languages are based on the same fundamentals. So pick a project or a technology and then google "Build a XXX tutorial" or just the technology name and find a tutorial.
I find that the easiest language to learn is Python. Once you know the basics, you can start with http://projecteuler.net for some coding / CS problems. From there move on to https://docs.djangoproject.com/en/1.8/intro/tutorial01/ for some Django stuff.
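Just to show how small those problems start: Project Euler's first problem is "find the sum of all the multiples of 3 or 5 below 1000", which fits on a line. A quick shell/awk sketch (the Python version is a similarly tiny loop):

    # Project Euler problem 1: sum of all multiples of 3 or 5 below 1000.
    seq 1 999 | awk '$1 % 3 == 0 || $1 % 5 == 0 { sum += $1 } END { print sum }'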
I tell people to learn languages in this order: Python -> JS -> Java -> C -> C++ -> Haskell/OCaml -> whatever you need for work.
Standard web stacks include LAMP and MEAN. Standard mobile apps are written in Swift (with some C++) and Java (with some other random stuff thrown in there). Pick a set of technologies and write something simple (like a blog or calculator or mini social network). Then swap out one or two components and build it again. I've built a bunch of blackjack games in 10+ languages with 4 or 5 different UIs, solved a bunch of the projecteuler stuff in a bunch of languages, and at work I browse around the code base (Java, JS, Coffeescript, Typescript, C++, etc.) to read other people's code. When you do the same project over and over you learn the strengths and weaknesses of each stack.
At the end of the day, most tools are mostly the same (especially if you don't have to worry about scale) and their strengths and weaknesses are less important than the developer's comfort level.
I suspect you might know most of it already but perhaps you'll find something helpful.
I think we look for something that makes it easier for large numbers of people to become programmers. And I guess that's OK, for some definition of "programmer". Teaching people to be professionals still takes a degree, or basically an apprenticeship, or both, and I don't see that changing. (You don't see a new approach to teaching chemical engineering dramatically changing things.)
But perhaps I have misjudged the thinking behind your question...
Since then, even more have cropped up:
Code Combat: https://codecombat.com/
Empire of Code: http://www.checkio.org/blog/empire-code-space-strategy-game-...
Code Kingdoms: http://codekingdoms.com/
Taken Charge: https://takenchargegame.com/
ComputerCraftEdu (Minecraft Mod): http://computercraftedu.com/
No doubt, this has only contributed to the divide between software engineers and information security engineers.
For developers, something like laracasts (or other screencasts) is working well it seems. Not very revolutionary though.
For me personally, the ecosystem matters most. So the choice comes down to Gems vs NPM. NPM tends to have what I need the vast majority of the time, and gets better every day; so I prefer Node.
Here's an interesting comparison of the ecosystems: http://www.modulecounts.com/
Also, soft realtime is the default, shipping mobile/desktop apps alongside your webapp requires little to no code changes, and even offline-capable apps are doable with ease.
Rails was fine 10 years ago and popularized many paradigms when that was important... Now the world has moved on and requirements have changed heavily. No bad rap for Rails (it paid my bills for many years!) but it left the path of innovation and is now mainly on the maintenance road, trying to catch up, and that's it IMHO.
The typical issue at sea level is from neutrons hitting silicon atoms. If a neutron hits a nucleus in some area of the microprocessor circuitry, it suddenly recoils, basically causing an ionizing trail of several microns in length. Given transistors are now measured in 10s of nanometers, the ionizing path can cross many nodes in the circuit and create some sort of state change. Best case it happens in a single bit of a memory that has error correction and you never notice it. Worst case it causes latchup (power to ground short) in your processor and your CPU overheats and fries. Generally you would just notice it as a sudden error that causes the system to lock up, you'd reboot and it would come back up and be fine, leaving you with a vague thought of, "That was weird".
As others mentioned, most of these problems are caught when testing the chips. Most of the transistors on a chip are actually used for caching or RAM, and in those cases the chips have built in methods for disabling the portions of memory that are non-functional. I don't recall any instances of CPUs/firmware doing this dynamically, but I wouldn't be surprised if there are. A lot of chips have some self diagnostics.
Most ASICs also have extra transistors sprinkled around so they can bypass and fix errors in the manufacturing process. Making chips is like printing money where some percentage of your money is defective. It pays to try and fix them after printing.
Also, as someone who has ordered lots of parts there are many cases where you put a part into production and then find an abnormally high failure rate. I once did a few months of high temperature and vibration testing on our boards to try and discover these sorts of issues, and then you spend a bunch of time convincing the manufacturer that their parts are not meeting spec.
Fun times... thanks for the trip down memory lane.
The last time I worked with some hardware folks speccing a system-on-a-chip, they were modeling device lifetime versus clock speed.
"Hey software guys, if we reduce the clock rate by ten percent we get another three years out of the chip." Or somesuch, due to electromigration and other things, largely made worse by heat.
Since it was a gaming console, we wound up at some kind of compromise that involved guessing what the Competition would also be doing with their clock rate.
But the failure rate after initial burn-in is phenomenally low. They're solid state devices, after all, and the only moving parts are electrons.
So, simplicity and hard work by fab designers is 90+% of it. There's whole fields and processes dedicated to the rest.
Faults don't always manifest themselves as a binary pass/fail result; as chip temperatures increase, transistors that have faults will "misfire" more often. As long as this temperature is high enough, these lower-grade chips can be sold as lower-end processors that never in practice reach these temperatures.
I'm not aware of any redundancy units in current microprocessor offerings, but it would not surprise me; Intel did something of this nature with their 80386 line, but it was more of a labeling thing ("16 BIT S/W ONLY").
Solid state drives, on the other hand, are built around this protection; when a block fails after so many read/write cycles, the logic "TRIM"s that portion of the virtual disk, diminishing its capacity but keeping the rest of the device going.
As geometries fall, the effects of "wear" at the atomic level will go up.
This seems to be a nice overview of aging effects: http://spectrum.ieee.org/semiconductors/processors/transisto....
Yes, generally speaking it would be. Depending on where it is inside the chip.
> Wouldn't a single transistor failing mean the whole chip stops working? Or are there protections built-in so only performance is lost over time?
Not necessarily. It might be somewhere that never or rarely gets used, in which case the failure won't make the chip stop working. It might mean that you start seeing wrong values on a particular cache line, or that your branch prediction gets worse (if it's in the branch predictor) or that your floating point math doesn't work quite right anymore.
But most of the failures are either manufacturing errors meaning that the chip NEVER works right, or they're "infant mortality" meaning that the chip dies very soon after it's packaged up and tested. So if you test long enough, you can prevent this kind of problem from making it to customers.
Once the chip is verified to work at all, and it makes it through the infant mortality period, the lifetime is actually quite good. There are a few reasons:
1. there are no moving parts so traditional fatigue doesn't play a role
2. all "parts" (transisotrs) are encased in multiple layers of silicon dioxide so that you can lay the metal layers down
3. the whole silicon die is encased yet again in another package which protects the die from the atmosphere
4. even if it was exposed to the atmosphere, and the raw silicon oxidized, it would make silicon dioxide, which is a protective insulator
5. there is a degradation curve for the transistors, but the manufacturers generally don't push up against the limits too hard because it's fairly easy and cheap to underclock and the customer doesn't really know what they're missing
6. since most people don't stress their computers too egregiously this merely slows down the slide down the degradation curve as it's largely governed by temperature, and temperature is generated by a) higher voltage required for higher clock speed and b) more utilization of the CPU
Once you add all these up you're left with a system that's very, very robust. The failure rates are serious but only measured over decades. If you tried to keep a thousand modern CPUs running very hot for decades you'd be sorely disappointed in the failure rate. But for the few years that people use a computer and the relative low load that they place on them (as personal computers) they never have a big enough sample space to see failures. Hard drives and RAM fail far sooner, at least until SSDs start to mature.
That's why our boxen have power-on self tests.
Twenty years ago we learned that even censoring specific bad words in online chat rooms is impossible. http://articles.baltimoresun.com/1995-12-02/features/1995336...
I really don't want to see the psychological damage this could cause, even if possible (I do believe AR and augmentation and weak AI will enable versions of this). Filter bubbles are dangerous, limiting, and foster intolerance. I don't think we need any more intolerance.
Also, are we really at a point where we want to protect children from opinions?
Even if it works, it seems likely mostly to adversely affect the children's performance in school (and not just on the specific worldview issues -- the distraction and stress from all the incomplete thoughts would probably be a general drag on their attention and performance), and their ability to interact with those there.
A sci-fi story which mentions a device that adults use to control what children can see.
(remember that the charges for which he was put to death started with 'corrupting the youth of Athens').
This is both a horribly bad idea for a wide variety of reasons, and to top it off, it won't work, not even in principle.
There is always a better way, but you need to build your solution in small incremental steps. This means that you cannot solve anything until you have failed to solve the problem previously and identified the wrong answers leading to dead ends.
Also, sometimes issues are just complicated. I recently worked on debugging an image processing algorithm I implemented for a client. I spent 4 days following one path just to figure out it wasn't the right path at all. In that case the hostage feeling came from the fact it was highly parallel and extremely complex and it just took a lot of time. Even with breaking the steps down to the simplest denominator.
If you want to do it with technology and dodge politics, then you need to create an enterprise that generates 80 billion euro in monthly revenue above what is necessary to maintain the enterprise, and then sends out the money.
Good luck coming up with the technological solution which provides nearly a trillion euro in annual profits, and can continue doing so (and growing with population and inflation) while distributing all that to the people of Germany.
Actually, more, because you need funding for the money distribution part, too.
How to use technology to do this:
1. Make up a scheme where 70% get net positive income (make models, etc)
2. Make up persuasive material (interactive visualizations?)
3. Spread it
4. Harness the enthusiasm into actions
Ultimately use this to elect people who will pass it.
Doesn't basic income cause inflation? It ensures that everyone can have the basics, if they choose to use it that way, but will drive up prices on the basics because demand increases. Am I missing something?
In my experience, simply asking an experienced developer for help is enough and a great way to meet a dev with different skills is to get involved in their community.
Are there any iOS MeetUps in or near your area? Or perhaps a popular iOS IRC or Slack channel you can join to meet people remotely?
For most companies with complex environments, it might take 6 to 8 months to fully get you up to speed. Why should I put that into you only to have to do it again? Perhaps you're looking for jobs in the wrong part of tech, or corporate tech isn't for you? Have you considered trying to found a startup, one that you yourself are directly vested in? Perhaps that is your best bet for the future.
If an employer is particularly worried about employees staying for extended periods, your record will work against you and you won't get that job, all else equal. But that's not as bad as it sounds. Some employers value people with math degrees. Some prefer PhDs. Some don't. Some employers don't like autodidacts. Etc. A 1.5-year average employment period is in that category of preferences. Different employers will treat it differently. The same applies to your time as a freelancer.
Long term, 1-2 year stints early in your career are not usually seen as indicative of anything later on. It's common. So if it is not causing you problems now, it probably won't later on.
I don't know how it looks when you get to 10X1.5 year jobs though. It would certainly make you an unusual candidate. I've never hired someone with that much experience so never seen these CVs.
The real "problem" cases are people with multiple < 1 year jobs. If your last 3 jobs were under one year, most employers will see that as "The last 3 people that hired him regretted it." That doesn't sound like your record though so like I said, don't worry.
Also, just building your CV your whole life sounds like a drag. Staying at a job you dislike for years just to change your CV image is like taking a job you don't want or doing a degree you hate for your CV, it's unattractive as a lifestyle. If you like changing jobs, do it.
I use the term jumpy. It is a negative signal, but not a killer signal. You'll have to make up for that with numerous other positive signals, such as extreme technical competence, culture fit, evidence of shipping, etc.
I probably wouldn't point it out on a resume or whatnot, but when asked about it, be honest. Also consider being more picky about the jobs you take. Try to stick around your next place 2-3 years, or switch to contracting.
Changing jobs every 1-2 years just means you are ambitious. Keep it up! You might find your previous employers will hire you back at a contractor rate remotely in the future.
If you work on small projects with a 1 month ramp up time then it's not a big deal to leave after a year.
On the other hand I'm currently working on a million+ loc application and the typical ramp up time for a good dev is 6 months. I'm not going to hire somebody who will probably be gone in a year.
My take away was: it sounds very romantic to be in love with your job, always, but isn't very realistic. Being good at your job is realistic, but many people aren't good at their job. If you are, you stand out and can command a good salary, working hours, benefits, whatever is important to you. And every once in a while you should try to get an interesting project to keep things fresh.
So whose responsibility is it to get that occasional interesting project? I'd loosely say that's a 50/50 split between employee and employer. You can't just expect to get spoon-fed interesting projects. You have to look for them, and the company has to be in a relevant position to support that.
If you like research, and you're good at ramping up and learning new skills, that can be a good way of acquiring the occasional interesting project while getting better paid for it.
I guess that highlights for me the biggest problem: you've talked a little about what you've done, but nothing about what you want to do. That kinda matters. If you want to be an engineer, you need to prove that you can stick with something from concept to at least the first upgrade cycle (you'll learn more from an upgrade cycle than you will from shipping ten products and then walking away from them each time). That might be a year or it might be ten. If you want to do operations, you need to complete projects and then stick around long enough to learn from what you did. And in any case, hiring managers will want to see that you've shipped something, because that's the only way to be sure that your work was good enough to use. Repeated departure well before shipping (or completing an internal project, etc.) is a big red flag, much more so than the length of your tenure. And not staying in one place long enough to learn from past mistakes greatly reduces your value. Again, it's not the calendar time, it's what you did and learned.
> I had long term plans but found out the IT department is just kind of support and even to make simple changes decisions take weeks. On top of that I got an offer to be first in house employee of a company where I am expected to do everything now and manage as company grows.
No one would want that kind of job, with simple changes taking weeks. They should have made it clear that they basically do all maintenance. The new job offer sounds more challenging and full of opportunities. Staying at your current job sounds like a really bad idea.
Just be aware and look for more opportunities to do interesting projects at your new place.
For companies the question is whether you reach break-even - the point where you pay back the investment they made by hiring you and assigning somebody to you to introduce you to your tools and internal operations. Everything beyond that is pure benefit.
Corporations outsource coding jobs to strangers today, and they do well with it. I've never heard a hiring manager complain about that being "a problem case" when it came to investing in hired guns.
You might not hit it off with your colleagues though. Many want the safety. If somebody appears who represents the opposite lifestyle and shows everyone that life can be lived differently - which their manager might use for pressure once you are gone - things can get a little frosty.
4 companies in 6 years is nothing. I would be more wary of someone switching jobs every 2-3 months - that would be more than 6 times the number of companies you have worked for. You are fine.
Consider reading a book called "The First 90 Days: Critical Success Strategies for New Leaders at All Levels". Its contents helped me gear my interviews towards how and when I would add value to a new group. I believe that is the key to changing jobs - average time to positive ROI from the new group's perspective, not average time spent in a group.
You have some great recommendations in this thread, so thank you for asking!
Self-awareness is good step forward in managing your career. Be upfront with potential employers on what you've learned so far. And be prepared to address concerns they may have over your decision-quality, stick-to-itiveness, and maturity.
Relative to your next move(s), I suggest you create a scorecard - get clear about the types of environments & work you find appealing and intellectually challenging. You must probe for those things as you explore new opportunities. Put some serious thought into evaluating whether the next job is a strong match.
You have mostly pretty good reasons for the switches. The possible exception is the research career move ... didn't you know going in how little money you'd be making? Sounds a little flaky to give up on it for that reason. If I were interviewing you, I'd drill down on that one.
The trick is, would I even interview you or would I see the resume and think, hmm, I don't know? I try to be very thoughtful about that but I usually get a lot of applicants ... I think you should try to keep this new position for a while.
This is one of those classic cases where the cultural pressures and beliefs are wrong and ripe for ignoring. Especially for an engineer/hacker it's important to second-guess your societal perceptions and make decisions on more substantial foundations.
There will be some metros where no one cares; they're so short-handed and the tech market so hot, they won't care and will hire you. Other metros which are more sedate, with few companies and therefore over-saturated with techs and H1Bs will look at you and laugh.
It's not so much a problem for a senior engineer who can onboard relatively quickly.
But for junior engineers, I would be a lot more hesitant because the onboarding that the company invests in you is lost if there's a high chance you'll move on a few months later.
So in your case, yes, I would say that 1.5 years is controversial. For 6 years you should have changed jobs twice (maybe 3 times with a good explanation).
You have an increased validation in your hirability, as four different companies have thought you were good enough to give an offer to.
A lot of people use both: http://docs.vagrantup.com/v2/provisioning/docker.html
That's actually the only thing that got me to hold off on Docker the last 2 times I've evaluated it. I was able to get everything running for a 1 monolith + 7 microservice system that I work with but the local developer workflow felt very clunky even with Fig. That was 6 months ago and it's my understanding there have been a lot of improvements.
That project was for a Ruby team and there are so many Ruby based tools that make the local development workflow a smooth operation that shoehorning Docker in locally would have been a step back, so we held off on it.
It's an area that I think will see major improvement though. Heroku's even gotten in on it.
Which is really impressive to me. If anybody in the space can polish out the user experience, it's Heroku.
In terms of using docker, IMO it's the best development experience I've come across once you get everything set up. It can be confusing to get your workflow set up at first, and it seems like everyone does it a little differently, I'm hoping that best practices will standardize a bit as docker continues to mature.
I love having every part of an app (app code, split into a few microservices if you wish, postgres, redis, rabbitmq, etc.) completely isolated, and docker-compose is a great system for linking things together. I also currently don't have any puppet/chef/etc. code and love not having to maintain it. In my mind a large part of the need for configuration management tools is dealing with the complexity of diffing two arbitrary states of infrastructure, and with the immutable approach of docker containers all that complexity disappears.
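As a sketch of what that looks like in practice (service names, images, and ports here are illustrative, using the compose file format of the time):

    # Illustrative only: write a minimal docker-compose.yml for an
    # app + postgres + redis stack, then bring the whole thing up.
    cat > docker-compose.yml <<'EOF'
    web:
      build: .
      ports:
        - "8000:8000"
      links:
        - db
        - redis
    db:
      image: postgres:9.4
    redis:
      image: redis:2.8
    EOF
    docker-compose up -d   # everything starts, linked together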
Don't use any kind of provisioner on Vagrant, just a straight bootstrap.sh - honestly, I don't like them.
As a solo coder, I love vagrant - the whole nature that you can use a configuration file with a script or two to build out an entire VM has so many benefits. Less time to build the VM, easily destroy the entire VM, easily rebuild the entire VM, save drive space by destroying the VM when you don't need it, keep the VM configuration in a git repo, distribute the configuration to someone else to use, and the best is having all the steps used to configure the VM are documented in the config file and scripts.
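A minimal version of that setup might look like this (box name and script path are placeholders):

    # Sketch: a tiny Vagrantfile plus a shell provisioning script,
    # all kept in the project's git repo.
    cat > Vagrantfile <<'EOF'
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"              # example base box
      config.vm.provision "shell", path: "bootstrap.sh"
    end
    EOF
    vagrant up          # build the VM from the config
    vagrant destroy -f  # reclaim disk space; rebuild any time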
This is actually not the case. Although containers do not share any persistent volumes with the host by default, you can use the --volume option to do so.
To answer your question, I've used Docker for local development to run MySQL, Postgres, and Redis inside of containers. Using the aforementioned --volume option, you can share the unix socket opened by any of these services from the container to the host. Otherwise, you can use the -p (--publish) option to share ports between the container and the host.
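For example, running Postgres this way might look like this (container name, host path, and image tag are made up):

    # Run Postgres in a container, publish its port to the host, and
    # mount a host directory so the data survives container removal.
    docker run -d --name pg \
      --volume /srv/pgdata:/var/lib/postgresql/data \
      --publish 5432:5432 \
      postgres:9.4

    psql -h 127.0.0.1 -p 5432 -U postgres   # connect from the host as usual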
I've had a generally pleasant experience using Docker for this use case and would recommend it. It's nice being able to start using a new service by pulling and running an image. Similarly, it's nice to have the ability to clear the state by removing the container, assuming you choose not to mount volumes between the container and the host.
The only frustration I've run into is running out of disk because I have too many images, but it takes a while to get to that point and those can easily be deleted.
 https://docs.docker.com/reference/run/#volume-shared-filesys... https://docs.docker.com/reference/run/#expose-incoming-ports
Much faster than the Virtualbox provisioner, so it's not an "or" decision; the two things work well together :)
I also like to use it to create test deployments for debugging or evaluating things, for example it's a lot easier to run Hadoop in pseudo-distributed mode inside a Docker container with host networking, than it is to fiddle with running it in a VM and either getting NAT or DNS working just right, or installing it locally. With the Docker container, if anything goes awry, it's just so easy to get back to initial state by killing the container and starting again.
As for Vagrant, I like it a lot too, but for different reasons. You can define a set of actions that is a lot closer to installing whatever it is you are developing, instead of baking everything together like you do with Docker, which can be desirable. I have used it in the past for creating virtualized cluster environments for integration testing of distributed systems. I think so far I use the VirtualBox provider, but I'm thinking of re-working some of my past uses of it that don't strictly require a VM to use the Docker provider.
When I write code inside docker, I always push to a git repo like Bitbucket, so data persistence is easy. Besides, you can always use --volume, which works out of the box on Linux.
Vagrant requires some basic shared environment, which is not realistic in my case. For example, I use Archlinux myself and am forced to use old Scientific Linux at work, while many other FEniCS developers use Ubuntu, Fedora, or Mac stuff. It is too painful to write and maintain a Vagrant script for all of these (different compilers, boost, blas, lapack and 10+ other numerics-specific things). I even tried Vagrant+docker. But in the end, with docker maturing, I switched to docker+bash scripts instead. It is just more convenient and has fewer dependencies.
So I'd endorse a docker only approach if you mostly use Linux and your project has a diverse group of people.
Previously all devs had their own environment (some MAMP/WAMP, some homebrew, some remote, etc) which led to onboarding and support issues. Setting up a standardized recommended dev environment has helped with that a lot - both in terms of reducing project onboarding and getting junior developers up and running.
Would love a day where we can build projects as Docker containers and hand them off to our clients' IT teams, but that seems to be a way off.
SO thread where the authors of Vagrant and Docker weigh in: http://stackoverflow.com/questions/16647069/should-i-use-vag...
Needs a minimum of 42G ram, 150G disk space and fills its logs at 2G/h. Not great when you are running on a 256G SSD.
Building takes 2h+ with ~10% random failure rate due to dependency mirrors and timeouts.
The python code is deployed as gzipped virtualenvs to the hosts. This actually works pretty nicely, as it means you can't just import stuff and have to build things in a 12-factor style (we don't use ENV_VARS/stdout logging though).
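A rough sketch of that kind of deploy (host, paths, and app name are placeholders; note that virtualenvs generally need to be unpacked at the same path they were built for):

    # Build the app into a virtualenv, gzip it, ship it, unpack, run.
    virtualenv /opt/app/venv
    /opt/app/venv/bin/pip install -r requirements.txt
    tar -C /opt/app -czf app-venv.tar.gz venv
    scp app-venv.tar.gz deploy@apphost:/opt/app/
    ssh deploy@apphost 'cd /opt/app && tar xzf app-venv.tar.gz && ./venv/bin/python -m myapp'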
TBH I still don't really see the point of docker. I'm sure it will 'just click' at some point, but it hasn't happened yet.
I love the fact that once I configured the dev environment on my PC and hit the road the next day, I can have exactly the same environment on my laptop by running a single line - "vagrant up". Not to mention that any dev working on the same project saves himself a ton of time by not having to configure everything from scratch.
I have not taken the leap of faith yet and I am not using the docker in production but hopefully this will happen soon.
We then use our production docker image(s) with some more development appropriate configuration options. Vagrant mounts the user's home directory at /Users/<username>/ inside the CoreOS machines. Then we mount the appropriate folder inside the docker container at where the container would normally expect to find the app's code. This way the developers have live updates without having to rebuild the docker image or anything.
At my last job we used Docker extensively for developing our main software product, based on a Django + PostgreSQL + RabbitMQ + Celery stack. It's definitely a bit tricky to get your head around at first, but after that, it's very nice being able to just type "docker-compose start" and have a working application with consistent configuration ten seconds later.
The vm environment was also as close as possible to the production env, with the same os version, etc.
It also greatly streamlined onboarding of new devs. The dev environment setup was a couple of hours instead of a day or two.
It was significantly easier to tell my co-workers to install docker and type `make local` for local binaries and `make igep` to produce an igep armv7 binary by running a docker container.
I know most people here won't know about the Portuguese law in specific, but I'd be interested to know what options are usually available/recommended.
1. Yes, Google Analytics can be quite useless if you keep default settings with no configuration.
2. That doesn't mean you should jump straight to a self-hosted solution, or a paid solution, or throw up your hands and say "it'll never be accurate."
For most use cases, GA is more than good enough to measure effectiveness of online marketing efforts. Dismissing it outright in favor of a paid or self-hosted option just because you didn't google "how to prevent analytics hijacking" is bad decision-making.
Now on to the fix...
You can create a filter in your GA view settings to ignore tracking calls from any hostname other than your own. See here: https://support.google.com/analytics/answer/1033162?hl=en
PS - No client-side analytics will ever be 100% accurate, certainly not GA. But for the purposes of measuring marketing efforts and results, you can have greater tolerances. It's a tool for marketing, not logging.
See https://news.ycombinator.com/item?id=7477736 or https://news.ycombinator.com/item?id=8869880
I guess SEO people already know this; the question is: can you trust an SEO consultant?
GA is really not a product you want to trust your business with. Best approach is to consider self-hosted analytics solutions.
I built my own for my needs which also include combined features for security analytics to investigate malware attacks. GA is totally useless in this aspect.
You _could try_ to have custom JS that would gather those data points, e.g. screen resolution.
Also, with the increasing referrer spam and the new trend of ad-blocking plugins (they block GA too), Google Analytics has become less reliable than ever.
However, you can setup open source analytics software on your own server, like [Piwik](http://piwik.org/).
what blows my mind is that they aren't doing more to fight the referral / event tracking spam. it's totally out of control.
Screwed up a huge amount of our click tracking data on GA.
But if the comments are just an add-on with little SEO value, then sure, quick and easy.
If you're worried about "owning" the data/community, you could host your own discourse (http://www.discourse.org/) instance and use their embed feature (http://eviltrout.com/2014/01/22/embedding-discourse.html).
Embedding is sadly missing inline commenting, though.
On the other hand if you are just working on MVP, and comments are not your main value prop, and you just don't want to bother with implementing your own system - sure, whatever, go for it. There's nothing too horrible about disqus, it's fine.
+ Highly customizable
+ Loads of moderation options (whitelists, blacklists)
+ Active community
+ Easy to use admin tools
- Seems to load slowly sometimes
Also, the most compelling reason for using it is that you don't need to waste time coding your own commenting system. In the future if you decide to make a commenting system of your own, you can always export comments from disqus.
I don't like enabling semi-arbitrary JS from one place among all the sites I visit.
Fun side project that blew up with world-wide media attention last year. Intro video here: https://www.youtube.com/watch?v=O_2zr5EYbDk. On Jimmy Fallon tonight show here: https://www.youtube.com/watch?v=oTf7g59LQ_Y.
Working on other projects and not really interested in building out more functionality / monetizing.
Looking to sell all source code (Android + iOS), domain, US trademark on BroApp, email lists, etc. Facebook newsfeed cost-per-install is ~$0.21 with a lookalike audience built off our install base.
- I'm also looking for help with programming, equity only. Have programmers already working on it; however, there is a lot to do. Marketing & partners = pretty solid. Platform-based, semi-social network that benefits startups.
Anyone is welcome to email about this or just say 'hi' :) at firstname.lastname@example.org
Allows vacation rental hosts to recommend/sell travel activities to their guests. Commissions on sales are shared with the hosts.
For sale, or looking for a non-technical business/growth partner to work on it part-time with me.
A simple to-do list
Your biggest issues with that setup will be:
1. Credential and permissions management. Don't lose control of your access keys to AWS! Set up "MFA" at the very least. If you use your AWS account for other purposes, use IAM to ensure that other users cannot access the S3 bucket with the site in it.
2. Getting that green lock (i.e. HTTPS). You can pay $800/month or something insane to Cloudfront to get a custom TLS cert on their CDN, or you can get Cloudflare's "universal ssl" for $20.
3. DDoS is not really a concern. It would be nearly impossible to DDoS an HTML-only website hosted in an S3 bucket. I'd really like to see someone try. The only thing that could happen is you get charged a bit more that month while people are DDoS'ing you. But if you're behind a CDN like Cloudfront or Cloudflare (which can cache everything because it's plain HTML), then the impact would be reduced.
4. Your Registrar suddenly becomes a huge risk. Make sure you use a secure domain registrar (ie. NOT GODADDY!), that the registrar has a "Registrar-Lock" turned on for your domain, and that your account with them has 2FA. If you screw that up, then someone might be able to socially engineer the phone rep at the registrar to transfer the domain, change the nameservers, etc. This happens depressingly often.
I highly recommend setting up websites this way. It's fast, easy to maintain, and incredibly secure. We do this for the trailofbits.com website and we're very happy with it. Jekyll FTW.
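For reference, the basic setup with the AWS CLI is only a few commands (the bucket name is a placeholder, and you'll also need a bucket policy allowing public reads):

    # Build the Jekyll site and push it to an S3 bucket configured
    # for static website hosting.
    aws s3 mb s3://example-site-bucket
    aws s3 website s3://example-site-bucket \
      --index-document index.html --error-document 404.html
    jekyll build
    aws s3 sync _site/ s3://example-site-bucket --delete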
ps. don't forget you can use Github Pages too.
1. Gather info from whois DB, google search, site spidering, going to your house and looking through your trash.
2. Ring you up - Hello I'm Joe from the tax department/credit card company/bank we need to confirm your address .. give your address .. could I please confirm you are the credit card holder, I just need the last 4 digits
3. Ring your friends, family and business contacts - use smooth talking to gather as much info as possible.
4. Ring up Amazon - oh yes I am mister XXX, I forgot my password, please can you reset it. If they don't I'll try to guess information, and glean any info out of the replies.
5. Ring up your email provider and do the same
6. Keep on ringing about 8 hours apart to make sure I get different teams, so it's fresh each time, until I had enough info to get access to the account
7. Make sure to delete all backups
8. Deface to my heart's content - change all the passwords, blah blah
This is the info I'd try and gather:
* Name - probably from whois
* DOB - probably from public records search - or ringing friends
* Phone - probably from your trash or mailbox
* Last four credit card digits - probably will get from your trash, or tricking you on the phone
* Date of last payment - Probably from tricking Amazon
* Password bits - pet's name, girlfriend/wife/child names and ages, keylogger in an email I sent you
I recommend you watch periodically for your contents to pop up on random domains (you can google for your exact texts) and file your DMCA requests as soon as they appear.
It might help to use <base> tags (e.g. <base href="http://www.example.com/">), absolute URLs in links, and the like in all of your web pages, as well as to mention your domain both in textual content and in images (logo?) - that actively discourages "lazy duplicates" of your pages (but not copy/pasting your articles onto a different site by hand).
Just my two cents.
The only way to hack your site is to actually be at Amazon with access to whatever disk array stores your data. As in, I'm pretty sure "inside job" is the only route left.
There is no such thing as an unhackable system, only more and less difficult to hack systems.
Usually it relies on ownership of some third account, e.g. email. Okay, what's the recovery process for the email account? Receiving an SMS to a particular phone number? Okay, what's the recovery process for that phone number? What's the process to get the phone number redirected?
At some point you're going to end up being able to ring up a number and tell someone a name, address, date of birth etc. Best case you ring up and they say okay we'll mail you something. Or they make you come in person and sign.
Customers lose credentials constantly, and won't tolerate being told that this means their account is unrecoverable. So there is almost always another way.
But I think generally, if your site is not using one of the top 3 popular CMSes (WP, Drupal, Joomla), this will make 99.9% of attackers move on and scan for easier targets elsewhere.
SSH guessable passwords.
HTTP daemon vulnerabilities.
Any other daemons running.
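A quick way to audit that attack surface on a typical Linux box (a sketch; exact flags vary by distro):

    # See everything listening for connections and which process owns it.
    netstat -tlnp        # or "ss -tlnp" on newer systems
    # Kill SSH password guessing at the root: in /etc/ssh/sshd_config set
    #   PasswordAuthentication no
    #   PermitRootLogin no
    # then restart sshd. And eyeball what else is running:
    ps aux | less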