However, something like this needs to be tightly integrated, as it is on iOS, and apps need to handle not receiving requested permissions. I still use App Ops on KitKat and it silently breaks apps all the time. Apps expect to be able to request information (e.g. contact details) and crash or stall when they can't.
In that respect, LBE Privacy Guard was a much better alternative up to ICS. It was a privacy firewall that would pop up notifications when protected permissions were being used for the first time, and instead of blocking apps when a permission was denied, Privacy Guard would feed them blank data. This led to a better UX and stopped blocked apps from crashing.
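To make the difference concrete, here is a minimal sketch (plain Python, not Android code; the function names and data are invented for illustration) of the two denial strategies:

```python
def get_contacts_hard_deny(permission_granted):
    # App Ops-style hard denial: the request fails outright, and an app
    # that never expected an error crashes or stalls.
    if not permission_granted:
        raise PermissionError("READ_CONTACTS denied")
    return ["Alice", "Bob"]

def get_contacts_blank_data(permission_granted):
    # LBE-style denial: return empty data, so the app carries on as if
    # the user simply had no contacts.
    if not permission_granted:
        return []
    return ["Alice", "Bob"]
```

The blank-data approach trades a little honesty for robustness: the app can't tell it was denied, but it also can't crash over it.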
There needs to be an open source version of LBE Privacy Guard, since the current one hasn't been updated in over a year and is closed source from a Chinese company.
: DO NOT INSTALL, FORCES REBOOT LOOP
Oh, it would definitely break some apps, considering it would introduce new uncertainty. It's still an amazing feature.
If you are interested in doing this with Android, the autopatcher does this for you on a whole series of ROMs on all versions of Android from 2.x to 4.x, and has been doing so successfully for a while. There is a pretty active user community on XDA for the PDroid patchset as well.
I'm not sure I buy Google's excuse. Any app worth its salt should be able to cope with not having a permission set. I'm guessing this was impacting their own core apps.
Apps, on the other hand, would have an extra error case to deal with, but they should be dealing with other error cases anyways.
I think Google can make it work, but they need to be committed to it. Surely there's a way for the apps to gracefully transition to not needing certain permissions. Google just needs to introduce the proper rules for developers to make this work.
(Kind of loud. But be sure not to miss his awesome reaction at 4m20s.)
I have the original devkit, and it's amazing. You can even interface it with Google Street View. There's nothing like typing in "Eiffel Tower," tilting your head back, and staring up in awe.
In fact, I'd say no one here has experienced Street View until they've seen it with an Oculus. It simply cannot be described how incredible it is to look around with your head instead of dragging your mouse!
EDIT: Okay, if you're unhappy with that particular video, then this one might be more to your liking: http://www.youtube.com/watch?v=fl7fz__6B-4#t=15m30s
EDIT2: Wow, that Dreadhalls game is terrifying. You actually don't even need an Oculus to get the full effect, just headphones. https://developer.oculusvr.com/forums/viewtopic.php?f=51&t=3... (Windows / Mac)
Either you go huge and buy a stake in the winners at all costs, or you go wide and super early like YC. It does leave a large seed/Series A financing gap someone will need to close, and I suspect their returns won't be as stellar as those at the extreme ends of company financing.
I wonder if John Carmack wouldn't be better off just launching a competitor to Oculus. Wherever he goes the magic will follow, and it'd be nice if he was the ultimate boss like he was at id Software.
I'd just really hate for him to get bogged down in a bad environment, kind of the way Linus did with Transmeta, and be forced to resign at some point and start over after wasting years of productivity.
The A16Z guys can probably help avoid any massive stupidity, so that's a nice benefit to this investment.
Regardless, Oculus's concept is the future of gaming -- anything that can trick the brain so substantially is going to be a winner.
If you've got 10 minutes to thumb through this 20-minute video, it shows off what's possible with augmented reality: http://www.youtube.com/watch?v=Bc_TCLoH2CA If we can get that into a pair of sunglasses in the next 10-20 years, I'm pretty sure we'll be living in a wildly different world than we do today.
RIP Andrew Reisse (http://www.oculusvr.com/blog/andrew-reisse-in-memoriam/)
I believe this will have a ton of applications from VR conferencing to going to virtual concerts to even spectating sports with an isometric view of the field instead of the 2d projection.
inoremap jj <Esc>`^
Edit: After looking at the source, this seems to make use of emscripten to reuse Vim's existing source code. This really makes me wonder what other cool things we can bring over to the client side.
Absurdly wrong: marketers already use a unique image URL for each email recipient, and Google has no way to know that all of those point to the same image. So they won't see "a single request from Google"; they'll see one request from Google per successful delivery to an inbox.
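A minimal sketch of that standard technique (the domain and path are invented for illustration; real senders typically embed an opaque per-recipient ID, a content hash is just one way to derive one):

```python
import hashlib

def tracking_pixel_url(campaign_id: str, recipient: str) -> str:
    # Derive an opaque per-recipient token. The sender keeps a mapping
    # from token back to recipient, so a single fetch of this URL --
    # even via Google's proxy -- reveals exactly whose copy was loaded.
    token = hashlib.sha256(f"{campaign_id}:{recipient}".encode()).hexdigest()[:16]
    return f"https://img.example-sender.com/c/{campaign_id}/{token}.gif"
```

Every recipient gets a distinct URL, so the proxy fetching each one "once" still produces one signal per inbox.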
Now, an open question is whether Google will make that request when the email is actually opened, which would allow marketers to determine if and when the email was read by the user, or as soon as the email is received. The latter would enhance users' privacy at the cost of bandwidth for Google, but early tests indicate that they don't actually do that and instead wait for the user to click the email before making the request.
I'd like to add that there's no possibility the Gmail team is stupid enough to not have considered this. They must know full well what they're doing, and marketing this as a privacy enhancement when it's actually detrimental to privacy is willfully dishonest.
The most important part is at the end:
"In some cases, senders may be able to know whether an individual has opened a message with unique image links. As always, Gmail scans every message for suspicious content and if Gmail considers a sender or message potentially suspicious, images won't be displayed and you'll be asked whether you want to see the images."
So Google apparently does not see read receipts as a problem. The privacy and security protections are about preventing other information (like ip, browser headers, cookies) from leaking, rather than read notifications.
If you care about maintaining your privacy, I would recommend disabling the new functionality.
This isn't the privacy and common-sense win you think it is.
Remote address: 66.249.x.x [any google ip]
Referer: [not set]
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7 (via ggpht.com)
You wouldn't get the IP address like you would with conventional bugging, but you could still find out how many users read the mail and what time they did so.
I've seen a couple startups that were working on dynamic email marketing - they fed in the content as an image, e.g. a "one-day promotion", but would change the image content server-side for future email opens to reflect current details. I guess that this breaks that functionality.
This is a silly fear; all email clients already do this. You can't display raw, unmodified HTML from emails; you have to scrub it. They are just adding one new kind of scrubbing to the list of things they already must do.
On the other hand, if Google (either now or in the future, crucially) alters the behavior to be smart about pre-caching images, then e-mail marketing is screwed. It will likely make sense at Google to do this, since it will improve the user experience to have the images be pre-fetched to the proxy server before they open an e-mail.
In other words, e-mail marketing vis-à-vis Gmail is now in a Schrödinger's-cat-like situation. We can't know whether Google pre-caches images fully, partially, never will, or will in the future, so for all intents and purposes e-mail marketing data is both highly accurate and completely worthless at the same time :)
First they started filtering marketing messages into separate tabs, which I'm assuming dramatically cut readership. Now they're going to make it impossible to "bug" emails for read receipts. The only metric left is the "click".
Email marketing just became a whole lot less valuable.
Read Mailchimp's post (December 6th):
"Image caching still lowers our ability to track repeat opens, but turning those images on means we'll be more accurate when tracking unique opens. At least, theoretically it should work that way."
Penalize mass emails containing unique identifying image URLs for identical images.
Where identical means virtually identical.
They'll cache and own even more of your data and keep it out of the hands of spammers; in turn, spammers will have to buy into Google to get data about you.
This isn't for us, this was done to make money off of us.
That just isn't true of Gmail. The whole service is served over HTTPS and won't pass referrer information.
Not to mention your IP and whatever other information they feel like embedding in links will still be passed along when you click. So there's still some tracking going on, but they miss out on opened-without-action emails (which is of course useful information to marketers).
In order to maintain privacy, as has been well discussed, they would have to cache always and forever. So large images will definitely add up over time.
I also wonder: even if they have a persistent cache, you might still want to check the Last-Modified and ETag of the URI. I don't think many people embed dynamic images like this, and I'm not sure how most clients would handle it, but it's an interesting corner case.
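A proxy that wanted to handle that corner case could revalidate cheaply with conditional requests. A sketch of the header logic, assuming the cached response's headers are available as a plain dict (a 304 Not Modified reply would mean the cached image is still fresh):

```python
def revalidation_headers(cached_headers: dict) -> dict:
    # Build a conditional GET from the cached response's validators.
    # If the origin returns 304, the proxy keeps serving its copy;
    # if it returns 200, the image genuinely changed.
    conditional = {}
    if "ETag" in cached_headers:
        conditional["If-None-Match"] = cached_headers["ETag"]
    if "Last-Modified" in cached_headers:
        conditional["If-Modified-Since"] = cached_headers["Last-Modified"]
    return conditional
```

The privacy catch: even a 304 revalidation hits the sender's server, so the proxy would have to revalidate on its own schedule, not on the user's opens.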
Saying that the proxy is enough to require everyone to opt out of auto-images may be a bridge too far, especially when there are ways to register your domain so that inline images ARE automatically displayed, which IMO is what they should be encouraging.
Another way to approach this would be a UI widget that helped users actually understand the tracking info they would be giving up to the sender.
Still further, putting control in the hands of the sender would be a data tag on the IMG which told Google it should cache, and in exchange would result in wider image viewership. Tracking opens, actions, and conversions provides the most important metrics for improving copy; it's devious for a display-ad company to fuck with this on shaky privacy grounds. I guess at least they do provide an opt-out, which will be used by ~0.1% of users...
GMail serves all images from a datacenter in Mountain View, CA, so if your email's images were served from multiple datacenters or a CDN, there is a good chance they will load more slowly, depending on your caching headers. They optimize images on the fly, which may introduce more latency. Their optimizer doesn't take into account whether the optimized image is smaller than the original, so the image they serve is occasionally larger (and/or looks worse) than the original. The maximum image size seems to be about 10MB.
I'm not too keen on the idea of Gmail modifying the body of emails sent to me.
You send me a mail - you've no business being able to track if/when/how I open the envelope, unless I explicitly wish to inform you.
Surely this is the same technology Google themselves use more than anyone else to identify users?
I recognize that there are a couple potential downfalls to this thought:
1) The time/processing it takes to determine the md5 could be problematic on such a large scale.
2) I have no idea how easy it is to change an image to be unique for each user.
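A sketch of the grouping idea behind the "penalize unique URLs for identical images" suggestion above (function and threshold are hypothetical; note that caveat 2 is real: if the sender perturbs even one pixel per recipient, the digests no longer collide and this heuristic fails):

```python
import hashlib
from collections import defaultdict

def tracking_image_groups(fetched, min_urls=100):
    # Group fetched (url, bytes) pairs by the md5 of the image bytes.
    # A mailing where many distinct URLs resolve to byte-identical
    # content is probably using the URL as a per-recipient identifier.
    by_digest = defaultdict(set)
    for url, image_bytes in fetched:
        by_digest[hashlib.md5(image_bytes).hexdigest()].add(url)
    return {d: urls for d, urls in by_digest.items() if len(urls) >= min_urls}
```

At Gmail's scale the hashing cost in point 1 is real but bounded: the proxy already has the bytes in hand, and md5 over a typical tracking pixel is microseconds.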
It makes it super simple to enumerate valid email addresses.
Patch it to fetch the images for valid and invalid email addresses alike, then we'll talk.
(Specifically, if the goal is to monitor changes as they happen and the service can be assumed to be continually running.)
Using sha256 just to compute changes is probably overkill. Using md5 instead is almost certainly adequate and will be a good deal faster.
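A sketch of that md5-based change detection (the monitor class is hypothetical; also worth noting that CPUs with SHA extensions can narrow or even reverse md5's speed edge, so the "faster" claim is workload-dependent):

```python
import hashlib

class ChangeMonitor:
    """Detect content changes by fingerprint. md5 is fine here: we only
    need to notice that bytes changed, not resist deliberate collisions,
    which is exactly the case where a cheaper hash is adequate."""

    def __init__(self):
        self._seen = {}

    def changed(self, key: str, content: bytes) -> bool:
        digest = hashlib.md5(content).hexdigest()
        if self._seen.get(key) == digest:
            return False
        self._seen[key] = digest
        return True
```

For a continually running service this keeps only one small digest per monitored resource, rather than the content itself.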
Unfortunately, when I was there, there wasn't a lot of protection from the conflict created by your snippets. If you wrote "tried again to convince team X to let me fix their code; they brought up objections a, b, c. Still no clarity from them on what they really want," your snippet would make Team X look bad, and at some point it could come back to them that you were bitching about them in your snippets (even if you were doing what you were told to do). Then they would counter-attack in their snippets, and a little sub-surface ad hominem would go on for a while. Moderating your snippets made you look non-productive. It was an epic failure in leadership.
We've all heard about the stereotypical manager-perception of "effort invested will be proportional to results delivered!", like the article laments. But this analogy breaks down really quickly.
Let's use an example that's not from software to illustrate the point -- in fact, let's use the author's own example, laying bricks. Who would you rather hire?
-- Bricklayer A: Takes 2 weeks to finish a wall, stays late every day to finish the job, finally builds a brick wall of passable quality.
-- Bricklayer B: Finishes your wall in two hours because he realizes there's a better way to lay your bricks, and builds a wall of excellent quality.
I feel like most people, even managers of non-software folks, would rather hire B. At least, I certainly would.
So I think the premise of the article is wrong. There's something deeper going on, and dismissing it with "managers want to reward hard work, and creatives don't look like they're working hard" is an explanation that is "neat, simple, and wrong".
Once upon a time, early in my career, I spent half a day automating and error-proofing a 15 minute task. My boss, at the time, was slightly annoyed, and slightly amused. Fast forward two months from that point, and that same task needs to be done not once, but several thousand times. My boss wanted to go get coffee before we got started, so while he was grabbing his wallet, I fired up my automated process. By the time we got back, the task was done, and the boss was annoyed at himself for being annoyed earlier, and never again questioned why I built the systems I did. I, unfortunately, was too young and too inexperienced to realize that he was an excellent boss, and left a couple years later for "greener pastures", where I encountered many "productivity is measured entirely by the time you sit at your desk" bosses.
TL;DR: I work hard to look this lazy, had a boss that recognized it, and was too dumb to realize how good I had it.
I'm by no means an incredible engineer, but I've spent a couple hours a day over the past few months building basic Ruby/RoR apps, and it makes an enormous difference. Of course, I still have a long way to go before I'm able to really estimate anything, but if nothing else I understand why my co-founder would just sit there for hours with a whiteboard and model data. I know that "burnout" is a real thing, especially when you're not overcoming little bugs you've had for a long time. I know the value of getting a second pair of eyeballs on some code. etc.
It's just impossible to understand what you're missing when you don't understand the technical aspects of what you're trying to do.
I figured out the system mostly independently, because the senior developers were always under fire due to the over-large workload for our team. So while other interns would need help for an hour or two every few days, I'd ask something about the data type of a certain value in a function and be on my way. My mentor was especially impressed, but he was the only one because nobody else saw how much code I was writing or its quality. They would just see me getting up to pace about when the problem was very difficult, or opening up hn when I needed to give my brain a break.
Long story short, my manager said that they weren't going to hire me full time, which came as a shock because I thought that producing quality code to the deadline was the important part. He cited early mistakes in writing documentation (I'd never written a functional spec before the internship, but I posit what I eventually produced was of the highest quality) and my perceived lack of motivation. This despite me working as well and energetically as I ever had. I decided to look at it through a lens of another article I'd read on HN: Any company that doesn't want to hire me is a company I probably don't want to work for.
So while I'm dismayed that this pattern will continue, it's nice to know that there are some people who know the difference between a good programmer and one who puts in overlong hours.
I would submit that this applies to many "professional" occupations. A lawyer has to produce a draft of a brief or of a contract in time to meet a deadline set by the client's schedule, so the work output just has to get done. But the draft brief or draft contract works best to meet the client's needs if it is developed after thought, and being just-plain-busy isn't the same as thinking. "There is no expedient to which a man will not go to avoid the labor of thinking." So, yes, whether you are a manager of programmers, or a manager of lawyers, or a manager of physicians, or of any other kind of professional, you have to know the professional's craft well enough to know what good work looks like, and you have to make sure that the professional has time to think, and usually time to confer with colleagues to get a reality check on the thinking. (The article kindly submitted here gives the example of the author learning from the original coder of a body of code how the code worked. The code was elegant, but it wasn't easy to see the elegance at first glance.)
If the people who put out fires consistently get the best rewards, don't be surprised when a lot of things catch on fire.
The challenge is that you want to reward the most skilled people. However, "skill" isn't easily visible to some managers. Instead, they observe that:
effort = difficulty / skill
If you're going to use effort as a metric for skill, you must have a measure of the intrinsic difficulty of the task. If you can't directly measure your employees' skill or the difficulty of the tasks you've given them, you've got real problems.
Customers/managers value the perception of hard work, even when the result is measurably worse for them. In fact, we all do this.
A good example of one aspect of this is LinkedIn's technical career track. Beyond sole contributor status, you can be promoted to either a Manager or a Staff Engineer. Managers have direct reports, while Staff Engineers lead teams with a technical responsibility, each with their own progression: Manager < Sr Manager < Director < Sr Director < VP < Sr VP and Staff Engineer < Sr Staff < Principal < Distinguished < Fellow. The Engineering track is more of an accreditation system focusing on freedom as opposed to control (control of things vs people).
Most Silicon Valley companies struggle with retention, because they internally cannot simultaneously offer a "fair" career progression/compensation and compete with the raw efficient risk/reward of the free market (e.g., poaching). In other words, you can't promote engineers with potential over engineers with good track records without consequently discouraging hard work. On the other hand, you absolutely don't want engineers with potential to leave because of an inability to offer opportunity equal to what they can find elsewhere.
Companies will ultimately need to offer choice to engineers: higher vs lower risk/reward. One possibility is a commission based engineering track. Engineers would choose between the stability of traditional progression/salary and a higher reward commission based system with a small base salary. The commission system would require results-based quantitative systems like OKRs (Objectives and Key Results) to quantify both the value of the proposal to the company beforehand, and a quantitative measurement of its success. Switching between tracks could occur each quarter allowing engineers to tap temporary motivation increases. Mixing and matching traditional and commissioned engineers would result in unique balances of high motivation and stability (since the team lead is not their manager), which would highly align with stable vs high risk/growing aspects/departments of the company.
The increasing competition faced from engineers starting their own companies will require companies to adapt new compensation models to more efficiently allocate resources (e.g., autonomy, cash, responsibility, recognition, etc...). I think a commission based system holds a lot of promise for those of greater capability.
In IT departments, the ones that seem crazy busy and are dealing with all sorts of fires can hint at a deeper issue of low technical ceilings. The IT departments where things seem calm, relaxed, and under control can hint at a higher technical ceiling.
Similarly, in programming, there are those who are solution-based thinkers (they assume they understand what the problem is, and off they go coding) and are easily caught in a perpetual cycle of refactoring. There are also developers who take the time to learn the problem from the ground up, by doing the actual work and finding the things that the user is seeking to solve but can't express.
This industrial age thinking of being at a desk for x hours or looking busy is an incredibly poor measuring stick for what is an abstract, and creative pursuit. I find my time away from the keyboard is as important as the time I spend at a keyboard solving problems that need to be thought about.
Going for a walk, or helping someone else with their problem before my own, seems to trigger creativity in interesting ways -- solving a problem elegantly, in a way that requires deep understanding but little programming wizardry.
My favourite part of this article was the author being open to, and appreciating the value of, learning solid processes from someone with more experience -- someone who has had a relationship with a code base for more than 1-3 years. There are too many times where problems are trivialized, and lessons are re-learnt over and over by new team members, experienced or not.
In the end, I read a quote about the kind of software development environment that I always want to help enable: avoid wizardry in code, frameworks, and technology, and instead focus the creativity and magic on architectural solutions that achieve the same result with less complexity, in both the short term and the long run.
A straightforward, brute-force approach sometimes works for easy tasks, it seems, while to come up with an optimization or an innovative idea one must master the hows and whys, which takes a long time. But in the long run, of course, those who invest in understanding how things work and why will ultimately win.
The classic story is that people with understanding and knowledge of the hows and whys, so-called "mappers", can even design and implement a whole new language based on proper concepts, while so-called "packers" keep struggling and sweating with their packing. Ever heard of Scala?)
Another nice point is about [re-]structuring the code. Keeping functions small (and interfaces standard and simple) pays off in any language, but to know that, one needs to learn the fundamental ideas of CS, not "recipes" from some crappy book whose title ends with "in Java".
Putting it another way, studying classic CS (algorithms, data structures, programming paradigms, and language design) takes a long time, but eventually you will outperform the "hard working guys" because you will waste much less time on ignorant guesswork, searching in the dark and repeating all the naive old errors.
Only someone with knowledge can afford to be lazy, since knowledge is power. It is not actually laziness; it is efficiency of movement.) Choosing the appropriate paradigm and the right data structure (with its corresponding algorithm) can save man-months, if not man-years.
This is an issue I struggle with on a regular basis, as we outsource a lot of our development work. We work on a project basis, but work very closely with our partners to help them make good estimates, maximize capacity, and minimize their (and thus our) risk. I don't want people to work overtime, nor do I want them under-committing.
My experience is that watching someone's commits, even from far away, is actually a damn good indicator as to how capacity is being managed on a project. Every project is different, but coders code: if they're not committing frequently and regularly, it's time to check in.
Human beings normally don't have flat productivity, so activity spikes are normal. But low activity in a period means there is not enough work and/or there will be a large spike later, which for me means risk.
There is one sure-fire antidote to this: data. And the best data is a precise project plan, with every detail, always up to date.
I lost track long ago of how many times someone "important" accused me or someone on my team of "not working hard enough" or "not appearing to be doing enough". My response is always the same: whip a current hard copy of the project plan out of my back pocket, throw it onto the desk, and ask the same question: "OK, exactly what is the problem?" This shuts them up every time. They may not like it, but they always STFU.
"Bill not moving fast enough for you? He's responsible for ORP560. 2 weeks ahead of schedule."
"Sue not here enough? She's in Wichita this week. Just implemented RESERVES. Already saved us $200,000."
"Don't like how Gary talks to the customer? Here are 14 examples (right here on the page) where the customer was wrong and Gary straightened them out. How would you suggest he handles it?"
I can go on and on. Many of us skinny introverted geeks have had to find a way to counter the slings and arrows of ugly decision-making by perception with a hacker's solution: maintain the data to counter the perception. It's the best thing that I've ever found that actually works.
I guess the question should be, Are your programmers working primarily to serve your need as a business or are they devoting their effort and time on work that is more aligned with their own ideology (choice of technology and approach).
A few months back I was in Philly talking to a startup founder whose company was at death's door because he kept hiring developers who came in and replaced whatever the last person did because it wasn't what they preferred. This phenomenon is a much bigger problem than the question of whether programmers are working hard or not.
I think this is really the key sentence. It certainly isn't true in all cases, but it tends to be true more often than you'd think. If you see a lot of people struggling to work really hard... it's a big warning sign that things aren't set up right.
In these organizations the visibility of a team is at least as important as what they do. Managers at these places typically are more interested in playing the political games than attempting to do the "hard" work of managing. One component of those games is to ensure that these managers' managers can see their teams working, in the tangible, physical sense. The more "work" being done, the better.
Managers at these places are awful to work for.
So whenever the boss popped in, he saw me leaning back looking at my scripts doing all my work for me. I was usually the first one to be finished with a new task. I reduced compile time from many hours down to 30 minutes (which all the other programmers were extremely happy with). But all my non-programming boss saw was me leaning back or goofing around, rather than typing really hard (which is what programming is supposed to look like, right?).
Looking around for a new job at the moment, it seems hard to convince people that I am worth double what a junior developer will cost. Half of the interviews seem to have the programming language trivia questions. Even if they did ask me more high-level design questions, it's not the way I work. When I have to design a database, I don't do it in 20 minutes. I usually come up with two or three alternative designs, with pros and cons. I probably look unproductive staring into space, but I take a day or two for big decisions, and mull over the possible scenarios and which design is better. My code is better than it was a few years back.
productivity = effort * efficiency
1) Some people hate mundane work and crave intellectual stimulation. This has both positive and negative effects on productivity.
2) Clever people can afford to be lazy in school and they simply stay lazy.
You were given a quota to hit each week. One week I closed a big account and hit my quota for the week by Monday.
When it came time to review the weekly sales numbers, I got grilled by the sales manager for not closing double the quota, since I had closed the week's quota on Monday and had the rest of the week to continue selling.
I was taken aback by this and simply asked: where does that end, then? If I closed a full year's quota in a single day, does that mean I should produce the same every day? With a capped commission, why would I do that when another rep is producing a tenth of the amount but making the same pay?
It's not about hours worked, it's about production.
Edit: To be fair, my comment relates more to the title than to the body of the post.
It's a little sad that pushing for "visibility" is so important, especially in large corporations, but it helps to put yourself in the boss's shoes - how will he/she know you're doing good work unless you blow your own trumpet?
Arguments that a good boss "should just know" certainly have merit, but it's up to each of us to determine whether our own boss fits in that category, and to emphasize our accomplishments accordingly if we're to be "performance reviewed" with bonus etc. depending on that perception.
The pieces of the puzzle that work to set the balance are when developers are freelancers who can walk away from undesirable situations. It also helps to catalog and share experiences to "warn" other developers of potential problem companies.
The optimal situation, I've found, is working remote when the developer is well-motivated, is being compensated based on weekly milestones they help create, requirements are based on real and realistic business assumptions, and bonuses are paid weekly for meeting the milestone with an extra bonus for exceeding the milestone for that week.
This is how we work at Poppup. Everyone seems happy, and when we have had developers who were falling behind regularly, it just meant that they were not the right fit at that time, and both parties could move on soon after the discovery, having learned something valuable in the process. No hard feelings.
By the way, this last bit is an advertisement for this new way of working. Poppup is looking to hire a remote rails developer right now: http://careers.stackoverflow.com/jobs/45288/rails-developer-...
Because I wasn't seen to be busting my gut on Sunday/Monday with the others, my managers said I wasn't pulling my weight and therefore scored me very low.
If getting high scores means working Sunday/Monday on a long weekend, I don't want high scores.
As just one example: Ideally a person creating great code would benefit from it over the longer term, he wouldn't just put his heart and soul into something that gets handed over to someone who doesn't really appreciate it. And under that approach, he wouldn't stop working just because his assigned task was finished, he'd work on other things, since those too would reap him benefits.
In the systems I've architected, well-structured code, proper tests, good docs, operational goodness -- all that came about through spending time and energy understanding how to do that well. It's come through watching others, learning how others have solved the same problem I've solved, trying new things, keeping my skills current, returning to sharpen skills that may have dulled. I just happen to do it outside the guise of a living, breathing, production system.
I know the article implies this as well, but I just wanted to highlight it in depth. What the article attributes to laziness, I ascribe to competent and responsible engineering.
We took a service-oriented approach to building the software, so in the early days, my boss thought that no progress was being made because he couldn't "see" anything. He also wanted me to work between 40 and 60 hours a week, which I was doing until my overtime pay was taken away for "equity."
I've gone off on a bit of a tangent here, but I'm of the opinion that non-technical founders should learn a thing or two about architecture and the current state of web/mobile development before hiring resources. Otherwise, we end up with silly analogies where people think programming is like typing up a short story in MS Word.
says Elon Musk
> Conversely, what about the guy sitting in the corner who works 9 to 5, and seems to spend a lot of time reading the internet? Is he just very proficient at writing stable reliable code, or is his job just easier than everyone else's? To the casual observer, the first chap is working really hard, the second one isn't. Hard work is good, laziness is bad, surely?
If the brilliant developer is spending a lot of time reading the internet, never staying late, and producing the same output (minus bugs) as the weaker team, then maybe he's not providing quite as much comparative value as he thinks he is.
Note: Some discussion on here regarding sys admins. I agree the situation is different in the maintain-working-systems roles vs. the create-new-products roles.
Do you think that if you worked nights and weekends (say another 20 hours a week) your total productivity would be diminished vs. your normal week because those extra hours aren't productive?
To keep the patients busy, they would have 2 groups: One group would be responsible for unloading something from the back of a truck while another group would be responsible for loading it back into the front of the truck.
All of the people appeared to be working.
Boy oh boy, if I had a quarter every time I heard that, I could probably get a tall Starbucks latte. Many, many people are under the assumption that fiction is stuff that somebody made up and hence useless, while non-fiction gives you information about the world; so if you're a busy person, read non-fiction (it's, of course, debatable whether such a neat classification can even be made). What these people do not realize is that great fiction can provide more information about the world, humanity in general, and, what's even more important, yourself, than you can ever glimpse by reading another Gladwell book.
* So Good They Can't Ignore You (Newport). If you've ever daydreamed about how much better your life would be if you were only working at that cool company, you should seriously read this book. By not focusing on just getting a cool job, but instead doing deliberate practice and being "so good they can't ignore you" you can increase aspects of any job that are scientifically-proven to make you happier (control, autonomy, and expertise).
* The Making of a Chef (Ruhlman). If you have any interest in cooking (even if you just watch Top Chef) you will like this book. A major theme of the book is discipline, which aligns well with software: code quality, good design, maintaining a test suite; all of these things are signs of a true craftsman, but they are easy to shrug off without discipline.
* Are Your Lights On? (Gause, Weinberg). I would recommend it to anyone dealing with arguments about solutions or confusion about problems, especially when non-technical folks are involved. The biggest win for me was making a mental shift from "Problem Solver" to "Solver of Problems", which allows us to focus on finding who is impacted by a problem and identifying the real story behind the problem.
I'm currently reading Punished By Rewards (only halfway through) but it is pretty interesting so far and will probably make the cut on my final list :)
I do this for two reasons:
1. It's fun.
2. In fiction authors have the liberty to go dip shit crazy with their possibilities. They can even change the inertia of the universe they are in.
I also think reading these kinds of books helps me think outside the box and often helps me get a fresh view on problems I'm working on.
It also helps me relax my mind while reading, which I feel is a prerequisite for it to work properly.
PS: here are some of the works I particularly enjoyed lately:
Howey, Hugh: Wool (and the entire Silo saga, very good read); Sanderson, Brandon: Mistborn: The Final Empire and the sequels (Sanderson is a genius when it comes to creating consistent universes); Phillips, Richard: The Second Ship (trilogy, contemporary augmented with alien technology); Corey, James S. A.: Leviathan Wakes (Expanse series)
2. Why Zebras Don't Get Ulcers [ http://amzn.to/1kFszdH ] - great book on stress and its effects by Robert Sapolsky (have you seen his lectures on behavioural biology? Fascinating stuff, even if you always thought 'meh, biology' - the guy is an amazing lecturer)
I almost drove off the road listening to him talking about bonds vs stocks and percentages. It was interesting, but definitely not a road trip book.
That said, I finished Snow Crash on the trip, and it was a great book. I think I preferred Cryptonomicon a bit more, but some of the theories in Snow Crash were incredible, and way before their time. Hiro Protagonist and YT are also awesome characters.
Who owns the future - Jaron Lanier
Canoeing the Congo - Phil Harwood
Margaret Thatcher - The Authorized Biography
Venture Deals - Brad Feld / Jason Mendelson
1000 Ultimate Experiences - Lonely Planet
- Jules Michelet, History of France (pg suggested somewhere to read books about history. I now fully agree, it is a way to get the best possible understanding of today's world. Just as we have to understand how a cell has grown from nothing to its current state to really understand what it is, we also have to understand how a country has been built over its long history to understand its current issues)
- Daniel Kahneman, Thinking, Fast and Slow (read it again).
"Bill, you are really inspiring me to increase my concern for the critical issues of economy and technology. You are a gift to the world. thank u."
Wow, Gates has really cleaned up his public perception.
Wait. Is he saying that sometimes he does choose what's on the bestseller list?
The Wide Lens - Ron Adner (a more complete view for developing projects, foreseeing the inevitable problems beyond your immediate reach)
Hardboiled Wonderland and the End of the World - Haruki Murakami (just awesome)
Snow Crash - Neal Stephenson (tomorrow's world, written the day before yesterday)
And here's all the 40-something book I've read this year: https://www.goodreads.com/user_challenges/562634
Malcolm Gladwell may be too popular to be 'cool', but he's a master storyteller. Haters gonna hate.
Can any technical-types here explain if they found it particularly impressive? It's well written but I didn't think it very notable. Do I need to read it with a specific mindset?
Non-fiction, agreed. Especially biographies of successful people who made it from scratch after a lot of struggling, like Mr. Gates himself.
"This web site is oriented toward contents, rather than special effects, bells and whistles. This should help in quicker loading and easier reading."
Then, he started digging ditches and welding. Then, he started winding transformers.
That man is no joke.
I think it takes a particular trait to be able to acquire enough money to be able to stop all that and get away and play rather than a) not making enough to retire early or b) never being able to give up striving for more.
I'd love to build something like this in WA (to go along with a grid connection, solar PV, solar hot water, and wind). With an artificial pond and pumping, it might even be a good energy storage solution, although finding a place with a natural stream would be better (and mountainside land is cheaper, generally, since only a small fraction is buildable.)
It looks like this is in southern Chile, though, maybe the solar incidence so far from the equator doesn't allow it? I wonder how reliable the source for the stream is.
It might be a fine solution given the situation, but this is about hacking the place you live, not so much saving the world; an environmentally friendly solution it is not.
But not everything has to be about saving the world.
What's the geography of the site like? How much head have you got? What was the design power compared to the actual system?
/me wants to meet him.
* Loading images is now enabled by default rather than disabled by default, meaning that a larger portion of emails will be tracked, because it's more likely tracking images will be loaded.
* Images are now loaded through a proxy, which means that all tracking images will no longer provide information like cookies, IP associated with the account, etc - the only information they'll provide is "this specific email was viewed by someone, somewhere."
There is still an option to disable loading images by default. Toggling that option still results in images being loaded through a proxy, so the second item above still applies.
As far as privacy goes, the potential level of privacy has increased (the proxy now allows you to load images if you desire without leaking IP etc.). The average level of privacy from the change is a mixed bag - more basic tracking (open tracking) will occur due to the change of default, but with the trade off that more advanced tracking (e.g. tracking IPs, setting cookies for correlation with non-email site visits a.k.a. remarketing) will no longer be possible.
There is no net change to how hard it is to verify whether an address is a valid GMail address - that's already possible by simply talking to a Google mail server.
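The mechanism being described (an anonymizing image proxy) can be sketched in a few lines. This is purely illustrative; the proxy hostname and URL format are made up, not Gmail's actual endpoint:

```python
import re

# Hypothetical proxy host -- Gmail's real proxy URL format is different.
PROXY = "https://mail-proxy.example.com/fetch?url="

def proxy_images(html):
    """Rewrite <img src="..."> so images load via an anonymizing proxy.

    The sender's tracking server then sees only the proxy's IP address
    and no cookies; the only signal left is "this image URL was fetched
    by someone, somewhere."
    """
    return re.sub(
        r'(<img\b[^>]*\bsrc=")(http[^"]+)(")',
        lambda m: m.group(1) + PROXY + m.group(2) + m.group(3),
        html,
    )

body = '<p>Hi!</p><img src="http://tracker.example.net/pixel.gif?id=42">'
print(proxy_images(body))
```

(A real implementation would also URL-encode the original address and cache the fetched image, but the privacy trade-off is the same: the open event leaks, the IP and cookies don't.)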
It's probably an improvement, but not all the way there. For actual privacy, what GMail needs to do (and I realize this is slightly unfeasible due to the amount of email they receive) is instantly open and cache every single email to every single email address (including non-existent addresses).
If so, then anyone can include an invisible image and always know when I open the email, whereas before they had no way of doing this.
Nothing stops Gmail from doing the loading on their side (to hide UA, IP, etc.) but only when you ask for it.
What's Google's motivation for this? Do they do emails that need to be tracked? Are they doing this for themselves to avoid having to special-case their own emails?
Wouldn't it be better to work on a standardized way to embed images in email, so that recipients can get nicely-rendered emails without exposing themselves to action tracking?
It's been pretty clear for some time that codon choice is not random (hence codon optimization is useful when taking a gene from one organism to another), and over the last few years it's also been clear that codon bias can be evolutionarily constrained. For a while, it was mostly thought to be based on tRNA levels (basically anticodons). However, it's been increasingly clear that there are other constraints on protein coding sequences than just the decoded protein sequence (aka the genetic code).
For example, we published a paper a few weeks back showing that in bacteria (and probably higher organisms), the N-terminus of genes has a lot of rare codons, and this is due to other constraints such as relieving mRNA structure to allow better translation of proteins. I think in the coming years we will find that other regulatory elements also shape this code, including sequences that control splicing, small RNAs, mRNA degradation and transport, et cetera.
Anyways, it's a pretty fun time in biology. The tools we have now make studies that were ridiculously impossible just a few years ago a reality for an individual lab. I can't wait to see what the next few years bring.
EDIT: Since my comment has hijacked the most useful comment linking to the original study, I'll link to the comment here:
AFTER EDIT: I heard back from one of my local geneticist friends, a mathematician turned psychologist by higher education who largely does statistical analysis as part of a team of researchers on behavior genetics. He writes, from the perspective of behavior genetics research, "That is fascinating, but if duons are also tagged by SNPs, and especially if they are in the exomic DNA, we've already been studying them and finding very little. In other words, this is huge for molecular genetics and physiology, but I'm not so sure it changes what we do in genotype-phenotype association research." So I take that to say that this could be quite a big deal for molecular genetics and physiology, if this finding is confirmed in follow-up research.
That's nice, that's how science progresses.
Now, let's go to how business works. What about all those "scientists" (nay, engineers doing whatever for a payroll) and all those corporations, cocky and convinced that they had all the knowledge they needed, working as amateur magicians with DNA to create synthetic food, organs, new drugs, etc?
They essentially said "fuck" to a better and timely scientific understanding, in order to greedily rush to market some half baked results -- and consequences (medical or otherwise) be damned.
The brain is no exception. I think musicians, for example, are better at math because at the heart of music and at the heart of math, many of the same brain circuits are involved. It is more efficient to have one copy of these circuits and just apply them to both music and math, rather than having two copies of nearly identical circuits. Proteins have more than one use in our bodies, because if each only had only one use we would need an inefficient number of them, possibly more than what exists. Same for neurotransmitters. When nature is limited by constraints it usually finds a way to fold into a higher dimension around that constraint (such as the neocortex wrinkling to increase surface area, or DNA containing information on more than a single level, or even how grass grows over a fallen log instead of "choosing" to just grow somewhere else, or if you are into String Theory, how the universe/multiverse has folded into 11 dimensions, possibly because that was the most efficient way for our universe to exist.)
It reminds me of a river when it is initially forming down the side of a mountain. The water takes the most efficient path at any given instance from the top of the mountain to the bottom. The process is like a greedy algorithm. The water cannot foresee where it will end up, it just flows. The water won't always find the most efficient solution, but give it enough time and will find an efficient-enough solution.
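The river analogy really is a greedy algorithm. A toy sketch (the grid and function are made up for illustration): at each step the "water" moves to the lowest neighbouring cell, with no lookahead, and stops at a local minimum:

```python
def greedy_descent(heights, start):
    """Follow the steepest local drop on a height grid, like water on a
    mountainside: no lookahead, stop at any local minimum."""
    rows, cols = len(heights), len(heights[0])
    r, c = start
    path = [(r, c)]
    while True:
        # Look at the 4-connected neighbours only -- a purely local choice.
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols]
        best = min(neighbours, key=lambda p: heights[p[0]][p[1]])
        if heights[best[0]][best[1]] >= heights[r][c]:
            # Local minimum reached: "efficient enough", not necessarily
            # the globally best path.
            return path
        r, c = best
        path.append((r, c))

grid = [[9, 8, 7],
        [8, 5, 6],
        [7, 4, 1]]
print(greedy_descent(grid, (0, 0)))
```

Like the river, the path found depends entirely on local slope; a deeper valley two ridges over is invisible to the algorithm.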
$20 for one day access! Fucking obscene.
I should join AAAS, it is only $50 for the year, but I can't swing it just now.
While I found the article very interesting as a new postgraduate student, it seems like such a straightforward deduction that I find it difficult to believe no one has ever put it into words until now.
I ask, because insofar as I'm aware, the moment geneticists parsed the human genome was the moment it occurred to them that most (90%?) of the information was missing. (As in, information to make the remaining proteins that we build in our body.)
I vaguely recollect 2007 being the year where they discovered and/or described how RNA can influence how DNA is read, turning bits and pieces of information on and off to use the same code to build new proteins. And another team the same year described how injecting RNA could allow existing DNA to potentially build proteins that aren't synthesized in the body.
So... How is this different exactly? :-|
So genes only code how to make proteins? That's IT? What about all the other stuff like what you look like, what diseases you may or may not get, some special functions of your body, your biological strengths and weaknesses, etc etc. Or defining protein generation actually defines all of that? (that would be so fascinating).
And 90% of the DNA is labeled as "non-coding"?! Seriously? Well, it can't be "junk", can it? Or maybe it's like a long bitcoin chain with only the latest commit relevant....
What strikes me as an interesting question (I hope someone of you can answer) is: How long will it be, before physicians/gene specialists/biologists/etc. will use the new way of DNA interpretation in cases for regular people? How long before this discovery can be used in mainstream medicine?
My (uneducated) guess is: there has to be a huge reevaluation of the accumulated data (and assumptions) about gene connections with diseases.
Up Up Down Down Left Right Left Right B A
Actually there is a nice, cheap service called ezautoscaling.com - looks like a hacker side project but supports the full API including schedules, which I think are missing from the official AWS offering.
Now I just wish AWS management console itself got some TLC.....it can be pretty difficult and cumbersome to use sometimes. Even products for fellow devs should have beautiful UIs.
That isn't because it is more expensive, it is because the estimate on the original plan was wrong.
The old peak funding requirement on the original plan was $43.6 billion with 3.5 million households passed by June 2016. There was supposed to be an update of those figures published this July but they sat on the report until after the election. We now know that those figures are $73 billion peak funding and 1.7 million residences passed.
> The Coalition's NBN is a joke. It will not arrive faster, cheaper, or better.
It will be faster (as in, rolled out faster) and cheaper. Despite the political vitriol surrounding this topic in Australia, you can't bend the technical and economic reality that rolling out a fiber to the node solution (FTTN) is both cheaper and faster than rolling out fiber to the premises (FTTP) with existing households:
You roll out fiber to the home where it is more economical (dense areas, new developments), fiber to the curb where it is more economical and faster (apartment buildings), and fiber to a cabinet where it is more economical and faster (existing suburban areas).
Attaching each plan to a single rollout strategy was a mistake in the first place, as a national network requires a mix of technology (this isn't FTTN v FTTH, it's about where to apply each).
Here is a nice table that lays out the old/new estimates: http://www.zdnet.com/au/nbn-strategic-review-by-the-numbers-...
The latency between Australia and the internet (read: the AWS data centre in Virginia, where most startups live or host the APIs they rely on) is crazy! It's especially so as companies are starting to push "regular traffic" over to https (even HN is https now). The 4+ round trips required to establish an encrypted connection add up.
Whenever I'm in the Bay Area it feels like the internet is on localhost!
The cost of hosting in Australia is 5-10x that of the states.
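The round-trip cost is easy to put numbers on. A back-of-the-envelope sketch, assuming a roughly 200 ms round-trip time between Australia and US East (real figures vary by route and ISP), and a full TLS 1.2 handshake:

```python
# Hypothetical RTT between Sydney and AWS us-east-1 (Virginia).
rtt_ms = 200

tcp_handshake = 1   # SYN / SYN-ACK before any data flows
tls_handshake = 2   # full TLS 1.2 handshake: two round trips
http_request  = 1   # the actual GET and its response

total_ms = rtt_ms * (tcp_handshake + tls_handshake + http_request)
print(total_ms)  # 800 ms before the first byte of the response arrives
```

From the Bay Area the same handshake over a ~10 ms RTT costs 40 ms, which is why the internet there "feels like localhost". (TLS session resumption and newer protocols cut some of these round trips, but the first connection always pays full price.)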
Whilst it's fun to bash the Coalition's NBN policy, the Labor party's plan topped out at only 100Mbps (which I'm already getting on Telstra Cable). What really annoys me is that
Australia's cable network is fibre to the node already, and it is already capable of 1Gbps speeds. I hear that Telstra is not offering 1Gbps as part of an agreement with NBN Co.
My friends over at orionvm.com gave me a lesson in hosting economics when I wanted to white label their service.
My next door neighbour designed and sold the underlying hardware used on Telstra's coaxial network.
I can't wait for them to pull a Telstra as well, and try to sell it off to the public in shares to 'mum and dad investors'.
It's embarrassing to watch the farce unfold.
(and will, I predict, continue to be, for years to come...)
(the irony is, there is NBN fiber in the ground outside my apartment right now, up and down all over. ...but we're still on the '2-3 years' waiting list, and so is everyone else here, because maybe 40% of the homes in this area are apartments. Stupid doesn't even begin to cover it.)
The claim that giving up reading news will make you happier is a medical claim in the article that is not backed up by reliable medical sources, so I call baloney on that. The newspaper opinion writer here (promoting his new book with excerpts from the book) doesn't report the issue the way a competent reporter would report it, but just makes a bunch of broad general statements with no nuance. In other words, the medical claims about happier human life in the article are just like the made-up opinions we can all easily find on the Internet, and the article stands as an example of how we can find blatantly misleading "information" inside or outside the professional news media. I have no reason to suppose that the full-length book is a medically reliable source (the publisher of the book is identified at the end of the article).
Anecdote alert: I'm a curious person and I like to learn, and so one of the reasons I come here to Hacker NEWS is to find out new facts about the external world that I didn't know before, including facts about current events ("news" in the narrow sense). My personal experience, which, to be sure, may differ from yours, is that I am a happier and more productive person when I know, from good sources, what is going on all over the world and the broader context of expanding human knowledge. But I'm sure you can find an opinion column somewhere based on a popular book with a different opinion from mine.
AFTER EDIT: Good catch! Another participant here on HN noticed that the author of the article kindly submitted here has credibly been accused of plagiarism by more than one published author who works harder than he does. I upvoted that comment for what it added to our understanding of the article's background.
EDIT: For another take on not reading the news, see http://www.aaronsw.com/weblog/hatethenews
I get all of my (non-HN) news from The Economist's audio edition. It's released weekly and they have a section right at the start about big things happening in business/politics around the world in the last week. It's no more than a couple minutes to scan, and 10-20 in normal speed audio.
The rest of the articles are at least one step back (since they summarize a week of what's happened). Many others are looking at some larger event or trend, sometimes with a recent event/anecdote as a lead in.
I like the audio edition in particular since I can put it on while I'm doing chores or commuting and I'll pick up bits and pieces even if I'm not fully paying attention. I can also have only the sections I care about included, which lets me skip the ones I really don't care about.
"Once a newspaper touches a story, the facts are lost forever, even to the protagonists." Norman Mailer
"Newspapers are unable, seemingly, to discriminate between a bicycle accident and the collapse of civilisation." George Bernard Shaw
"In the real world, the right thing never happens in the right place and the right time. It is the job of journalists and historians to make it appear that it has." Mark Twain
"I fear three newspapers more than a hundred thousand bayonets." Napoleon
"If you're not careful, the newspapers will have you hating the people who are being oppressed, and loving the people who are doing the oppressing." Malcolm X
"The public have an insatiable curiosity to know everything. Except what is worth knowing. Journalism, conscious of this, and having tradesman-like habits, supplies their demands." Oscar Wilde
"The lowest depth to which people can sink before God is defined by the word journalist." Soren Kierkegaard
Another serious problem with news is its schedule. A daily paper must publish something every day, even if nothing important has happened. An hourly newscast is worse.
My ideal internet news source would publish infrequently and be filtered to the specific reader. The second part is very hard. It would look something like this:
Not News:
- A car accident across town
- A single crime in another state
- Celebrities
- Scandals
- Daily stock market fluctuations

News:
- A trend of car accidents at an intersection near me
- Crime in my neighborhood or a trend of crime in my city
- Economic trends and their underlying causes
1) Some news is important, purely for social (not informational) reasons. When you show up to the office, you want to know why everyone's talking about Miley Cyrus! And you need to know who won the Superbowl, even if you have no interest.
2) News does have explanatory power, but mostly in weekly mags like The Economist, New Yorker, etc., and occasionally in analysis pieces by the NYT. Don't throw the baby out with the bathwater.
But to my first point -- I would love a service that would "curate" the need-to-know headlines, to send to me every morning/afternoon. Where each headline had a numerical score or increasing importance (say, 1-5), and I could choose to subscribe to all headlines of 5, and all headlines 3-5 in tech, for example. The important thing being that this is not a simple daily digest, but that I'd only receive it when there was something newsworthy -- plenty of days, you'd receive nothing at all.
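The subscription logic described above is simple enough to sketch. Everything here is hypothetical (the topics, scores, and `digest` function are illustrative, not an existing service): deliver a headline only if it clears the reader's threshold for its topic, and send nothing at all on quiet days.

```python
# Reader's thresholds: all headlines scored 5 anywhere, plus 3-5 in tech.
subscriptions = {"any": 5, "tech": 3}

def digest(headlines):
    """headlines: list of (topic, score 1-5, title) tuples.

    Returns the titles worth delivering; an empty list means no
    email goes out today at all.
    """
    picked = []
    for topic, score, title in headlines:
        threshold = subscriptions.get(topic, subscriptions["any"])
        if score >= threshold:
            picked.append(title)
    return picked

today = [("tech", 4, "Major TLS flaw disclosed"),
         ("sport", 2, "Local team wins again"),
         ("world", 5, "Treaty signed")]
print(digest(today))
```

The hard part, of course, is not this filter but assigning honest importance scores in the first place.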
This is an amazingly bad example. Most news sources would be leading the torch-and-pitchfork brigade to either the relevant road authority or the architect's office.
I don't know a single truly creative mind who is a news junkie
There is a vast gulf between 'news junkie' and 'don't watch news'. The author may also want to broaden his social circle, because I'm aware of a few. I also find it weird that 'physician' and 'scientist' are classified as 'truly creative minds' - I've known quite a few of each, and it's a terrible assumption.
The article is an example of poor quality news - consuming it without thought is indeed bad for you. Full points for irony, I guess.
Recently I was looking for information on something and ended up on CNN.com, and was awestruck at how much absolutely unnewsworthy garbage filled the pages. Curious, I looked around at other news sites to see if they were all worse than I remembered, and yes, pretty much: they were full of gossip, misinformation and obvious fear mongering.
No thanks, I like this new system better.
If you care about your digestion, my advice is: don't talk about Bolshevism or medicine at table. And, God forbid, never read Soviet newspapers before dinner.
Curiously, another HN reader expressed the same feelings about HN.
"News has no explanatory power": I'm not going to argue that most mainstream news is even that good, but to suggest that "the accumulation of facts" is inconsistent with forming deeper knowledge is too sweeping. Readers of news can observe patterns, which hopefully they will check against more in-depth research.
Much of news's task is not the "how?" but the "what?", and on that measure it does a decent, if inconsistent, job: http://publicmind.fdu.edu/2011/knowless/.
"News is toxic to your body": The author cites a case study involving the limbic system that doesn't mention media or news at all. It may well be that "Panicky stories spur the release of cascades of glucocorticoid" but do they do so at noticeable or unhealthy levels? I'm not convinced.
"News increases cognitive errors": News is not an ideal way of challenging biases, but it seems much better than not reading news and getting information filtered through friends with similar biases to you. (Reading carefully filtered news and books is probably best of all.)
"News inhibits thinking": This section only applies if you read news intermittently and let notifications interrupt you. Concentrating on a newspaper (or news site) for 30 minutes would not have the same effect. But continually breaking off work to chat with co-workers would.
"News works like a drug": This section is one of the most plausible, but once again, it doesn't cite any evidence. Cal Newport has a similar line of reasoning, but he actually has research to back it up: http://calnewport.com/blog/2010/06/10/is-allowing-your-child.... (It's about Facebook, but the same principle of distracting activities ruining focus applies.)
"News wastes time": This is all about habits and boundaries. Like "News inhibits thinking," this problem could emerge with any activity engaged in on a whim during working hours.
"News kills creativity": The theory that younger mathematicians are more productive is actually unfounded. See http://www.slate.com/articles/life/do_the_math/2003/05/is_ma... or http://privacyink.org/pdf/myth.pdf. And this last part is pure argument by anecdote:
"I don't know a single truly creative mind who is a news junkie not a writer, not a composer, mathematician, physician, scientist, musician, designer, architect or painter. On the other hand, I know a bunch of viciously uncreative minds who consume news like drugs."
The points about most news being irrelevant to day-to-day life and story bias are worth pondering, but otherwise this article overreaches. It is a series of interesting conjectures about the effect of news, but often presumes a certain way of reading or watching news. The evidence for each point is slim. I'm forced to conclude his warnings of "panicky" news with "no explanatory power" are hypocritical.
I grew up without a television - deliberate choice. My friends couldn't believe how much cool stuff I could get done because of this.
I would honestly rather just get my news from HN because the intelligence level is a lot higher than any news organization. While I may disagree with certain views on here, it's not a sensationalized conversation. Users on here generally have concrete conjectures and thought out responses which you definitely don't get on the news.
I know a lot of people don't like the doom & gloom of news. But it's needed. I recently discussed with someone who doesn't consume news about the NSA revelations. They were shocked. They said "why didn't anyone tell me?"
Instead of blocking things out and being happy with our ignorance, we need to change how news is done. If you whine about something, change it. The Guardian can certainly make an attempt to change the dynamic.
Scientific research publications, local papers, special-interest blogs and good old fashioned conversation are more than enough to get the useful information.
It's extraordinarily rare that the TV/radio news ever contains any information that's directly useful to my life and I have better things to do than pan for gold whilst being subjected to varying degrees of propaganda.
It is a very poor attempt. For every point discussed in the article there are so many counter-arguments that are never addressed. Not only that, some of the arguments are just contradictory. Not to mention there is hardly any research material cited that made the author think that way.
At one point it says that we don't think about news: "Unlike reading books and long magazine articles (which require thinking), we can swallow limitless quantities of news flashes." This is ironic. The article itself is news. Is it not making us think? Well, if it is not making us think, that makes this news itself useless, right?
The author just tried to create an article by combining things he read from the book. At the end he just presented HIS OPINION. This should not be NEWS!
My man Rolf became a journalist, worked his way up to the Guardian, and wrote a story about how busted up news is.
I look forward to the similar press release from Jony Ive telling us to stop using those blasted iPads.
Increase the number of sources, believe fewer of them, and use critical thinking. But only pay attention to things you care about.
I may be outraged with recent conflicts between tech carpetbaggers and SF residents, but I try not to invest any energy in it, because I've got my own local gentrification vs. crime issues and I only have so much bandwidth.
This thread gave me pause to think about what I believed then, in the media induced state of stress, and what has come to pass. Of the notable ones:
- twitter would fail and cause the tech bubble to burst. Twitter now sits pretty on $51/share
- the euro zone would collapse and riots would rock the world. I actually skipped out on two trips (a wedding in France and a stag party in Taiwan) because I thought the world was on the brink of disaster. Nothing happened.
I only list two but there are a half dozen others that haven't happened either. The thing that really strikes me, though, is that my perspective on others has changed dramatically. When I see people spinning themselves up into a state because of some media (and quite often it is not reputable media) I feel a combination of anger and derision, somewhat akin to the emotional reflex I experience when a homeless person is drunk.
In my home country, "news" consists mostly of bad stuff. E.g. I glanced at a local newspaper _today_... among the headlines: "3 y/o baby gang raped". Being bombarded with that kind of stuff daily can break the strongest soul, so I just don't read local news anymore.
I tend to focus on finance & tech. Even if there is an absolute bloodbath on the stock exchange it'll never rattle me like the baby thing does (and I didn't even read the actual article). The stock exchange is just numbers...maybe I lost some money - so be it. That I can absorb without lasting damage.
I'd love an open platform that can process RSS and that I know won't close, change, or fail on me. Google Reader had some ability in this regard, but we all know how that went. Plus I think RSS might no longer be sufficient... cutting-edge news is now on Twitter. Not sure if 140 characters counts as news though... headlines, maybe.
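The kind of RSS processing the comment wishes for is easy to start on with nothing but the standard library. Here's a minimal sketch that extracts headlines from an RSS 2.0 feed; the sample feed is made up for the example, and a real aggregator would fetch the XML over HTTP and also handle Atom, encodings, and malformed input.

```python
import xml.etree.ElementTree as ET

# Hypothetical RSS 2.0 feed used only to exercise the parser.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item><title>First headline</title><link>http://example.com/1</link></item>
    <item><title>Second headline</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def headlines(feed_xml):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(headlines(SAMPLE_FEED))
# → [('First headline', 'http://example.com/1'), ('Second headline', 'http://example.com/2')]
```

Because the feed format is an open standard, a pipeline like this can't be shut down the way Google Reader was; the fragile part is the feeds themselves disappearing.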
I find I stay just as informed reading commentary, where I'm purposefully being manipulated, as I do reading news. In fact the commentary is better, as various authors advance various personal theories they've been working on for weeks or months, using current events as fodder. Reading a couple of these from different viewpoints provides wonderful context -- and context is the one thing critically missing from most "breaking news" reporting. The only difference is about a 12-hour delay. Trust me, the world does not depend on my knowing something that quickly. Twitter peeps will annoy me if something truly incredible happens.
I'm also finding that branding, whether by news outlet, author, or social signaling, is a terrible indicator of quality. As I continue to flesh out the app, my belief is that a better indicator is statistical clustering around personality types, but that's still a year or two away.
But one thing is for sure: I've been much happier since I gave up all forms of news consumption. News is based on emotional manipulation. It's always a crisis, there's always an argument, and there's always some terrible danger you've been unaware of. That stuff will rot your mind. It's always been bad; it's just gotten worse over the last few decades as the news cycle has shortened.
Now if I could just get a continental breakfast one morning without being assaulted by the talking heads squawking on every TV in every hotel lobby, saying the same shit every single day.
The problem is that nowadays every fool can write a blog post or a comment that will be indexed by a search engine, adding a bit to the total waste.
"Life is easier and the world is a much happier place when you're dumb."
The amount of bad food/news in the world has increased exponentially in recent history due to the ease and low cost of production and distribution.
But like we've seen with food, the more unhealthy options proliferate, the more of a premium there is for e.g. home-cooked, organic meals. I like to think that Hacker News, on most days, is my source of healthy and nourishing news, and it's up to me to discern and sift through the junk that might occasionally get mixed in.
The caveat is that it's necessary to know which news is garbage and which is meaningful. Clue: meaningful news is not typically popular.
I do read a lot of tech news however and more recently have taken up reading the headlines on the local newspaper websites.
For me it's an effort to help me be more positive about life as I have struggled with negativity and sometimes intrusive thoughts.
The psychologist Gary Klein has written about how people make decisions. His most recent book provides several pieces of evidence that we have good insights because we can connect irrelevant information/ideas to the problems we see everyday (whether these problems are at work, at school, in the laboratory, or on the toilet).
Learning is healthy. Reading is necessary. The news is irreplaceable: not because of its pertinence but because of its insightful value.
A reading app's top read article was an article about giving up reading.
One interesting paragraph on the back cover: 'news began to make us dumber when we insisted on having it daily'.
As anyone who's tried to make or create something (even just writing about a topic) would know, creating takes time. When something needs to be created daily or even hourly, you end up putting out junk.
It should be a kind of alarm system with editable preferences: topics of interest, location-based warnings, and a ranking of information importance built from a social component plus an importance rating given by the information provider.
However, these days the news is bad all over the world, so not reading it will truly make you happy. :)
Not only do they not provide quality information, but it seems to me they trigger our worst sides (jealousy, hatred...).
I made a news crawler that automatically filters out bad news articles using sentiment analysis.
I like to think it offers a good reprieve from all of the negative, depressing news you get inundated with from the major media.
You can check it out at http://www.nonews.info/
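The site above doesn't say how its filtering works, but the general idea of scoring headlines with sentiment analysis and dropping the negative ones can be sketched in a few lines. The word lists and threshold here are invented for illustration; a real crawler would use a trained classifier or a full lexicon such as VADER.

```python
# Toy sentiment lexicons -- made up for this example, not from any real model.
NEGATIVE = {"crash", "war", "dead", "disaster", "attack", "fraud"}
POSITIVE = {"win", "rescue", "breakthrough", "record", "recovery"}

def sentiment_score(headline):
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def filter_bad_news(headlines, threshold=0):
    """Keep only headlines scoring at or above the threshold."""
    return [h for h in headlines if sentiment_score(h) >= threshold]

items = ["Stock market crash wipes out savings",
         "Local team win record championship",
         "Scientists report vaccine breakthrough"]
print(filter_bad_news(items))
# → ['Local team win record championship', 'Scientists report vaccine breakthrough']
```

The obvious weakness of any bag-of-words approach is negation and context ("no disaster after all" scores negative), which is why production systems lean on trained models rather than word lists.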
Go to http://192.168.1.1 in your browser; the default name and password are "admin" and "admin" (please change the password to a REALLY long one and write it down on a sticky note next to the router, if you haven't already).
Click the "Access Restrictions" tab | enter a policy name and select "Enable" | ignore the "applied PCs" edit list | set Access Restriction to "Allow" | make sure the Schedule portion has "Everyday" checked and "24 Hours" selected | enter the URLs of the four websites you'd like to block | and click "Save Settings" at the bottom.
Sure, you can come back here and disable the access restrictions, but it requires extra steps, requires you to get up, requires you to type in a long password. And by that time you'll have realized what you're doing isn't good and stopped yourself. The whole point is to stop the bad habit of subconsciously typing in Reddit.com every 5 minutes. It took me a month, and after whatever chemical high was addicting me to Reddit/HuffPo/etc. wore off, I just disabled the bans and haven't been a Redditor since. I've visited Reddit maybe twice in the months since, but I didn't care and haven't been back. I'm free.
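The same add-friction trick works without a router, by pointing distracting domains at localhost in the machine's hosts file. This is a different mechanism than the one described above, offered as an alternative; the domain list is illustrative, editing the hosts file needs admin rights, and you should keep a backup before experimenting.

```python
# Domains to block -- an illustrative list, edit to taste.
BLOCKED = ["reddit.com", "www.reddit.com", "huffpost.com"]

def hosts_entries(domains):
    """Build the lines to append to /etc/hosts
    (or C:\\Windows\\System32\\drivers\\etc\\hosts on Windows)."""
    return ["127.0.0.1 " + d for d in domains]

for line in hosts_entries(BLOCKED):
    print(line)
# → 127.0.0.1 reddit.com  (one line per blocked domain)
```

Like the router approach, undoing it takes deliberate extra steps, which is the whole point.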
Also, US news bugs me even more when I am abroad. Watching CNN abroad vs. watching it at home produces different feelings. When out of the country I often feel the news is embarrassing. At home, regardless of the source, it oscillates between politically charged, moronic or down-right egocentric news. Most of the quality information I get is from non-US news programs or the Internet. Local and national TV news programs, regardless of network or political affiliation are deplorable.
I can absolutely see a constant stream of sensationalized and skewed news being bad for someone, particularly if they don't seek balance outside of their usual sources.
Why would a news outlet suggest that news is bad for you? Thanks to Hacker News, the article will get thousands of extra reads and the Guardian is raking in the cash!!
I can consume any and all news and have it be beneficial to me. Not because I now know facts, but because I can understand each of the stories as a glimpse into the lives and processes of other people in all disciplines and all walks of life. In doing so I can create equality where, in my own mind at the least, it may not have already existed.
I know nothing about Michelle Lee, but if she spent time at Google, there's no way she doesn't have some understanding of how bad the patent ecosystem has become. The only question is whether the bad publicity around patent trolls has created enough political will to push back against bad patents, given that granting as many patents as possible is effectively the business model of the USPTO.
EDIT: (commenting on the article title; not the HN title)
But this isn't the whole story. There are also patents involved, and these are not mentioned in the LICENSE file.
Cisco does explain the issue on a different website: "a team can choose to use the source code, in which case the team is responsible for paying all applicable license fees, or the team can use the binary module distributed by Cisco, in which case Cisco will cover the MPEG LA licensing fees" (where MPEG LA, the MPEG Licensing Authority, is the organization that holds the patents and of which Cisco is a member).
I think that most of the audience here seems to know all this. But it wasn't clear to me, and I think it probably wouldn't be obvious to many other readers.
Where does "15" come from? I suppose if I'd written a codec like this before, or if I stared at the code long enough, I could figure it out, but wouldn't it be better to use an enum or a #define?
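The commenter's suggestion can be illustrated in Python's terms (an IntEnum plays the role a C enum or #define would). The type names below follow the H.264 spec's NAL unit numbering, where type 15 is the subset sequence parameter set, but treat them as illustrative rather than as identifiers the OpenH264 source actually uses.

```python
from enum import IntEnum

class NalUnitType(IntEnum):
    """Named constants for H.264 NAL unit types (illustrative subset)."""
    SLICE = 1
    IDR_SLICE = 5
    SPS = 7
    PPS = 8
    SUBSET_SPS = 15

def describe(nal_type):
    # Comparing against a name documents intent; a bare "nal_type == 15"
    # tells the reader nothing.
    if nal_type == NalUnitType.SUBSET_SPS:
        return "subset sequence parameter set"
    return "other"

print(describe(15))
# → subset sequence parameter set
```

Since an IntEnum compares equal to its integer value, the named constant costs nothing at the call site, which is exactly the argument for an enum or #define in the C code the commenter is reading.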
Well, I wasn't expecting miracles from this, but constrained baseline profile only? That's very disappointing. Not only will this be unable to decode most of the H.264 content out there (web and otherwise), you could very likely get better results with VP8.
If only they'd have endorsed something like libavcodec instead...
Sadly, even TrueCrypt fails to provide that. So I guess we'll have to live with binary blobs whose real behavior no one knows.
What Cisco did seems to have put an end to all those debates, and h.264 is available for all, with patents intact, etc.
The contributors are 100% Chinese. Was this project made in Cisco's China R&D?
Fallout: New Vegas came close to Fallout 2 in terms of ambiance and gameplay, but not close enough (mainly because of the combat system).
I really envy those who will discover this game.
Fallout 2: http://appdb.winehq.org/objectManager.php?sClass=application...
Fallout Tactics: http://appdb.winehq.org/objectManager.php?sClass=application...
Now I just need wait for the OddWorld games to start coming out for free... any decade now...
It was a solid tactics game, but there were some rough edges in bringing in the Fallout stuff.
Anyway, a fun game as far as mechanics go, but still (in my mind) unsurpassed in writing.
Great deal - Fallout 1 (and 2) were amazingly well written and really sucked you into their marvelous world.
I never got the same feeling with the rest of the series - something about the first person view broke the fantasy for me and I was always 'playing a game' rather than exploring a wasteland.
Hacker News for X is a funny concept by itself, but part of Reddit's appeal for me is all about the subreddits. If someone wants to take the time to set up a topic-specific niche, why not?
Best of luck to the DataTau founders!
Cause I can't find the next button...
There already seems to be a lot of traction there.
(btw I would try it for sure if you'll port it to BB10!)
There isn't even any mention of it being encrypted.
I weep for all the claims this makes about security.
Maybe I missed something, but how exactly is this "secure"? I'm assuming ccPing will still store these messages in a database which will be vulnerable to attack.
I mean, is this going through the TOR network or something?
IMO, the app interface needs improving.
I am now playing XCOM: Enemy Unknown on the iPad, even though I have the computer version on Steam.
However, what is impressive is that if you were going to do a startup with YC, your chance of becoming a significant company is about 10%. Given the power-law nature of these things, I tend to think maybe about 20% of the other startups managed to become less spectacular but still successful, another 40% may be just sustainable enough to make a reasonable living (lifestyle businesses), and the rest had to fold.
Say company X got funded in 2005; then its value from 2007 onward should be used.
Of course 2013 and 2012 cannot be included then, but it's obvious that older companies had a much longer period to build up value and thus the current graphs don't say all that much.
But on a serious note, I wonder how much that 2009-2012 bump has to do with inflation or a lack of other opportunities in the market. Maybe I am out of touch, but it seems like there are fewer and fewer places to put your money. Not to suggest that startups are a bubble. The rise is probably due to YC becoming a better filter, picking better horses in the race.
If you think $200 buys you an iPhone, you're sadly deluded. "$200 with 2 year contract" means you haven't bought anything; you're still paying the mortgage with every phone bill. If you want to buy something and "do whatever you want with it", Apple, Google, and a host of others will sell you an unlocked phone. And it will cost a bit more than $200 (though the Nexus 5 at $349 isn't too far off).
On the other hand, if you're fed up with your carrier and would like to switch, you're free to do so. I just did it with AT&T. Walked into a T-Mobile store, asked them to transfer our numbers. The next day I called AT&T and asked, "how much do I owe for the ETF?" I paid the $225/phone (that's how fed up with AT&T I was) over the phone with AT&T. Wait a few days for their accounting system to catch up, go to the web page and enter the IMEI for both phones, and two days later the phones are unlocked. Slapped the T-Mobile SIMs into the iPhones and we were on the air.
I just don't get the sentiment of "unlock my free-with-contract phone two months into my contract, you bastards!" It's not your phone. Now, if Verizon, Sprint, or $YOUR_CARRIER don't as readily unlock your phone as AT&T after you pay the ETF or your contract is up, I'll stand with you in your complaint. Otherwise, I don't see the issue.
It's not whenever you want - they are just clearly documenting when they will let you. For the actual info, read here:
Prepaid: 1 year.
Postpaid: When your contract is up.
Military: Upon deployment.
Of course, still unfortunate that I have to pay the fee, though I'd rather have this than in states where you still have to wait until your contract is up.
Web 2.0 was a trend in prominent personality types, types of websites, business models, increased scale of online interaction, use of real names, web design styles, programming languages. It wouldn't have been absurd to see a guy with a specific look and a specific laptop and say "that's so web 2.0". To add insult to injury, pretty much all the stuff that gets captured in a word like web 2.0 has predecessors. Crowd sourcing? What about Wikipedia?
I think a good analogy is "movement" in art, philosophy & culture. Modernist is a word that encompasses Frank Lloyd Wright, Pablo Picasso, James Joyce, Ayn Rand & Karl Marx. It applies to paintings, manifestos, econometrics and buildings with straight lines.
That's the kind of word that 'data science' is. We found ourselves recording a lot of data as a sort of side effect of digitization. It's growing. Then we start to try and get some value from that data. Some new stuff is possible with that volume of data. Some new people are now interested in data. A lot of the tools people were using to collect and analyze data don't work at that volume, so we start using new tools. We end up with a word that includes astronomers, netflix, medical researchers, self driving cars, R, statistical theories etc.
Data science doesn't mean anything that specific yet. It's best not to lead the discussion (as I am doing right now) into a discussion about the word and what qualifies as data science.
My modest exposure to machine learning at the professional level gives me the impression that the "real experts" combine a strong mathematical understanding, long experience, and some good rules of thumb to perform better than a grad student shooting in the dark - if they happen to perform better at all.
Oddly enough, all the articles about how hard it is to become a "real data scientist" give the impression that however much expertise is involved, that expertise isn't the codified understanding that makes "real science" - even a physics undergraduate does real physics because physicists codified their methods.
Maybe "data science" can become a science. But I suspect that what will become scientific is the understanding of whatever entity is producing the data. Which isn't to discount the learning of the experts here, but simply to note that compendiums of rules-of-thumb and feelings indicate what Thomas Kuhn might call a pre-scientific field.
Good article, nevertheless.
I worry about this, since I travel frequently. Then again, I also worry about plane crashes. I think both are probably irrational, but having plans is what keeps the boogeyman away at night, so here's mine:
1) Remain polite and professional.
2) Decline to consent to any search. Comply with officers' demands that they tell me I'm legally required to comply with. Ask for that demand to be produced in writing. I'm willing to wait for a supervisor while they figure out how to do that, even if it means I miss my flight. I will get a receipt and/or report number and/or some other official written record of the incident if any seizure, including seizure of information, takes place.
3) Immediately after reaching my destination, file written grievances with any and all responsible agencies. They must have a spreadsheet tracking passenger complaints somewhere. Let's increment that while having someone commit, on paper, to a version of events of what happened and a legal rationale to why that was justified. This will only cost me a bit of time and money, and I have lots of time and money, but it has heavily asymmetric payoff in the event of a lawsuit or PR battle.
But honestly? Mathematically, I'm much, much more at risk of getting mugged in Chicago on my way home from the airport than getting held up by Customs. I don't exactly live in fear of muggings but I take sensible precautions like e.g. backing up my data, making sure that I can turn a factory-new laptop into a working dev environment within a day, and carrying insurance. I'm pretty sure most of these still work even if I happen to lose a laptop to Customs rather than to a mugger.
This news is absolutely terrifying to me.
Edit: I wrote this as a quick knee-jerk without much thought, and now that I read it again I see the sad irony. This is being done to prevent terror? Someone needs to do a risk/reward analysis here...
Edit 2: I appreciate everyone offering up solutions here, but really the only solution would be either to not go, or to not carry any devices at all. Encryption can't prevent me from being detained and my property from being confiscated. Given that the issue here is a violation of personal space rather than one of having something to hide, encryption will only increase the impact which such an event would have on my wife and me. If such an event were to occur and I was using strong encryption to protect my data, I'd be asked for any encryption keys, detained for much longer, and if I refused to cooperate in any way I'd have my things permanently confiscated and face refusal of entry.
If I find myself in this situation, I'll be looking to minimize impact. I'll cooperate while trying as best as I can not to compromise my values, and then raise hell after the fact.
I don't know the facts here so I'm not implying anything in this specific case, but I like to remember that the terminology we use drastically shapes our thinking.
Historically, white New Zealanders thought of New Zealand as "Better Britain". This basically meant doing anything, and everything, to please the "Mother Country". Now that the USA is the dominant superpower, this means things like raiding Kim Dotcom's mansion or providing land for a spy base.
http://en.wikipedia.org/wiki/Gallipoli_Campaign
http://www.teara.govt.nz/en/the-new-zealanders/page-8
http://en.wikipedia.org/wiki/Waihopai_Station
The reason they did it to me is because they thought I was a drug smuggler. They got that idea because I was going to China and didn't have a fixed itinerary, which they found to be incredibly suspicious.
I'm not sure if the non-techie cop was playing good cop/bad cop or not, but he was yelling at me and accusing me of being a liar from the beginning. Because I knew I hadn't done anything wrong and wasn't lying, it was pretty comical, but it became really upsetting once they started threatening to seize everything if I didn't give up my passwords.
Expensive, but safe.
These sorts of harassment measures are only capable of catching amateurs making serious operational mistakes. They will not catch professionals or serious operatives. I can never understand how this passes for counter-terrorism when it is really just "counter-clumsy-terrorists", at best.
This kind of heavy-handed, precedent-setting, dissent-disincentivizing move is just sad to see happen (and it's perhaps sadder that I feel completely at its mercy and feel it affecting my actions concretely).
Not that I have anything to hide... that being said they will probably just back door into my laptop next time I'm on and deactivate any form of tripwire.
Curse you NSA, always streets ahead!
(Note: If I wasn't on their radar, I am now... /sigh, it was a joke)
The guy who interviewed me was actually very polite and friendly. They said I'd probably been flagged up as suspicious due to my itinerary. I'd travelled from China, for a 2 day visit to see a customer, with tickets booked via a US agent (Expedia). Suspect that living in China at the time had quite a bit to do with it.
Got asked all kinds of questions, most of which I didn't know the answers to. Didn't get my laptop searched or get asked for any passwords, but they did make a point of asking if I had any porn on my computer. I said no, and they asked no more. The way they asked, though, did make it sound like it was a bit of an issue to them.
Took almost 2 hours in total. I actually didn't find it particularly stressful, mostly as I still made my next flight. I'd had far worse times travelling to the US, and getting some vindictive border agency guard who's out to get you by any means possible. Really, I loathe US immigration.
Also, I would strongly advise giving out your password if you want to be on your way. Refusal to give access is generally a huge issue.
Also strongly agree with patio11 on getting everything in writing. This is now a criminal investigation. Make sure you have what you need to defend yourself in court.
Actually nvm, I don't, I wiped my hard drive before bringing it in for repairs.
I'm glad I've already been to NZ.
Give your password to a trusted friend (preferably your lawyer) with instructions not to give it back to you until you get home or reach your destination.
Of course you want to have VNC / ssh set up ahead of time so you can actually do stuff...
CBP -- http://www.cbp.gov
DHS -- http://www.dhs.gov
If you define violence as intimidation and physical and mental torture, then all governments top the list.
Some of the people at the airport were like robots; they didn't have any reaction (on their faces) to my plea.
My sandboxing uses Docker instead of NaCl.
Also: Russ Cox is the Chuck Norris of Go programming.
> To isolate user programs from Google's infrastructure, the back end runs them under Native Client (or "NaCl"), a technology developed by Google to permit the safe execution of x86 programs inside web browsers. The back end uses a special version of the gc tool chain that generates NaCl executables.
In that experience I learned how incredibly difficult it is to get things on and off of computers this old. Even though I lived through all the advances, my mind has a much rosier picture of the capabilities of old hardware than is actually true. Similar to this author, I had to rely on serial connections and some utilities developed by the Amiga emulation community running in Windows on an old PC to actually get it to work. And boy was it slow to move data. It took something like 48 hours to move about 80 megabytes of data. I was worried that was too long for such a machine to be running, but it held out. One of the partitions did have a tiny bit of corruption, but I was overall surprised to find most of the data intact.
Sadly, my shoebox full of disks is probably never to be recovered. Both floppy drives seem to be failing and reading Amiga floppies on modern hardware is possible, but an expensive pain.
Of course they did, it's not in HTTP/1.0, it's in HTTP/1.1.
Web server running on a C64: http://www.c64web.com/
OS with TCP stack running on a C64: http://en.wikipedia.org/wiki/Contiki
My first was a Kaypro II with 64KB of RAM, a 191KB floppy drive, and a 2.5MHz Z80. I just looked up some videos of them running and the memories came flooding back. Good times.
Just imagine what computing power we'll have in another couple of decades and how silly what we have right now will seem.
This. People wax nostalgic all the time about how fast Star Writer or whatever ran back in the day, and how "bloated" modern OSes have become, without understanding that those old PCs did 1/1000th of the things modern computers do, and even that simple stuff (running a word processor) was slow by today's standards...
I created a network card for my MSX computer a year ago; it has less than half the CPU speed of this Plus. I really like working on that machine: adding hardware, writing bits of software. I've said it before, but it's like a bonsai tree. Because it's not really possible to do something commercial on it anymore, unlike on anything modern, my brain stops thinking commercially, and that's a good feeling which I don't seem to have when touching any modern computer/board. Using the latter I always get business ideas, and then it suddenly goes from playing to work. Which is not bad, because I like that, but sometimes I want to just play.
Here as well, a proxy was used to get around the Host Header problem.
Also, there are machines even older than these Macs connected to the internet: http://www.youtube.com/watch?v=-ECnN7jdgA4
...or maybe not, as this C64 web browser(!) shows: http://csdb.dk/release/?id=30400
Not as old as this, but I wrote a guide nearly 10 years ago on installing Apache on Apple Unix (OSX is not the first Unix from Apple) on a Quadra.
 - http://www.snakeoillabs.com/downloads/aux-guide.pdf
It was (what would now be called) an all-in-one, and it came with some Windows 3.1 thing called TabWorks. And before anyone asks, no, I didn't program on the thing. Not every programmer has programmed since he was in diapers. Also, Windows didn't come with a programming environment back then, and search engines sucked ass.
(Edit: Removed a bit about the Amiga being a knockoff, as mobile reading led me to conflate this article with the one about the Amiga emulator.)
I'm still developing for it in my spare time ... http://vimeo.com/70784619 (very little progress yet)
Sarcasm aside, the most telling line was this one: "If the project is for a three-month campaign that must be completed next week, the shortest path to the finish line is probably best. I've only been a developer for five years, and 95 per cent of my professional projects are [like that]."
This essay was written by a developer at BKWLD, an "independent digital agency." They do a lot of advertising work. Of course code quality doesn't seem important! Code quality matters when you have a team of people developing and maintaining software over a long period.
Here's a more nuanced view: speed matters, and you can be really successful developing important, business-critical software by emphasizing speed over quality. If it's a problem domain you're particularly familiar with, especially if you have a lot of tools, frameworks, and libraries to help you, you can get an impressive amount of work done.
As you continue to work, you'll slow down. At some point (about six weeks is my guess) you'll actually be going slower than someone who's focused on quality. They'll speed up over time (asymptotically), and you'll continue to slow down (also asymptotically). You'll still be ahead of them, though, because you got so much done in that initial burst of energy. Your life will start to suck, though, and you'll be looking forward to when the project is done.
Eventually (about three to six months, I'd guess) you'll fall behind. You'll be going so slowly that your initial advantage will have been erased. Luckily, you now get to hand the project off and start a new one. Win!
Three to seven years later, your code will be thrown away. It will be so expensive to change that it no longer makes economic sense to maintain. If it's still relevant, it will be rewritten at great expense and even greater opportunity cost.
So, does quality matter? Depends on who you ask... and who ends up paying for it.
When your code sucks or you intentionally code fast, you end up adding tons of things that will drag on you over and over again. And unavoidably you'll end up asking yourself, "Why didn't I code well? I would have finished much faster."
Just use your brain, use your judgement on the result, make a decision, iterate.