hacker news with inline top comments - 29 Jan 2012
1
Canada Is About To Pass SOPA's Evil Little Brother. Politely dearthey.com
43 points by DanielRibeiro  2 hours ago   4 comments top 4
1
extension 17 minutes ago 0 replies      
Is there a better source of analysis on this bill? The summary posted here does not seem to be nearly as bad as this blog is making it out to be:

http://www.parl.gc.ca/Content/LOP/LegislativeSummaries/41/1/...

For example, phone unlocking is explicitly permitted, according to that summary and contrary to the blog.

The linked Michael Geist post says that lobby groups are pushing for scarier things, but not that they are already in the bill.

2
teilo 15 minutes ago 0 replies      
Given Canada's record on freedom of speech in general, I imagine this is far more likely to pass than the American bill.
3
dreur 21 minutes ago 0 replies      
Speak out at http://www.ccer.ca/speakout/

Send a letter to your representative at Ottawa.

4
moizsyed 29 minutes ago 0 replies      
What can Canadians learn from their American cousins to stop this bill?
2
IcedCoffeeScript github.com
173 points by swannodette  6 hours ago   28 comments top 13
1
swannodette 1 hour ago 0 replies      
This fork brings up some deep issues, perhaps intractable, around
the development of syntax-y languages. Useful source
transformations require marginalized forks of the compiler and all the
development burden and risks that entails. In some sense CoffeeScript
inherits the actual problem with JavaScript - a select few determine
the introduction of language features. Perhaps these language
features are best shipped as libraries?

Maybe it doesn't matter. Perhaps most users don't need it. Or perhaps
as we continue to develop software we'll find that we'd rather defer work to the
compiler / macros and the high cost of syntax is simply not worth it.

I'm looking forward to seeing how 3 languages I immensely enjoy (JavaScript, CoffeeScript, ClojureScript) co-evolve :)

2
mayoff 6 hours ago 2 replies      
I was going to complain that I could find no mention of TameJS (from which the await and defer keywords/features clearly originated). Then I noticed that the IcedCoffeeScript author is also the author of TameJS.
3
bascule 4 hours ago 0 replies      
When I made this comment the other day, I wasn't actually serious that someone should build a JavaScript compiler that does CPS:

"CPS is also generally used as an IR in compilers, not something you write by hand"

http://news.ycombinator.com/item?id=3511211

This reminds me of my Celluloid::IO system in Ruby (which uses coroutines to provide a synchronous API), except Celluloid::IO lets you have as many event loops as you want:

https://github.com/tarcieri/celluloid-io
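For readers who haven't met the term: continuation-passing style makes every result flow through an explicit callback. A minimal hand-written sketch in plain JavaScript (illustrative only; the output of a real CPS-transforming compiler like iced is far more involved):

```javascript
// Direct style: the result comes back as a return value.
function add(a, b) { return a + b; }

// Continuation-passing style: the result is handed to a callback (the
// "continuation") instead of being returned. Compilers like this form as
// an intermediate representation because it makes control flow explicit,
// which is exactly what you need to pause code at an `await`.
function addCPS(a, b, k) { k(a + b); }

// (1 + 2) + 3, written in CPS: each step names "what happens next".
addCPS(1, 2, function (sum) {
  addCPS(sum, 3, function (total) {
    console.log(total); // 6
  });
});
```

The nesting above is the "callback pyramid" that hand-written CPS produces, which is why it is usually left to compilers.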

4
mbostock 1 hour ago 0 replies      
I like the generality of the defer/await concepts of TameJs (and by extension, iced coffee). But I was curious if these concepts could be used without code generation, as in Async.js. So I came up with Queue.js:

https://github.com/mbostock/queue

I haven't tried it in production code yet, but it looks like a convenient abstraction with tunable parallelism.

5
Detrus 5 hours ago 1 reply      
So was there more discussion about await and defer in the many github issues on the subject? This output seems more debuggable than previous CScript defer attempts.
6
ruxkor 4 hours ago 0 replies      
Another notable feature is the generation of readable stacktraces in iced, as explained here:

https://github.com/maxtaco/coffee-script/blob/iced/iced.md
(search for: "Debugging and Stack Traces")

7
equark 4 hours ago 1 reply      
Why is defer needed? Why can't I follow C# and do:

   result = await search("test")

Is this just so that it plays nicely with non-standard callback APIs? I'd prefer to introduce Promises (i.e., q) and have await interact with that. I guess this is not the NodeJS way though.
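One rough answer (a toy illustration in plain JavaScript, not IcedCoffeeScript's actual implementation): Node-style APIs deliver results as extra callback arguments rather than return values, so the compiler needs something like `defer` to turn "assign into this variable" into a callback it can hand to the API. A promise-based `await`, as in C#, gets this for free because the promise itself carries the value.

```javascript
// A synchronous stand-in for an async Node-style API, which reports its
// result as a callback argument rather than a return value.
function search(query, cb) { cb(null, "results for " + query); }

// Roughly what `defer result` provides: a callback that stores its
// arguments into a named slot and then resumes the code after `await`.
function deferInto(slot, resume) {
  return function (err, value) {
    slot.result = value;
    resume(err);
  };
}

var slot = {};
search("test", deferInto(slot, function (err) {
  console.log(slot.result); // "results for test"
}));
```

The `deferInto` helper and the slot object are invented names for illustration; the point is only that callback-argument APIs need this extra piece of plumbing that promise-based ones do not.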

8
simcop2387 5 hours ago 0 replies      
The way this transforms those examples makes me think it would make all kinds of other things more palatable than vanilla CoffeeScript. I've written chains of callbacks in plain JS that this would make so much nicer to work with.
9
russianbandit 5 hours ago 3 replies      
Why didn't the "iced" feature just get added to CoffeeScript? Why have two separate packages?
10
nailer 4 hours ago 1 reply      
The aim of your docs should be to show how simple and readable IcedCoffeeScript is: using multi-character variable names in your examples would help to achieve that.
11
patrickwiseman 4 hours ago 0 replies      
Being able to syntactically assemble several parallel requests together has an obvious advantage in making code more readable. I'm having a hard time wrapping my mind around the keywords, though. From the examples, I think wait/collect fits better than await/defer. For instance:

Await these items that I want to defer.

Wait on this code block and collect these items.

12
quangv 3 hours ago 0 replies      
When a language has dialects, you know you're doing something right... go coffee-script!
13
taskstrike 46 minutes ago 0 replies      
This is awesome
3
Beautiful web type: highlighting the best of the Google web fonts directory hellohappy.org
99 points by ubuwaits  5 hours ago   20 comments top 8
1
tuacker 3 hours ago 2 replies      
Why do fonts look like ants chipped away some pixels in Chrome/Win7? They look better in Firefox, but I remember OS X looking better still. I could be wrong about that, though, since I haven't had access to one in a long time. But fonts in Chrome are plain ugly; someone please turn on AA.

FF on the left, Chrome on the right: http://i.imgur.com/SxlaT.png

2
sgdesign 3 hours ago 1 reply      
Very nice list! I recently also wrote on this topic: http://sachagreif.com/google-webfonts-that-dont-suck/
3
hsmyers 2 hours ago 1 reply      
As I slip on my typographic curmudgeon's hat, I have to ask where are the printer's ornaments? Can you set a chess diagram with anything in Google web fonts? Don't think we've quite arrived yet...
4
jphackworth 1 hour ago 0 replies      
I really appreciate that this site uses interesting quotes rather than lorem ipsum. It makes it less boring to trawl through different fonts.
5
slamdunc 4 hours ago 1 reply      
Lato is one of my current favorites, glad you included it. Orbitron, Jura and Cabin also work well in certain places.

I do wish there were more fonts with 3+ styles (reg, bold and ital at least), though I'm sure that'll change with time.

6
Pickels 1 hour ago 0 replies      
I so envy you font guys. If I had more time I would invest it in typography. Good design always starts with awesome fonts.
7
tstegart 2 hours ago 2 replies      
So what are the first two fonts? It's frustrating that they're not named.
8
buro9 3 hours ago 1 reply      
1 web page

22 HTTP requests

442.61KB transferred

Thankfully we're not facing an increasing use of mobile devices on high latency networks. Oh.

Platform and browser defaults have come so far that nowadays my preferred font stack is this:

  font-family: sans-serif;

4
Why Every Professional Should Consider Blogging technicalblogging.com
69 points by acangiano  5 hours ago   24 comments top 10
1
kalid 4 hours ago 1 reply      
Great article, and very true in my experience. One of my favorite, unexpected benefits of blogging is that it's like a time machine/memory dump for your brain.

Here's what I mean. When you're thinking about a problem, you struggle, and struggle, then find some insight. When you go to sleep that insight might be gone. So you write it down, ideally in the shortest, most vivid language that can recreate the idea in your head: I serialize my mental RAM into words, and at a later date, reload that information into my head.

If I wrote well, the deserialization is fast (few minutes) and my head is back into the mental state when I "got it". Then I can continue working, add some new insights, then serialize that new state back into words (a follow-up post, or adding to the original).

Over time, you develop some deep insights which are the result of several "me"s collaborating on the problem. I know that college-me understood class XYZ really well, and because he wrote down some insights, 10-years-after-college me can reload that memory very quickly, and maybe add something new. It's like having a perfect tutor (you, when you got it) jump into your head and bring you up to speed. I don't remember vector calculus very well, but I can deserialize my notes and in a few minutes be 80-90% up to speed.

This was a hugely unexpected benefit to blogging which I hadn't even considered. And incidentally, if you write in a way that deserializes well for you, it will likely be useful for many other people too.

tl;dr: create a standard library of thought snippets so you can #include "vectorcalculus.h"

2
stfu 8 minutes ago 0 replies      
As a serial failed blogger (3-4 "dead" blogs), let me ask you "pro" bloggers this:

- how do you stay motivated over time and avoid getting sidetracked?

- how do you keep the blog focused, i.e. keep it from turning into random rantings about daily politics, etc.?

- how do you keep pushing for blogging over other similar "productive" tasks (i.e. reading/writing articles, conference papers, etc.)?

3
petercooper 2 hours ago 2 replies      
If I hadn't blogged personally 10 years ago, I'd not have been approached to write a book for Apress. Without that book I wouldn't have launched the professional blog that was my main source of income for several years (more than the book was!). Without that professional blog, I wouldn't have got the podcasting gigs or launched the weekly newsletter which has now turned into 5 profitable newsletters and has just helped me become co-chair of an O'Reilly conference.

This is not to brag but to show the "chains" that can happen by putting yourself out there. It's totally unpredictable, but increasing your "luck surface area" has amazing outcomes. Also, consider Jeff Atwood: a similar chain arises.

4
timsally 2 hours ago 0 replies      
HNers including Patrick (http://www.hnsearch.com/search#request/all&q=by%3Apatio1...) and Matt Might (http://www.hnsearch.com/search#request/all&q=by%3Amattmi...) have been talking about this for years now. There's a lot of benefit to be had by carefully grepping through the comment archives of HN.
5
sliverstorm 1 hour ago 1 reply      
So how does this fit in with a job at a corporation that would not appreciate public broadcasts of the details of what (or heck, even the fact that) you are working on?
6
estacado 2 hours ago 1 reply      
I tried blogging once, but what put me off was the amount of work and time it took to make it look good. You've got to format it so that it's readable to people other than yourself. You need to provide links to relevant/related sites (which sometimes you have to search for, because you didn't bookmark them; you didn't think you needed to, because you didn't know you'd use them in your blog). Embedding stuff is such a pain in my experience. If it's a video, you've got to get the size right; the default is always either too big or too small. And I have yet to find a hassle-free way to embed code, especially on the free blogging sites. With images, you need to resize them so that there's a small one inside the post itself, which links to the bigger one when clicked. All that time spent on making it look good is better used to actually do my work, in my opinion.
7
Swizec 2 hours ago 0 replies      
Personally, I blog because writing helps me think. But as it turns out it's also a marvelous lead generation tool, and just generally a brilliant way to put my name out there and make sure people remember me when they're thinking "Hmm, where could I find a crazy coder right now?"

Not to mention the rush of adrenaline when a post sticks and starts raking in traffic. Very addicting.

8
HPBEggo 4 hours ago 1 reply      
I find this article to be spot on, especially for professionals in very technical careers.

Specifically, the bit about improving your writing skills is very pertinent. Most technical professionals would benefit quite a bit by learning how to communicate with those outside of their profession, i.e. a large portion of the people who might read your blog.

I, for one, keep writing a blog with essentially no readers for more or less this reason.

9
SonicSoul 2 hours ago 1 reply      
I've been thinking about starting a blog for some time. There seem to be a lot of ways to achieve this. I'm a developer, but I'm not sure I want to host/administer my own instance of WordPress. Aside from working 50-60 hours, I spend 10 hours a week on personal projects, so not a lot of spare time. What's a good streamlined way to start a blog quickly, but still have good flexibility in its appearance?
10
forrestthewoods 4 hours ago 1 reply      
A good post, but it misses the single biggest reason I blog: to share knowledge. I want to share what I've learned, and for others to do the same, so that we can all improve.
5
Megaupload Implications are plain scary for Cloud Storage alexblom.com
21 points by AlexBlom  2 hours ago   1 comment top
1
wmf 30 minutes ago 0 replies      
We need to distinguish between primary storage, syncing, backup, distribution, and other cases. I wouldn't even call MegaUpload "cloud storage"; I would call it more of a CDN. If Akamai went down, people wouldn't complain about losing data, because Akamai isn't used as primary storage.

Likewise, the US DOJ makes a distinction between uploader-pays and downloader-pays business models, so we must also.

7
Why does that QR Code go to justinsomnia.org? justinsomnia.org
223 points by jacobr  11 hours ago   46 comments top 11
1
gerggerg 7 hours ago 5 replies      
QR codes are a massive phishing scam waiting to happen. I'll just go cover up the one at my bank with a sticker of the same exact size that links to my own site that looks exactly like the bank's site. Or maybe I'll put one on the ATM and see how long I get traffic before someone takes it down.
2
mortenjorck 6 hours ago 0 replies      
When I worked in marketing communications, we had a policy that anything that could be mistaken as final, approved assets in a printed piece had to be covered with a big, diagonal, magenta "FPO" label (for position only). Whether it was an inaccurate placeholder image, or a justinsomnia QR code, it had to be obvious it was not the final art.
3
pak 5 hours ago 0 replies      
This only supports my long-running contention that normal people (outside of Japan) do not understand QR codes, and wherever they are printed, you would be better off writing a short URL. They are 1) opaque, 2) ugly, 3) impossible to memorize, 4) confusing to non-techies and 5) no faster than typing the URL for the majority of viewers.
4
sp332 9 hours ago 0 replies      
Does anyone know of a FF plugin that will decode QR codes in images on a page, and maybe even turn them into live links? Here's one for Chrome: https://chrome.google.com/webstore/detail/bfdjglobiolninfgld...
5
dholowiski 8 hours ago 2 replies      
We use QR codes extensively where I work. We use them in print ads and in TV commercials, and I have a recurring nightmare that we'll use the 'wrong' QR code somewhere disastrous... like printing a link to our competitors on 10,000 brochures or something like that.

I insist that I check all QR codes before they're sent out, and I scan them with 2-3 different QR scanning apps.

As another commenter mentioned, I often point our QR codes at a redirector URL - either a branded redirector service I built, or a WordPress site with the redirection plugin, or even its very own domain name configured for forwarding.

6
eliaskg 9 hours ago 1 reply      
I've learned that it's never a good idea to point a QR code at a fixed destination. Always create a little redirect app so you can decide later what target the link should point to.

So, for example, point the link to http://mysite.com/qr

where you have a little redirection PHP file that you can edit at any time.
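A minimal sketch of that indirection, written with Node's built-in http module instead of PHP (the handler and the target URL are illustrative; only TARGET ever needs editing after the code is printed):

```javascript
// The QR code encodes http://mysite.com/qr forever; where /qr actually goes
// lives in this one variable, which can be changed at any time.
var TARGET = "http://example.com/current-campaign"; // hypothetical destination

function redirectHandler(req, res) {
  if (req.url === "/qr") {
    // 302: a temporary redirect, so clients keep coming back to us first.
    res.writeHead(302, { Location: TARGET });
  } else {
    res.writeHead(404);
  }
  res.end();
}

// To serve it: require("http").createServer(redirectHandler).listen(8080);
```

Using a 302 rather than a 301 matters here: a permanent redirect could be cached, defeating the whole point of keeping the destination editable.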

7
franze 3 hours ago 0 replies      
Okay, that explains why http://miniqr.com/justinsomnia is one of the most visited pages on miniqr; the QR code was scanned via http://miniqr.com/reader.php more than 100 times. (A nice gallery of people scanning the QR code is visible if you visit the first URL.)
9
mkramlich 4 hours ago 1 reply      
QR codes are the new XML: great technology that's perfect for a certain role but being used in far too many other roles where it's a horrible fit. But it's another buzzword for a resume!
10
mnutt 9 hours ago 3 replies      
Interesting story, though I don't understand what he says at the end about not being able to redirect the QR Code to another link because the QR Code is the link. Why not just send a 302 redirect?

He sounds like he wouldn't do it on principle, but I don't understand the technical reason why he couldn't.

11
habudibab 10 hours ago 0 replies      
Would be great for viral marketing. As far as I know, it is possible to forge codes that are valid but have an image embedded - a stick figure giving another one oral pleasures, for example. Place it in the streets, and people who think of QR codes as random jumble will see it as a funny coincidence, take a picture for the funny pages, and maybe decipher it and visit your erotic gadgets shop.
10
Zocial CSS3 Buttons smcllns.com
28 points by semanticist  3 hours ago   18 comments top 8
1
oldstrangers 1 hour ago 0 replies      
I don't understand the G+ button... no reason at all for the rainbow above it. I want consistency in my buttons, and this needlessly breaks that.
2
sbisker 19 minutes ago 0 replies      
Am I the only one whose eyes are really hurt by the brightness of the colors chosen here? (Seriously, I may be the only one - they do seem a little unnecessarily bright, but I've also been feeling tired today.)
3
lukifer 24 minutes ago 0 replies      
Bookmarked for future use. Nice work.

Side note: IE5?!

4
ten7 1 hour ago 2 replies      
Looks crappy in every version of Internet Explorer, 6 through 9. Is it shocking that IE doesn't play nicely? https://ten7.litmus.com/pub/adfdb99
5
paul9290 42 minutes ago 0 replies      
Great work! I'm going to use some on my site. Thanks!
6
latchkey 2 hours ago 0 replies      
This is really well done and funny. You missed one... https://github.com/mozilla/browserid/wiki/How-to-Use-Browser...
7
shaka881 2 hours ago 1 reply      
OpenID in the same category as IE5.

Ouch.

That hurts.

8
sergioramos 2 hours ago 0 replies      
The G+ square icon is higher than the others
11
Sixteen Concerned Scientists: No Need to Panic About Global Warming wsj.com
54 points by llambda  2 hours ago   76 comments top 24
1
sambeau 1 hour ago 1 reply      
This calls it:

http://www.forbes.com/sites/petergleick/2012/01/27/remarkabl...

The Wall Street Journal's editorial board has long been understood to be not only antagonistic to the facts of climate science, but hostile. But in a remarkable example of their unabashed bias, on Friday they published an opinion piece that not only repeats many of the flawed and misleading arguments about climate science, but purports to be of special significance because it was signed by 16 “scientists.”

(Note: I wish he hadn't put the word "scientists" in quotes as it, sadly, undermines a good editorial piece.)

2
_delirium 2 hours ago 1 reply      
This article is a bit high on polemics and low on science for my tastes (unsurprising since it's a WSJ editorial, but still). Is there a more sober whitepaper version that spends less time explaining that CO2 is colorless (true but irrelevant) and talking about the ominous drumbeats of their opponents, and more time on their thesis? Not just a rhetorical question; it seems like they might have a defensible thesis, especially about the cost/benefit profile, but it's hard to tell as presented.
3
jeffreymcmanus 2 hours ago 8 replies      
It is a common tactic of climate change deniers to line up some number of scientists to convey the sense that there is some question about climate change. But this isn't how science works; it's not the case that because one or 10 or 100 scientists have questions about gravity, gravity doesn't exist.

The existence of climate change caused by humans is settled science. No amount of Wall Street Journal editorializing or smears by conservative politicians is going to change that.

4
themgt 1 hour ago 1 reply      
"Easter Island 2: Earth" is going to be so much more entertaining for the people 1000 years from now. We didn't get to read through all their "why cutting down the last tree won't be so bad - plus we need more Moai statues" hack WSJ op-eds from concerned islander scientists

What I learned from this letter: CO2 emissions are good for plants and poor people; controlling CO2 emissions is Stalin

5
saulrh 2 hours ago 1 reply      
Sixteen Thousand Concerned Scientists: Get These Posers Out of Our Hair and Let Us Save Your Food Crops
6
dantheman 1 hour ago 2 replies      
Flagged - this type of article doesn't really belong on HN, as there isn't really anything new presented in it and it will merely cause a flamewar.
7
gaurav_v 2 hours ago 1 reply      
Googling the co-signatories' names reveals that several work for Exxon. I don't see how the WSJ didn't consider this relevant information to disclose, instead presenting them only as 'distinguished scientists.'
8
lukifer 17 minutes ago 0 replies      
It's quite possible that the dangers of climate change are alarmist and overstated. But to think that there aren't serious side effects to over a century of industrialization is to be short-sighted and willfully ignorant. There's a reason that geologists now refer to the current era as the "Anthropocene".
9
kristofferR 1 hour ago 0 replies      
I wouldn't be surprised if WSJ starts promoting Intelligent Design as a real scientific opinion soon too.
10
gerggerg 1 hour ago 2 replies      
> The fact is that CO2 is not a pollutant. CO2 is a colorless and odorless gas, exhaled at high concentrations by each of us, and a key component of the biosphere's life cycle.

OK, let's play this game. The fact is that H2O is not a pollutant. It's a molecule that sustains life and a key component of the biosphere's life cycle.

Bring on the ice age!!

11
mkramlich 2 hours ago 0 replies      
It's possible to get 16 people to come forward and back any particular claim or position. So take this with skepticism.
12
ghouse 1 hour ago 2 replies      
As a society, we know how to take action under uncertainty and imperfect information. It makes no sense to me when people argue we should take no action. The only rational viewpoint I understand is taking action (and realizing cost) proportionate to our confidence. Risk-based cost-benefit analysis.

The "It's too expensive to address the problem" argument doesn't hold water; it presumes the outcome is binary.

13
prawn 1 hour ago 0 replies      
Here is a thread on AskScience regarding this article: http://www.reddit.com/r/askscience/comments/p04l4/how_accura...
14
jmmcd 1 hour ago 0 replies      
> The fact is that CO2 is not a pollutant. CO2 is a colorless and odorless gas, exhaled at high concentrations by each of us, and a key component of the biosphere's life cycle. Plants do so much better with more CO2 that greenhouse operators often increase the CO2 concentrations by factors of three or four to get better growth.

Yes, and ozone is a poison, but in the high atmosphere it still protects you from skin cancer. This argument seems calculated to appeal to the most scientifically ignorant of audiences.

> This is no surprise since plants and animals evolved when CO2 concentrations were about 10 times larger than they are today.

Yes, because evolution happened at a single point in the past, and no evolution at all happened in the many millions of years since CO2 concentrations dropped below that level.

15
pbreit 38 minutes ago 0 replies      
The editorial is confusing. On one hand, it seems to suggest that while human-caused global warming is happening, it is not necessary to panic, and in fact it would be better to divert resources away from global warming mitigation and back to regular ole projects.

But then it also makes a number of suggestions that global warming is not happening or that if it is, it is not human-driven.

I do think scientists questioning aspects of the global warming debate should be able to have their say but they need to make sure their arguments are sound and fact-based.

16
pg_bot 1 hour ago 0 replies      
I find climate projections to be very similar to financial projections of startups. A typical chart will show hockey-stick-like growth (temp vs revenue) over a long period of time. Since both groups need to prove that they will have an impact in order to get funding, they are motivated to show the most drastic/promising future. As a statistician, I tend to discount any long-term projections for complex systems, because I know that there are too many factors that will influence the results.
17
jl6 1 hour ago 1 reply      
Well the headline at least is correct: even in a dire emergency, panic is the wrong response.
18
unabridged 1 hour ago 1 reply      
My #1 doubt always comes from what constitutes a statistically significant temperature change on a 100-150 yr scale (or however long we have had consistent measurements). What is the variance in 100 yr temperature differences over the last 100K to 1M years? Even if the current historical models are accurate do we really have accuracy at this resolution? 2 or 3 degrees C change over 100 yrs may be a common occurrence.

To me it's like trying to judge whether a stock price movement over the course of 10 minutes is significant based on historical daily stock prices.
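The question being raised is a standard significance check: is an observed change large relative to the historical variability of changes over the same window length? A toy sketch (the numbers below are invented for illustration, not real climate data):

```javascript
// Hypothetical 100-year temperature changes (deg C) sampled from the
// historical record. Entirely made up for illustration.
var historicalDeltas = [-1.8, 0.4, -0.9, 2.1, -0.3, 1.2, -2.4, 0.7, 1.6, -0.6];

var n = historicalDeltas.length;
var mean = historicalDeltas.reduce(function (a, b) { return a + b; }, 0) / n;

// Sample variance and standard deviation of the historical changes.
var variance = historicalDeltas.reduce(function (a, d) {
  return a + (d - mean) * (d - mean);
}, 0) / (n - 1);
var sd = Math.sqrt(variance);

// Hypothetical observed change over the most recent 100-year window.
var observed = 0.8;

// z-score: how many standard deviations from the historical mean.
var z = (observed - mean) / sd;
console.log(z.toFixed(2));
```

With these invented numbers the observation sits well inside one standard deviation; with real data the answer could of course go either way, which is exactly the point of the comment.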

19
etfb 51 minutes ago 0 replies      
I think the heading is wrong. Should be "Sixteen Concerned Physicists, Administrators, Politicians and a Couple of Ex-Climatologists: No Need to Blame Us For Obama Getting Re-Elected"
20
darksaga 1 hour ago 7 replies      
I actually have two points about this.

First of all, I still remember reading articles from the 1970s by scientists who were convinced we were headed for another ice age. That's right, another ice age. The exact opposite of what climate experts are telling us now - less than 40 years ago.

My other point is this - let's just assume all these scientists are right. Wouldn't it be better if we just did what they suggest and reduced our impact on the planet? If they're wrong, we help the planet. If they're right, we're still OK. I would prefer this outcome to not doing anything, then finding out they're right, and then being really screwed.

For reference, here's the Time Magazine 1974 article about the coming Ice Age - http://bit.ly/zNZfI8

21
mathattack 1 hour ago 0 replies      
It would be interesting to follow the money trail on these 16 scientists.

I do somewhat buy the plausibility of the economic argument that reacting to global warming is more efficient than trying to stop it.

The "it's not proven" crowd seem to be in the same camp as the folks claiming you can't prove smoking causes cancer.

22
molecularbutter 1 hour ago 0 replies      
More global warming denialism from a Murdoch news source? I'd pretend to be shocked, but really the only surprising thing is that the WSJ is still even vaguely respectable, considering its ownership and bizarre bias.
23
tomjen3 1 hour ago 1 reply      
Even if the oceans were going up 10m in a week it seems counterproductive to panic -- it might be better to buy a boat or build a dike.
24
gavanwoolery 1 hour ago 1 reply      
I love how religious everyone gets about this.

What does that mean? A group of people preach one thing, and everyone takes their word for it. Your college professor told you anthropogenic climate change is the real deal, and you took their word for it. Your conservative talk show host told you the opposite, and you took their word for it.

I do not care what f_cking side you are on. Yes, that's right, I called your "science" a religion. It is a religion when you use more faith than reason or logic. It's a religion when you get emotionally involved in your belief.

Think for yourself. What does a REAL scientist try to do? Disprove their hypothesis. Look for holes in your argument, fallacies, unknown variables, false positives, etc.

<end rant>

14
Why Software Development Estimations Are Regularly Off diegobasch.com
84 points by diego  7 hours ago   40 comments top 21
1
michaelochurch 5 hours ago 2 replies      
All this is true, and there's another statistically-rooted problem that pertains even to mundane "enterprise" projects where there's little new invention and where reliable time estimates should be possible.

Let's say that you have 20 tasks. Each involves rolling a 10-sided die. If it's a 1 through 8, wait that number of minutes. If it's a 9, wait 15 minutes. If it's a 10, wait an hour.

How long is this string of tasks going to take? Summing the median time expectancy, we get a total of 110 minutes, because the median time for a task is 5.5 minutes. The actual expected time to completion is 222 minutes, with 5+ hours not being unreasonable if one rolls a lot of 9's and 10's.

This is an obvious example where summing the median expected time for the tasks is ridiculous, but it's exactly what people do when they compute time estimates, even though the reality in the field is that the time-cost distribution has a lot more weight on the right. (That is, it's more common for a "6-month" project to take 8 months than 4. In statistics-wonk terms, the distribution is "log-normal".)

Software estimates are generally computed (implicitly) by summing the good-case (25th to 50th percentile) times-to-completion, assuming perfect parallelism with no communication overhead, and with a tendency for unexpected tasks, undocumented responsibilities, and bugs to be overlooked outright.
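The arithmetic in the dice example is easy to verify directly (a quick sketch using the exact numbers from the comment above):

```javascript
// Per-task cost distribution: die faces 1-8 cost their face value in
// minutes, a 9 costs 15 minutes, and a 10 costs 60 minutes.
var outcomes = [1, 2, 3, 4, 5, 6, 7, 8, 15, 60];
var tasks = 20;

// Median cost per task: mean of the two middle outcomes (5 and 6).
var medianPerTask = (outcomes[4] + outcomes[5]) / 2; // 5.5

// Expected (mean) cost per task: every face is equally likely.
var sum = outcomes.reduce(function (a, b) { return a + b; }, 0); // 111

var medianEstimate = tasks * medianPerTask;        // the "sum of medians" estimate
var expectedTotal = (tasks * sum) / outcomes.length; // the true expected total

console.log(medianEstimate, expectedTotal); // 110 222
```

The gap between 110 and 222 comes entirely from the heavy right tail (the 15- and 60-minute outcomes), which the median ignores.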

2
fragsworth 5 hours ago 2 replies      
This guy is wrong, and this is an issue of correlation vs. causation. Just because software tends to be something new, and software estimates tend to be off, doesn't mean that newness causes estimates to be off.

If you give a software developer a very clear specification of what they need to do, and don't change that specification, it's very likely that they'll give an accurate estimate and get it done in time. Probably even faster, because most software developers pad their estimates somewhat.

Also, it is possible to "invent new things" that have very clear specifications that don't change much. It might not be very common, but it does happen. Especially for relatively simple things.

Two things actually cause inaccurate estimates:

1) a lack of clear specifications of what the project is going to be

2) a change in the specifications

In many cases, the expense of providing clear specifications is not justified. This is normally the case when companies develop their own products, where they would implement them regardless of how long it would take and bear the expenses themselves.

When software development services are provided for other parties, there is normally a "requirements gathering" phase where the developer tries to get a very comprehensive specification for the project. Normally, this specification and its estimates will be very accurate. However, after realizing their mistaken assumptions in the requirements gathering phase, the client tends to want something different from what they wanted before - it is these changes of requirements that cause initial project estimates to be off.

In the end, no estimate has to be off if we provide clear specifications - we just have to accept that requirements/specifications are very likely to change during the development of any product.

3
ohyes 4 hours ago 0 replies      
The OP is correct if you do not have clearly defined goals for the project, and it is a field that you have not really worked in before. This is probably true of some start-ups.

However, if you are working for an engineering company, you often have similar past projects that can provide a guideline for the scope of the software or software modification you are undertaking.

I would posit that for the majority of software projects, the invention hypothesis cannot be true, because you are not really inventing anything new.

You are more likely replacing something that is done manually or done via a hideous and error prone excel spreadsheet machinations.

So my complementary hypothesis is that estimating the budget for a software project is a social issue.

Example:
My manager and I come up with some really cool feature that will let us market our product to new people. My manager asks me for a time estimate. He says:

>How long do you think this project would take for you to do?

Basically, it is a simple question that is also completely overloaded with little details, philosophical questions, and other minutia.

I can give an estimate of how long it will take for me to get this project done. But how long a project will take me is a derived measurement: it is units of software engineering work divided by my rate of completion. What we really want is units of software engineering work. This does not exist. You can't invoice for 10,000 ml of software engineering.

The punchline for this is that my rate of completion will be different from Steve or Charlie's and we don't have the rate or the units, just the aggregate for any given project. And it seems to be that the tendency is to go to your best programmer to get an hours estimate, rather than your mediocre programmer, regardless of who will be working on the particular project (you probably don't know who will be working on it when you are figuring out financials for a project that is 6-10 months off).

There is no standardized software engineering test that gives you a baseline for probable programmer efficiency/ability so that you can adjust the budget of the project accordingly.

There are also questions about 'what is done', interruptions from other projects that you have going on simultaneously, interpretation of various details in your spec.

And there is other bureaucratic stuff. I've had it where I was budgeted for a few months and had a delivery date representing that, and two months into the timetable I still hadn't received the relevant input documents that I needed to complete the project.

Or the other version, you budget for a few months and some know-nothing pushes back and tells you do not need that much time (or really that his financials will look better if you finish it this quarter rather than next).

There are certain unsolved problems in computer science that you may encounter. When asked to solve them as part of my day job, I prefer to gracefully decline and offer a solution that I know has a chance of working and being implemented in a reasonable amount of time (by doing some research beforehand and figuring out what is reasonable within current technology/knowledge and what is pipe-dream/'interesting research' material).

There may be things that you do not yet fully understand when working on a project. But it is possible to estimate how long it will take you to learn those things. It is very difficult to estimate the complications of many of the social factors. If stupid Steve is working on the project instead of me, it might never get completed. If a jackass up the chain of command cuts my budget halfway through, I can't predict that. If I get pulled off onto an emergency project, who knows what will happen. I think this is the real reason why startups and small workgroups do so much better at software. By reducing the number of people involved, you reduce the chaos factor and the amount of damage control you have to do due to someone monkeying with your project when they really shouldn't be.

4
aharrison 5 hours ago 2 replies      
I believe that it is erroneous to believe that this has anything to do with software. Estimates in most or all fields are off because humans are bad at predicting timelines. This is referred to as the planning fallacy: http://en.wikipedia.org/wiki/Planning_fallacy

I heartily recommend reading the studies associated with the Wikipedia page, as they demonstrate that this applies to more fields than just software. I will agree that the data does appear to support the hypothesis that the less repetitive the task, the more likely it is to fall subject to the planning fallacy.

5
ScottBurson 6 hours ago 0 replies      
Here's how I like to think of it. Consider the project as a tree of goals with subgoals. The root goal is the whole project; it can be broken down into components which are the subgoals; each of those has sub-components, all the way down to the individual lines of code.

The fundamental problem is that until you're actually well into the project, you can only see down a few levels; but the amount of effort that is going to be required is a function of the length of the fringe of the tree, which you can't see, at least not very clearly, until you get there.
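One way to make the tree-of-goals picture concrete is a toy sketch (the component names and numbers here are made up for illustration, not from the comment):

```python
# A goal is either a leaf (a directly codeable task) or a list of subgoals.
def fringe_size(goal):
    """Count leaf tasks: the 'fringe' whose length drives total effort."""
    if isinstance(goal, list):
        return sum(fringe_size(g) for g in goal)
    return 1  # a leaf: one unit of actual work

# What you can see at planning time: two components, a few subgoals each.
visible_plan = [["ui"], ["api", "db"]]

# What the tree turns out to contain once you are deep into the project.
actual_tree = [
    ["ui", ["views", "state", "edge-cases"]],
    ["api", "db", ["migrations", "retries"]],
]

print(fringe_size(visible_plan), fringe_size(actual_tree))  # 3 8
```

The estimate made from `visible_plan` is off not because the counting is wrong, but because the fringe only becomes visible as the tree is expanded.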

6
msluyter 1 hour ago 0 replies      
I like this analogy, and it applies to a number of projects I've been involved with. On the other hand, at work I also encounter change requests that are basically repeats of existing projects with slight modifications, and/or changes to well understood subsystems such that I can pretty much nail the estimate. Not surprisingly, these are also relatively simple changes.

It's the whole "known unknowns" vs. "unknown unknowns" thing, and I think it's useful to distinguish between the two.

7
paulhauggis 7 hours ago 0 replies      
In my experience, it's usually way off because the people in charge change the specifications at the drop of a hat.
Every company I've worked for has done this. No matter how much you plan and try to stick to that plan, things always change. Many times, even with these changes, you are expected to get the project done in the original time-frame.

At my last company, they needed an e-commerce site done. I had the coding finished in roughly 3 months and we were just waiting on the design. We went through 3 artists and a design by committee (the boss rounded up everyone in the company once a week to give their feedback). 2 entire finished designs were also scrapped. In addition to all of this, the boss would change his opinion of it on a daily basis (I think it depended on his mood).

A year into the project, they questioned me as to why the project wasn't finished. This was after I had been telling them for months why we couldn't run a project this way. A year after this, the project was finished.
What infuriates me is that a company like this is still making money. Every aspect of the company was run like the above scenario. Over time, every good person they ever had left in frustration (including me).

8
fara 34 minutes ago 0 replies      
Estimates are regularly off because they are just estimates, not futurology. And even when they are accurate, it is hard to know whether that's due to a good estimate or because of Parkinson's law.

I'd like to hear some thoughts on how to improve those estimates, rather than explanations of why it is hard, as Brooks gave in MMM.

9
vbtemp 6 hours ago 0 replies      
Honestly I thought it was quite prescient when this guy wrote:

"Hey inventor, I need a drone that will pick up mice (but not other animals), locate my ex-girlfriend and drop them on her head. Give me a budget and a time estimate."

For a good amount of software (excluding boilerplate, plumbing e-commerce-type software) - this is what it's all about. I had a long argument with an uncle of mine who's getting into the field of software project management. I get the sense that among the project management types, there's a belief that software can be constructed by building Gantt charts, costing spreadsheets, and "staff months". They claim computer science degrees "don't teach the practical stuff", and it's as if they are completely unaware that hard problems lurk there.

10
ColinDabritz 6 hours ago 1 reply      
The article states the problem quite well: the people who want software solutions really don't understand what software is. I find that users' and managers' mental model of software is all too often a 'magic box'.

From Frequently Forgotten Fundamental Facts about Software Engineering:
http://www.computer.org/portal/web/buildyourcareer/fa035?utm...

RD1. One of the two most common causes of runaway projects is unstable requirements. (For the other, see ES1.)

ES1. One of the two most common causes of runaway projects is optimistic estimation. (For the other, see RD1.)

ES2. Most software estimates are performed at the beginning of the life cycle. This makes sense until we realize that this occurs before the requirements phase and thus before the problem is understood. Estimation therefore usually occurs at the wrong time.

ES3. Most software estimates are made, according to several researchers, by either upper management or marketing, not by the people who will build the software or by their managers. Therefore, the wrong people are doing estimation.

Glass has a book that goes into more depth on these points and has numerous citations (Facts and Fallacies of Software Engineering, Robert L Glass) and covers a wonderful variety of other topics.

11
sankyo 5 hours ago 0 replies      
When someone asks me for an estimate I like to pose this question: You just realized that you lost your keys. How long will it take you to find them? You've lost your keys before so you should be able to estimate how long it will take, right?

Well, maybe. Probably you can retrace your steps and figure out where you set them down while you were on the phone, or maybe they fell into the couch. There is also a possibility that it is going to take you a long time to find those keys.

The software estimate often depends on environments being available, other people finishing the piece you interact with on time and working the way that you expected, and you may have tricky defects that you really can't estimate how long it will take to fix them or how many there will be.

A clear specification is only one piece of the puzzle. To switch metaphors, pretend it is 1812 and try to give an estimate of how long it will take to get from Chicago to San Francisco. Yes, you have a map with routes, but there is also weather, broken wagon wheels, dysentery, bandits, you name it. (just ask the Donner party). Let's just estimate it at 90 days and probably we'll lose some people along the way and pick up a few more.

At best I try to give the suitcase estimate: I can estimate that you will need one suitcase for a weekend trip, you most likely will not need two.

12
T_S_ 2 hours ago 0 replies      
Several reasons. Managers want to believe that programming is like installing a kitchen. It should have a clear start, middle, end, and a reasonable estimate of time and cost that could almost be written in a contract. Devs don't want to admit they never installed a kitchen before and need to research the topic first, because if they do the job will be given to someone else who is willing to say they install kitchens all day long.

Oh yeah, managers also believe that changing the counter top from marble to granite in the middle of the project should be free, because, hey, it's software.

13
protomyth 2 hours ago 0 replies      
I think michaelochurch's comment http://news.ycombinator.com/item?id=3522910 is more accurate than the article, but I think the type of problem does have quite a lot to do with it.

If the problem domain is defined by physics or physical function, I think most time estimates are going to be a lot closer. You might still get outliers, but most experienced people in the domain will probably give you decent estimates.

When you get humans' preferences, design, business processes, and regulations into the mix, estimates can be wildly off. I once thought a defined business process would yield a decent software specification, but that never seemed to work out, since the "flexibility" argument always seems to creep in. Never mind multinational companies that have a defined business process that is actually executed totally differently at every location.

14
teyc 3 hours ago 0 replies      
Most enterprise software is based around forms, validation and business rules. Where things start to become unstuck (look for "itfailures" on Delicious) are:

1. The languages and tools change - but are often immature for the task, which means a lot of reinvention and re-workarounds

2. Users ask for nice-to-haves that are often difficult to implement because they are one-off operations - e.g. federated sign-on in frameworks that do not support it natively

3. The specification problem - the more people you have involved, more people are required to understand the problem domain in order to translate into code. The problem is particularly bad when rolling out new payroll systems in Government/heavily unionized industries, as there are just a lot of rules which can be very difficult to implement.

4. It is still ridiculously difficult to test enterprise software across the full stack.

15
michaelrwolfe 4 hours ago 0 replies      
Diego, I don't disagree with your post...there is no reason that someone hiking from SF to LA shouldn't do their homework, be realistic, be prepared, push back on their boss who wants them to do it faster, and do it better the second time.

But I'm not covering how the world should work. I'm covering how it does work. The reason this article resonates is precisely because we all see the same mistakes being made again and again.

I also agree that many software projects are more analogous to Columbus's trip to the new world...a tough trip even if you knew for sure there was a new world and where it was, almost impossible if you don't.

But realistically most people are working on web sites, enterprise apps, mobile apps, where there is enough prior experience that we should be able to make reasonable estimates. We aren't curing cancer here.

Yes the same mistakes get made again and again...

16
bediger 5 hours ago 0 replies      
Not always the case, but sometimes Gödel incompleteness gets in the way; other times the problem is inadvertently made NP-hard; and other times it's like a lambda calculus term: you can't tell whether it has a normal form without going ahead and reducing it.

You can think of this by imagining that you're asked to solve a Rubik's cube. You can look at 5 of the sides, but not touch it. Tell them how many moves it will take to solve. The theoretical maximum is 20, I believe. In this case, and many others in programming, you can't know how long it will take to get something done. The fastest way to find out how long it will take to finish a system, is to do the work and finish it.

17
dutchrapley 6 hours ago 0 replies      
Because you don't know what you don't know. That's why.
18
kragen 4 hours ago 0 replies      
We should call this Basch's Law: "Software projects worth doing are not repetitions of previous projects."

The software tasks that are easy to estimate are the same ones that you already should have automated, maybe with a DSL. However, automating them is hard to estimate.

19
foundart 6 hours ago 0 replies      
If you want to really dig into it, and learn some approaches to getting better at it, I suggest you read Software Estimation: Demystifying the Black Art, by Steve McConnell.
20
Reltair 6 hours ago 1 reply      
I've always found estimating hours for a project to be somewhat difficult, especially when the client wants a concrete number right at the beginning for the entire project. I usually roughly estimate how long I think it will take, and then raise that by 1/3 to account for changes in specification by the client along the way.
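The padding heuristic described above amounts to a one-line calculation (a sketch; the function name and the 1/3 default are just this commenter's rule of thumb, not a standard formula):

```python
def padded_estimate(hours, pad=1/3):
    """Raise a raw estimate by a fixed fraction to absorb spec changes."""
    return hours * (1 + pad)

print(round(padded_estimate(30), 1))  # a 30-hour guess becomes 40.0
```

A more cautious shop might use a larger `pad` for projects with vaguer specifications.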
21
brudgers 6 hours ago 1 reply      
There is a term which describes that mode of thinking, "Not Invented Here Syndrome."
15
US plummets on World Press Freedom Index, from 27th to 47th slashdot.org
47 points by gasull  5 hours ago   19 comments top 3
1
Zigurd 1 hour ago 2 replies      
The first two top-level comments I see here amount to:

a) Beating bloggers, especially hippies, doesn't count

b) If we had only prevented the protests, we'd be doing better

Still more reasons America is in a slump in these rankings

2
mohene1 4 hours ago 3 replies      
The key seems to be: in order to be high on the list, do not have protests and uprisings in your country.
3
Duff 4 hours ago 1 reply      
This is a stunt that does more to take away from the legitimacy of Reporters Without Borders than anything else.

If they dropped the US's press freedom ranking because of the cozy relationships between government officials and mainstream media, or the manipulative behavior of the government towards the press, or any of a dozen other factors, I'd be interested.

But they didn't -- instead, they made some vague remark about the oppression of the occupy people. The occupy movement is a joke that is either imploding or getting hijacked by the local activist union movements.

16
Testing Socket.IO With Mocha, Should.js and Socket.IO Client liamkaufman.com
11 points by liamk  1 hour ago   discuss
19
Founder Soup: Stanford and Andreessen's New Startup Generator techcrunch.com
10 points by FredBrach  2 hours ago   2 comments top 2
1
rfurmani 1 hour ago 0 replies      
I was at this event and it was quite an interesting way to see what kinds of ideas people are working on and what kind of help they need. There were 50 or so elevator pitches and a chance to speak with the presenters afterwards (which was useful, since some of the ideas seemed a bit useless until you actually spoke with the founders face to face).
2
pnmahoney 48 minutes ago 0 replies      
PS: hosted at Stanford? Yes. Stanford's program? No.
20
Why Every Entrepreneur Should Self-Publish a Book techcrunch.com
33 points by coolrhymes  6 hours ago   9 comments top 3
1
kevinpet 37 minutes ago 0 replies      
Missing from this article: you have something worthwhile to say. Honestly, I don't think I have anything to justify a book length treatment at this time. Okay, I'm not an entrepreneur, just an engineer at a startup, but I don't see the founders having all that much time to be sitting around writing books either.
2
chrisacky 4 hours ago 2 replies      
I knew this would be an article by Altucher by looking at his title. He's a great writer too, always great reads.
3
motxilo 2 hours ago 3 replies      
Related to this, what are good book-authoring tools? Or at least, what do you use for writing material that can be exported to PDF or HTML, or printed out? I'm talking about things along the lines of TeX, DocBook, etc.
21
Paulo Coelho advertises on Pirate Bay front page paulocoelhoblog.com
161 points by rafamvc  8 hours ago   30 comments top 11
1
nextparadigms 6 hours ago 1 reply      
Something tells me MPAA and RIAA will desperately try to make the US Gov take down TPB's site now too.

Piracy scares them, but real competition at this level where they can actually poach popular artists from them must terrify them.

2
yannis 6 hours ago 1 reply      
The most interesting aspect is Paulo Coelho's claim that ".. the physical sales of my books are growing since my readers post them in P2P sites". This is in agreement with what web applications have done all along: lure users in with a freemium model, and a certain portion of them will eventually buy the product.
3
blakesmith 6 hours ago 0 replies      
We're going to see more and more of this as time goes on. Artists are waking up to the fact that the RIAA/MPAA are preventing them from interacting with their audiences directly. You're making a very bold statement that speaks straight to your fans when you align yourself with TPB (and also send a very direct message to the MPAA/RIAA). Good for him.
4
fpp 2 hours ago 0 replies      
From Paulo Coelho's blog:
(http://paulocoelhoblog.com/2012/01/20/welcome-to-pirate-my-b...)
"...Nowadays, I run a 'Pirate Coelho' website, giving links to any books of mine that are available on P2P sites.
And my sales continue to grow - nearly 140 million copies worldwide..."

Chapeau, and a great lesson in how to do things:

One other way: we should stop accepting the labeling of sites with terms that carry negative connotations for the Vogons of the entertainment industry, and instead play their game against them by using terminology they are more familiar with. That will also make some of them actually read critical material.

In this case how about "agile communications on literature" or "active user engagement with the content" or maybe "community building beyond consumption" (Sorry for the high levels of nonsense but the best way to beat an overly strong enemy is using their weaknesses and their weapons against them.)

5
sloak 6 hours ago 2 replies      
Would this work if the author/artist were not as famous as Coelho or Louis C.K.?
6
mitchie_luna 7 hours ago 2 replies      
Paulo Coelho is really amazing! He thinks differently. I agree with his ideas that P2P sharing is a way of introducing an artist's work and that a good idea doesn't need protection. Of course, if people know that your book or your music is very good, then for sure they will buy the hard copy or the CD, because they want to own a product written or produced by a well-known writer or musician.
7
paulovsk 1 hour ago 0 replies      
Can I comment in Portuguese on this post?
Just asking :D

Paulo's initiative is very good. I haven't read any of his books yet, but since he is Will Smith's favorite, I will read one soon.

8
juliano_q 7 hours ago 0 replies      
I am not a big fan of his books, though I do read a few of them occasionally. But I am really proud that this guy is Brazilian. Great ideas.
9
nathanpc 6 hours ago 0 replies      
This is how you do it in the internet era.
10
cstefanovici 7 hours ago 0 replies      
This is great.
11
pyre 5 hours ago 0 replies      
The point is that piracy has happened throughout human history. The recent attempts to act as if it's possible to eliminate piracy are lacking in common sense (or are just downright intentionally deceptive).

It's content creators like this that demonstrate the correct way to deal with the situation regardless of your views on the morality of piracy. While the MPAA/RIAA spend all of their resources attempting to conquer gravity, they could have spent their resources to mold their business into one that incorporates gravity into its strategy.

22
Bill Gates' masterpiece, DONKEY.BAS, finally available for iPhone technologizer.com
41 points by technologizer  7 hours ago   16 comments top 5
1
RodgerTheGreat 6 hours ago 2 replies      
My knowledge of DONKEY.BAS comes from Andy Hertzfeld's story[1] on Folklore.org, which paints the game in a somewhat less nostalgic light.

[1] http://www.folklore.org/StoryView.py?project=Macintosh&s...

2
tzs 1 hour ago 0 replies      
Altair BASIC was Gates' masterpiece. DONKEY.BAS was probably just a quick throwaway.
3
gizmo 5 hours ago 0 replies      
4
ben1040 3 hours ago 0 replies      
Luckily someone has already brought GORILLAS.BAS to the iPhone:

http://gorillas.lyndir.com/demo

5
bkaid 2 hours ago 1 reply      
Interesting that the iOS download is 0.5 MB, which is the lowest I ever remember seeing. And probably 50x the size of the original version.
23
Harvard Starts Experiment Fund allthingsd.com
3 points by Pasanp  38 minutes ago   discuss
25
Former Apple Exec goes to JC Penney and slashes prices by 40% nymag.com
73 points by kennethologist  10 hours ago   46 comments top 12
1
blahedo 7 hours ago 2 replies      
It's down at the bottom of the article, but this:

> Penney will use whole figures when pricing items. In other words, you won't see jeans with a price tag of $19.99, but rather $19 or $20.

is actually my favourite part of the whole thing. Taken together, it's a major overhaul that says, hey, sorry, over decades we'd gotten wrapped up in all these little marketing tricks to increase our sales and bottom line, but we decided it was time to treat y'all like grownups for a change.

I hope it works out for them.

2
technoslut 9 hours ago 2 replies      
I could see this being hugely successful. The only question is whether people have been conditioned for the sale season. Growing up in a household full of women who buy clothing and accessories all the time, they particularly save up for when holidays approach.
3
JeremyStein 4 hours ago 0 replies      
Thank you OP (kennethologist) for fixing the title of this article. The nymag title is "JCPenney Permanently Cuts All Prices by 40 Percent" which is certainly not true. Copy editor Bill Walsh writes:

The company meant "permanent" in the sense of regular prices as opposed to sale prices. It's lowering regular prices and cutting back on sales in a strategy that may or may not work. If the strategy doesn't work, the company has every right to change course -- and if it does, a bunch of news outlets will be revealed as big, fat liars.

Bill goes on to criticize lazy journalists who just write what they're told:
http://theslot.blogspot.com/2012/01/penney-want-cracker.html

4
matsiyatzy 7 hours ago 2 replies      
This is referred to as the "fast fashion" business model. H&M and Zara in Europe have been doing it for quite a while already. Wikipedia article here: http://en.wikipedia.org/wiki/Fast_fashion

It's not so surprising that they go for this business model, since "fast fashion" retailers have an average profit margin of 16%, compared to 7% for typical retailers, at least according to this source: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&#...

Basically it's "just-in-time" production coupled with "agile"-type response with regards to fashion trends and customer wants.

5
Corrado 2 hours ago 0 replies      
From informally talking with the women in my household, they all seem to hate the idea of not having sales. I told them that JCP will still have sales (Christmas, Valentine's Day, etc.) but they were nonplussed by the whole idea. The first thing my wife did when she got her new JCP catalog was look for coupons, and she was sorely disappointed.

I think it's hard-wired into some people's (read: women's) brains not to purchase anything unless it's on sale. Every single one of them said it was a bad idea.

6
yarone 8 hours ago 1 reply      
7
troymc 8 hours ago 1 reply      
Is there a website where I can go to see when normal retailers have had sales in the past, so that I might predict the arrival of the next one?

Better yet, imagine this data set: the price of every consumer good sold in retail stores, as a function of store and time.

What would be the impact of such a database, if it became widely available, say, from a company whose mission is to organize the world's information and make it universally accessible and useful?

8
eps 8 hours ago 2 replies      
One thing to keep in mind is that this cheapens the brand. It's one thing to buy a $10 t-shirt at Gap and another to buy a $30 tee at a 60% discount at Banana Republic. They might be identical, but the perceived value of the latter is higher.

This is a very ballsy move for the company. Must be desperate times.

9
benjohnson 8 hours ago 1 reply      
Knowing that I can pop in at any time and get a fair price is much better than wondering if I'm getting screwed that particular day. Maybe it's an American thing - it seems that the rest of the world loves to haggle.
10
mseebach 8 hours ago 1 reply      
It doesn't appear terribly novel to me; haven't big-box retailers done this for a long time? I don't recall IKEA doing outright sales, for example (they have continuous clearances and promotions, of course, but not an "everything is cheaper in January" type sale).

For instance, jewelry and Valentine's Day gifts would go on sale in February, while Christmas decorations would be discounted in November.

Isn't the point of "seasonal sales" to clear stock? So Christmas stock goes on sale in January, Valentine's stock goes on sale February 15?

11
brd529 6 hours ago 0 replies      
For a really good explanation of what is going on, I highly recommend Brian Roemmele's quora post:
http://www.quora.com/JCPenney/Why-did-Ron-Johnson-leave-Appl...

Not sure if he was at Johnson's press event or not, but he has pictures from it and explains Johnson's new strategy for JCPenney in detail. Looks like a good bet to me.

12
buu700 6 hours ago 0 replies      
For some reason I read the headline as saying that an Apple exec got pissed and drunk after being fired then stumbled into a random JC Penney and demanded that all prices be slashed 40%.

The actual news is much more interesting.

26
Taking Nothing Personally bennesvig.com
37 points by bennesvig  6 hours ago   9 comments top 8
1
sliverstorm 2 hours ago 0 replies      
Caveat: I take almost nothing personally, but I feel like every now and again someone will accidentally let something slip in a joking or casual context that you should take personally. A criticism they would normally be too polite to say, for example.

You have to be able to identify them among everything else in casual conversation that should not be taken seriously though.

Also, to be clear, I don't mean "take offense" - I mean treat it as actionable critique, for example.

2
Swizec 4 hours ago 0 replies      
Not taking things personally is a good motto, but an even better motto is trying to understand the other person.

Ok so they've just snapped something mean at you because you're slow with your job. Sure, it's mean, but maybe they're in a rush to get to their daughter who's just come out of surgery after a terrible car crash and they're, you know, in a bit of a hurry and not in the best of moods.

Because people are generally nice. When they aren't nice, it doesn't mean you should take it personally - it means you should ask what's wrong.

3
cateye 27 minutes ago 0 replies      
Are people talking to your brother? If they are talking to you, you should take it personally.

But how you process the information is up to you. Endlessly looping over some trivial thing a customer says in an impatient mood is not something you should base your life on.

I think what you are trying to explain has nothing to do with personal versus impersonal, but with assessing people's input correctly and weighing its value against some rational criteria.

A lot of times this can mean that you can filter a lot of noise so you can focus on valuable input instead.

4
jayferd 6 hours ago 0 replies      
I think there's an important distinction between "not caring" and being detached from our own opinions. Say you're having an awful day and snap at me. If I were to "not care", I might just go about my business and that would be fine. But if I try to dig a little deeper, put myself in your shoes, and really understand what's going on, then I've not taken it personally, but I've made it very clear that I do care.
5
hobin 1 hour ago 0 replies      
Well, of course people shouldn't take many of these things personally. What he describes is just slightly modified actor-observer bias (or asymmetry), a well-understood phenomenon in psychology.

(For those who had not yet heard of the term, Wikipedia has an article on it that seems to explain the concept well enough: http://en.wikipedia.org/wiki/Actor-observer_bias)

6
AznHisoka 5 hours ago 0 replies      
Yep, you shouldn't take anything personally from anyone who isn't family. Most people just want more dopamine, whether it comes in the form of more money, more sex, or just getting to work on time. You're just a means to an end for most people. Hence, don't take what they say personally.
7
draggnar 4 hours ago 0 replies      
this article is timed well with all of this hoopla about copying designs
8
powertower 3 hours ago 1 reply      
How can "nothing" be taken personally?
27
Visualized Git practices for team: branch, merge, rebase kentnguyen.com
68 points by kentnguyen  10 hours ago   29 comments top 7
1
spitfire 3 hours ago 0 replies      
Isn't it a little ambitious to call yourself a veteran $foo programmer when you've been doing it for a little over a year?

Particularly when the blog post before this is discussing how .m and .h files totally confused you. God help him if he gets asked a linked list question in an interview.

2
martco 7 hours ago 2 replies      
I agree with the author that learning and becoming handy with the git CLI is very important.

I don't agree that you should stop using a GUI for git, especially if you've already cut your teeth on the command line. Tower, a git GUI for Mac OS X, has a great interface and ties in very well with the core git functionality. I've learned a lot about stashing, merge conflict resolution, and cherry-picking, thanks to that app. Also, Tower shows a tree graph, similar to the network diagram the author "seriously cannot live without."

Why knock visual tools when you rely on them so much?

3
gavingmiller 7 hours ago 3 replies      
The OP gives his own definition of rebase: "git rebasing is taking all your local changes since the last push and putting them ahead of other people's changes, regardless of the date you made the commit."

But fails to explain _why_ you'd want to do that. Can anyone fill in that why for me?
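Nobody in the thread has answered the "why", so here's a hedged attempt as a self-contained sketch (throwaway repo in a temp dir; the branch name is whatever your git version defaults to): rebasing replays your local commits on top of upstream's, so the resulting history is a straight line with no merge commit, which many teams find easier to read, review, and bisect.

```shell
# Throwaway repo demonstrating the history shape rebase produces.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you
main=$(git symbolic-ref --short HEAD)     # "master" or "main", depending on git version

echo base > file && git add file && git commit -qm "base"

git checkout -qb feature                  # your local work
echo mine > mine.txt && git add mine.txt && git commit -qm "my change"

git checkout -q "$main"                   # meanwhile, upstream moved on
echo theirs > theirs.txt && git add theirs.txt && git commit -qm "their change"

git checkout -q feature
git rebase -q "$main"                     # replay "my change" on top of "their change"

git log --oneline                         # linear: my change, their change, base
```

Had you merged instead, you'd end up with the same files but an extra merge commit and a forked-and-rejoined graph; whether that trade is worth it is exactly what this thread is debating.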

4
hack_edu 6 hours ago 0 replies      
I love git. It helps me to wake up in the morning some days. But...

The OP's attempt to visualize git concepts falls quite short. This same old branching graph does little to nothing to illustrate these concepts. I cannot tell you how many friends and colleagues have thrown up their hands when someone tried to graph it out this way.

5
igorgue 5 hours ago 0 replies      
I think the best thing I did for my team was explain to them how branches work, and how not-so-special they are: branches are just pointers to a commit, and commits are never 'lost'.

People need to understand how simple Git is.
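To see the point concretely, here's a sketch against a throwaway repo (again, the branch name depends on your git version's default): a branch is literally a one-line file under `.git/refs/heads/` containing a commit hash.

```shell
# A branch is just a file holding the hash of the commit it points to.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you
echo hi > file && git add file && git commit -qm "first"

branch=$(git symbolic-ref --short HEAD)   # current branch name
cat ".git/refs/heads/$branch"             # the branch file: one commit hash
git rev-parse HEAD                        # the same hash
```

(After a `git gc`, refs can migrate into `.git/packed-refs`, but the idea is unchanged: a branch is just a name pointing at a commit.)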

6
SeoxyS 4 hours ago 1 reply      
Can somebody explain how rebasing affects shared histories? Say I'm rebasing a feature branch that's shared via GitHub; will rewriting the commit history prevent me from pushing?

I avoid `git commit --amend` because of that, and thought that `rebase` had the same problem. So for that reason, I always use `merge`. Am I mistaken?
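Since nobody has replied, a hedged self-contained sketch (a local bare repo stands in for GitHub, and `--amend` stands in for any history rewrite, rebase included): yes, a plain push of rewritten history is rejected as non-fast-forward, and you have to opt in explicitly, e.g. with `--force-with-lease`.

```shell
# Demo: pushing after rewriting already-pushed commits.
set -e
top=$(mktemp -d) && cd "$top"
git init -q --bare remote.git             # stands in for the GitHub repo
git clone -q remote.git work && cd work
git config user.email you@example.com
git config user.name you
echo a > a && git add a && git commit -qm "base"
git push -q origin "$(git symbolic-ref --short HEAD)"

git checkout -qb feature
echo b > b && git add b && git commit -qm "feature"
git push -q origin feature                # feature is now shared

git commit -q --amend -m "feature, rewritten"   # any rewrite: amend, rebase, ...
if git push -q origin feature 2>/dev/null; then
  echo "push accepted (unexpected)"
else
  echo "push rejected: non-fast-forward"
fi
git push -q --force-with-lease origin feature   # explicit opt-in to rewriting
```

So you're not mistaken: rebase shares `--amend`'s problem once the commits are published. The usual rule of thumb is to rebase freely on commits you haven't pushed, and merge (or force-push very carefully) once you have.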

7
dqminh 9 hours ago 4 replies      
It's the opposite for us. At work, we don't use branches, only git pull --rebase on master. Any ideas on when and why to use branches often?
28
The Cost of Laundry gridserver.com
3 points by craigkerstiens  54 minutes ago   1 comment top
1
hammock 17 minutes ago 0 replies      
Probably the biggest issue off the top of my head is that if you pay a monthly fee for laundry service and the machines break, the tenants are losing money every second that ticks by while the machines are broken. Another way of looking at it is that the landlord suddenly has a lot more pressure to deal with when one of his machines breaks.

Contrast that to the per-use fee structure, in which case you don't lose any money when you go down to the laundry room and see the machines are broken - you haven't put your money in the machine yet.

29
Browser Sedimentation tbray.org
65 points by wglb  11 hours ago   30 comments top 11
1
chime 10 hours ago 2 replies      
A wonderful example of this is people who type URLs into Google Search or search queries into the address bar (before most browsers had Omnibar-like features). To all of us here, it clearly makes sense that the browser is local software that lets you visit websites, while a search engine is a website that finds other websites. Typing a full URL into Google is generally a waste of bandwidth and time.

This abstraction of service is lost on many users, who see the browser and search engine as the same thing, and even different websites and the Internet as the same thing. I've had to field many tech-support calls because a Java exception occurred on the outsourced payroll processing site; somehow, since I manage the IT department, it's clearly something I could fix.

I think a good way to look at this is to pick something complex that you use regularly but don't know much about, like an automobile or cellphone hardware. Why does my cellphone suddenly lose reception for 5 seconds when walking under my carport? I have no clue but I suspect it has something to do with the signal or maybe because there's a lot of metal around. That is how users feel when you show them a big red X, regardless of how much descriptive text you present. Best is to avoid situations where you have to show X and just make it easy to revert to the previous situation.

2
dsr_ 10 hours ago 2 replies      
The default has to fit these requirements:

- The OS/desktop manager-specific stuff needs to be there, or else you fragment the overall UX for the computer.

- The browser needs to give immediate access to the actions that a typical user will use all the time.

- Tab mode has won out over separate windows, so you need a visual indicator of what tabs are open and which one is active.

- And the contents of the web page are the contents that you care about right now, or else you're looking for a way to navigate away.

Modern browsers do a pretty good job with all of those requirements. I'm surprised that Bray is even interested in a notional "home" page, since everyone I know, technical or not, either wants their session restored from last time (on startup) or a very fast search page to come up when they tap for a new tab. People don't reset tabs -- they close them or go somewhere specific.

Best of all, this behavior is all customizable by normal people who never have to type about:config.

3
ck2 1 hour ago 0 replies      
Firefox3 Theme for Firefox4+ and status-4-evar extension solve a great deal of the modern firefox problems and actually make it functional again.

Modern browsers are going to raise a bunch of idiots who aren't even curious about how the page actually works - even "view source" is now buried.

4
_delirium 10 hours ago 0 replies      
In broad terms, this same concern (sedimentation of layers of interface) was one of the original motivations for the MacOS's otherwise unusual single menu bar at the top, rather than separate menu bars for the OS and the application. But that of course hasn't carried over to webapps: while Safari might use the OS menu bar, Gmail inside of Safari layers a new one inside it.
5
alexchamberlain 9 hours ago 0 replies      
I'm not sure this is an issue. Every application has a hierarchy of controls at the top of the screen, and the user understands this.
6
skrebbel 8 hours ago 2 replies      
Maybe it's because English isn't my native language, but I didn't understand half the words in this article.

I mean, 'sedimentation'. What?

I find this a bit odd given that the article is about understandability and accessibility for the layman.

7
andrew93101 3 hours ago 0 replies      
This is why I love site-specific browsers. On the mac I make SSBs using Fluid for most of the sites I use throughout the day (email, calendar, google docs, ci server, pivotal tracker, etc.) and then use traditional OS app switching mechanics to switch between them.

As a result, no browser controls: no back, home, or url bar. The window is dedicated to a single site.

And now on Lion, I can make anything fullscreen if I want to get rid of the OS chrome also and focus only on the content of the app/site I'm using.

8
Zigurd 7 hours ago 1 reply      
I find myself fumbling with the three scroll bars in GMail: Two are owned by GMail, one for the main pane of the UI, and one for the message pane embedded in it. The browser's scroll bar controls scrolling for the contacts displayed on the right side of the window.

A case can be made for this in that all the scroll bars are closest to the content they scroll. But it also points out the unsatisfactory state of having the browser control some scrolling and the active content inside the browser control some of it. This would never fly in the design of any UI library for creating interactive applications.

9
16s 10 hours ago 1 reply      
I have trouble finding my "home" button too in Firefox. Maybe I'm just getting older, or maybe things are just getting more complex. Probably a bit of both.
10
lbotos 10 hours ago 0 replies      
I understand this is an issue, and quite a serious one at that, but how do we solve it for users? As web devs, we have to assume that the user is decently prepared to use their browser of choice and will, once on our page, "learn" our interface controls, right?
11
JonnieCache 10 hours ago 2 replies      
Wait, people still set a homepage?
       cached 29 January 2012 01:02:01 GMT