hacker news with inline top comments    .. more ..    5 Oct 2012 News
Steve Jobs passed away one year ago - HN Frontpage waybackletter.com
68 points by duck  57 minutes ago   9 comments top 6
stevenj 36 minutes ago 0 replies      
When news broke about his passing, I started putting together a small archive of stories about him. I continue to update it when I come across new stuff.


Miss you, Steve.

redthrowaway 25 minutes ago 0 replies      
Someone could have proven P=NP, and we wouldn't have noticed.
huhtenberg 37 minutes ago 3 replies      
That's when I had my flag option taken away by mods, because I flagged all of these except for the top one.
plainOldText 28 minutes ago 0 replies      
I keep listening to this audio of a talk Steve Jobs gave to an audience in Aspen back in 1983, and I believe everything he said in that talk is exactly what Apple became years later. I can't help but wonder how he nailed it so perfectly (well, except for the time frame, which was a bit longer, 20+ years instead of 10-15, but still).

Link to audio: http://api.soundcloud.com/tracks/62010118/download?client_id...

From wherever you're watching, Steve, thanks.

kleiba 27 minutes ago 0 replies      
Oh, please. Indiana... let it go.
zsherman 50 minutes ago 0 replies      
Wow, literally every single post.
Yale scientists explain how ketamine vanquishes depression within hours yale.edu
86 points by 001sky  3 hours ago   28 comments top 11
kevinalexbrown 1 hour ago 1 reply      
Here's my shot at "the most revealing job interview question" (not looking for a job, I do neuroscience in a separate area):

The review explains why the rapid action of ketamine excites so many researchers:

The discovery that ketamine rapidly increases the number and function of synaptic connections has focused attention on synaptogenesis as a fundamental process for the treatment of depressive symptoms and also suggests that disruption of synaptogenesis and loss of connections underlies the pathophysiology of depression.

This excites researchers not because ketamine itself would be used to combat depression, but because depression is still extremely symptomatically defined, making it difficult to design treatments for. That's roughly how it's diagnosed in the diagnostic manual used by most psychiatrists: check off a list of symptoms, if you have enough, you're depressed. It's like going to the doctor and explaining that your stomach hurts and they say "well, looks like you have abdominal pain, here's some Advil." Treating the symptoms would be great if only there were a happiness dial in the brain. Indeed, the effect of most anti-depressants is often demonstrated prior to a mechanistic understanding of why they make many patients feel better.

Recently there has been substantial evidence of "synaptogenesis" - the formation of new potential connections between neurons - from multiple treatments, including ketamine. So now we have this new picture emerging: depressed patients tend to have atrophied and "less-connected" neurons in some brain areas, and some drugs can reverse it, in particular ketamine can reverse it quite rapidly, and it works in rodents as well as humans.

That makes it very amenable to study. The way this often works in the lab is the following. Take some rodents, subject them to unpredictable stress to get them depressed, then give some of them ketamine. It makes them better. Euthanize the rodents, slice the brains, and note that the non-ketamine ones have fewer dendritic spines in certain areas ("potential input points to a neuron"), but remarkably, the ketamine ones have more in those areas.

The most important step comes next, where you try to find out what ketamine is actually doing, since, again, there's no happiness dial in the brain. Create strains of "knock-out" rodents, where you block the production of certain chemicals or proteins you think ketamine might affect by altering their genetic composition. This step is crucial, because it allows you to find out which effect of ketamine is providing the benefit, because there are many. You can do this by observing both behavior (does ketamine still improve mood in the genetically altered rodents?) and physiology (does ketamine still increase synaptogenesis in the altered rodents?).

In the end you can kind of work out a map of sorts: ketamine does X things to the brain and Y in X are the ones that are important, sometimes in certain combinations. Then you can start creating intelligent drugs that pinpoint those important processes, to avoid the unfortunate side effects of drugs like ketamine. Moreover, you now have a better physiological understanding of depression, instead of just a symptomatic one.

To put it in machine-learning language, it's like going from ideal observer analysis like mutual information, to an actual parametric model where you understand the distributions themselves.
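The model-free vs. parametric distinction in that analogy can be sketched in a few lines of Python. This is purely illustrative: the toy joint distribution and function names are made up, not from the comment. Mutual information tells you only *that* two variables are related; fitting an explicit parametric model tells you *how*.

```python
import math

# Hypothetical joint distribution: treatment (0/1) vs. improvement (0/1).
joint = {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.10, (1, 1): 0.40}

def mutual_information(p_xy):
    """Model-free ("ideal observer") view: how many bits does X carry about Y?"""
    px, py = {}, {}
    for (x, y), p in p_xy.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in p_xy.items() if p > 0)

def conditional_bernoulli(p_xy):
    """Parametric view: P(improve | treatment) under a Bernoulli model,
    i.e. an explicit, interpretable description of the distribution itself."""
    out = {}
    for x in (0, 1):
        p_x = sum(p for (xx, _), p in p_xy.items() if xx == x)
        out[x] = p_xy[(x, 1)] / p_x
    return out

mi = mutual_information(joint)        # one number: dependence exists
params = conditional_bernoulli(joint) # parameters you can reason about
print(round(mi, 3), {k: round(v, 2) for k, v in params.items()})
```

Both computations use the same data; only the parametric fit yields quantities (here, per-group improvement probabilities) that you can map back onto a mechanism.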

tokenadult 2 hours ago 0 replies      
From the press release submitted here: "In large doses, ketamine can cause short-term symptoms of psychosis and is abused as the party drug 'Special K.'" This is a general problem with drugs for mood disorders--the human mood regulatory system is very complicated, and it is possible for patients to engage in behavior that is dangerous to self and to others when they have low mood (are in a depressive episode) or when they have high mood (are in a manic episode). That's why a careful physician always asks about a patient's personal history when beginning treatment for someone seeking help for depression, to avoid starting out with treatments likely to trigger mania. Some human beings only get depressed, and never manic, but some can go awry in either direction.

I heard a National Public Radio story with interviews of researchers on the same issue as the press release submitted here while on a drive this afternoon. What most excites researchers about the ketamine studies is not a prospect of using ketamine itself as a frequent first-line drug for treating depression, but rather as a model for understanding brain function better and eventually developing new drugs that are even longer-lasting for treating depression and even less likely to trigger mania. Human mood disorders are very diverse--there are probably hundreds of rare genetic variants that increase the risk of mood system disruption under varying kinds of environmental stress--so there surely will not be just one drug that will successfully treat all patients, but rather a gradually growing toolkit of better and better drugs to treat more and more patients with less risk and fewer side-effects.

AFTER EDIT: User shrivats just kindly pointed, in a subcomment, to the overall summary of the Science special issue on depression, in which the ketamine research and related research is discussed. A paragraph describing another article in that issue is especially helpful for HN participants: "However, not all is bleak. There are individuals who overcome difficult situations and show astonishing resilience in the face of adverse circumstances and other forms of acute or chronic traumatic stress. Studying them might provide us with clues about what can go right. Southwick and Charney (p. 79) provide an overview of current ideas about why some people are more protected against stress and depression than others and how this knowledge may help us develop better treatments and successful prevention strategies." Several HN participants regularly write about strategies of building resilience to face the stress that many hackers face. Further research on that issue will also be part of the package in future improved treatments for mood disorders.

chunkyslink 2 hours ago 1 reply      
I used to take quite a few drugs recreationally and I can quite honestly say that Ketamine gave me 'God like' experiences unlike anything else I have ever experienced. Repeatedly (and reproducibly) I could 'see' everything in the universe from the smallest particle to the planets and solar system in one go. I could comprehend nature, science and space and observe everything working as a system, from a point way above it all (it is dissociative after all). It made me euphoric and happy and when coming down from the experience I could (and still can) remember that feeling and how powerful it was / is. Of course to an observer I was basically asleep in a chair.

This news doesn't surprise me at all.

starpilot 1 hour ago 1 reply      
> In their research, Duman and others show that in a series of steps ketamine triggers release of neurotransmitter glutamate, which in turn stimulates growth of synapses. Research at Yale has shown that damage of these synaptic connections caused by chronic stress is rapidly reversed by a single dose of ketamine.

There's burgeoning evidence that depression is tied to a lack of neurogenesis (creation of new brain cells), which may also be tied to serotonin levels. Some believe that the reason SSRIs take weeks to have antidepressive effects, even though serotonin levels are restored almost immediately, is that it takes a while for that to stimulate new brain cells. It's far from certain though that serotonin has anything to do with depression or neurogenesis. The atypical antidepressant tianeptine (currently unapproved in the US, but widely used in Western Europe) apparently reduces serotonin levels while boosting neurogenesis, the study of which is summarized well by Stanford neuroscientist Robert Sapolsky [1]:

> … tianeptine prevented many of these stress-induced changes. These included the spectroscopic alterations, the inhibition of cell proliferation, and a significant increase in hippocampal volume (as compared with stress + vehicle animals). Of significance (see below), tianeptine did not prevent the stress-induced rise in cortisol levels.

The restoration of hippocampal volume is important because it's been shown that low hippocampal volumes correlate with emotional abuse in adolescents [2] (also by Yale researchers). This was breathtaking for me - bad parenting literally causes brain damage.

Special K and tianeptine aren't the only efforts at curing depression through neurogenesis. Neuralstem [3] is testing a drug in humans to treat major depressive disorder by restoring hippocampal volume, with a controlled study to complete early next year. That is the only new drug I'm aware of. Regardless, the serotonin model of depression, the one which produced Prozac etc. isn't the last word, and it's looking more likely that a dearth of new synapses may be the culprit in clinical depression.

[1] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC60045/

[2] http://www.medpagetoday.com/Pediatrics/DomesticViolence/3002...

[3] http://www.neuralstem.com/pharmaceuticals-for-depression

lutusp 16 minutes ago 0 replies      
> Yale scientists explain how ketamine vanquishes depression within hours

Translation: "Yale scientists speculate about how ketamine vanquishes depression within hours" And until there's a strictly designed study with a control group, we'll be no closer to a definitive answer. But considering the drug and its role and target, a control group would be unethical.

siganakis 2 hours ago 0 replies      
Also of interest is the observation that Ketamine (and PCP) are associated with NMDA Antagonist Neurotoxicity (Olney's Lesions), a form of brain damage.

Not really something you should want to self-medicate with at this point, until more research is done.



fluxon 1 hour ago 0 replies      
Yale scientists excitedly, repeatedly explain to anyone in the quad how ketamine vanquishes depression within hours, in, you know, a million million ways (yale.edu)

I've read too many Onion headlines to ever read headlines like this again.

001sky 3 hours ago 1 reply      
Original is http://www.sciencemag.org/content/338/6103/68 (paywall)



Basic and clinical studies demonstrate that depression is associated with reduced size of brain regions that regulate mood and cognition, including the prefrontal cortex and the hippocampus, and decreased neuronal synapses in these areas. Antidepressants can block or reverse these neuronal deficits, although typical antidepressants have limited efficacy and delayed response times of weeks to months. A notable recent discovery shows that ketamine, an N-methyl-d-aspartate receptor antagonist, produces rapid (within hours) antidepressant responses in patients who are resistant to typical antidepressants. Basic studies show that ketamine rapidly induces synaptogenesis and reverses the synaptic deficits caused by chronic stress. These findings highlight the central importance of homeostatic control of mood circuit connections and form the basis of a synaptogenic hypothesis of depression and treatment response.

anigbrowl 2 hours ago 3 replies      
Interesting news, but the potential for abuse is worrisome. Recreational ketamine produces dissociative states similar to heavy drunkenness in the short term, and frequent repeat use is known to damage a part of the brain called Broca's region through cellular overheating, seriously impacting speech formation.
scotty79 2 hours ago 0 replies      
Were the antidepressant effects of ketamine observed at normal anesthetic dosages (1-2mg/kg)?
soup10 2 hours ago 3 replies      
Recreational drugs temporarily make people happier? No way... I'm glad the fine minds at Yale are studying such important, ground breaking stuff. I can't wait until this research leads to new antidepressants that turn more people into zombies.
Clearing up some things about LinkedIn mobile's move from Rails to node.js ikaisays.com
87 points by ikailan  4 hours ago   13 comments top 6
davedx 52 minutes ago 0 replies      
This is a great post for validating management concerns about pulling in sexy new technologies for the hell of it. Every place I've worked I've been unable to convince management to use e.g. Rails (5 years ago) or node.js (recently). Even though I love these technologies and wish I'd had more time in full-time employment to learn and play with them, I understand and appreciate the risks implicit with adopting a shiny new technology in your company's IT dev/production environments.

It's also a great post illuminating how in hindsight some things can be really obvious (that building a high capacity web service dependent on a single-threaded server will give you problems down the road), but at the time it's not always easy seeing the woods for the trees.

For me though, the big takeaway was that one line summary: "You're comparing a lower level server to a full stack web framework." Node.js has a pretty nice library/module ecosystem now, but for a complete full-stack solution with maximum productivity I would venture that there is nothing out there that compares to Rails currently.
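The hazard alluded to in the parent thread (CPU-bound work on a single-threaded server stalling every other request) shows up in any event-loop runtime, not just node. A minimal sketch using Python's asyncio, with hypothetical handler names:

```python
import asyncio
import time

async def cpu_bound_handler(n=2_000_000):
    # Synchronous CPU work: while this runs, the event loop is blocked
    # and no other coroutine can make progress.
    t0 = time.monotonic()
    sum(i * i for i in range(n))
    return time.monotonic() - t0

async def quick_handler():
    await asyncio.sleep(0)  # an "instant" request
    return time.monotonic()

async def main():
    quick = asyncio.ensure_future(quick_handler())  # scheduled first
    busy_for = await cpu_bound_handler()            # hogs the single thread
    finished_at = await quick
    return busy_for, finished_at

start = time.monotonic()
busy_for, finished_at = asyncio.run(main())
delay = finished_at - start
# The "instant" handler could only finish after the CPU-bound work
# released the loop, so its latency is at least the busy time.
print(f"blocked for {busy_for:.3f}s, quick handler latency {delay:.3f}s")
```

The quick handler's latency is dominated by someone else's CPU work, which is exactly why "don't block the event loop" is the cardinal rule of single-threaded servers.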

klochner 2 hours ago 1 reply      
This sounds much more sane than "node is 20x faster than rails", thanks for validating the assumption most of us were making.
eta_carinae 1 hour ago 2 replies      
> And those requirements kept growing. If my calculations are correct, the standard setup for engineers now is a machine with 20 or more gigabytes of RAM just to RUN the software.

Close. In 2011, all the engineer desktops got upgraded to 36 gigs. At the time, the eng department still hadn't figured out how to deploy without duplicating hundreds of jar files everywhere.

mhartl 2 hours ago 1 reply      
I met Ikai through the Silicon Valley Rails Meetup, which I co-hosted back in 2008-2009 and which met at LinkedIn HQ in Mountain View. This post is a great contribution to the recent discussion about Rails at LinkedIn, and I hope it gets the attention it deserves.
rhizome 3 hours ago 1 reply      
And the blog-to-commentary-to-blog cycle begins anew.
trung_pham 10 minutes ago 0 replies      
Node.js is old news.
Time to move on to GoLang. :)
Chrome DevTools could do that? igvita.com
288 points by lysol  9 hours ago   52 comments top 20
kevingadd 8 hours ago 2 replies      
WARNING: This presentation seems to crash Firefox for me. Had to view it in Chrome. Anyway...

"Disable cache to (re)gain some sanity" because Chrome continues to cache things it shouldn't, including the contents of file:// URLs and in some cases, even content with headers that specify it shouldn't be cached.

mumble grumble

Cool to see some of these features finally documented, though. I had no idea you could drag-drop elements to reorder them.

For those looking to try out the (useful!) Heap Snapshot tool, please be aware that it has a bad habit of crashing tabs. It tends to happen the most when a tab is already using a lot of memory, but sometimes it just happens. So don't do it on a tab that contains any state you might want to hang onto.

One cool feature they don't mention: You can edit code in the script debugger and then hit ctrl+s to update it live in the running page. It's pretty useful for experimenting or for adding tracing points to existing code.

skeletonjelly 5 hours ago 1 reply      
Going to make a top level comment about the slide tools.

I just used the Chrome Web Inspector (!) to look at the JS libraries, searched for the credited authors of the obviously named file, and found this:


Which leads to this:


Which has this code: http://code.google.com/p/io-2012-slides/source/browse/

And this dog food demo: http://io-2012-slides.googlecode.com/git/template.html

Looks great for doing a talk about code. Has a few features for highlighting code, handling links etc

ludwigvan 4 hours ago 3 replies      
Here's something that I believe should be included in dev tools: Click on a node, and see all event handlers that are attached to that node (including those attached using jQuery).

Does anyone know if this is possible using Dev Tools? There is a bookmarklet Visual Event2 (http://www.sprymedia.co.uk/article/Visual+Event+2) that does this, sort of; but it is still lacking.

zaroth 3 hours ago 0 replies      
This just goes to show how far we've come, and oh how far we still have to go. For the hackers who live and breathe by these tools, I salute you.

Some products absolutely depend on pushing the envelope of 'what is possible in the browser'. These trailblazers ultimately spend incredible amounts of effort achieving their desired effect, which a year later will be nicely packaged in an MIT-licensed, open source JS lib you can call with a single line of code.

But one look at my feature roadmap tells me exactly when I'll have the time to analyze HAR files, tweak how often I flush packets, stare at paint rectangles, or write some Chrome devtool plugins -- that would be... NEVER.

wmf 8 hours ago 5 replies      
If you're confused by the total lack of UI, try the arrow keys (facepalm?).
statictype 6 hours ago 1 reply      
Crap, there's a lot of useful stuff in there.

The paint rectangles thing is amazing. I didn't know browsers even expose this data.

The Audit API looks really useful too.
I'm now thinking of standardizing on Chrome as the development browser for our team (most devs prefer it anyway) so we can share custom development tool add-ons.

niyazpk 2 hours ago 0 replies      
For some reason, even after multiple tries over the years, I have never been able to get "Break on subtree modifications" and similar to work reliably. I think I must be doing something wrong, but I don't know what. Has anybody else had issues with this?
rurounijones 5 hours ago 0 replies      
While not the presentation for these exact Tips'n'tricks slides, the google guys cover most of this in the following video:


(The Chrome Dev tools stuff kicks in at about the 20 minute mark)

hadem 7 hours ago 4 replies      
The UI for this is terrible. The bullet points are incredibly brief, to the point that I'm confused about the information they're telling me. How do I actually see the information in the "Sources" pane? It is an overlay but there is no description of how to see it...

Am I missing something?

bgrins 7 hours ago 4 replies      
Remote debugging is so useful: http://www.igvita.com/slides/2012/devtools-tips-and-tricks/#.... It is painful to try and make and test changes on mobile devices without developer tools.

I wonder when this will be available for iOS.

smagch 2 hours ago 0 replies      
For people who are not familiar with Devtools

"A Re-introduction to the Chrome Developer Tools" by Paul Irish




molmalo 5 hours ago 2 replies      
Can someone explain this to me, please:

-Break on subtree modifications - delete me


wlue 58 minutes ago 0 replies      
Nice to see a PonyDebugger mention here. :)
minikomi 3 hours ago 0 replies      
BRB off to write a ton of custom panels :)

Great slides.

d70 6 hours ago 1 reply      
Dumb question here ... is there a visual tool to create browser-based slides like this, or do people just pretty much hand code each slide? I know there are libs out there like impress.js.
ljoshua 8 hours ago 0 replies      
Great resource, adding on top of other hard-to-find resources that others had pointed out before. (Nice use of a presentation in the browser too.)
neerajdotname2 6 hours ago 1 reply      
What tool is used to build this presentation? Is it open source?
qntmfred 6 hours ago 0 replies      
induscreep 2 hours ago 0 replies      
dat navigation controls...
chris_mahan 6 hours ago 0 replies      
And to think that if they didn't use javascript at all the page would be even faster...
Todon't codinghorror.com
104 points by RossM  8 hours ago   56 comments top 30
btilly 6 hours ago 2 replies      
Jeff is doing it wrong. A to-do list should be for things that you need to do, not things that you'd like to do some day.

Here is what I do when I need to be particularly organized and productive. I call it a to-done list. (I'm lazy and probably have ADHD, so I never sustain it. But whenever I do this it feels great, and it is never hard to start it up again when I get motivated.)

I start with a small list of things that I absolutely have to do, in roughly the order that I plan to tackle them. Put that in a plain text file. Write the date below that list.

Take the bottom task, break it up into subtasks, recursively, until I've got a task I can work on right now. Do it. Move it down off the list to the date. If I get blocked on that one, add the blocker below the task and pick another task. Continue all day.

The next day I start by putting that day's date above the old date, and then continue again.

The keys to this are the following:

1. I ONLY include things that I HAVE to do. (Adding all of the, "It would be good to some day" or "I'd like to" leads to depression as described by Jeff.)

2. Items are SPECIFIC and SMALL. The goal is to constantly move them off the list into done.

3. Only include CURRENT stuff. If there is a project that is intended for 3 months from now, it does not go on the list.

This list serves 2 purposes. The bit at the top is pretty much a LIFO stack (I add at the bottom, take off from near the bottom) of what I am currently working on. So it is the whole, "I need to do Y in order to do X, be sure I get back to X eventually." And the long list at the bottom is a log of how much I got done, which makes me feel good.

If you try this, be aware that a quickly growing top section is proof that you're doing it wrong. Go through the top bit and ruthlessly prune off everything that doesn't need to be there. Yes, I know that some people feel good about taking a list of high level tasks and breaking it up right away to get organized. But really, this makes the list explode, and you won't do as good a job of exploding it out as you will when you are closer to actually doing that item.

The rule of thumb is, if you have trouble scrolling through the todo to the already done, the todo is clearly too long.
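The workflow described above is essentially a LIFO stack of pending tasks sitting on top of a dated log of finished ones. A toy Python model of it (all class and method names here are illustrative, not from the comment):

```python
import datetime

class ToDone:
    """Toy model of the plain-text 'to-done' file: pending tasks on top,
    a dated log of completed work below."""

    def __init__(self):
        self.pending = []  # end of list = bottom of the file = work on next
        self.log = []      # (date, task) entries, newest first

    def add(self, task):
        self.pending.append(task)

    def split(self, task, *subtasks):
        """Break the current (bottom-most) task into smaller subtasks."""
        assert self.pending and self.pending[-1] == task
        self.pending.extend(subtasks)

    def finish(self, today=None):
        """Move the current task off the stack and into the done log."""
        today = today or datetime.date.today()
        task = self.pending.pop()
        self.log.insert(0, (today, task))
        return task

board = ToDone()
board.add("ship feature X")
board.split("ship feature X", "write migration", "write tests")
done = board.finish()  # the most recently split-off subtask comes off first
print(done, board.pending)
```

The growing `log` is the "long list at the bottom" that makes you feel good, and a `pending` list that keeps growing is the warning sign the comment describes.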

kamaal 3 hours ago 0 replies      
>>I've tried to maintain to-do lists at various points in my life. And I've always failed. Utterly and completely.

I don't know if Jeff has read "Getting Things Done" by David Allen, or whether he has read "Flow" by Mihaly Csikszentmihalyi. With regards to GTD, David Allen specifically mentions it's very easy to get on and off the GTD framework, because GTD does require a level of discipline to make it work. As does any time management framework ever invented, for that matter.

If to-do lists are not working for you, then one of the following signs likely applies:

1. You don't have a lot of things to do in your daily schedule in the first place, making the purpose of a list obsolete.

2. You have very few but large monolithic tasks that don't need to be written down, and generally fit in comfortably into your brain cache.

3. You are not frequently interrupted.

4. You don't procrastinate.

5. You are just not disciplined enough to follow the list.

>>Eventually I realized that the problem wasn't me. All my to-do lists started out as innocuous tools to assist me in my life, but slowly transformed, each and every time, into thankless, soul-draining exercises in reductionism.

Sorry, to-do lists do work. Don't make to-do lists a religious ritual you need to follow. They are there for a reason, and if you don't fit into that framework it's futile to use them.

>>Lists give the illusion of progress.

When were lists meant to measure progress? They are meant to track your work; your brain only has a limited capacity to store things. When you put too many to-do tasks in your brain, you start to worry about it, and then most of your energy goes into worrying rather than executing those tasks.

The whole purpose of lists is to dump your brain on paper. Then execute them, if you are interrupted you know where to start after you get back. In other words they work like stacks in software.

>>Lists give the illusion of accomplishment.

Lists of completed lists are definitely an indication of accomplishment.

>>Lists make you feel guilty for not achieving these things.

That is why they work in most cases.

>>Lists make you feel guilty for continually delaying certain items.
>>Lists make you feel guilty for not doing things you don't want to be doing anyway.

Why do you put them on the list anyway? Lists are not books used to maintain vision statements. They are tools to hold actionable items whose progress you can measure.

>>Lists make you prioritize the wrong things.

Lists are dumb. You create them. How can they make you prioritize the wrong things?

The problem is overzealousness. As I said before, a list is not your vision statement.

Write things in the list what you want to do. Not what you dream about, or want to have 10 years from now.

David Allen covers this in his book: these sorts of things should ideally go in a 10 year plan (or whatever-year plan), and their progress must be reviewed every Saturday or so.

>>Lists are inefficient. (Think of what you could be doing with all the time you spend maintaining your lists!)

Think of what you couldn't be doing if you didn't know the time spent on what you have done, are doing currently, or are likely to do in the future. How would you know where you could save time and use it elsewhere?

>>Lists suck the enjoyment out of activities, making most things feel like an obligation.

This is true if you work at a resort. But if you are somebody who has to attend 5-6 meetings, give status updates, answer 20 emails, solve two bugs, tend to your home and track your personal projects all in a day, you are not going to make it without getting organized.

>>Lists don't actually make you more organized long term.

Because you stop just there. You don't have a list of lists.

>>Lists can close you off to spontaneity and exploration of things you didn't plan for. (Let's face it, it's impossible to really plan some things in life.)

That is why you should run your life like an agile project and not in the waterfall model.

>>If you can't wake up every day and, using your 100% original equipment God-given organic brain, come up with the three most important things you need to do that day

Most people don't have 3 most important things in life.

In fact, lists exist because most people don't have 3 most important things in life.

SCdF 7 hours ago 3 replies      
This is an interesting reaction, but I'm not sure it goes deep enough.

What is a bug tracking system if not a glorified todo list?

What is a shopping list if not a glorified todo: buy X list?

When you think up a cool idea for a project, or learn about a technology you want to explore next time it fits, and add it to your project log / text file / notepad, isn't that basically a todo list?

These things are really important: I can't remember every bug my software has, I often forget something I wanted to buy (goddamn avocados, honestly every time they slip my mind), and I can't work on every idea I come up with straight away.

I think people who have problems with todo lists are using them, I hesitate to say it, incorrectly. They shouldn't run your life; they are just a place to jot things down.

nostromo 6 hours ago 0 replies      
All lists for me are transient and limited in scope and usually very effective. For example: "remaining things to do before shipping version x" or "things to get at Costco".

I think lists break down when you try to make a single list for your whole life that lasts forever.

dkarl 7 hours ago 2 replies      
I've largely replaced long-term TODO lists with calendar reminders. That's what my TODOs were anyway, reminders of what I intended to do at some point in the future, and I was always seeing items too late (ahhhhh shit shit shit!) or too early (meh, ignore) so now I just stick each item on my calendar on a date that seems appropriate.

Most TODO lists that aren't reminders are just glorified brainstorming. For example, when I make a list of steps for getting something done, I consider it disposable. A task list I generate from scratch tomorrow will probably be more accurate than the one I was working from today. Like design documents, task lists are perennially stale, more harmful than helpful.

The one exception to the above two rules is my shopping list, because nothing sucks more than waking up in the morning and not having any coffee.

mistercow 4 hours ago 0 replies      
>If you can't wake up every day and, using your 100% original equipment God-given organic brain, come up with the three most important things you need to do that day -- then you should seriously work on fixing that.

Wow, talk about some ableist nonsense.

Why is it so hard for people to understand that different productivity tools work well for different people? Letting go of the typical mind fallacy is a very important step in understanding how to take advice from other people.

neilk 7 hours ago 1 reply      
There was an art project I saw a while ago where you got to confess all your undone ideas to a "priest", and you got absolved of the responsibility for doing them. Maybe this is the curse of living in a world where so much more is possible.

But OP goes too far when he says that you should rely only on your brain's natural scheduling and short term memory. I mean, without a grocery list, I can't even remember all the ingredients to make a birthday cake.

Prioritizing is hard. I don't think there are any simple solutions. But maybe we need some sort of trigger to know when we should throw out undone projects, or cast them into some very far back burner. What would that be?

pacomerh 16 minutes ago 0 replies      
If the article was talking about personal goals, then it's spot on. But todo lists are needed, and they're usually sub-tasks of a greater goal.
smegel 6 hours ago 0 replies      
I could not disagree more strongly with this post. For me, managing a todo list has been the single greatest boon to my own productivity and self-management I have ever come across.

That is not to say I think most or any todo list applications are worthwhile - I think the vast majority of them are really terrible as they are far too complex or constraining, require too much overhead to do the simple things like creating a task or changing a task status, are too opinionated about how you deal with your todo items (like forcing schedules, reminders or due dates), lack hierarchical structure and lack any kind of free-form input.

The best (and only) worthwhile app that I have ever encountered is not even a todo list app - it is simply a Google Docs document that I leave open in my browser on various computers. I use whatever kind of free-form structure I want to dump my thoughts and "todo" things that I feel like at the time - and annotate task status with free text tags and tokens that suit me as I go. Various parts of the document will at times look like todo lists, plans, schedules, itineraries, work-flows, inventories, idea-lists, collections and more - and it is constantly evolving as the state of my work and activities evolves. Important, current stuff goes at the top - for example my first two blocks are titled "Appointments" (don't want to forget those!) and "Today" (what do I absolutely need to do today). Going down, the blocks tend to reduce in priority/importance - for example my very last block is called "Learning", which is a list of various things that I would like to learn more of when I have time (not that learning isn't important, but it is a long-term background activity that I don't need to be reviewing every day).

If I have some thoughts or plans that I feel are important, but I don't want to focus on them now, I will just dump them in the document and move on to something else. Later I can come back and review that dump, maybe translate it into an actual todo item or evolve it into a planned work-flow. This "thought dumping" is a well-researched (there is a quite famous book about it, I believe) way of self-management and I find it very effective.

The most important thing is there is no structure, form or anything other than what I impose on myself - you literally get a blank, empty page and that is it.

ak217 3 hours ago 0 replies      
This is bullshit.

Todo lists are the most basic form of a tool for triaging and prioritizing your work. They are a minified, single-threaded version of issue/bug-tracking systems. If you don't need a todo list (or an issue tracker) to remember all the details of what must get done, you're either superhuman, or you're not working on a hard enough problem.

kiba 7 hours ago 0 replies      
I don't have anything like a todo list. But I do have habits that I follow religiously. I guess you can call it "Invisible Important TODO Items For Today".

There are three items that I do every single day:

1. Write 500 words a day, usually about half-baked random things like "fear inoculation", "legoization" or "Conquering Rome with Science". You can see random stuff at http://kibabase.com/articles/notes-and-thoughts

2. Walk 10K steps a day. I started it as an experiment that was supposed to last 30 days; I'm now on day 37. I am trying to build an analysis tool that outputs pretty JSON of my data, but somehow keep neglecting it. You can read what I have so far here: http://kibabase.com/articles/self-quantification#interventio...

3. Measure my step count, my blood pressure, my weight, and my pulse.

Well, actually there's a fourth item: read a book everyday.

It turns out that I didn't do the million other things that I wanted to do but never did consistently, including coding. Coding should at least be a priority.

I can say with a straight face that even writing 500 words a day about random things makes progress. Out of the random things pile in my Notes and Thoughts page, I eventually spun off two essays, one of which is about my self-quantification effort in which I am currently doing 10K steps a day; the other is for logging the ideas of the 16+ books I read so far this year. I expect to add more essays to my site over time as essays mature from my primordial soup of random ideas and notes.

These activities help keep me healthy and sane. It also makes me feel like a badass, even when I am not.

edanm 7 hours ago 1 reply      
Sometimes there are things I need to remember. Things like "remember to call this person on Sunday", or "remember to deal with this issue in 3 weeks". Sometimes it's personal (things like "go watch this movie that just came out"), most of the times it's professional (like "take care of that problem you had with the bank").

I don't have a good memory. I need to write these kinds of things down somewhere. It's not (and shouldn't be) that complicated, but it's definitely necessary.

moocow01 7 hours ago 0 replies      
I've put it at the top of my to-do list to use my brain and gut.

I think the problem with to-do lists is when people use them to track literally what they are supposed to do for the day ... in my opinion you should not need a todo list to guide your day in the larger sense. I think where they actually are useful is tracking very small postponed tasks that would otherwise be forgotten - software is filled with these sorts of things, which is probably why devs think to-do lists are so instrumental.

dwc 7 hours ago 0 replies      
I've tried and failed to use to-do lists off and on over many years. Now I'm using the ultra simple Reminders app that comes with iOS, and it's working for me. The only good thing about the app is the complete lack of features, and that it's with me always.

I only put down little things that I'm likely to forget. I give myself permission in advance not to do any of them at any specific time. But when I find myself with spare time and motivation, I always have a couple of things I can pick off the list. It's helped me get a lot more little things out of the way. The big things are another matter, but with fewer little things cluttering my brain…

dangoor 4 hours ago 0 replies      
Mark Forster has written some very interesting ideas about todo lists (and the management thereof).


One thing that's very cool about Mark Forster's approaches is that they have always had the notion of going with your intuition on what you should be working on and they've also had a mechanism of throwing stuff away from the list.

I totally agree about todo lists generally becoming giant Katamari balls. I personally have no issue with the idea of having todo lists as long as you can throw things away comfortably.

WalterSear 4 hours ago 1 reply      
I keep my entire life in one very complex 'todo' list wiki. Every day, I refer to it, moving the 'next things to do' to a place of prominence. A couple of times a week, I sit down and spend an hour or so shuffling things around. I have been using this system for almost a decade now.

If Jeff hasn't found a way to make todo lists work for him, too bad.

aymeric 1 hour ago 0 replies      
The problem of most todo apps is that they focus on productivity (do as much as possible) rather than effectiveness (do the right thing).

Try an app that helps you keep your goals in mind in your everyday workflow.

http://weekplan.net (my app) is inspired by the "Put First Things First" methodology from Covey, and it works for many.

engtech 2 hours ago 0 replies      
For me the advantage of a todo list of adequate size is staying productive in the face of blocked tasks (eg: the "it's compiling" problem).

I think the real secret is to disable internet access on your dev machine (and instead keep it in another room), but I find a todo list is all about keeping in flow by any means necessary and avoiding the subtle allure of the web.

TeMPOraL 7 hours ago 0 replies      
I've been suspecting for the last few years that reading GTD and productivity blogs in high school might have been the biggest mistake of my life. Ever since then I have been drowning in overloaded lists of things not yet done, and somehow before that I never felt a need to increase my productivity.

I feel that todo lists help me get through all those so-called errands - stuff I want to get done, but might not particularly enjoy the "doing" part. Otherwise I would forget many of them. But for things that really matter to me, writing down TODOs feels silly, as I'd rather actually do the stuff, not write about it.

But between work, university and my S.O., I have almost no time to actually do anything from that TODO list, so it ends up being a list of stuff I could have got done if I had a 40-hour day.

3rd3 6 hours ago 0 replies      
I'm convinced of to-do lists. There is one important rule that often helps: Don't put things on your list that take less than 10 minutes of your time. Do those things right away instead!

I write my lists in SublimeText 2 using this plug-in:
Basically, it provides only one shortcut for marking tasks as completed. That way I don't lose time by fiddling with the UI of one of those to-do list apps.

photorized 53 minutes ago 0 replies      
I only write down things that are unpleasant. Don't want them in my brain cache when I go to bed at night.

Normal "tasks" and creative decisions tend to bubble up and sort themselves out.

minhajuddin 3 hours ago 0 replies      
I have used a lot of tools to manage my TODO lists; I even wrote one (Taskr - a simple command line utility to manage your tasks). However, I keep coming back to pen and paper. I think I get it now: the biggest drawback of the todo list apps I've used was that they made managing my todo lists easy. As a result, my lists started growing. When I use pen and paper, I have to copy everything to a new page every single day, and THAT is NOT easy. It makes me think about which tasks are worth copying. At the end of the day, this is what keeps my todo lists sane. I think I am going to stick to pen and paper for my todo lists for a long time.
blvr 7 hours ago 1 reply      
I'm glad if going todo-less works for the author but I wouldn't recommend it. I doubt many people have the capacity to remember that meeting you're supposed to have Friday after next at 3:30 or that little bug someone just mentioned over the phone that you'll have to fix at some point when you're back at the office.

Like all things, you can go too far with todo lists. Todo-today lists have never worked for me. But I'd be lost without a list of appointments and minor/forgettable actions (filed according to the context in which they need to be done).

dugmartin 6 hours ago 0 replies      
Maybe we should just write our todo lists on flash paper and light them up at the end of the week. If there is something important on there you would get it done and if not you can see it go bye bye in an instant. No karmic backlog.
pippy 4 hours ago 0 replies      
It's a personality thing. Some people work well with todo's, some people don't.

I work better with them.

scott_meade 7 hours ago 1 reply      
Jeff sums it up well: "If you can't wake up every day and, using your 100% original equipment God-given organic brain, come up with the three most important things you need to do that day, then you should seriously work on fixing that."
outside1234 7 hours ago 1 reply      
I think I agree with the sentiment of this, but I still find a "bag of ideas" useful to remind me of things that should be on that short list if my memory fails me.
jiggy2011 6 hours ago 0 replies      
I need to make some sort of list or reminder for things that are important but not urgent.

I lost a domain name to a squatter before because the "RENEW YOUR DOMAIN NOW" emails ended up in my junk folder.

philsheard 7 hours ago 0 replies      
Sometimes things pop into your head at the worst time, when you can't do anything about it. However you collect it (todo list app, email, pen and paper), making a reminder is the only way not to fail at life.

Real life is too complicated and too important to just forget stuff.

craigvn 6 hours ago 1 reply      
Only programmers could argue about the rules of using To Do lists.
Write Articles, Not Blog Postings (2007) useit.com
7 points by rahul_rstudio  1 hour ago   1 comment top
tangue 26 minutes ago 0 replies      
He's assuming quality is distributed normally among blog posts without much explication on how he came to this conclusion. Usability deserves a better guru.
LinkedIn Mobile Moved from Rails to Node: 27 Servers Cut and Up to 20x Faster highscalability.com
167 points by turar  11 hours ago   111 comments top 23
SoftwareMaven 11 hours ago  replies      
If you are thinking about using node.js for this reason[1] on most sites, you are optimizing poorly. LinkedIn didn't worry about this until after they were a public company.

If Python/Django or Ruby/Rails can get your app out the door and into customer hands faster, it is almost always the right thing to use.

1. There are certainly other, very valid, technical reasons for choosing node.js over other technologies early. But let those reasons be about the problems you are solving today, not the ones you might need to worry about when you have 50 servers to deal with.

gnufied 10 hours ago 1 reply      
The title is misleading. From original article[1]:

> They found that Node.js, which is based on Google's V8 JavaScript engine, offered substantially better performance and lower memory overhead than the other options being considered. Prasad said that Node.js “blew away” the performance of the alternatives, running as much as 20 times faster in some scenarios.

So according to original article, Node.js did not perform 20 times better compared to existing Rails based backend. According to Prasad, it performed 20 times better than alternatives of Node such as - Eventmachine & Python Twisted (they did evaluate both of them).

Now I am having a hard time believing node.js can outperform EventMachine or Twisted by 20 times. Most benchmarks I have seen and done tell me node is marginally ahead. I would obviously like to see what they benchmarked, and how.

1: http://arstechnica.com/information-technology/2012/10/a-behi...

justinjlynn 11 hours ago 2 replies      
It sounds like they went through a major rewrite of their backend and ended up architecting things to be much more performant than their previous system. I'm curious to find out what parts of the system they think contributed most to the performance increase. While this is interesting it is by no means an apples to apples comparison of Node and Rails as the headline suggests.
lsh123 1 hour ago 0 replies      
The performance improvements have probably NOTHING to do with node.js but with the re-architecture goals set by the team:

"For our inevitable rearchitecting and rewrite, we want to cache content aggressively, store templates client-side (with the ability to invalidate and update them from the server) and keep all state purely client side."

Better understanding of the problem and experience running the system were probably key for building the new high-performance architecture. Obviously, old one lacked these big advantages.

dpcx 11 hours ago 1 reply      
I personally find it hard to believe that all of LinkedIn was only running on 30 servers, and is now running on only 3.

EDIT: Mobile only. Maybe the title should be updated to reflect that.

rapind 3 hours ago 0 replies      
I looked into Node.js, Sinatra, and Go to handle API traffic for a mobile app a few months ago and did a lot of benchmarking. What I found during my tests was that Go > (Node.js = Sinatra).

If I had wanted to add Rails to this comparison I would have compared apples to apples and used Metal instead of including the entire stack.

ikailan 4 hours ago 0 replies      
I was on the team at LinkedIn when we first wrote the thing on Ruby on Rails. Here's my writeup containing some more context:


While I'll freely admit v8 is much faster than MRI Ruby, the efficiency gains are likely more related to 1) the rewrite factor 2) moving to non-blocking 3) the fact that the original server ... um, needed love

dschiptsov 43 minutes ago 0 replies      
Replacing an inefficient, bloated Rails stack with a custom-coded framework, and byte-code-interpreted Ruby with the native-code-generating V8, while losing an order of magnitude in readability? Well, nothing to see here.
rorrr 8 hours ago 2 replies      
Ruby is one of the slowest languages you can think of, while Javascript on V8 is only 2.3X slower than C++ (median).


What really surprises me is that they got such a huge gain. Most projects I've worked on are DB or I/O bound. Maybe they store everything in RAM.

shn 3 hours ago 0 replies      
What this article talks about is more than a year old. When I was looking for a tool to start my project, I was equally distant from Python, Ruby, NodeJS and their ecosystems. So when I read about this a year ago, I leaned towards NodeJS. Knowing the experience of others helps certain people at a certain point of their yet-to-unfold story, but not everybody all the time.

I am not unhappy with my choice but I do not have enough data to compare with other tools. I do not think a lot of people have either. Once you start with a tool you tend to keep it since you invested a lot of time learning it as well as developing something with it. I think few can afford switching tools (e.g. FB switched from HTML5 to native recently for their mobile interface).

jemfinch 8 hours ago 0 replies      
When you make a software change that allows you to reduce your server pool from 30 to 3, you don't say, "27 servers cut". You say "Servers reduced by 90%." I have no idea what "27 servers" actually means: you could have been using a thousand servers for all I know.
pragmatic 8 hours ago 0 replies      
The original article:


And this is really interesting:

Finally LinkedIn tested, and ultimately chose to adopt, something surprising. The company embedded an extremely lightweight HTTP server in the application itself. The HTTP server exposes native functionality through a very simple REST API that can be consumed in the embedded HTML controls through standard JavaScript HTTP requests. Among other things, this server also provides access to the user's contacts, calendar, and other underlying platform functionality.

wardb 11 hours ago 1 reply      
This reads like: "Hey, our previous backend was a total turd, technically speaking." It might be trivial to speed up your own crappy first implementation 20x with some extra TLC.
trekkin 7 hours ago 0 replies      
It's almost impossible to reach 1000 req/sec in Ruby/Rails. It's relatively easy to reach 50000 req/sec in C++. Assuming V8 is about 3x slower than C++, it is not surprising that a move Ruby/Rails->node.js gets 10x throughput improvement.
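The arithmetic in this comment can be sanity-checked in a few lines; the figures below are the comment's own assumptions, not measurements:

```python
# Back-of-envelope check of the throughput ratios claimed above.
# All figures are the comment's assumptions, not benchmark results.
cpp_rps = 50_000          # "easy to reach 50000 req/sec in C++"
v8_rps = cpp_rps / 3      # "V8 is about 3x slower than C++"
rails_rps = 1_000         # upper bound claimed for Ruby/Rails

speedup = v8_rps / rails_rps
print(round(speedup, 1))  # ~16.7
```

Under those assumptions the expected gain is roughly 17x, so a reported 10x-20x improvement from a Rails-to-Node move is at least arithmetically plausible.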
SanjayUttam 10 hours ago 0 replies      
"Programmers could leverage their JavaScript skills."

I'm confused by that being an advantage - I guess that's better than using some language you don't know at all, but it's still a bit of a different approach to JS, no?

fomojola 9 hours ago 1 reply      
Ah, odd math point, but was I the only one who noticed this sentence:

"focus on simplicity, ease of use, and reliability; using a room metaphor; 30% native, 80% HTML; embedded lightweight HTTP server; "

skyebook 10 hours ago 0 replies      
It's a nice stat to see but I think this sort of comparison with "we moved our infrastructure of undisclosed age and unknown bloat to this new infrastructure built for the current problem domain" doesn't really do much for the ongoing conversation.

The article is touted as praise for a stack but my gut says that its really a smart restructuring of how they serve mobile. Either way, good on them for the efficiency boost.

derwiki 8 hours ago 0 replies      
Without citing versions, it's hard to extract anything useful from this article. Rails 2.1 on ree-1.8.7 performs very differently than Rails 3.2 on 1.9.3-p194.
scott_meade 8 hours ago 0 replies      
"simplicity is at the heart of LinkedIn's mobile vision." Then the article goes on to describe complexities of the implementation. Maybe my old and cranky mind has lost touch with what people mean by "simple".
jrockway 6 hours ago 0 replies      
Is 27 servers a lot?
benbjohnson 11 hours ago 1 reply      
Can someone update the title to read: "LinkedIn Moved from Server to Client: 27 Servers Cut and Up to 20x Faster"?
elpee 5 hours ago 0 replies      
I read all the comments; not a single mention of PHP. WTF, world?
antonpug 10 hours ago 3 replies      
I hate Rails. Node is the way to go.
Node > Python > Rails
Conflagration: Zynga's OMGPOP acquisition torched nearly $500,000 a day thenextweb.com
60 points by tylerlh  6 hours ago   52 comments top 13
ChuckMcM 4 hours ago 0 replies      
Presumably this isn't entirely unexpected, and the 'torching' comment is over the top. Microsoft wrote off $6.2B of their AQuantive purchase in July, assuming 2000 days (5 years plus a few months) that is $3.1M per day, roughly 6x the 'torch' rate in this article.

The stock price was already valuing them a lot lower than their pre-OMGPOP buy so this seems more like a recognition of that in order to make their balance sheets look a bit more credible. I had stock positions invested for a while in Activision (90's) and Vivendi (early 2000's) and generally they seemed pretty random. A hit and the price would rocket up, a flop and it plummets down. That suggested to me that a 'fashion' stock (where the product was as much fad/fashion driven as it is quality/non-quality driven) is really only good for trading, not for longer term growth. Got out of both positions at a small profit but a lousy return.

Contrast that with a company like Milton Bradley though, which milked the Monopoly game concept for millions if not a few billion dollars. It seems like games are more like 'books' or 'music' or 'movies' than something more durable like a 'word processor.'

The treatment Zynga is getting reassures me that we aren't in a bubble, in spite of what some would say (although not so much now, which is nice).

gfodor 4 hours ago 2 replies      
It's harsh, but whenever I see Zynga's stock dip even lower, it's reassuring due to the signal it sends to smart people: what you work on matters.
alanh 5 hours ago 1 reply      
I haven't played their annoying game in quite a few months (despite being addicted, despite myself, initially). I got a Draw Something email yesterday (contents: “Please? We know you love us! Here's your username! You forgot, right? Haha! LOVE US!”) and cheerfully unsubscribed.

Edit to clarify: I paraphrased, obviously.

Tsagadai 1 hour ago 0 replies      
I wonder how much of that write down is due to Zynga's management of OMGPOP. Their games went from fairly fun to complete spamballs (popups and constant, invasive advertising) in a matter of weeks after the acquisition. That caused almost everyone I knew who played it to stop. Draw Something has unplayable responsiveness on older Android phones after the "updates".
bdr 5 hours ago 1 reply      
I forget which venture capitalist first said it, but a very wise thought is this: if it has become easier for any single app to garner millions of users rapidly...

This may be referring to cdixon's "Increasing velocity" post: http://cdixon.org/2012/04/11/increasing-velocity/

It's a hugely important idea that doesn't get enough attention.

ipince 5 hours ago 3 replies      
Pardon my ignorance, but what does this mean? That Zynga will pay OMGPOP less than what we had heard? How can they do that?

(Sorry I'm not from around here and don't understand the "write down" language. I presume it's different than "underwrite").

Edit: from trotsky's comment I'm assuming it means they now value it for $x less than what they valued it before (what they paid for).

So how does this influence/affect the guys that came with the acquisition?

fpgeek 5 hours ago 0 replies      
Was OMGPOP acquired with cash, stock or a mixture of the two?

To the extent that there was a significant stock component to the acquisition, this is somewhat less bad than it looks, since Zynga's stock is down so much (though, of course, Zynga won't want to make that argument).

htmltablesrules 5 hours ago 0 replies      
ZNGA hasn't even been public for a year, yet their equity is now at an 80% discount from it's initial price. Not looking good at all.
trotsky 5 hours ago 4 replies      
Very hard to believe you can go from buying to writing down half the value in two quarters. Either there was fraud involved, or Zynga is just looking for a convenient scapegoat + one-time charge for their general malaise (which is what I'd guess).
moocow01 5 hours ago 1 reply      
I feel sympathy for any employees who exercised their options at IPO - that stock in after-hours trading is at a rock-bottom $2.28. Hope the strike price was low.
Aloha 4 hours ago 1 reply      
I'm not surprised.

Everytime I see a webapp like this, I always wonder to myself "where's the click".

I don't see a future for SaaS that is not targeted at people directly forking money over for it.

timrpeterson 5 hours ago 1 reply      
I wish I could wish Zynga the best, but everything I read makes the CEO, Mark Pincus, and the company philosophy seem truly awful.
samstave 5 hours ago 1 reply      
Heh, call it Karma.

Kixeye, whilst having its own PR issues, has proven that FB gaming can be quite profitable and lucrative.

Luckily they don't have quite the bad reputation Zynga has for its practices - even though the CEO is a douchebag.

All of this Zynga drama is, to me, even more an indication of how criminally wrong the FB IPO was; Zynga did the smart thing in IPOing first, had the information about FB's worth been true - but they suffered tremendously based on how bad the IPO was...

The OMGPOP buy was a toss of 200MM on the speculation that FB was going to out goog the goog....

I am still surprised there really haven't been any claims of criminality/litigation.

Software architecture cheat sheet gorban.org
234 points by grayprog  14 hours ago   29 comments top 9
ChuckMcM 13 hours ago 1 reply      
Very nice.

I'm a big fan of #1 being "State the problem." rather than "Is this a 'Good Idea'?" they are inter-related of course, but any good software architect has their eyes fixed on the problem so they don't get distracted by the opportunities to 'decorate'.

I like asking people what they think the 'architect' does, to weed out people who think architect implies a leadership role; it can be, but it isn't necessarily. In the 'real' world the architect is the person who notices you've got a banquet room for 100 people but the nearest restroom is two floors down, or a single hallway connecting both the people and the kitchen to the room. They see the 'whole' goal (feed large groups of people) and then work out what has to be true for it to be not a problem.

I look for similar skills in software architects, they don't care if the implementation is rails/django/node but they do care that individuals can be identified as users or guests, given capabilities or not, can be disabled or not, and the largest possible number are welcome.

Sometimes architecture is combined with the person who does design, sometimes with the person who does coding, and sometimes its just a person asking really good questions at the launch planning meeting.

RyanMcGreal 12 hours ago 0 replies      
I'd rather call this a checklist than a cheat sheet. A cheat sheet is a quick lookup for someone who doesn't know what they're doing, whereas a checklist is a quick lookup for someone who does know what they're doing and is humble enough to recognize that even the most capable expert performs better when working off a list.

Related: http://gawande.com/the-checklist-manifesto

ctdonath 13 hours ago 2 replies      
One quibble: "DRY?" isn't meaningful to anyone not exposed to that non-ubiquitous acronym. If I stick this sheet outside my cube I'll be explaining it over and over, or (worse) the curious and otherwise open-minded will walk off dissuaded by opacity.
rosser 7 hours ago 3 replies      
As a database guy working in a Rails shop, I have huge qualms with "DRY". It encourages people to encode relationships in their models, and trust that's somehow magically going to keep their data sane. It's not. Believe me, it's not.

If you're using an RDBMS, use it. The model is just a representation of the data's canonical, authoritative state, which is what it looks like when it's been committed to the DB. The only way to keep your data sane is with Foreign Keys enforcing referential integrity between tables, and the only way to do that is to specify the relationship in both your models, and in your migrations.

Blind, slavish adherence to a pithy acronym is just going to get you into trouble.
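The point about letting the database, not the model layer, enforce integrity can be shown in a few lines. This is a minimal sketch using SQLite as a stand-in for a production RDBMS, with hypothetical `users`/`posts` tables:

```python
import sqlite3

# A foreign key makes the database itself reject orphan rows;
# a model-layer association alone will never guarantee this.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id)
)""")

conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO posts (id, user_id) VALUES (1, 1)")  # valid parent row

try:
    # No user 99 exists; the constraint, not the application, catches it.
    conn.execute("INSERT INTO posts (id, user_id) VALUES (2, 99)")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True

print(orphan_rejected)  # True
```

In Rails terms, this is what declaring the foreign key in a migration buys you on top of a `belongs_to` in the model.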

chrisohara 47 minutes ago 0 replies      
YAGNI should be on there
nonrecursive 10 hours ago 0 replies      
Here's a good taxonomy of software quality attributes, which are strongly related to architecture: http://www.sei.cmu.edu/library/abstracts/reports/95tr021.cfm
ctdonath 13 hours ago 0 replies      
Curious abuse of fonts. The "g" displayed is "!" in other fonts, hindering copying (I'd print it as-is, but the "DRY" acronym is unnecessary & confusing).
priyanka_sri 13 hours ago 1 reply      
Beginning with such a simplified list proves useful. I would say the first point has to be "Are there 'existing' solutions to this (Architecture) Problem? If yes, what are they & what are their pros & cons?"
It always surprises me (& I learnt from a wise man & my mentor) that you aren't the first one (& almost never alone) when you encounter any problem.
ExpiredLink 13 hours ago 3 replies      
Why are these rules "software architecture" specific?
Canadians may now apply for a TN visa before they reach the border crossing uscis.gov
43 points by jacalata  6 hours ago   16 comments top 8
mrpollo 7 minutes ago 0 replies      
I'm so jealous. I was denied my TN last year because the interviewing agent didn't like my credentials. I'm a Mexican citizen and we don't have it that easy: we have to make an appointment for a paperwork review process (1 month minimum), then they schedule you for an interview (1 month here too). At the time I was just married, my employer had just moved us to Chicago, and suddenly I was in Tijuana (my hometown) without a job or any personal belongings. I haven't applied again since I haven't had the luck of getting another employer try to hire me; they are all scared (with justification) of the process. I wish it was easier for us too, but I know we have a worse track record of abusing the system (especially on immigration). Still, I would try again if the opportunity came.

Edit: typo

ghshephard 2 hours ago 0 replies      
I've crossed the border on a TN for 16 years now. I've been refused twice. The first time (my first time) because I was really unprepared. The second time, about two years later, when we used a law firm for the first (and only) time, I was at Netscape. The border agent didn't like the look of my education credentials, and I ended up crossing in Toronto (I'm from Vancouver) because Netscape thought that was a "good place to cross". I got supporting documentation and I've never been refused again (14 years running now. Knock on wood.)

Ironically, I'm heading back again to re-apply this weekend, flying back home for thanksgiving weekend. The cool thing is you only have to do it once every three years - though, now that I do the paperwork myself, it's a one page, 4 paragraph letter, describing my job, where I work, what I do, what I make, what my experience and education are, and when I'll start work, and for how long.

For those who fall within the system (2-year+ diploma, appropriate job category), it's a 5-minute interview at the airport/border. I don't even arrive more than 15 minutes early at the airport anymore.

Zombieball 11 minutes ago 0 replies      
Question for the HN community: As a software engineer considering work in the US I am covered under the list of professions for H1B & TN visas and can probably find a job where a company will sponsor me. However, my girlfriend went to school for a BCom in marketing. If the 2 of us considered moving to the US for work, what would be the easiest way for her to get a visa? Would she be able to get a TN visa as a marketing / management consultant (she graduated only 2 years ago)? Any tips are much appreciated.
dmix 5 hours ago 2 replies      
This is so helpful.

When I was planning to apply for TN, I was very hesitant to pack all of my things, buy a plane ticket, go to the airport and have to deal with a random unfriendly customs agent, then have a good possibility of getting declined for arbitrary reasons.

hobonumber1 4 hours ago 1 reply      
I was rejected for my TN at the border on my first attempt. The customs agent thought I was working illegally for some reason. I always hated how so much of the decision process was based on whether the guy at the border liked you or not. I ended up having to cancel all my flights, hotel and car reservations and spent the whole day talking to lawyers.

Tried again a week later, and the customs agent I met this time said I should have been let through the first time. -.- sigh

seanmccann 1 hour ago 0 replies      
This is really great for Canadians. Less risk of packing up all your stuff (and paying all the bag fees) only to potentially get denied.

It does look like it costs $325 to file this I-129 (Petition for a Nonimmigrant Worker). This is in contrast to applying at a port of entry where the cost is $50/$56. Am I mis-reading?

fatjokes 3 hours ago 0 replies      
This is just beautiful. Finally, some added sensibility. My gratitude to both governments for pushing this through.
jonny_eh 4 hours ago 1 reply      
Luckily I didn't live too far from the border, so I drove down ahead of my move to get approved. Then when it came time to move I already had my TN visa.

This would've been nice though!

What features would you like to see added soonest in your favorite C++ compiler? herbsutter.com
5 points by AndreyKarpov  1 hour ago   discuss
Facebook confirms it is scanning your private messages to increase Likes thenextweb.com
103 points by neya  12 hours ago   66 comments top 13
TomGullen 7 hours ago 4 replies      
To me, this is just standard privacy hyperbole that rears its head every now and then.

Anyone remember the headlines several years ago when "Google 'scans' your emails to serve targeted ads" was everywhere? When was the last time you heard someone complain about that? No one does anymore, because it doesn't matter.

Why should me sharing a link via a PM, and that action being aggregated into an anonymous number, worry me? How is the argument different from Google 'scanning your emails'?

Social networks nowadays seem to be competing on ways of justifying bigger numbers on the counters. It makes sense: the bigger the number, the bigger the social proof, which will attract webmasters. Scanning PMs for shares isn't something I would have thought they'd do, but they do and it makes sense. The whole privacy debate surrounding this particular case is a complete non-issue for me, though, and will probably fade into insignificance just like Google scanning your emails.

An action from a big company that shifts the boundaries of acceptability in privacy would be a justified concern, but in this instance no ground in the privacy war has been lost. Nothing's really happened.

ericdykstra 11 hours ago 1 reply      
There's a reason I disable Facebook buttons on my browser and don't stick them on my own sites (including my blog). I don't trust Facebook with my browsing data, and I don't want to subject users of any website I work on to their abuse, either.

"Move fast and break expectation of privacy"

phwd 10 hours ago 1 reply      
This is known by most Facebook App Developers.

What makes up the number shown on my Like button?

The number shown is the sum of:

The number of likes of this URL.
The number of shares of this URL (this includes copy/pasting a link back to Facebook).
The number of likes and comments on stories on Facebook about this URL.
The number of inbox messages containing this URL as an attachment.

k-mcgrady 11 hours ago 3 replies      
Doesn't sound like a big deal at all. There certainly aren't any privacy implications. When you send a link to a website via PM, the 'like' counter on that page will increase by one. Makes sense (not perfect though, e.g. if you're directing a friend to something you don't like). It's not as if the 'like' is then shared publicly with friends who aren't part of the private conversation.
dumb-dumb 3 hours ago 2 replies      
The difference between Gmail and Facebook is that Gmail did not aim to track who you correspond with. Facebook is focussed on personally identifiable information. They have made it their business from Day 1 to know who your correspondents (friends) are _and_ to exploit that for profit.

To my mind, this is not something Google set out to do. Although to compete with Facebook, I imagine they may have changed direction. We have Facebook to thank for that.

Let's imagine for a second that the future brings us the proverbial "video telephone" that even the most non-technical person expects to one day be standard issue: crystal clear, real-time communication with both audio and video, as available to every person as owning a cell phone is today. Now, hold that thought.

Should companies be invited into every conversation we have on this device? Should they be permitted to show ads to us as we converse?

We never had companies keeping a record of everyone we telephone and listening in to our telephone conversations to try to figure out what junk postal mail to send us. Would this be different? How?

Ok, now we can return to present day reality. The question is: Where do we draw the line? Should companies be a party to every conversation? What will happen if we leave this question to the unscrupulous kids and compromised adults working at Facebook? I doubt they would see anything wrong with what I described in the previous paragraph.

SeanDav 10 hours ago 3 replies      
Why are people still using Facebook anyway? I can only presume people that do simply don't care much about privacy because Facebook has proved time and time again that your privacy does not matter at all to them.
codva 11 hours ago 3 replies      
So if I send a PM to a friend on FB with a link and a note that says "this is the lamest site ever," the "Like" counter on the site goes up by one? That makes perfect sense.
dumb-dumb 10 hours ago 0 replies      
"Private messages" and "Facebook" are mutually exclusive.

Facebook is the antithesis of privacy.

Dreamers think FB has value.

Does the average person, of any age, think privacy has value?

What's more valuable?

Can all value be measured in monetary terms?

Web traffic has value (e.g. we can sell display ads). FB has web traffic. But so did milliondollarhomepage.com.

Privacy OTOH seems a long-lived concept, dating from at least the dawn of civilisation. I'd argue we have a lot more privacy than our ancestors did, a trend that has continued unabated for hundreds of years. FB is but a {milli,micro,nano,pico}second in the evolutionary timeline of privacy.

I'm not throwing away my fig leaf just yet.

golgo13 11 hours ago 2 replies      
I know FB scans private messages. You cannot send links with certain domains without getting an error message. Sure, they don't have some dude in a cubicle looking at each and every message, but if they are scanning for domains, how hard is it to scan for keywords within those private messages? Mention your new iPhone in a message, and you see ads for iPhone cases. Make a Tony Romo joke and you see ads for an RGIII jersey, etc.
stephengillie 11 hours ago 1 reply      
What about the "ad-likes", where FB shows "Your friend Jill likes ponies. Here's an ad for My Little Ponies" -- are these similarly prevented?

Breaking things tends to make the people using them unhappy.

sadga 11 hours ago 0 replies      
The hubbub over "scanning" here is silly. Obviously Facebook's systems scan your everything.
The issue is that FB considers a "private mention" as a "Like", but it's old news that a "Facebook Like" is not an "English like"
bcooperbyte 11 hours ago 2 replies      
"Privacy is a relic of a time gone by" -Sean Parker
LouDog 9 hours ago 0 replies      
I'd rather be shocked if they don't do it (think viruses, trojans, hoaxes etc)
Linus on keeping a clean git history (2009) mail-archive.com
205 points by pushingbits  16 hours ago   75 comments top 10
lukev 16 hours ago  replies      
This highlights the only thing I don't like about Git. It's an immensely capable tool, but it gives no guidance regarding the right way to do things.

Our own teams have a set of practices which are similar to but different from what Linus outlines here. And different projects at my company use practices different from those.

The worst thing is that there's no way of enforcing these workflows or practices other than out-of-band social conventions. And so minor mistakes happen, all the time. Our Git projects are never as pretty as they should be.

In other words, Git provides an awesome set of primitives for source control. I'm not sure what it'd look like, but I'd like to see a product that built on those primitives to enforce a little more order on projects.

mattdeboard 16 hours ago 0 replies      
Like lukev said, git is "an awesome set of primitives". How you build a workflow out of those primitives isn't set in stone (though, like most things, Linus has strong opinions on exactly how to use his products). This is basically what Github has done, with an extra layer of UI glitz, social, and (much-improved) notifications.

That said, IMO there is still quite a lot of room for customization in git workflow when using Github. For example, we don't "send patches around" as Linus says. Our private feature branches live on Github but we've adopted the convention that the "private" branch name is prefixed by who's working on it, e.g. mdeboard-oauth, jschmoe-url-routes. If it has someone's name at the front, don't touch it. That enables us to still use the "D" in DVCS while retaining the ability to safely rebase our own work to keep our history clean.

The only reason I'd want a git-based product to "enforce order" is a culture-related one: ensure that contributors/collaborators do things in line with the conventions we've established. However, IMO it's always better to have a conversation about that than work with an overly prescriptive tool.
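
Conventions like the owner-prefixed branch names above can actually be enforced mechanically with a server-side `update` hook on the repository everyone pushes to. A minimal sketch in a throwaway directory (the `<owner>-<topic>` rule and all names here are illustrative assumptions, not anyone's real setup):

```shell
# Throwaway demo: a bare "server" repo whose update hook rejects branch
# names that don't follow an <owner>-<topic> convention.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare server.git

# The update hook runs once per pushed ref and can veto it:
cat > server.git/hooks/update <<'EOF'
#!/bin/sh
refname="$1"
case "$refname" in
  refs/heads/master|refs/heads/main) exit 0 ;;   # shared mainline
  refs/heads/*-*) exit 0 ;;                      # owner-prefixed topic branch
  *) echo "rejected: use '<owner>-<topic>' branch names" >&2; exit 1 ;;
esac
EOF
chmod +x server.git/hooks/update

git clone -q "$tmp/server.git" work 2>/dev/null
cd work
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "first commit"

git push -q origin HEAD:refs/heads/mdeboard-oauth     # accepted by the hook
git push -q origin HEAD:refs/heads/oauth 2>/dev/null \
  || echo "push refused, as intended"
```

The same hook point can check commit-message formats or forbid history rewrites on shared branches; it is "out-of-band" only in the sense that someone has to install it on the server.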

silverlake 15 hours ago 4 replies      
I'm still new-ish to git and don't get why rebase is popular. If I do my work on a branch B, I can merge this branch into the master M. The merge point will have a succinct message like "Bug Fix #1". You can print the history so it only shows these merge messages and not the messy history in the branches. Isn't this the same as rebase? That is, rebase removes the messy branch history, but I'd prefer to keep that history and just rarely use or display it. bisect can also ignore those branches and only use the merge points. Saving the branch history shouldn't be a problem. What am I missing?
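
The merge-based workflow silverlake describes does work, and the usual recipe is: merge topic branches with `--no-ff` so a merge commit always marks the group, then read history with `--first-parent` so only the merge points show. A scratch-repo sketch (branch and message names are made up for illustration):

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"

git checkout -q -b bugfix-1                       # messy work on a branch
git commit -q --allow-empty -m "WIP: poke at the bug"
git commit -q --allow-empty -m "WIP: actually fix it"
git checkout -q -                                 # back to the mainline

# --no-ff forces a merge commit even where fast-forward is possible,
# so the branch grouping survives in history:
git merge -q --no-ff -m "Bug Fix #1" bugfix-1

git log --oneline                  # full history, WIP commits included
git log --first-parent --oneline   # only the mainline and merge points
```

The messy commits stay reachable for when you need them; day to day you look only at the first-parent spine of merge messages.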
smithzvk 14 hours ago 3 replies      
So I'm relatively new to version control entirely, but in the last few years my group has been making a big push to institute Git. I have been wondering lately, however: how much history cleaning is expected/desirable?

When I develop, I split my commits into as many small changes as I can so that the commit messages are single topic. I thought that was basically the idea. Every once in a while I use rebase to combine a few commits that should have been done together, as they all addressed the same issue. This all seems right to me. I am left with a clean history of everything I have done on a very fine-grained time scale. But the large number of commits, each with little significance to the whole program, hides the large-scale structure of the development.

However, I could use rebase to start combining loosely related commits, trading the time resolution for clarity in the commit history. There seems to be a continuum along this scale. Where is the proper place in that continuum to say this is clean enough? Also, I don't like making changes where I am losing perfectly good information.

I know that I can group certain commits by defining a branch, developing on it, then merging (non-fast-forward) back to the original. The branch should keep the grouping in the commit history. I even suppose that this can be done after the fact using rebase with the proper amount of git-fu. Is branching with non-fast-forward merges the preferred method of grouping related commits in the history?

If so, this seems troubling, as it means that partially fixing something is difficult to do with a clean history. Until the piece of the program you wish to fix is completely working, it shouldn't be merged into master, because that would ruin the grouping of the related commits. This means that there can't be any partial thoughts, like fixing bugs as you find them, because presumably you might want to group all bug fixes of a function together but have a distinct commit for each.

Now I'm more confused than when I started. Seriously, any references or advice on this sort of topic are welcome.
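
For the "combine a few commits that should have been done together" step specifically, git has direct support: mark the follow-up with `commit --fixup` and fold it in later with an autosquash rebase. A scratch-repo sketch (file and message names are made up; `GIT_SEQUENCE_EDITOR=true` just accepts the generated todo list so the interactive rebase runs without an editor):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev

git commit -q --allow-empty -m "base"
echo "draft" > notes.txt
git add notes.txt && git commit -q -m "add notes"
target=$(git rev-parse HEAD)

# Later: a small correction that really belongs in "add notes"
echo "corrected" > notes.txt
git add notes.txt && git commit -q --fixup "$target"

# Fold every fixup! commit into the commit it targets:
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~2

git log --oneline   # two commits again: "add notes" on top of "base"
```

Since this rewrites commits, Linus's caveat applies: do it only on history you haven't published yet.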

jrochkind1 2 hours ago 0 replies      
oh yeah, perfectly straightforward, only took several thousand words to confusingly explain.

Nope, not simple. Yep, this is a git usability problem.

In the ruby/github world, people generally violate this and DO rewrite 'public' history in order to get 'cleanness', primarily because almost ALL history is 'public', since you tend to show people work in progress on github, or just push it there to have a reliable copy in the cloud. And yes, this sometimes leads to madness.

easy_rider 14 hours ago 0 replies      
Funny. I was just finishing a chat with a colleague about a git strategy for a coming new release of a production product, then saw this post on top. I've been working on it without collaboration for about half a year now, so that's easy. I've had mixed experience with both rebasing and pull strategies before that. I've found rebasing to be a lot better when working with tightly coupled code, and pull to be a lot cleaner in being able to cherry-pick and revert to previous states more easily.
rebase is indeed a destroyer.

We've now decided to use this model, while only deleting feature branches after RC acceptance.


My colleague just suggested to rebase regularly from the develop branch while developing features
"I'm working on a branch.
someone - e.g. you - updates the develop branch.
I will have no info if that is related to my stuff or not
so, I should rebase regularly to the latest version of the develop branch"

I'm kinda clueless now. Git is really powerful and flexible in strategies, and that adds to complexity.

leeoniya 15 hours ago 0 replies      
jebblue 6 hours ago 0 replies      
I have tried to get git. Some people say one project per repo (which seems crazy, but I did it), others say many projects are OK; you do need a main master repo; no, you don't need one; and then there's the half dozen commands where with SVN it's one.

Now the most valuable thing to me in source control, history, I'm supposed to keep clean? That's like a sacred cow, you _don't_ mess with history.

>> That's fairly straightforward, no?

No _Linus_ it isn't. Git is hard to get right. If it wasn't for EGit I'd be lost. I tried Canonical's bzr and it is more understandable for ordinary humans.

All that aside I really like Linux. :)

mibbitier 15 hours ago 3 replies      
git is so overly complex (Coming from svn).
3825 14 hours ago 0 replies      
I've heard some of these words...
Fizzbuzz, Interviews, And Overthinking fayr.am
96 points by timf  12 hours ago   68 comments top 11
rauljara 10 hours ago 1 reply      
I understood from near the very beginning that this wasn't really about Fizzbuzz. All the way through the post, I was waiting for the author to get to a real-world example instead of Fizzbuzz. Yet I got to the end of the post and realized it was already pretty long.

I think in talking about programming, we are often hindered by the fact that it is much more complicated than our brains can handle. Our only choices are to write novel-length treatments of real programs, or short-story-length posts about a toy program or just the tiniest piece of a real program.

Which is all to say, as sick of hearing about Fizzbuzz as I am, I'm glad there are silly little examples like it that we all know. Even though it was ostensibly about interviewing, that was a much clearer introduction to monoids than most. I think it was largely because it was in reference to Fizzbuzz: something very concrete with which we're all familiar.

Too many introductions to Haskell's abstractions are too abstract. Good on the author for finding a way around that.

aristus 8 hours ago 4 replies      

    cases = (
        (3, 'Fizz'),
        (5, 'Buzz'),
        (7, 'Bazz'),
        (11, 'Boo'),
        (13, 'Blip'),
    )

    for i in range(1, 101):
        out = []
        for divisor, word in cases:
            if i % divisor == 0:
                out.append(word)
        if out:
            print ''.join(out)
        else:
            print i

Edit: not to detract from the post's point, I think it's valid. Monoids are cool and all, but simple counting arguments can take you a long, long, long way when case analysis fails you.

__david__ 10 hours ago 4 replies      
Forget Fizzbuzz, we get candidates that cannot reverse a string (in their language of choice). A friend of mine just told me he uses the question "What is the hex number that comes after 'F'" as his first "weed-out" technical question. It boggles the mind.
Swizec 10 hours ago 8 replies      
I recently challenged people to codegolf fizzbuzz (http://swizec.com/blog/fizzbuzz-without-ifs-in-90-char-i-wil...)

The Haskell solution was really cool:

[max(show x)(concat[n|(f,n)<-[(3,"Fizz"),(5,"Buzz")],mod x f==0])|x<-[1..100]]

This is much simpler and it looks easier to extend as well.

nandemo 2 hours ago 1 reply      
This seems overcomplicated. Why wrap String (which is already a monoid) inside Maybe? You can just use concat; if the result is the empty string then print the number. If you want it to work for any monoid, then use mconcat, and test for equality to mempty.

And why introduce monad comprehensions if you're just introducing monoids?

romonopoly 4 hours ago 0 replies      
"When you really boil it down to its implementation, FizzBuzz is something of an irritating program. I'm not sure how much the author of the problem really thought about FizzBuzz, but it turns out it's difficult to express well with the tools available to most imperative programming languages..."

Nonsense.. you call a simple loop with a couple conditions difficult?

absherwin 10 hours ago 4 replies      
I agree that Fizzbuzz can be a more interesting example of how to write code without repetition. While the author suggests that languages such as Haskell provide a unique advantage, the deciding question seems to be the availability of pre-built abstractions. Consider the following solution in Python:

  for i in xrange(1,101): print (('' if i%3 else 'Fizz')+('' if i%5 else 'Buzz')) or i

or the even more general:

    mapping = {3: 'Fizz', 5: 'Buzz'}  # hypothetical divisor -> word table
    for i in xrange(1,101):
        print ''.join(['' if i%x else mapping[x] for x in mapping]) or i

We can even do this in C though to write something as extensible as the second would require writing more helper functions than I can justify for this brief comment:

    #include <stdio.h>

    int main(){
        int i;
        for (i=1;i<101;i++){
            if (i%3==0) printf("Fizz");
            if (i%5==0) printf("Buzz");
            if (i%3 && i%5) printf("%d", i);
            printf("\n");
        }
        return 0;
    }

ostso 7 hours ago 0 replies      
For more on monoids, see Brent Yorgey's paper _Monoids: Theme and Variations (Functional Pearl)_ at http://www.cis.upenn.edu/~byorgey/pub/monoid-pearl.pdf (there's also a video of his talk at the Haskell Symposium at http://www.youtube.com/watch?v=X-8NCkD2vOw).
jisaacks 4 hours ago 0 replies      
The Ruby example that you're recommending hiring because of is overkill. Here is a better Ruby example:

    (1..100).each do |i|
      o = ""
      o.concat("Fizz") if i % 3 == 0
      o.concat("Buzz") if i % 5 == 0
      o.concat("Bazz") if i % 7 == 0
      o.concat(i.to_s) if o.empty?
      puts o
    end

adiM 5 hours ago 1 reply      
I was waiting for a Java solution with an AbstractFactory somewhere.
droithomme 10 hours ago 2 replies      
We're five years into this, and here's yet another weekly column from a person who has just heard about it and is champing at the bit to prove both he can write FizzBuzz and all the other implementations are not as good as his. It will be a miracle if this thread doesn't turn into a chain of "even better" solutions, like all the other threads that came before it.

In this week's installment, the variation where it is claimed that common production languages are inadequate for a problem of this complexity, and the tool stack should be shifted to languages supporting monads... er monoids? Sigh.

“I am calling you from Windows”: A tech support scammer arstechnica.com
223 points by chinmoy  18 hours ago   124 comments top 23
bradleyland 17 hours ago 7 replies      
Having done my time in the tech support trenches, I do one hell of an "end user" impersonation. I held one of these guys on the line for 1 hour 20 minutes one evening, then told him my phone was dying and that I'd need to call him back. I called back two days later and tied him up for another 20 minutes before he finally cracked and hung up on me.

Yes, it was a terrific waste of time, but boy did it feel good. I consider it volunteering. All the time I spent on the phone with the scammer was time they couldn't spend targeting vulnerable individuals.

simonsarris 15 hours ago 11 replies      
What is most bizarre to me is that someone could actually do this and do it more than once.

Why wouldn't people feel bad about scamming someone like this? You have to talk to someone, get them on your side, and then scam them?

I could maybe imagine myself in an alternate universe doing something like this, perhaps enjoying a sort of "heist" feeling like when you're playing poker and no one's aware that you're totally prepared to take the table. But if I succeeded (at the scamming call) I'm sure I would feel devastated that I just did that to someone.

Maybe I'm just supremely naive, but it seems hard for me to imagine anyone I've ever met in my life scamming someone like this. It seems so completely incredible that it could ever happen on such a large scale.


Is there something in a culture (besides wealth discrepancy per se) that makes this sort of thing more OK?

akharris 17 hours ago 3 replies      
Got the call from these guys a few weeks ago - actually about 20 calls. I finally picked up and kept the guy on for about 10 minutes asking him progressively dumber questions. Finally, I told him I was running OSX, which led him to call his manager. The manager had a really hard time understanding that I wasn't running Windows. It did not sit with his worldview at all.
nicholassmith 15 hours ago 0 replies      
If I get them I generally just put the phone down immediately, or I'll wind them up asking stupid questions about their qualifications. However, my nana has had a few recently who've been super aggressive about it. I told her not to do anything they say and to call me if she's unsure, but apparently saying 'no' and putting the phone down gets you repeat calls if you sound like a good mark.

Edit: I'll point out, this happens in legitimate call centres as well. I worked for a fairly well known a credit card company and left not long after I heard a top seller say "Focus on old people, scare them enough and they'll always buy".

kevinalexbrown 13 hours ago 2 replies      
The most effective way to troll these types of scams is to let them convince you, but repeatedly tell them that your computer froze and needs to be restarted. If they feel certain you're willing to pay, they will wait the 20 minutes it takes to "restart" your computer. The more you "restart" your computer, the more time they invest in you, and the less costly the next 20 minutes will seem.

That said I don't think I'd have the patience to do it more than once. And I suppose there are better ways to help humanity than trolling scammers.

jiggy2011 17 hours ago 2 replies      
I've had these calls too, now and again but never had the patience to keep them on the line.

So , they want to install some kind of remote admin software on your PC? I'm going to assume it is something based on VNC or RDP.

In that case if you really wanted to troll them it might be fun to figure out which protocol it was using and implement a custom server that you can run when they call you.

Any fun ideas as to what that could do?

debacle 16 hours ago 1 reply      
Back when I was a BOFH, I would string these calls along all the time. Usually I could manage 30-40 minutes before the Indian would get tired of it and hang up. Occasionally I'd get one that would last for over an hour.
16s 11 hours ago 1 reply      
I won't answer when the caller has masked their phone number. I have too many things to do. They can leave a msg if it's important. They never do.
jiggy2011 15 hours ago 1 reply      
I wonder how these guys are organised. Is it just a bunch of people doing this from home who have heard about the scam from a friend and have tried to imitate it?

Or are there actually physical call centres with rows of desks of scammers doing this as a 9-5?

JimmaDaRustla 17 hours ago 3 replies      
Had it happen to many people I know. They were only successful with one person I know - they replaced her OEM Windows XP license with a pirated/fake one... then stole the original, perhaps? Can't imagine how an OEM XP license for an Acer POS netbook would be valuable.

If you get one of these calls, just screw with them - pretend you are following their instructions for as long as possible before saying "Ubuntu doesn't have that."

eckyptang 17 hours ago 4 replies      
We get these in the UK all the time. Usually either try and hold them on the line as long as possible (if bored) or tell them to "fuck off" straight away.
mnazim 10 hours ago 0 replies      
I couldn't help but read it in the Russell Peters style Indian accent. It wasn't until the middle of article that I noticed the subconscious act.

(DISCLAIMER: I belong to the same part of the world and probably have the same accent)

Pezmc 17 hours ago 2 replies      
I have also had this happen to me and a friend. We have both calls recorded and the numbers logged, is there anywhere we are supposed to send these reports?
garazy 7 hours ago 0 replies      
I had exactly the same call with someone. I wish I'd thought of the VM idea, but I didn't; however, I did record it -


I felt a bit sorry for the guy; it's clearly a boiler room with high expectations.

darkstalker 15 hours ago 1 reply      
I would answer him "Sorry, I use Linux".
btilly 16 hours ago 0 replies      
These guys call me every month. I hope the FTC does do something about it.
navs 17 hours ago 0 replies      
Ah, such tales are so entertaining. There was a brief period here in New Zealand when I'd receive a few of these kinds of calls. Folks in my Computer Science class considered it a badge of honor to receive a call and troll the scammer. It's been a while since I received any calls, and I guess that's a good thing.
patrickdavey 7 hours ago 0 replies      
If you've not seen it... http://www.itslenny.com/ you can forward VOIP telemarkers to "lenny" - an automated bot who will happily chat with them for ... ever..

There are some classic mp3s to listen to.

slashedzero 16 hours ago 0 replies      
Ugh. A friend's mother fell for this exact thing. She spent a couple of days "disinfecting" her computer. A few hours later, it finally dawned on her that she had been scammed (when someone asked her how the people could have known she had viruses), but she was too proud to talk about it or fight with the scammers.

Very smart though, targeting home phones, as they're just the right generation to fall for this type of scam.

devsatish 13 hours ago 0 replies      
The pitches for these sites can be seen on day-time tv, and late night tv, sandwiched between infomercials. I heard these on radio too. ex: doublemyspeed.com , totallyfast.com
zapt02 7 hours ago 0 replies      
This is interesting but the article is really sub-par and they ended too soon.
CaioAlonso 15 hours ago 0 replies      
The scam site: http://windows4pc.webs.com/

EDIT: Webs.com has just frozen it.

lampe 15 hours ago 0 replies      
haha nice post!

can someone call me like this?

i got windows xp only in a VM :D

Amazon Orders More than 10,000 Nvidia Tesla cards vr-zone.com
63 points by SlimHop  10 hours ago   13 comments top 2
mercuryrising 8 hours ago 4 replies      
Amazon's cloud might be one of the coolest things I've seen in a while, hop on, get some of the best computing performance possible, get off and save some money. If you have a random data analysis problem that would take your computer three weeks, why not just pay $10 and get it done in two hours (plus a few hours of debugging)?

If the article is correct, Amazon paid 15 million for those cards which will be out of style in about two years (not that they have to get rid of them, but something faster, easier to maintain (if Nvidia starts opening up to Linux), with more memory and less power usage will come out. They'll have to fork over a large sum of money again to keep their top "on demand computing" title.

Amazon's cluster GPU right now has two Nvidia Tesla Fermi's in it. I'm going to assume Amazon will split their new cards into twos and fours, at about half of each. That's ~1750 new computers that are going to load up. Looking at the current rates of the cluster, it's $2.100 for an hour of the normal, I'll say it will be $4.200 for an hour on the jumbo with 4 GPUs.

They paid $15 million for just the cards. They need to get 2380952 hours of usage out of the machines to break even on the cards. They need to log 1360 hours per machine to break even, or have someone run all the machines at full bore for 56 days. As the cards are the most expensive component (assumption), and the total price of the computer will be about the price of one of the cards, we'll add a little bit of over head for all the other things they need to do to make it work - 120 days of full time use to break even on an investment of about $25 million (they need to buy lots of other things to put all the GPUs in, and worry about all that heat, and have a place to put it all, and have people install the new computers, etc...). I wonder what the actual usage of those clusters are, and if they've had anyone sign a deal saying we'll use the cluster for an entire month. That's a beautiful maneuver though, say CERN didn't want to do all the data analysis from the LHC in house because by the time they got to this part of the experiment, their technology they purchased previously would be way out of date. Just let Amazon do it. They will always have the latest technology, and you'll have an inexpensive way of leveraging that power.

Assuming they can make it all work (and I'm sure a lot of their decisions now are strategic decisions aimed at future investments) this is a great time to be a computer user, log on and get the best for a couple hours for a couple dollars. Instead of shelling out $1500 on a new computer personally, I could log a ton of EC2 hours getting significantly faster, more powerful machines, that never get 'stale', and their lives are much happier (my computer probably doesn't do anything "intensive" 70% of its life, whereas the EC2s are probably pushed a bit harder than that).

Karhan 8 hours ago 2 replies      
I remember reading a blog post on the peculiarities of GPU programming; the post noted that for most modern graphics cards (at the time), if you can keep your computable data in chunks no bigger than 64kb apiece, you can expect to see enormous performance gains even on top of what you'll see by using OpenCL/CUDA, because of a physical memory limit on the actual GPU itself.

I also remember thinking that a 64kb row size for DynamoDB was very odd.

I wonder if these things are at all related.

Perfect Audience (YC S11) Makes Facebook Retargeting Easy, Raises $1.1M techcrunch.com
59 points by brandnewlow  9 hours ago   23 comments top 11
lurker14 1 hour ago 0 replies      
I am entertained that Firefox gave me a "Ghostery's tracker list has been updated." notice at the same time I read this article.

Retargeting is one of the creepiest abuses of the web, chasing people across the Internet after visiting your site once.

aaronharnly 4 hours ago 1 reply      
I'm almost entirely ignorant of what "retargeting" is and how it works. My impression is that the ad network serves me a cookie when I visit the client page (acme.com). Then, on Facebook, I am shown an ad (from Acme? Or from anyone), placed by the ad network, who has identified me as an acme.com visitor.

Is that roughly correct, or could someone explain it to me like I'm simple? Also, does this technique still work if my browser blocks 3rd-party cookies?

brandnewlow 1 hour ago 0 replies      
Here's our URL: http://perfectaudience.com (just realized it's not in here)
toast76 9 hours ago 1 reply      
We've been using perfect audience at BugHerd for a while now and I'm an unashamed fan (first twitter follower even!). Great product with amazing support! Best of luck guys!
brandnewlow 9 hours ago 1 reply      
Brad from Perfect Audience here. I owe so much to HN and all the great advice I've received on here over the years. Happy to answer questions about YC, online advertising for startups, or anything else. We're really excited to get out of the gate and "on the board."
kunle 3 hours ago 0 replies      
Solid product. Brad and team are doing great work and we're super happy with them.
aaronjg 9 hours ago 0 replies      
We use Perfect Audience at Custora, and are really happy with it. Couldn't be easier to set up ad retargeting. And they have excellent service and support. Congrats guys!
robryan 8 hours ago 1 reply      
Just wondering, is this product just putting a better interface/ aggregation on Facebook retargeting or giving companies access to their retargeting features that aren't available to all yet?
cristianpascu 7 hours ago 3 replies      
Are you competing with adroll.com? They told me that they've also partnered with Facebook on retargeting. But they said there's like a $2k minimum right now. Why is there no minimum with you?
josh2600 7 hours ago 1 reply      
This looks cool.

I'm managing a bunch of sites so multi-domain retargeting would be awesome, but I'll try it out on a domain we're putting up next month.

Looks cool. Are there any 'under-the-hood' improvements relative to other retargeting platforms?

davidwhodge 9 hours ago 1 reply      
Congrats brad!
Homomorphic encryption: Compute with data you cannot read americanscientist.org
81 points by friism  12 hours ago   13 comments top 8
imurray 10 hours ago 0 replies      
The author, Brian Hayes, has an excellent blog at http://bit-player.org/ which is well worth following. Lots of posts about simple curiosities followed up much further than most people bother, and beautifully illustrated.

The post pointing to this American Scientist article adds “For crypto buffs, there's an Easter egg in the first illustration.”

SoftwareMaven 10 hours ago 2 replies      
I was trying to solve secure email at a mass scale, and the fundamental problems there were 1) key management and 2) spam.

We had a good solution for key management, but we felt there was no way we could give scammers a way to appear more legit (hey, it came encrypted) without tools to fight it. The only ways we could have solved it were by breaking security or through solid identity management, but the latter doesn't really (and probably cannot) exist across the Internet and is not necessarily what people are looking for in secure email.

That was when I discovered homomorphic encryption. It really would have been the solution to our problems (that, and how do I search my existing messages). Too bad we were 10 years early (or too poor to put the researchers to work for us :).

svag 11 hours ago 0 replies      
amirhirsch 9 hours ago 0 replies      
http://crypto.stanford.edu/craig/easy-fhe.pdf is a good summary of Craig Gentry's thesis (http://crypto.stanford.edu/craig/craig-thesis.pdf) and explains some of the limitations, for example why binary search in O(log n) time is not possible in a homomorphic scheme because it would necessarily reveal information about the untouched data.
regularfry 10 hours ago 0 replies      
That's a brilliant write-up. I'm still none the wiser what the evaluate function actually looks like, but he makes the rest seem simple enough to play around with.
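For readers wondering what an "evaluate" step can look like in practice, here is a toy sketch using textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. The parameters are tiny hand-picked values for illustration only -- this is not secure, and not the fully homomorphic scheme from the article:

```python
# Toy illustration of evaluating on encrypted data: textbook RSA is
# multiplicatively homomorphic. Tiny, insecure demo parameters.
p, q = 61, 53
n = p * q              # 3233, the public modulus
e = 17                 # public exponent
d = 2753               # private exponent: (e * d) % 3120 == 1

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
# "Evaluate" a multiplication directly on the ciphertexts:
product_ct = (encrypt(a) * encrypt(b)) % n
print(decrypt(product_ct))   # 42 -- i.e. 7 * 6, computed without ever decrypting a or b
```

Fully homomorphic encryption extends this idea to both addition and multiplication, which is what makes arbitrary computation possible.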
keithnoizu 9 hours ago 1 reply      
Garbled circuits have been around for a long time... not sure if it's new, but it is a very interesting area.

Similar research in this area can be found by googling Secure Multiparty Computation (SMC or MPC).

batgaijin 4 hours ago 0 replies      
What hardware makes this faster, SSE or the GPU?
HarrietJones 9 hours ago 0 replies      
Am I the only person who read the title of this as "Homoerotic erection"? I really hope not.
The Best Way to Find Aliens: Look for Their Solar Power Plants theatlantic.com
29 points by cclark20  6 hours ago   29 comments top 8
sxp 5 hours ago 1 reply      
>A civilization that built a Dyson Sphere would have to go to great lengths to avoid detection, either by getting rid of its waste heat in some novel way, or by building massive radiators that give off heat so cool that it would be undetectable against the cosmic microwave background, the faint afterglow of the Big Bang.

So if a civilization were to completely mask itself in the EM spectrum, the only way to detect them would be to look for gravitational distortions, assuming those can't be masked. A civilization might do this either to hide from others or because the maximum efficiency of their energy extraction system would occur when their waste heat matched the background radiation. So a really advanced (and paranoid or efficient) civilization would be indistinguishable from dark matter.

This brings up another interesting hypothesis: dark matter is actually the computronium of all of the alien civilizations in the universe that have achieved a technological singularity. Unfortunately, this hypothesis can't be tested until humanity gets to the same level.

cek 4 hours ago 2 replies      
The issue I have with Dyson Spheres is that in 1960 the idea of the singularity hadn't really been floated yet. I think it is far more likely that civilizations either die or upload.
sbierwagen 2 hours ago 1 reply      
Larry Niven, "Bigger Than Worlds" (1974)

  ...assuming that the galaxy's most advanced civilizations are protoplasmic. But 
beings whose chemistry is based on molten copper, say, would want a hotter
environment. They might have evolved faster, in temperatures where chemistry and
biochemistry would move far faster. There might be a lot more of them than of
us. And their red-hot Dyson spheres would look deceptively like red giant or
supergiant stars. One wonders.

tzs 3 hours ago 1 reply      
Build your Dyson sphere with large gaps in the galactic plane. Then observers at most other stars in your galaxy will see your star, and then you don't have to either build a giant sphere or come up with exotic heat management systems to avoid detection.
w1ntermute 2 hours ago 2 replies      
Solar power? I'm pretty sure they'd be using nuclear fusion.
guard-of-terra 3 hours ago 0 replies      
It's interesting, but a bit like trying to detect a large and advanced civilization by looking for the products of horse manure decomposing in huge heaps. I think it was Mendeleev who thought that getting rid of horse shit would be the main problem of ever-growing cities in the XX century.

The need for energy is definite, but the amount and character of such need is debatable. Maybe civilizations shrink and don't use so much energy? Maybe they mine millions of stars at once? Maybe they get their energy out of thin vacuum?

lutusp 6 hours ago 5 replies      
Let's be realistic. A civilization so advanced as to be able to capture the entire energy of a star, probably also has energy alternatives, and probably wouldn't want to attract the attention of lesser beings (like us) by trapping a star's output in a detectable way.
hollerith 3 hours ago 1 reply      
>the sun beams a total of 120,000 terrawatts per day onto our planet. That's 10,000 times the amount that flows through our industrial civilization.

Should be "terrawatts", not "terrawatts per day". (I stopped reading there. There are 100s of 1000s of people who know enough physics never to make such a mistake. I'll read one of them instead.)
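The unit complaint above in a nutshell: a watt is already a rate (joules per second), so "terawatts per day" double-counts time. A minimal sketch, where the 12 TW figure for civilization's power use is back-derived from the article's own "10,000 times" ratio rather than sourced:

```python
# A watt is power (J/s); multiply by seconds to get energy.
# civilization_tw is implied by the article's stated 10,000x ratio,
# not a sourced figure -- an assumption for illustration.
solar_input_tw = 120_000      # claimed solar power reaching Earth, in TW
civilization_tw = 12          # implied global power use, in TW

ratio = solar_input_tw / civilization_tw
energy_per_day_tj = solar_input_tw * 86_400   # TW * seconds in a day = terajoules/day

print(ratio)                                  # 10000.0 -- the article's multiple
print(f"{energy_per_day_tj:,} TJ of solar energy per day")
```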

Comtypes: How Dropbox learned to stop worrying and love the COM dropbox.com
81 points by frsandstone  11 hours ago   26 comments top 5
snprbob86 10 hours ago 2 replies      
COM is a necessary evil when integrating with Windows, Office, Visual Studio, or any other big/old Microsoft product. That's just how it works.

The .NET framework actually does a pretty good job of hiding the complexity from you, but having done some serious integration with VS, let me say that's an abstraction leak that I wouldn't wish upon my worst enemy.

ta12121 9 hours ago 1 reply      
Provocative title for a pretty generic article.

1. Summary of COM, a widely used technology

2. Summary of comtypes, a python package for interacting with COM

3. One example of a gotcha they ran into with comtypes and arrays of COM objects.

Conclusion: meh.

achal 9 hours ago 0 replies      
Semi related: Not sure how many people have played with PowerShell/COM, but it's fun to toy with. Haven't used it for anything too useful (yet) but for example, to delete all the comments in a Word document in a couple lines:

  $a = New-Object -com Word.Application
  $a.visible = $false
  $a.Documents.Open("{absolute path}").DeleteAllComments()

Better examples: http://www.simple-talk.com/dotnet/.net-tools/com-automation-...

bugsbunnyak 8 hours ago 1 reply      
Am I reading this correctly? they are using comtypes as an exploratory tool, but the photo upload feature was written in something else...

(I haven't disassembled the dropbox dlls, but there aren't any obvious python signatures in the install directory)

If the above is wrong, I would love to know what they are using!

cek 7 hours ago 0 replies      
"COM is Love"
- Don Box [1]

[1] http://en.wikipedia.org/wiki/Don_Box

The Other L-Word (2010) vanityfair.com
3 points by stevewilhelm  59 minutes ago   1 comment top
emehrkay 0 minutes ago 0 replies      
One of the things that I loved about Toastmasters is that they designated someone to be the "um grammarian." Their job was to count how many times you used filler words or unnecessary pauses. Very frustrating at first, but damn it really made me a better speaker. Because of that brief experience it is kinda hard to listen to Obama talk sometimes; he has the MOST unnecessary umms and ahhs.
An Update from Elon Musk teslamotors.com
369 points by JGM564  1 day ago   113 comments top 12
reneherse 1 day ago  replies      
I'm a huge Elon Musk/Tesla/SpaceX fan, and have often felt it would be the ultimate opportunity to work at either company (Tesla would be my first choice, as automotive interface tech is one of my passions).

However, doing a quick bit of research earlier today, a search for "Tesla working environment" turned up more than a handful of reports by former and current employees that hint at an unpleasant company culture. Six to seven day workweeks, below average compensation, hyper-political management, management that is quick to fire, and a generally chaotic environment. These factors seemed to be reported even by folks who cited other benefits such as a high degree of autonomy and the opportunity to work with other highly passionate top level engineers on important emerging technologies. One additional oft-repeated concern was that the pace at which Tesla works its engineers is unsustainable, and will lead to burnout for lack of work-life balance.

Can anyone closer to the Valley than I am comment on whether these concerns ring true? And how does this compare to work at other highly innovative and passionate industrial startups?

Elon's explanation of the latest round of fundraising is welcome news, and personally I'm gunning for Tesla to become the Apple of the auto industry. (I plan on buying stock as soon as I'm able.) Is there anything we can infer from these employee reports about the health of Tesla's organizational core/DNA, and what effect that might have on the company's prospects for long term success?
[Edited for clarity]

droithomme 1 day ago 7 replies      
Musk's public statements are the nicest and most logical ones I've seen made by a modern CEO. No sense of spin. Always a pleasure to read.
confluence 23 hours ago 3 replies      
I've been 100% long TSLA since the beginning and really don't understand the reasoning behind the doubts people have - given how little they actually know about a) the car industry and b) electric batteries and c) the ability to think on first principles and not by analogy. But I guess everyone has the right to an opinion - even if most of them aren't a) warranted b) backed up or c) logically reasoned.

We are past peak oil. Battery tech will reach oil parity within the decade. Solar PV will reach grid baseline within the next 2 decades. Fusion will be introduced within the next 3 decades. Electric engines already run at 92% efficiency (vs. the combustion engine's 15%) and global warming externalities are finally being priced.

The electric car is a no brainer (it wasn't a decade ago, and it'll be too late a decade from now) just like the electrification of trains were. This company will electrify suburbia and reduce costs while they are at it.

Timing + skill = Very nice stock returns (timing is about 10x more important).

Disclaimer: Goes without saying - I am long TSLA and will continue to be long for the foreseeable future.

chintan 1 day ago 3 replies      
What a timing - "Romney Calls Tesla a ‘Loser'" in first debate:
codex 1 day ago 2 replies      
As of last quarter, Tesla had $777M in assets and a whopping $715M in liabilities--leaving a net balance of only $62M.

Given that their balance sheet is decreasing by an extraordinary $30M a month, that would have left only two months until the company was insolvent. No wonder the U.S. government wants their $465M paid back more quickly than planned.

The company has lost over $850 million since being founded in 2003.

loceng 1 day ago 3 replies      
I think people underestimate what an agile and lean startup like Elon seems to be running can accomplish. He's a super intelligent guy with a growing positive track record. The vehicle industry has been waiting for disruption for 20+ years, and the giants finally have lost control and whatever unnatural advantages they tried to maintain. Technology for electric vehicles will only continue to improve, and costs will go down. Luckily for economies very attached to oil, they can shift to systems like free re-fueling once a vehicle is purchased; that's pretty incredible and something I never even thought about or imagined possible, though it makes sense and works once the puzzle pieces are all in front of you. P.S. Elon's my new Man Crush.
eldavido 1 day ago 1 reply      
"Nonetheless, we have a duty at Tesla, having accepted this loan as a portion of our capital, to repay it at the earliest opportunity."

Not sure why Elon and a lot of SV entrepreneurs feel this way. You take a loan; if it reduces your weighted average cost of capital, you roll it and/or pay it slowly. As a Tesla shareholder, I hope such cheap financing remains in place as long as possible to maximize shareholder return on equity by keeping cost of capital as low as possible -- and by improving Tesla's cash position (helping it to operationally succeed) to boot.

Another guy thinking about buying Tesla stock.

MattGrommes 13 hours ago 0 replies      
Sarah Lacy did what I thought was a great hour-long "Fireside Chat" with Mr. Musk: http://pandodaily.com/2012/07/12/pandomonthly-presents-a-fir...

It's well worth listening to.

debacle 15 hours ago 0 replies      
> we expect Tesla to become cash flow positive at the end of next month.

Never in my wildest dreams did I expect that to become a reality. We truly live in amazing times.

JGM564 1 day ago 1 reply      
Exciting quote emphasized in post: "we expect Tesla to become cash flow positive at the end of next month."
anovikov 1 day ago 0 replies      
Hope some good news comes on SpaceX, too. What's up with F-H and its dubious payload figures, especially for high-energy trajectories? How's the cross-feed system coming along (if it hasn't been dropped yet)? Where is the 'super efficient staged combustion methane engine' you mentioned a year back? Any idea what thrust class it is and what the fuel combination is?
tatsuke95 16 hours ago 0 replies      
Not sure if anyone else noticed, but Romney made a crack about Tesla (in the same sentence as Solyndra) as a useless pet project that Obama subsidizes.

I wonder how this will play out politically, especially if Tesla gets into trouble financially. That story is already being spun.

HP CEO: We're screwed (for the next few years) arstechnica.com
59 points by zoowar  11 hours ago   25 comments top 12
thaumaturgy 9 hours ago 0 replies      
From the outside, looking in, HP appears to have tremendous infrastructural problems that at this point are no longer a matter of "We haven't been using a compelling customer management or CRM system for years". (I know that wasn't the only point in the article; still, it seemed like an odd one to have there at all given what we see of HP.)

HP's printers are ... OK. Not exceptional, not market leaders, but OK. But, their printer driver and AIO software is renowned in the tech industry for its awfulness. It is truly, exceptionally terrible. One of the strangest workstation issues we ever had to diagnose was caused by a messed-up HP software installation, where the HP drivers for one printer interfered with the HP drivers for another printer (http://www.robsheldon.com/puzzlepage).

HP's had multiple consumer laptop lines with a stupid overheating issue which consistently cooks the graphics chip almost exactly 14 months after purchase, due not to a manufacturing defect but a design flaw. (A thermal pad is being used in a space which is too large for a thermal pad -- there are seriously guys on eBay making decent side money selling copper shims for these things.)

We recently had an HP AIO workstation in the shop with a bad BIOS. The chip seemed fine, but the BIOS software had become corrupted (possibly due to a virus attempting to re-flash BIOS, although I haven't heard of that actually being tried in quite a while). On a Dell system, this is an easy problem to fix: we go to Dell's services & support page, enter the service tag number of the system, hit drivers & downloads, select the BIOS flashing utility, download and run and done. HP? Not on their website, and their technical support -- while sympathetic -- couldn't help either. They literally had no BIOS flashing utility for that model. Our customer, a student, was out several hundred dollars on an otherwise fine computer because of that.

We've been making a point of warning our clients away from HP for years because this kind of crap just keeps happening over and over and over. They'll call us asking for a recommendation on a new system, we'll give them some specific options, and if it sounds like they want to go shopping, literally we'll say, "Just don't buy an HP, whatever you do."

What a shame. HP used to have a well-deserved sterling reputation. I have no idea what their enterprise business looks like, but it's hard for me to imagine a consumer line being that through-and-through screwed up without also corrupting the enterprise side.

I honestly was relieved -- and heard the same from a number of other support-level companies -- when scuttlebutt had it that HP was going to get out of the consumer market. I think they should have stuck with that plan, scrapped their PC business entirely, gotten the rest of their higher-profit-margin business back in order, and then a few years later re-launched their PC business when they could do it right and without all of the added baggage of a stinky reputation.

terhechte 10 hours ago 0 replies      
Nothing new there. Good to hear that the top level finally sobered up. I worked at HP from 2005-2006 (roughly) as an extern, and was always fascinated by its lack of a main corporate agenda, let alone a vision. Remember, back then they even used to sell HP-branded iPods (by means of a strange deal with Apple that I'll never really understand) (http://www.fscklog.com/hpipodback.JPG).
systems 8 hours ago 2 replies      
And the sad part is we all know what the problem is, because it's always the same problem (more or less) everywhere

Put Designers and Thinkers on top ... not project managers and follow uppers

Those who get promoted and run large organization are almost always not the smart designer thinker type, but the project manager, follow upper with a good spirit and cute time management skills

Apple also recently picked a project manager to put on top; let's see if I will be proven right. AAPL closed today at 666.8 (cute number). I think it will go back to 30 USD in half the time it took it to go up from there (AAPL was selling for 30 USD in 2004)

rdtsc 1 hour ago 0 replies      
> "with a full rebound not in sight until 2016."

They obviously have more data and information to predict the future, but to me this sounds like a death march. The general has to tell the troops "We'll Win! Attack!" while everyone can sort of see the end is near and this is a losing battle.

nekojima 10 hours ago 1 reply      
"it's been over 7 years since we've had a new lineup of multifunction printers,"

I have one of those 7 year old MFPs and it has worked great, but HP has kindly sent me a new replacement printer because of a cartridge issue I've had in the last few months. The new printer is three "generations" newer, looks great, has a few new features, but has significant operating flaws like paper jams and crashing my brand new (2 week old) laptop and a year old desktop.

Already spent three hours on the phone with HP tech support with the new printer, after five plus hours on the phone with the old printer, and really learning how much they need to either be retrained (or trained to begin with, why do I have to explain what duplex means) or re-shore them to North America. Already been escalated, now waiting for a local case manager to call back. But I know that means sending me a new printer, instead of trying to fix this unit, because they don't do tech support here.

ChuckMcM 10 hours ago 1 reply      
I'm always fascinated by these sorts of reboot things. Many people have experienced 'The Purge', where a new CEO is brought in and the 'old guard' is moved out of the way and the 'new guard' is brought in to replace it. It's human nature, I think, to change leadership that way. But it's fraught with peril. In particular it's dangerous if you do it several times: after a few purges you don't have enough people who have the skills to lead to pick from.

To pick an example from recent memory, look at Apple during its great withering. Sculley -> Spindler -> Amelio -> Jobs. Looking at how Jobs re-created that company from the inside out was fascinating.

I don't know if Meg Whitman can accomplish that much but she has the basic resource. HP's expertise is wide and deep. I wish her luck.

bduerst 10 hours ago 2 replies      
They're fine. HP is probably gearing up for a big bath next year, in which they can bundle their losses/liabilities from several years in one year, and then look more profitable in years to come.

I say this because HP has the largest marketshare globally (17.2%), and has cornered the U.S. government contracts with elitebooks.

A better indicator of how well HP is doing would be their revenue relative to the industry and relative to the past years. Unlike profits, revenue numbers are harder to manipulate - and HP revenue is up 11% since 2009.

owenfi 9 hours ago 0 replies      
Looks like she opened the first envelope: http://www.mondaynote.com/2010/05/30/ballmer-just-opened-the...

(Not saying her statements are incorrect, though.)

fruchtose 10 hours ago 1 reply      
I think HP has major problems, many of which stem from the board of directors. The fact that the BoD selected Léo Apotheker is one of the biggest indicators of this. Hiring Meg Whitman will not fix HP if the BoD stays.
mammalfriend 3 hours ago 0 replies      
It must be extremely frustrating to have been an old-time HP engineer or product designer. To watch the company transition from an innovative technology company to a gigantic services and ink firm with no breakthrough products for, well, over a decade now. And to know that inside the walls of the company HP had early products that weren't too different from the iPhone, or even 3D printers, but made a conscious decision to do nothing with them and to effectively stop working on disruptive innovation.
catshirt 9 hours ago 0 replies      
with the lead-in about the debate i was hoping Meg Whitman was referring to the United States generally and not HP
lobster45 7 hours ago 0 replies      
The HP ProLiant servers are better designed than the Dell PowerEdge servers, but that is about all from HP that I would recommend
Misusing DOM text methods benv.ca
31 points by BenjaminCoe  8 hours ago   3 comments top
yuliyp 7 hours ago 1 reply      
Text is a sequence of characters. HTML is a sequence of tags and HTML-encoded text. Some text can be interpreted as HTML. Some of that HTML can be malicious. The bottom line is if you take text, and you give it to something which expects HTML, you will encounter bugs with non-alphanumerics, XSS holes, or both.

Let's look at the methods discussed in the article. textContent gives you the text inside of an element, ignoring any tags. This text can certainly look like HTML, and that HTML can be malicious.

createTextNode takes text and creates a node with that text as its content. innerHTML of that gives you HTML that, when rendered, is the sequence of characters that matches the text you passed it. If you want a sequence of HTML which cannot contain tags, creating a text node and immediately grabbing the HTML within it certainly is a safe way to do it.

In general, "escaping" is the wrong way to think about it. You have functions which can convert text to the equivalent HTML, and you have functions which extract the text within a DOM node. While sometimes the HTML which renders as a given text string is the same as the string, this is definitely not always the case.
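The text-vs-HTML distinction above can be sketched outside the browser, with Python's html module standing in for the DOM methods (an analogy only, not the DOM API itself):

```python
# Text and HTML are different types: converting text to the HTML that
# renders as that text is not the identity function.
import html

text = '<script>alert("xss")</script>'

as_html = html.escape(text)   # text -> HTML that renders as that text
print(as_html)                # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;

# Extracting the text back out (what textContent does) round-trips:
print(html.unescape(as_html) == text)   # True
```

Handing `text` to something that expects HTML, without that conversion, is exactly the bug/XSS class the comment describes.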

The CIA and Jeff Bezos Bet on Quantum Computing technologyreview.com
88 points by iProject  15 hours ago   53 comments top 8
cs702 13 hours ago 6 replies      
The societal impact of quantum computing would be immense -- the entire security infrastructure of the web would have to be rebuilt from the ground up.

However, many experts -- including Scott Aaronson (who is mentioned in the article) -- have cast doubt on D-Wave's claims, because the company has not yet proved that its machines are producing results by exploiting quantum phenomena like particle entanglement and superposition. In other words, D-Wave's devices may very well be different, but still classical, computers. D-Wave's engineers acknowledge as much: they are quoted in the article stating that "they don't yet know for sure what's happening inside the chip."

The fact that Bezos, a very smart guy, has invested in D-Wave, has changed my perception of the company from "these are likely crazy people making outrageous claims" to "maybe these guys are onto something." Exciting and a bit scary. Time will tell.

wallawe 12 hours ago 1 reply      
Did anyone else catch the temperature miscalculation?

> It is actually a cold gun: the structure is a chilly -259 °F (4 °Kelvin) at the wide end and a few thousandths of a degree above absolute zero at its tip...

4 degrees Kelvin is approx -452 Fahrenheit, which is approx -269 Celsius. Absolute zero is -459.67° F
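The conversions above are easy to check; a minimal sketch:

```python
# Kelvin-to-Fahrenheit/Celsius conversions used in the comment above.
def kelvin_to_fahrenheit(k):
    return k * 9 / 5 - 459.67

def kelvin_to_celsius(k):
    return k - 273.15

print(round(kelvin_to_fahrenheit(4), 2))   # -452.47  (approx -452 F)
print(round(kelvin_to_celsius(4), 2))      # -269.15  (approx -269 C)
print(kelvin_to_fahrenheit(0))             # -459.67  (absolute zero in F)
```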

cing 4 hours ago 0 replies      
On a related note, D-wave is sure posting some interesting job opportunities: https://www.mitacs.ca/o/2012/07/cognition-and-creativity-fra...
SeanDav 12 hours ago 1 reply      
Quantum computing and nuclear fusion power generation are probably 2 of the biggest problems to solve in order to make the next technological leap.

If D-Wave comprehensively cracks Quantum computing I reckon Bezos may become the world's first Trillionaire.

jimwhitson 14 hours ago 5 replies      
Would it be overly conspiratorial to suggest that the CIA and NSA are in fact interested in a large-scale implementation of Shor's algorithm? Perhaps I underestimate how much use intelligence agencies have for optimization...
batgaijin 6 hours ago 1 reply      
What happened with all of the posts from this guy mocking D-Wave?: http://www.scottaaronson.com/blog/?p=431
dyeje 14 hours ago 2 replies      
Stop reading after the second paragraph, quantum computing isn't some magic wand you wave at problems to make them disappear. It's simply a new way of approaching them, and it requires algorithms designed to take advantage of this new approach to computing to solve said problems.
ecolak 13 hours ago 1 reply      
I don't think their big contract with Lockheed Martin should be taken as a good indicator of actual quantum computing. What they built is probably optimized for some of the problems that Lockheed Martin is trying to solve...
The Code Side Of Color smashingmagazine.com
110 points by mmackh  16 hours ago   37 comments top 7
unwind 16 hours ago 5 replies      
Bleh, it seems they forgot to proof this or have it read by someone who ... I don't know, maybe knows a bit more about how it really works.

When computers name a color, they use a so-called hexidecimal code that most humans gloss over: 24-bit colors. That is, 16,777,216 unique combinations of exactly seven characters made from ten numerals and six letters -- preceded by a hash mark.

I mean, "hexidecimal" is hopefully just a typo, but the "explanation" in the second sentence is off by one, it's not seven characters that make up the color, since the hash mark is constant and doesn't contribute. I would object to the "ten numerals and six letters" too, but I guess that's a suitable popular nomenclature.

And I don't even have a lawn ...

cdawzrd 15 hours ago 1 reply      
I'm all for designers learning a bit of the technical background behind colors on the web, but this article is full of technical inaccuracies, misconceptions, and misleading comparisons.

"Tens place"?
"24-bit color" ignores alpha and color palettes
"# means 'This is a hex number'": No, it means a web color expressed as 3 hex numbers.

Plus, I know it looks better to make your colors less saturated, but when you are demonstrating starting from #ff0000 and adding other colors, why not actually display #ff0000 instead of #e93f32?

Finally, spellcheck.

lenkite 14 hours ago 0 replies      
I found this article useful despite its errors. However, this article only reinforces my belief that it is easier to work with the HSL color notation. Young (Yellow-60 degrees) Guys (Green-120) Can (Cyan-180) Be (Blue-240) Messy (Magenta-300) Rascals (Red-0/360) gives the visualization for hue. Saturation is between 0-100, where 0 is grayed out and 100 is full-hue. Lightness varies from 0-100, where 0 is blacked out, 100 is whited-out and 50 would give the color as-is.
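For anyone who wants to move between the notations discussed in this thread, here is a small sketch using Python's standard colorsys module. Note that colorsys returns hue/lightness/saturation in H, L, S order and works on 0-1 floats, so a little bookkeeping is needed:

```python
import colorsys

def hex_to_hsl(hex_color):
    """Convert a '#rrggbb' web color to (hue in degrees, saturation %, lightness %)."""
    h = hex_color.lstrip('#')
    r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
    hue, light, sat = colorsys.rgb_to_hls(r, g, b)   # note the H, L, S order
    return round(hue * 360), round(sat * 100), round(light * 100)

print(hex_to_hsl('#ff0000'))   # (0, 100, 50)   -- pure red
print(hex_to_hsl('#00ffff'))   # (180, 100, 50) -- cyan, opposite red on the wheel
```

The hue values match the mnemonic above: red at 0, cyan at 180, and so on around the wheel.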
NelsonMinar 14 hours ago 2 replies      
Trying to work with color in RGB triplets is like trying to assemble furniture with a plastic fork. Any discussion of color for programmers needs to be in terms of HSL, with a discussion of Lab for completeness and RGB for dealing with legacy systems.
tharris0101 14 hours ago 1 reply      
As someone who is colorblind, a lot of my design is based on the logic behind colors rather than looks. Of course, I always let an actual designer approve, tinker with, or redo my work. There are a lot of inaccuracies in this piece but I like the gist of it.
dphnx 15 hours ago 1 reply      
It's a good explanation of how hex colour codes work, but in this day and age it's so much simpler to use (and think in) rgb(n, n, n) notation. Not to mention the hsl(n, n%, n%) notation that is supported by modern browsers.
brianfryer 15 hours ago 1 reply      
Excellent read. I've been interested in developing some sort of tool to help me select, save, and organize colors for custom palettes.

This will definitely help -- thanks!

Filepicker.io's "Don't write off HTML5" contest filepicker.io
32 points by liyanchang  9 hours ago   21 comments top 9
cek 7 hours ago 1 reply      
Even if Mark's view is correct (debatable), the only way HTML5 will ever 'get there' is if the developer community continues to build apps on HTML5 that push the boundaries of performance.

The presumption that the developer community can impact this is also debatable. What proof points in history are there for technologies that became popular or standard because DEVELOPERS decided to adopt them? In most cases you come up with, I bet you'll find their success was short-lived or mediocre, compared to, say, Objective-C and iOS or Win32.

Platforms (and related technologies) become popular/de-facto-standards because CUSTOMERS buy the value proposition of the product. Developers then adapt/adopt.

I'm not saying developers have no impact, I just don't think the impact is enough to really move the ball.

jarjoura 1 hour ago 1 reply      
I don't dispute that HTML5 can perform well if optimized the right way. What I would dispute, though, is the effort involved: it's actually far harder to develop a web app that feels native than to just build native apps. Plus people want access to device sensors, push notifications, background tasks, etc., which are also hard to get at through JavaScript.
gavinlynch 8 hours ago 0 replies      
>>> "Some of us aren't ready to give up on HTML5 yet."

Who was "giving up" on HTML5 just because some founder of some company said so to begin with? I appreciate that Mark's HTML5 app was less efficient than his native app. Sucks for him and his team, I guess. My question is: So what? These are just tools. Pick one that is appropriate for your company, given the entire context of the product you are attempting to deliver, and make a practical and well-thought-out decision. For very many, HTML5 is the right choice. For many others, it's not.

Soo... Sorry, I get bored with hyperbolic, "The Rise of..." and, "The Death of ..." articles.

vhf 8 hours ago 2 replies      
>Choose any technology or any API you want and build a mobile or mobile-web application that showcases HTML5's capabilities

>- Both web or mobile web apps are fine.

Is it just me, or is this quite confusing? Could someone from Filepicker.io make this clear, please? :)

drifkin 2 hours ago 0 replies      
Ben Sandofsky of Twitter wrote a well-reasoned take on the whole HTML5 vs. native apps debate: http://sandofsky.com/blog/shell-apps.html

I've had very similar experiences working on both HTML5 and native apps.

vu0tran 9 hours ago 3 replies      
The Zuck is sort of right though. There is no doubt that the iOS / Android FB app is pretty slow and buggy due to their dependence on HTML5. They've gotten grilled by their users in the past -- although props to them for really improving it recently.

I'm not saying that HTML5 is a bad technology or anything, it just isn't quite there yet.

Good article nonetheless.

uams 9 hours ago 0 replies      
Hmm. So the argument that the web will eventually overtake native mobile apps has been made over and over. Sure, an O(n log n) algorithm is better, but if the constants are bad enough, I'm going with the O(n^2) one. Even the author admits that native apps were around for nearly 20 years. Accounting for the fact that things move faster now, so it might take less time, we're still only a couple of years into a decade-long era of installed mobile applications. Then we still have to account for the bad network connections on phones that make web apps harder to work with.

While I'm skeptical of the mobile argument, I'm super excited about html5 dev conf because I do think HTML5 is going to be big on desktop/laptops.

mahmud 6 hours ago 1 reply      
Filepicker.io spams geek sites. Not a day passes without their hype on some front page.
MatthewPhillips 7 hours ago 0 replies      
I get a delivery failure sending to that email address.
       cached 5 October 2012 07:02:01 GMT