hacker news with inline top comments    24 Feb 2012 Best
Ubuntu for Android ubuntu.com
680 points by dave1010uk  2 days ago   170 comments top 47
bguthrie 2 days ago  replies      
This, or something like it, is the future: the computing device is portable, and adapts itself to the forms of input available. There's no reason why your display should have to be permanently attached to the device that drives it, and increasingly, it won't be.

I don't know what the implications are for Ubuntu or Android. But genuine support for a first-class computing experience is one of the few things that would tempt me back onto those platforms.

mrb 2 days ago 4 replies      
You could push the ease-of-use even further with wireless HDMI, A2DP, and Bluetooth HIDs. Imagine: sit at a desk, without your phone even leaving your pocket. The wireless HDMI monitor, Bluetooth keyboard, and Bluetooth A2DP speakers automatically pair up with your phone. You just start using the device, eg. show a video to your friends, or start working. Stand up and leave. The phone unpairs itself from the monitor/speakers/keyboard, free to be used by the next person.

(If security is a concern, make this less automatic, eg. make the pairing require pressing a button on the phone.)

I have been waiting for precisely that concept to take off for years, namely using your cellphone as your portable computer.
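mrb's pairing flow is essentially a tiny state machine. A toy sketch under stated assumptions (the device names and proximity events are hypothetical; real Bluetooth/wireless-HDMI pairing is far more involved):

```python
class DeskPairing:
    """Toy model of peripherals auto-pairing when a phone comes in range."""

    PERIPHERALS = ("hdmi_monitor", "bt_keyboard", "a2dp_speakers")

    def __init__(self, require_confirmation=False):
        # Security option: require a button press on the phone before pairing.
        self.require_confirmation = require_confirmation
        self.paired = set()

    def phone_in_range(self, confirmed=False):
        # Sit down at the desk: everything pairs (unless confirmation is pending).
        if self.require_confirmation and not confirmed:
            return sorted(self.paired)
        self.paired.update(self.PERIPHERALS)
        return sorted(self.paired)

    def phone_leaves(self):
        # Stand up and leave: everything unpairs, free for the next person.
        self.paired.clear()
        return sorted(self.paired)
```

In the default (automatic) mode a phone entering range pairs with all three peripherals; with `require_confirmation=True` nothing pairs until the user confirms on the phone.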

lifeisstillgood 2 days ago 0 replies      
We have struggled to get a BIOS that is Free / Libre
And where does this leave us now? Just because Ubuntu is free, if the phone manufacturers start to get trusted mobile computing (tm) disease, we are still in trouble.

"Curated" is still not free

And there are some obvious holes - you cant carry a monitor around with you. So you need docking stations to plug into. Do you trust the keyboard in the Public library not to watch your keystrokes?

orbitingpluto 2 days ago 1 reply      
Reminds me of doing the Debian chroot on the Nokia N8x0. That's been around for quite awhile. It's nice to be able to apt-get whatever you need. It's prohibitively slow to use on a device from 2007 though and overclocking is a bit risky and drains the battery quickly.

(Maybe I'm a dinosaur, but I still use one of these things rather than the brand new 1.2 GHz dual core Android phone sitting next to it in my bag.)

I don't really see this panning out unless Ubuntu runs on the mobile screen as well. I'd also hope that the curated experience can be replaced with, well, anything else. Ubuntu is increasingly becoming a forced experience and reconfiguring things is a waste of time. Configuring something to how you want to use it is also a lot more educational than trying to figure out where to disable the new configuration.

dave1010uk 2 days ago 1 reply      
Not 100% sure but I think this is a chroot.

From the features page [0]:

    Ubuntu and Android share the same kernel. When docked,
    the Ubuntu OS boots and runs concurrently with Android.
    This allows both mobile and desktop functionality to
    co-exist in different runtimes.

[0] http://www.ubuntu.com/devices/android/features-and-specs
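If it is a chroot, the rough shape is familiar from the N8x0-era Debian chroots mentioned elsewhere in this thread: a second userland directory tree entered via `chroot`, sharing the one running kernel. A hedged sketch (the rootfs path and helper are hypothetical; Canonical hasn't published the actual mechanism):

```python
UBUNTU_ROOT = "/data/ubuntu"  # hypothetical location of the Ubuntu rootfs

def ubuntu_cmd(command):
    # Build the argv that would run `command` inside the Ubuntu rootfs.
    # Both environments share the one running (Android) kernel, which is
    # what lets them run concurrently rather than dual-boot.
    return ["chroot", UBUNTU_ROOT] + list(command)

print(ubuntu_cmd(["apt-get", "update"]))
# ['chroot', '/data/ubuntu', 'apt-get', 'update']
```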

xpaulbettsx 2 days ago 2 replies      
If Microsoft were smart, this is exactly how their Win8 tablets would work - plug one into a dock and it turns into a desktop PC. Ubuntu and Microsoft are in an awesome place here that Apple is going to miss out on.
lhnn 2 days ago 1 reply      
Shut up and take my money. This is what I've been dreaming of since I got my smartphone. Why should I even have a netbook for general purpose computing? I want to go anywhere with my MID (mobile internet device).
spinchange 2 days ago 0 replies      
Is this going to be available to end users to install themselves, or is Canonical holding out for handset makers to respond to this and partner with them?

Sadly, since so many of the big Android guys are also in bed with or paying some kind of extortion to Microsoft, I would expect there to be some amount of pressure, and possibly economic incentives, for the big Android ODMs NOT to ship this.

Beyond that, is any carrier going to be interested in offering subs Ubuntu? (Idk, maybe?) Put this into the hands of end users first even if it's a "sloppy" / hack-ish install. That's the way to get it out there.

strags 2 days ago 3 replies      
I know it's immature of me to note this, but what's up with the logo to the left of "Ready to talk?". Is it just me, or is it faintly reminiscent of... well, something else?
shapeshed 2 days ago 0 replies      
Canonical may not have the resources or funds of Apple or Microsoft but they are innovating better than both at the moment. Unity is daring to be different on the desktop, and Ubuntu on Android is a simple idea that could really change the way people think about the PC. The ideas might not all work out in the long term but for sheer creative thinking you have got to applaud what they are doing.
lnanek 1 day ago 0 replies      
I admit, this would get me to bite the bullet if integrated with one of the existing laptop dock solutions for Android phones such as the Motorola Atrix 4G Lapdock or the ASUS Transformer Prime. Ubuntu is enough for me to do everything I need a computer for, except for some rare book keeping that has to be done over a VPN only supported on Windows/Mac. Instead of bringing my phone and laptop on all trips I'd just have my phone and laptop dock. The laptop docks seem much lighter and having the same stored data and same wireless data connection without tethering would be handy.
JoelSutherland 2 days ago 0 replies      
I know this is Ubuntu on Android, but all I can think of is Windows 8. Doesn't this seem like an inevitability for Microsoft? Intel even has x86 mobile chips on the way.
cs702 2 days ago 0 replies      
I can see this working really well for younger users, non-power-users, and non-techies who want to carry around their desktop environment and whose needs are met by web apps like Google Docs.

I can also see its potential in developing countries where many people have a phone and a TV but not a PC.

The medium-term goal is sort of obvious: Ubuntu running on the phone with the ability to display Unity on its own tiny screen or on larger external displays and allowing the user to interact with it via touch or via external input devices like keyboards, mice, etc.

vijayanands 2 days ago 1 reply      
I strongly believe this is just the beginning. Mobile phones give you three things: horsepower (recently), mobility, and identity. And there is a plethora of things that could be powered by a combination of the three.

Weirdly, I wrote about something like this back in 2008, albeit in a Nokia-centric world: http://www.vijayanand.name/2008/10/the-future-of-living-how-...

tnorthcutt 2 days ago 1 reply      
"Ubuntu is the killer app for multi-core phones in 2012"

This text is displayed as if it's a quote, but as far as I can tell, it's not: http://goo.gl/vKHOI (link is to a Google search for the above text). If there's anyone from Canonical here, can you comment on why that is presented as a quote, or what/where it's a quote from, if it is in fact a quote?

verelo 2 days ago 0 replies      
I think this is a great step in the right direction, and we've already observed a compression of devices recently. Consider the laptop, desktop & phone: who still uses a desktop? It's really just a matter of time until we compress the laptop and phone. We're a long way off in my opinion (in terms of actually usable hardware), but once we have the power and portable input devices (I think one could already structure an argument that we have them), I don't know what would hold it back.

Good to see we're headed in a sensible direction.

bane 2 days ago 0 replies      
Reminds me of this from 2011 http://www.engadget.com/2011/01/06/motorola-atrix-4g-hd-mult....

Which I think is awesome.

corysama 2 days ago 0 replies      
And, with the "Cotton Candy" Android-on-a-thumbdrive, your next desktop could hang on your key chain.



schpet 2 days ago 0 replies      
Video of ubuntu for android in use:
mitakas 2 days ago 0 replies      
There is a video, showing the functionality. [0]

Now I begin to understand why Canonical made those recent changes. The Ubuntu part of it seems kinda slow, but smartphones are going to get faster. [1]

[0] https://www.youtube.com/watch?v=gUXUjjg9qQ0
[1] http://www.anandtech.com/show/5559/qualcomm-snapdragon-s4-kr...

CWIZO 2 days ago 0 replies      
So this isn't something I can actually buy if I understand this correctly, right?
fredsted 2 days ago 3 replies      
Is this the way Apple is headed? Just dock your iPhone, boom, OS X on the monitor.
charlieok 2 days ago 0 replies      
So it doesn't seem like I can actually run this right now. Am I missing the link to actually get it and set it up? If there isn't one, why are they making an announcement?
spiralpolitik 2 days ago 0 replies      
It's the obvious extension of the current model. Kudos to Canonical for giving it a go.

I was expecting Apple or MS to move in this direction and I seem to recall a POC/patent application/Mock up from a few years back showing an iMac with a removable iPhone/iPod as the home directory. Maybe I'm misremembering.

gjmveloso 2 days ago 1 reply      
It's the official debut of the PC Plus era. This will be huge if Canonical and Google work together. Outstanding!
ajasmin 2 days ago 0 replies      
Can I only use Ubuntu when the phone is connected to a large monitor?

Even having access to some command line packages on the phone would be a big improvement over the minimal busybox stuff that comes with Android.

ilaksh 2 days ago 0 replies      
Yesterday on reddit headline was something like "I can't believe you can play Grand Theft Auto on a _phone_" and my response was "I can't believe we call them phones, they are powerful pocket computers that just happen to have the ability to send and receive phone calls"

I wonder when they are going to come out with portable OLED display sheets and ultralight paper-based keyboards.

raarky 2 days ago 0 replies      
Just curious. What happens if the phone rings and you rip it out of the dock?
tambourine_man 2 days ago 0 replies      
I always find it odd that people call the hardware and OS by the same name.

My first impression was a virtualized Ubuntu on top of Android (the OS).

dfc 2 days ago 0 replies      
Does anyone know who coined the "inside of every X there is a Y trying to come out"?
tmzt 2 days ago 0 replies      
This is pretty cool, but it's an OEM-driven product, dependent on the phone maker to enable it. For over a year I've been working on a port of X to Android as an application, running as a non-privileged user, displaying to a surface allocated through the Android Java API. The port is at http://github.com/tmzt/androix along with build instructions (see the readme).
rhygar 2 days ago 2 replies      
This is cool from a gee whiz/novelty standpoint, but in practice this will have very poor usability. Why? Good touch apps have terrible UI for keyboard & mouse interaction.

For example, swiping, pinch to zoom, etc. Many apps use a swipe to the left or right to perform an action. How would this work with a mouse?

rodolphoarruda 2 days ago 0 replies      
Look at the place I live:


You do not want to "have everything" with you in your mobile while you are in a public place. I have a notebook and an iPad, and, of course, my 4 year old shitty Compaq laptop I take outside when I have meetings. I keep my files inside a truecrypt file vault just in case.

LnxPrgr3 2 days ago 0 replies      
Did I just come out of a 40-day coma? What month is this?

"Android was designed for touch only, and has its hands full winning the tablet wars."

Be careful. I think Steve Jobs might've patented the reality distortion field.

"The Ubuntu desktop sets the standard for ease of use." Compared to what?

"And imagine TVs that become home PCs when you dock your phone: perfect for the emerging market where LTE will be the normal way for new users to connect to the Internet."

Great. My home connection's going to come with a 4GB monthly cap now too?

In a lot of ways this is actually a neat idea, and I could see something close to this catching on. I see a few problems though:

* As fast as my laptop is, I still sometimes wish it had a faster CPU, a better GPU, and more RAM. Modern phones are still around an order of magnitude slower and have a fraction of the RAM. They're not exactly desktop replacements.

* 64GB is an impressive amount of storage for a cell phone. It's pretty weak for a laptop.

* "The Cloud" is an order of magnitude or two slower than my local disk, and my local disk doesn't have a monthly data transfer limit.

* Normal people have no clue what Ubuntu is, and they're not exactly adopting it in droves, even without having to buy new hardware to support it.

So… neat idea, but I don't see this getting off the ground. If it does, though, I see a lot more idle sword fighting in my future.

erikpukinskis 1 day ago 0 replies      
I could've done without the goatse reference, Canonical.
sl4yerr 2 days ago 3 replies      
Does anyone have any real-world experience with this? It looks amazing, but I'd like to hear from someone who's lived with it (if you're out there)...
thechut 2 days ago 0 replies      
I am so excited about this idea, being able to dock and have a full desktop with full applications and keyboard would be awesome. I have BusyBox on my Android phone but it usually isn't enough to make it function like a real Linux desktop. However, I hope that Canonical will stay true to itself and develop and release this as truly open software.

Edited for clarity.

headbiznatch 2 days ago 0 replies      
I never got my jet pack, but I DID get my handheld supercomputer. I love this. I love the idea of it and the 7 year old in me who had his world rocked by that TI-99/4A is stoked. I just can't be upset at any aspect of this - our computing dreams just keep coming true.
ez77 2 days ago 0 replies      
It sounds great, but I'm confused: Are they planning to give this away as a free download? They seem to be targeting manufacturers only.
yread 2 days ago 3 replies      
So what do you do when the phone rings and you're doing something uninterruptible on the PC?
joejohnson 2 days ago 2 replies      
I don't understand: what do you use for input devices? A Bluetooth keyboard and mouse paired with the phone?
arisAlexis 1 day ago 0 replies      
still waiting for the Ubuntu car, what's taking them so long...
chj 2 days ago 0 replies      
Ubuntu is the future of open computing. Look at what Apple does to OS X and what Microsoft is going to do with Windows.
reactor 2 days ago 0 replies      
aremie 2 days ago 0 replies      
Finally an OS company has seen and grasped this big business opportunity. This could mean a big boom in Ubuntu users. A win for linux.
translated 2 days ago 0 replies      
Linux for human beings for Android?
modev 2 days ago 0 replies      
I'd love to test this out as soon as possible. Getting my Android phone to try it out. ;)
The myth of the eight-hour sleep bbc.co.uk
564 points by gps408  1 day ago   159 comments top 39
VonLipwig 1 day ago 3 replies      
I am really into 'do what works for you'. You may find that sleeping in two 4-hour blocks changes your life. You feel alive!

Alternatively you may find yourself more tired. Personally I like sleeping 8-9 hours a night. I find myself fairly alert a few minutes after I wake up and I can start my day. It certainly doesn't feel 'unnatural' to me.

I am also a big fan of sleeping when other people sleep so I can enjoy time with friends and family. Unusual sleep patterns typically mean missing out on some of this time.

Xurinos 1 day ago 4 replies      
There is only one study in this article, and it involves how a group of people adapted to a 14-hour sleep pattern. Other than that, there are no studies of importance here, nothing that confirms concretely that this kind of segmented sleep is effective for humans. It is based on historical hearsay but cannot make a prescriptive judgment. The evidence is purely anecdotal. Please be careful when reading things like this that you do not immediately form a blind belief or justification.
mchafkin 1 day ago 1 reply      
The New York Times took on this topic a few years back in a very good article that argued that the whole idea of the 8-hour sleep was invented by the mattress industry (and other purveyors of sleep products), and that humans don't need anywhere near 8 hours of continuous sleep.

Ironically, all of the industry's marketing makes people anxious about getting enough sleep--and makes it harder for them to get to sleep (thus propagating the need for more expensive mattresses and pillows.)


chollida1 1 day ago 2 replies      
My brother in law is a "sleep scientist" at UPenn.

His recommendation is that sleep cycles typically happen in 4 hour intervals so it's best to sleep 4 or 8 hours a night.

Getting up in the middle of a sleep cycle is often as bad as getting less than 4 hours of sleep.

And going to bed drunk is the worst for your sleep cycle.

wisty 1 day ago 1 reply      
Isn't this the proverbial midnight? You went to sleep for about four hours, then woke to have a midnight snack.
mmwako 1 day ago  replies      
Oh my God. This could change my life.

One lifelong problem I've got is sleeping. I have great trouble falling asleep, and my sleeping patterns are very uncommon: some days I sleep 12 hours, some 5, but I'm usually very tired because of this. However, I used to think I had a "gift" of being really creative and having my best ideas just before sleep, just after waking up, and during insomnia episodes. I've since discovered that many suffer from this phenomenon (anyone here?). This article could explain a lot!

ericdykstra 1 day ago 1 reply      
It would have been nice if there were some actionable items. What are the best segments? Is there something wrong if I can sleep 8 hours without trouble?

Does anyone have links to more research, perhaps with something a little more actionable of a conclusion?

ericmoritz 1 day ago 0 replies      
I am going to try this for a week or two. Having nearly 12 hours dedicated to work, I only have 4 exhausted hours in the evenings for eating dinner and spending time with my wife, kids, and myself.

If my wife and I split our sleep cycle into two 4 hour periods we can better spend my four hours of free time.

The waking period between my commute and first sleep could be spent eating dinner, playing with the kids, and exercising. After my first sleep I can spend time with my wife and study with a rested mind. Theoretically it seems like a good idea. We'll see how I cope after a week or two of trying.

agentultra 1 day ago 2 replies      
Sleep is weird. If I wake up in the middle of the night and go back to sleep... I feel like ass the next day. If I get less than eight hours I'm pretty tired the next day.

But science says that shouldn't happen!?

Not enough evidence to suggest that there's any kind of regularity to how humans "should" sleep yet.

etrain 1 day ago 1 reply      
Google ngram explorer has the term 'first sleep' peaking in use in the English language circa 1870, before declining ~80% to the present day. http://bit.ly/zt8AfD

Tough to say if this was a 19th century phenomenon or was persistent through history based on the structure of their corpus.

mhartl 23 hours ago 0 replies      
There's a Zen saying relevant to the article (and to much of the discussion here). Rather than wringing your hands over "optimal sleep", I suggest trying it out. If your schedule allows, it can work wonders:

When hungry, eat. When tired, sleep.

metatronscube 1 day ago 1 reply      
I have never slept for 8hrs, not even close. I sometimes wonder how people can sleep for that long and what their jobs and lives must be like. After 5-7hrs I usually wake up feeling totally refreshed and ready to go (even better after my first coffee ;))
dkarl 1 day ago 0 replies      
I experience two sleeps sometimes when I'm not alone or when I drink a lot. If I'm alone, I usually read for an hour, and if I'm not alone, reading is the second choice. I always go back to sleep easily and get my full allotment. Until I read this article, it never occurred to me that I might be able to have that experience every night. Honestly, it sounds wonderful; sign me up to be in the vanguard. Now if only I could figure out how to make it happen when I'm alone and not drunk!
rokhayakebe 1 day ago 4 replies      
Just get rid of your alarm clock, and wake up when your body tells you to.
jsherry 1 day ago 1 reply      
"In the early 1990s, psychiatrist Thomas Wehr conducted an experiment in which a group of people were plunged into darkness for 14 hours every day for a month."

'Plunging' people into 14 hours of darkness per day? Does this sound like a scientifically sound way to determine the natural sleeping patterns of humans?

mdkess 1 day ago 1 reply      
For a long time I've wanted to cut myself off and figure out a natural sleep cycle. Normally (for better or for worse), I sleep 5-6 hours a night during the week, then get 8-9 on the weekends, but that's definitely shaped to the work week.

Also, maybe interesting - two 4 hour sleep cycle with a couple hours between is what happens naturally to me after a night of too much drinking. I wonder if there's some reason for that.

andyjohnson0 1 day ago 0 replies      
fredley 1 day ago 0 replies      
This is great, I've always worried about being or feeling awake for long periods at night. What doesn't seem clear though is whether I should be setting aside 10 hours a night for sleep (two 4-hour cycles + 2 waking hours in between).
b3b0p 1 day ago 0 replies      
According to my Bodymedia Fit, I almost never ever get a true 8 hours or more of sleep. Sure, I lay down at 10 PM or so and get up at 6 AM or later, but if this little device is as accurate as they claim it really opened my eyes that I was never getting 8 hours, when I thought I actually was. It says normally I get between 6 and 7 hours.

Edit: I'm curious what the recommended amount of sleep, or if it really is an individual thing per 24 hour or so period.

will_lam 1 day ago 1 reply      
This would be an interesting opportunity for Zeo or Fitbit to chime in, if they were to mine their data, anonymize it, and publish something about their users' sleep patterns.
xfhai 17 hours ago 0 replies      
There is a problem with the title of the article: it could also mean that you don't need 8 hours of sleep. Another point is that the need for sleep varies with age. Babies sleep for a very long time, like 14 hours; old people need only 5 to 6 hours of sleep. The 8-hour sleep is for adults between 25 and 40, I think. Otherwise, a very original point; I had never thought sleeping in 4-hour chunks was natural.
jmsduran 21 hours ago 0 replies      
Interesting article. I usually sleep from 7pm - 10pm, wake up and do stuff (like read HN), then sleep again from 3am - 8am, with just enough time to get ready for work in the morning.

Not too long ago I suffered from minor insomnia and lucid dreaming. It was awful, to the point where I started to dread bedtime. For me, my diet at the time (fatty foods + sugar + caffeine + alcohol) had a lot to do with it, so once I committed to a healthy diet, my body settled back into a consistent sleep cycle. I guess it's really what works for you.

ry0ohki 1 day ago 0 replies      
So if one wanted to try getting into this cycle, do you set an alarm after 4 hours or if you wake up at all naturally in the night, stay up?
grey 1 day ago 0 replies      
Has anyone ever read http://www.supermemo.com/articles/sleep.htm or put it through any verification? It looks good, but I'm no biologist. I've used it as my basis for years to avoid polyphasic schedules and embrace naps and biphasic sleep when my life allows for it.
nazgulnarsil 1 day ago 0 replies      
I seem to operate best on a 6 hour block and a 3 hour block. The problem is that doing this my body wants to have a total cycle time around 26 hours.
donniezazen 1 day ago 0 replies      
My sleep is a lot deeper when I regularly sleep around 7 hours. Anything less or more makes me restless. I think I am going to try this: if I get up in the middle of the night, I will work for an hour or two, and when I feel sleepy, I will go back to bed.
af3 1 day ago 1 reply      
now corporations want you to work more than 9 hours per day.
waiwai933 1 day ago 0 replies      
Not that I know anything about this, but isn't it possible that this segmented sleep could be bad for you? Just because our ancestors slept that way doesn't mean it's necessarily good for you, does it? (That's not to say it couldn't be good, just that I don't understand why the article seems to imply that it necessarily must be good.)
kylek 1 day ago 0 replies      
Interesting. Some of the lucid dreaming techniques try to capitalize on the split sleep schedule by taking melatonin before the first segment(induces a REM-less, deep sleep for the most part) and other supplements (vitamin B6, etc) before the second segment, resulting in a longer, more vivid dreaming session.
rokhayakebe 1 day ago 0 replies      
While we are on the subject of sleep, does anyone here experience sleep paralysis?
Shank 1 day ago 0 replies      
I've kept a sleep pattern of 4-5 hours by accident, and I don't seem to function any less efficiently than when I sleep 8 or more hours. I do end up taking a nap on occasion, but that's still 6-7 hours, not 8.
conradfr 1 day ago 1 reply      
Some recent studies showed that a healthy night's sleep seems to be no more than 7 hours.

Segmented sleep is interesting but I don't think a majority of people could balance that with family and work.

felipebueno 1 day ago 0 replies      
Really? I think this is BS... I don't care about statistics or a 1595 painting or whatever.

What I know is if I sleep less than seven hours I feel like crap during the day.

bufo 1 day ago 1 reply      
A great page about sleep cycles is this one
clux 1 day ago 0 replies      
The guy who did this study spent 20 years on it. He found 500 'references to segmented sleep', one of which was a painting depicting people sleeping and not sleeping simultaneously.

That's less than one reference every 2 weeks. Either that guy had the most relaxing job in the world, or it is genuinely difficult to find such references. Assuming the latter, to give the researcher the benefit of the doubt: how deep do you have to look before you start to think that maybe this isn't actually that common after all?

Also, how the BBC title reflects this study is beyond me.

samTeacup 1 day ago 0 replies      
I have this weird sleep pattern where I sleep from around 5pm to 7pm or 7.30pm without an alarm clock. I sleep very deeply, always dreaming, and it's really hard to wake me up (I don't notice the phone ringing right next to me). Then I sleep from 1am to 6am, where I usually don't remember dreaming and wake up very easily.
kakaroto_BR 1 day ago 0 replies      
I'm always tired and sleepy, no matter how many hours I sleep. Taking pills to sleep makes me feel dizzy all day long :(
MrKurtHaeusler 1 day ago 0 replies      
I had 10 hours last night. 8 til 6
Online Python Tutor mit.edu
526 points by llambda  4 days ago   45 comments top 18
pgbovine 4 days ago 1 reply      
whoa what a pleasant surprise seeing my project on the HN front page this morning! just an FYI: this is a "demo-quality" project, not production-quality. it's probably not ready for HN-level traffic! please email me if you have more detailed suggestions or bug reports, thanks.
thefool 4 days ago 1 reply      
This is awesome.

Would be cool to see a version for a lower level language like C that would help people understand the stack and visualize pointer arithmetic.

ht_th 4 days ago 3 replies      
Although I like these kinds of tools, I doubt that they're useful for learning basic programming, as they require a lot of prior knowledge just to understand the visualization. For example, what is a stack or heap to a novice programmer? What will the effect of this visualization be on their construction of an operational understanding of basic programming language concepts? And these constructs, will they be helpful, have no effect, or even be detrimental for developing understanding of more complex programming structures?

On the other hand, in a more advanced course, say at university level, it might help to construct knowledge about how programming languages work internally and how this relates to their already existing understanding of basic programming constructs. So, as a teacher be sure to evaluate the effect of the educational tools you use. I know it is easy to use existing cool tools (and this one definitely belongs to that category), but what is its effect on your students' learning?
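For what it's worth, the gap ht_th describes is exactly what the tool tries to bridge. A tiny example of the kind of program whose visualization presumes that stack/heap mental model (two stack variables aliasing one heap list):

```python
def append_twice(lst, x):
    # 'lst' here and 'data'/'alias' in the global frame are three stack
    # entries pointing at the same heap object, so the mutation is shared.
    lst.append(x)
    lst.append(x)

data = [1]
alias = data           # no copy: a second reference to the same list
append_twice(data, 2)
print(alias)           # [1, 2, 2]
```

A novice without the stack/heap picture often expects `alias` to still be `[1]`; the visualizer draws all three names as arrows into one heap box, which is exactly the prior knowledge question ht_th raises.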

wilfra 4 days ago 1 reply      
If you want to learn Python, MIT OCW Intro to Programming is probably the best resource there is: http://ocw.mit.edu/courses/electrical-engineering-and-comput...
norvig 4 days ago 1 reply      
It is great that Hacker News has caught up with this -- Philip has done a great job with the Python Tutor. Next step: user-definable layout for display of different data types.
ilovecomputers 4 days ago 0 replies      
As someone who went through Intro to CS courses, I must say that including the object-oriented paradigm of Java in an intro class really confused me for a good three years. It wasn't until I learned how data is organized and executed inside a computer that it all became clear to me what programming really consisted of.

This tool is wonderful. It visualizes the inner workings of a computer very well and how it translates to code. I hope this teaching tool is used on students after they've learned some basic programming concepts and syntax.

mckoss 4 days ago 1 reply      
Works great on iPad -- well done!
forbes 3 days ago 0 replies      
This is a fantastic tool. I love it.

On line #9 of the example program, the string "hello" is assigned to y. On line #12 another string "hello" is used. I don't know anything about the inner workings of Python, but I imagine that these strings would be stored on the heap and possibly 'interned' by the interpreter. Maybe this is being ignored for simplicity, or I am way off the mark.

bp_ 4 days ago 2 replies      
The tool is awesome, but I wouldn't rely too much on the instruction bounds. For example, the greatest sums exercise (http://people.csail.mit.edu/pgbovine/python/question.html?op...) can be solved in six "steps" for all input lengths, while it's certainly no O(1) business.

    def maxPairSum(data):
        return sum(sorted(data)[-2:]) # one "step"

agumonkey 4 days ago 0 replies      
100% cpu usage on chrome 19.0.1042.0 canary
0% cpu usage on firefox 13.0a1 (2012-02-18) until foo/bar execution (100% cpu usage too)

Like it anyway. Great job

gahahaha 4 days ago 0 replies      
Does anything like this exist for Javascript?
morenoh149 4 days ago 1 reply      
I need 4 lines to do the mergesort correction. What's the expected way here? I would do:

    if i == len(left):
    if j == len(right):
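For context, those two checks are the merge step's base cases, taken when one side is exhausted (note the comparisons need `==`, not `=`). A sketch of the full merge they would sit in:

```python
def merge(left, right):
    # Merge two already-sorted lists into one sorted list.
    result, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        if i == len(left):          # left exhausted: drain right
            result.append(right[j]); j += 1
        elif j == len(right):       # right exhausted: drain left
            result.append(left[i]); i += 1
        elif left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    return result

print(merge([1, 3, 5], [2, 4]))  # [1, 2, 3, 4, 5]
```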

paufernandez 4 days ago 0 replies      
Thanks a lot! I wish I had something like this for basic C++, my students would love it!
bwarp 4 days ago 0 replies      
I'd love to see this in debugger form!
vng 4 days ago 0 replies      
This should be insanely useful for someone new to Computer Science. I wish I had this during my undergrad career. Good job!!
ya3r 4 days ago 1 reply      
Where was the aliasing?
indubitably 4 days ago 1 reply      
unicode breaks it
beggi 4 days ago 0 replies      
Really, really cool.
EFF Wins Protection for Time Zone Database eff.org
430 points by taylorbuley  1 day ago   34 comments top 9
lsb 1 day ago 1 reply      
Why is there no penalty? Lawyers wasted a lot of billable hours over a suit filed over a copyright claim on facts. If there's no penalty for such frivolous claims, we'll keep having to fight the same fight, wasting money along the way.
click170 1 day ago 0 replies      
Obviously the lawsuit was a non-starter, any company that realized this would retract the complaint, but I was impressed by the apology.
It seems like in every suit I hear about where the company was wrong, they lose the case but still try to say they did nothing wrong and refuse to apologize. I was impressed this company did.
mattdeboard 1 day ago 1 reply      
I love the apology because it sounds exactly like the apology of a man (or organization) who has had it explained to him exactly what a very very bad position his activities have put him in. Really like seeing a complete apology unlike the "We apologize if we caused any discomfort..." ambivalent BS we see all the time.
nkassis 12 hours ago 1 reply      
So that actually raises a question in my mind: does this make any database that doesn't include something new created by someone (I'm thinking book databases, research databases) non-copyrightable?

I'm thinking of phone books as an example.

figital 8 hours ago 0 replies      
dylanvee 1 day ago 1 reply      
Dr. Eggert is a lecturer at UCLA, and a fantastic one at that (not to mention a huge contributor to various GNU projects). This quarter I have the pleasure of taking his Operating Systems class, and last year I got to take Programming Languages taught by him. I'm really happy about this outcome because people like him should never have to have their brilliance tempered by patent trolls.
jakejake 1 day ago 1 reply      
I'm really glad this has been resolved. Now I just wish there was an easier way to use the database. Have you ever had to code something with timezone awareness..? It's kinda nuts.
Matt_Rose 1 day ago 1 reply      
Does this mean that TZ database is back at elsie.nih.gov?
How Mailinator compresses its email stream by 90% mailinator.blogspot.com
411 points by zinxq  2 days ago   69 comments top 17
jrockway 2 days ago 3 replies      
Mailinator is a great product. My favorite part about it is how whenever I register for something, the clever form validation software always rejects the mailinator.com email address. Then I visit mailinator, see their alternate domain name du jour in image form (so bots can't harvest it, hah!), and then that works perfectly. It makes me giggle with joy every time I do it.

It's also nice not receiving ads in the mail every hour of every day just because I wanted to try some new YC-startup's product.

ShabbyDoo 2 days ago 1 reply      
I recently worked on a project where, to cut down on space, I built a custom "compressor" for lists of tree-like records. You might think of a Person record with N past addresses although this was not the actual domain. No records were cross-linked (at least not formally in the code) and the records were trees, not more general DAGs. The data contained a lot of enumerated data types (address type, gender, etc.). I didn't really care about the space usage for 1K records, but I cared about 1M. I used variable length encoding (a hacked-up subset of the Apache Avro project) for integers to take advantage of most of them being small in the datasets. Lots of lookup tables for enumerated values and commonly-repeated in practice string values. Implicit "keying" based on a record schema to avoid specifying key identifiers all over (our data was not very sparse, so this beat out a serialized pattern of KVKVKV etc.). I thought about taking advantage of most strings having very limited character sets and doing Huffman encoding per data element type, but the results were good enough before I got there. A co-worker also noted that, because parts of these records were de-normalized data, subtree pattern recognition could provide huge gains. I added some record length prefixes to allow individual records to be read one-at-a-time so that the entire dataset would not have to be read into memory at once. IIRC, compression speed was 2-3x gzip(9), and large record sets were 1/10th the size of using Java serialization plus gzip. [Yes, Java serialization is likely the worst way to serialize this sort of data]

Was all of this worth it? It solved the problem of not burning through network and memory, but it was a local optimum. The root problem was that this data came from another system which did not provide repeatable reads, and providing them would have been a massive effort. However, our users wanted to meander through a consistent data set over the course of an hour or so. To provide this ability to browse, we throw these records into a somewhat transient embedded H2 DB instance. The serialized format is required primarily to provide high availability via a clustered cache. In retrospect, I would have pushed for using a MongoDB-esque cluster which could have replaced both H2 (query-ability) and the need for the serialized format (HA).

It surprised me that there were no open source projects (at least Java-friendly ones) which provided compression schemes taking advantage of the combination of well-defined record schema and redundant-in-practice data. Kyro (http://code.google.com/p/kryo) comes closest as a space-efficient serializer, but it treats each record individually. Protobufs, Thrift, Avro, etc. are designed for RPC/Messaging wire formats and, as an explicit design decision (at least in the protobufs case) optimize on speed and the size of an individual record vs. the size of many records. The binary standards for JSON and XML beat the hell out of their textual equivalents, but they don't have any tricks which optimize on patterns in the repeated record structures.

Is this just an odd use case? Does anyone else have a similar need?
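The variable-length integer encoding mentioned above (the base-128 varints used by Avro and protobuf, usually paired with zig-zag encoding so negative numbers stay small) can be sketched in a few lines. This is an illustration of the general scheme, not the poster's actual code:

```python
def zigzag(n):
    # Map signed ints to unsigned so small magnitudes get small codes:
    # 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    return n * 2 if n >= 0 else -n * 2 - 1

def encode_varint(n):
    z = zigzag(n)
    out = bytearray()
    while z >= 0x80:
        out.append((z & 0x7F) | 0x80)  # high bit set: more bytes follow
        z >>= 7
    out.append(z)
    return bytes(out)

def decode_varint(data):
    z, shift = 0, 0
    for b in data:
        z |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:
            break
    return (z >> 1) if z % 2 == 0 else -((z + 1) >> 1)  # undo zig-zag
```

Any value in [-64, 63] fits in one byte, which is where the space win comes from when most values in a dataset are small in practice.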

markbao 2 days ago 1 reply      
Great read. I wish there were more articles like these.
hello_moto 2 days ago 1 reply      
Both blogs (mailinator and paultyma) are awesome. I need more stuff like this and less of the typical "Web 2.0", "how we use NoSQL", cache-everything posts (often written without a clue how to cache properly; 37signals' caching solution, by contrast, is very much in line with Mailinator's techniques: smart and elegant).
davesmylie 2 days ago 1 reply      
I run a similar (though waaaay less popular) site. My mail is stored on disk in a mysql db so I don't have quite the same memory constraints as this.

I had originally created this site naively stashing the uncompressed source straight into the db. For the ~100,000 mails I'd typically retain this would take up anywhere from 800mb to slightly over a gig.

At a recent rails camp, I was in need of a mini project so decided that some sort of compression was in order. Not being quite so clever I just used the readily available Zlib library in ruby.

This took about 30 minutes to implement and a couple of hours to test and debug. An obvious bug (very large emails were causing me to exceed the BLOB size limit and truncating the compressed source) was the main problem there...

I didn't quite reach 90% compression; my database is now typically around 200-350mb, so about 70-80%. Still, I did manage to implement it in about 6 lines of code =)
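The same quick win is available from Python's standard zlib module (the commenter used Ruby's Zlib, which wraps the same library). The ratio depends entirely on how repetitive the input is; the sample text below is made up to mimic a redundant email:

```python
import zlib

# Emails are highly redundant: repeated headers, boilerplate bodies.
email = ("Received: from mx.example.com by mail.example.com\n" * 20
         + "Subject: Welcome!\nHello, thanks for signing up.\n" * 5)
raw = email.encode()
packed = zlib.compress(raw, level=9)
print(f"{len(raw)} -> {len(packed)} bytes "
      f"({1 - len(packed) / len(raw):.0%} saved)")
```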

Maascamp 2 days ago 0 replies      
Great write up. One of the more interesting things I've read on here in a while. Thanks for sharing.
pkulak 2 days ago 2 replies      
Redis works great as an LRU cache and is much more space-efficient than an in-process LinkedHashMap, especially when the keys and values are small. Plus, an LRU wreaks havoc with the Java generational garbage collector as soon as it fills up (every entry you put in is about guaranteed to last until the oldest generation, then likely be removed).
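For reference, the in-process LRU being contrasted with Redis looks roughly like this (a Python sketch of what Java's access-ordered LinkedHashMap gives you for free):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)   # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

Every entry lives until it is evicted, which is exactly the long-lived-object pattern that stresses a generational collector.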
newman314 2 days ago 1 reply      
Reading about another algo (Locality Sensitive Hashing) as referenced in the first comment.


funkah 2 days ago 0 replies      
Mailinator, even considering any praise it has ever gotten, is still one of the most underrated tools on the internet. I love it and use it all the time.
pbiggar 2 days ago 1 reply      
In an aside he mentions you should use bubblesort instead of quicksort for small arrays, due to cache locality, etc. I'd recommend using insertion sort instead of bubblesort - it does much better in both cache locality and branch performance (one branch prediction miss per key).
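The insertion sort being recommended, for reference (standard algorithm; the cache-friendliness comes from the purely sequential scans, and the inner loop's comparison is the single hard-to-predict branch per key):

```python
def insertion_sort(a):
    # Sorts in place; each key is shifted left past larger neighbors.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift larger element right
            j -= 1
        a[j + 1] = key
    return a
```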
skrebbel 1 day ago 0 replies      
Voila. 90%. (Two notes: 1: that's a reasonable average at least... sometimes better, sometimes worse and 2: I realize I'm not exactly sure what "Voila" means, looking that up now).

If mailinator wasn't already awesome, his writing about it sure is.

wolf550e 2 days ago 1 reply      
1. I think the author calls DEFLATE an LZW algorithm. It isn't.

2. Has the author looked at Google Snappy? It does 500MB/sec.

There is a pure-C implementation that might be easier to port:

steffes 2 days ago 1 reply      
Just when I thought I knew everything there is to know about compression algorithms, along came Pauli, and Voila, mind now blown.
__alexs 1 day ago 0 replies      
The line based thing smells like doing LCS but with string elements of length N rather than 1...
jorangreef 1 day ago 0 replies      
Something else that might work is content-dependent deduplication, with variable chunk boundaries determined by a sliding Rabin-Karp-style (or XOR) fingerprint on the content and a second cryptographic hash calculated for chunks where the cheap fingerprint has a match. It's naive and can find matches across headers, body and attachment parts.
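A minimal sketch of the content-defined chunking idea described above: a polynomial rolling hash over a sliding window, declaring a chunk boundary wherever the hash's low bits are zero. Real dedup systems use tuned Rabin fingerprints plus minimum/maximum chunk sizes; the constants here are illustrative assumptions:

```python
def chunk_boundaries(data, mask=0x3FF, window=16):
    """Return boundary offsets where the rolling hash of the last
    `window` bytes has its low bits (per `mask`) all zero."""
    B, MOD = 257, (1 << 31) - 1
    Bw = pow(B, window, MOD)
    h, bounds = 0, []
    for i, byte in enumerate(data):
        h = (h * B + byte) % MOD          # slide new byte in
        if i >= window:
            h = (h - data[i - window] * Bw) % MOD  # slide old byte out
        if i >= window and (h & mask) == 0:
            bounds.append(i + 1)
    return bounds
```

Because a boundary depends only on the bytes inside the window, identical content produces identical chunks even when it is shifted by an insertion earlier in the stream, which is what lets the deduplicator find matches across headers, bodies, and attachments.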
dredmorbius 2 days ago 1 reply      
Further efficiencies can be gained by removing extraneous apostrophes from possessive "its".
iag 2 days ago 0 replies      
Reading this article makes me giggly inside.
Remove Google Search History Before New Privacy Policy Takes Effect eff.org
390 points by bootload  1 day ago   150 comments top 28
jellicle 1 day ago 3 replies      
I'm pretty sure EFF is wrong here. Google's Web History is basically a public version of your search history. You can turn on or off whether you want Google to rub your face in its knowledge about you. But it retains that knowledge even if you turn off the rub-in-your-face personalization part.

Google retains a complete history of your interactions with them, which is not subject to this Web History setting, not deletable, not removable, and will be shared across its properties.

Short reply: This doesn't remove Google's search history of your searches at all.

potatolicious 1 day ago 0 replies      
I for one am glad that Google at least provides this option. I'm sure that Facebook, Zynga, and many of the current startup poster boys would not, given the same opportunity.
rpedroso 1 day ago  replies      
The official title of the article is "How to Remove Your Google Search History Before Google's New Privacy Policy Takes Effect", and the writing itself is much more neutral than this submission title.

I'm still not sure why people are afraid of Google's new privacy policy. I understand that there are people who have specific privacy needs, but outside that scope I doubt you have anything to worry about.

It's doubtful at best that Google's "log" of you would become compromised (unless your personal account were compromised, but then this would have been a problem anyways!). It also isn't the case that some Google employee is reading row after row of Google's customer DB snooping on individuals.

Google isn't some unified entity; your data is being manipulated by advertising algorithms to tailor ads for you. Unless you care about a CPU "knowing" your secrets, or you have specific privacy needs/concerns, none of this is a problem.

Maybe someone can surprise me with some good reasons to be concerned, but until then I am trusting Google.

psadauskas 1 day ago 4 replies      
Someone should write a Firefox plugin to allow people to anonymously exchange google cookies, like you can with the grocery store club cards.
aschobel 1 day ago 1 reply      
Is there a way to export the search history?

I have close to 45k searches over 7 years, would love to keep that data and mine it myself.

MengYuanLong 1 day ago 1 reply      
I never realized you could access your search data history. It was really fascinating to see the different visualizations including my search concentration by hour and days of the week.

That said, even as a non-statistician, it wasn't hard to imagine the amount of information Google could infer by focusing on an individual. I had a data set exceeding 50k searches with the resulting click-throughs on my account. (Plus, they can of course leverage their massive db to help eliminate anomalies like Whitney Houston)

In addition to the basics like big item purchase history, hobbies, and problems (e.g. sickness); I wouldn't be surprised if Google could predict my relationship status and sexual preferences.

Custom search results are great, but I'm extremely happy to have been given the option to delete that profile. If anything, it is the single largest factor pushing back to Firefox and possibly DuckDuckGo.

jhancock 1 day ago 5 replies      
When I go to https://google.com/history while logged into my google mail account, I get redirected to https://www.google.com/

The redirected Google home page clearly shows me as logged in. Am I doing it wrong?

CodeMage 1 day ago 0 replies      
After doing what EFF recommended, the Web History is "paused" indefinitely. If you want to opt out of the service completely, you can use the following URL:


dvdhsu 1 day ago 1 reply      
As far as I know, Google Takeout doesn't support Web History. If they supported it, I'd take out my search history in a heartbeat.

As it stands now, I do sometimes refer back to it, but I don't think it's worth having it if it means giving up the information to advertisers as well.

RyanMcGreal 1 day ago 0 replies      
I always use the following link to search google:


And I have search history turned off.

Gustomaximus 1 day ago 4 replies      
I'm not convinced this works as it should. I turned off history a while back and have since repeated "remove all history". But I still get these ads related to searches and email that is too obscure to be co-incidence - e.g. Lately I get shown ads (sometimes multiple times daily) for "Foreign SIM cards" as I googled this about a month ago. Especially on Youtube. Anyone else notice this who has turned off their web history?
jsz0 1 day ago 0 replies      
There are probably better ways of doing this but I just use Chrome now exclusively for Google services. I don't login to Google on my primary browser anymore. I'm hoping this will limit their stalking a bit. I don't mind if they read my mail since that's the price I pay for using the service but I'm getting a little paranoid about the other things they might be doing. Better to keep it sandboxed away from the rest of my browsing.
shareme 1 day ago 0 replies      
The issue is not with start-ups or established players; it's with the US Congress and the DOJ, who have consistently used 9/11 to over-reach and control things so that they do not have to hear dissent. Sort of what the standard Russian citizen experiences on a daily basis, but they are not alone in that experience.

Thus, when did we become a less free 'Third World Country'?

My apologies to citizens in Third World countries as I lack the vocab early this morning to express it in a different way.

morsch 1 day ago 1 reply      
I knew about the search history, but apparently a number of people didn't. This makes me wonder: Are there any other semi-hidden Google services which I can "clean out"?
simon 1 day ago 1 reply      
I use three browsers. Firefox for general browsing (technology and theology), Chromium for Gmail and Facebook, and Opera for blogs.

I figure that keeping Google and Facebook in their own little prison should help greatly with privacy.

I use DDG for searching whenever possible.

Not so much to hide things (as a pastor I got used to being watched 24x7 years ago), but just to preserve what little privacy I have left.

rd108 1 day ago 4 replies      
I'm confused. What does this actually mean?

"You can delete information from Web History using the remove feature, and it will be removed from the service. However, as is common practice in the industry, and as outlined in the Google Privacy Policy, Google maintains a separate logs system for auditing purposes and to help us improve the quality of our services for users."

So.... why would I delete this information if Google still keeps it elsewhere? Do I just have no ability to control what data Google collects on me, even if I agree to stop using its services?

shashashasha 1 day ago 0 replies      
Wow, did not know that this existed. Now I know that across 11 thousand searches on Google, AAPL was my top search, and that my search activity spikes on Mondays and declines through the rest of the week: http://o7.no/wiL2jx
guynamedloren 1 day ago 3 replies      
When somebody's life is horribly dismembered as a result of Google's insane privacy policy (new or old), please let me know. I'll start thinking about privacy and necessary safety measures at that point.

Until then, please stop whining about privacy, because frankly, I just don't see how any of this really matters. My life has yet to be negatively impacted by Google and I don't foresee it happening in the near future.

nemoniac 1 day ago 13 replies      
Show of hands:

Who is actually going to do this? Who isn't?

quink 1 day ago 1 reply      
youtube.com doesn't have any obvious method of removing past watched videos and is even more nefarious than Google will probably ever be (before March 1st, 2012 at least) about tracking visitors.

Additionally, YouTube has become pretty much just a branch of the RIAA and MPAA and their local equivalents, and I'd rather burn down my house than allow them access to my data.

That's why I've been blocking YouTube from setting cookies on my computer, and that's why other people should too.

gcb 1 day ago 0 replies      
when i go to that page with my spam account, which i'm logged in by mistake when i do most of my searches anyway, i get one screen asking to enable web history, with a button saying "no thanks". and i never get to the calendar screen.

with my de facto account, it shows the calendar. and there's no way i can find a way to that "no thanks" button

esalazar 1 day ago 0 replies      
Looks like government doesn't like either, http://online.wsj.com/article/SB1000142405297020391830457723...
ontoillogical 1 day ago 1 reply      
I use google apps for domains and I get the following message: "Web History is not available for MY_DOMAIN. Learn more about Google products you can use with MY_EMAIL."

Is Google collecting this data and not giving me the option to turn it off, or are they not collecting the data?

facorreia 1 day ago 0 replies      
Interesting. Although I'm not sure I care if Google applications have access to information about my location, interests, age, sexual orientation, religion, health concerns. I'm a 42-year-old, male, married, Catholic software developer living in Brazil.
downx3 1 day ago 0 replies      
Can someone possibly paraphrase the new privacy policy? Or at least point out significant changes please.
gildur 1 day ago 0 replies      
The web history seems to have been off for me by default.
thedangler 1 day ago 0 replies      
I never had it turned on in the first place.
Twitter to move away from Hashbangs storify.com
356 points by ChrisArchitect  3 days ago   77 comments top 18
simonw 3 days ago 3 replies      
This is fantastic news.

From the recent tweets by https://twitter.com/danwrong it looks like Twitter are moving entirely to HTML5 pushState, and leaving IE users with full page refreshes rather than continuing to serve them #! - Dan says "I'm not sure why everyone is so adverse to page refreshes these days. You can make them fast too."

Of course, Twitter are going to have to include a piece of JavaScript on the http://twitter.com/ homepage which checks for a #! and redirects the user to the corresponding page - and they'll have to keep that JavaScript there forever, since they have nearly two years worth of links that they need to avoid breaking. One of the many reasons #! is such a nasty hack.

In terms of performance, this is going to make Twitter a lot /faster/ for me - I often open Twitter profile pages in new windows (due to working on Lanyrd) and each new window has to pull in and execute a HUGE chunk of JavaScript before it will display the page. Being able to just load a regular HTML page will be much faster for me.

Nitramp 3 days ago 2 replies      
The actual URL for the informative blog post is this: http://danwebb.net/2011/5/28/it-is-about-the-hashbangs

The linked blog post only contains some relatively meaningless Twitter messages and the hyperlink as text, not as an actual link.

One of the things the post doesn't mention (it's sort of implicit in "going under the radar") is that with hash bangs, every request has double the round trip time to retrieve the initial data being displayed, as the server cannot know what data the client wants to retrieve. This makes a lot of nifty performance optimizations impossible.

nikcub 3 days ago 4 replies      
Good. A lot of developers justified doing it in their own projects because Twitter and Gawker were doing it. Now that one of the headline sites is no longer using it (and will hopefully condemn it) we can file this episode to history and never speak of it again.

Edit: wouldn't it be awesome if Google (they did start this, after all) would allow sites using hashbangs to auto-update all indexed URLs

jquery 3 days ago 1 reply      
Twitter's implementation of the hashbang was awful. It broke the back button and it was slow. I don't think it's a fair representation of the technique.

EDIT: And based on their implementation, I wouldn't trust anything their engineers have to say about hashbangs either.

guelo 3 days ago 0 replies      
Good. Now if they would get rid of minified urls they would be done with their damage to the web.
technomancy 3 days ago 0 replies      
I've been using the mobile version on my laptop for a few months now since the hashbang was so sluggish, but this could get me to switch back.
kylemaxwell 3 days ago 4 replies      
Can somebody explain to non-web-types why this matters, other than making the URL itself look cleaner?
protospork 2 days ago 0 replies      
For anyone with a high latency (huge swathes of the US are still stuck with satellite or mobile and the tech industry seems to have ignored this), twitter is a nightmare. The first pageload only pulls the empty 'framework' page, then a series of js requests pull the information. You can't walk away while it loads, either, because it will register the latency and display errors instead of content.
zaidf 3 days ago 3 replies      
I thought this was about twitter moving away from hashtags. For some reason I got a bit excited, maybe because I find the vast majority of hashtags to be annoying noise. That said, I know they serve a purpose in specific use cases and I don't know of a better alternative.
johnbender 3 days ago 0 replies      
"PushState or bust" means there will be no support for IE version 9 or earlier, which makes me wonder if they'll take the approach of layering replaceState on top of the existing hashbangs thereby fixing the deeplinks issue.
martindale 3 days ago 0 replies      
Thank god. I've always felt uncomfortable that every link normal users share (I manually remove them) is technically pointing at their front page.
xpose2000 2 days ago 0 replies      
As far as I know, the proper way to create a modern web app to deal with location.hash fallback and pushState, replaceState etc is to use https://github.com/balupton/History.js .

Comments/Corrections are welcome.

lucb1e 3 days ago 0 replies      
I especially agree with that last comment: people are so hyped about avoiding page refreshes for speed that they quite often trigger the opposite effect.
swah 2 days ago 1 reply      
OTOH isn't this the "modern way" to develop a web app?

HTML templates + AJAX on the client and a REST/JSON API that you can reuse for iOS apps on the backend?

firefoxman1 3 days ago 1 reply      
Even though Twitter has decided it's either "pushstate or bust" what sort of fallback for pushstate exists for those of us who care about all our users, not just the ones with good browsers?
zerostar07 2 days ago 0 replies      
How about they move away from URL shorteners too?
thelicx 3 days ago 0 replies      
That totally makes sense. Time to pushState!!!
altcognito 3 days ago 0 replies      
Dan says "I'm not sure why everyone is so adverse to page refreshes these days. You can make them fast too."

Rolls eyes I don't need Twitter (140 characters should be enough for any application!) telling me how "fast" their pages load. Particularly coming from the notorious fail whale.

Our unrealistic views of death, through a doctor's eyes washingtonpost.com
343 points by llambda  4 days ago   251 comments top 35
coolestuk 4 days ago 2 replies      
"At a certain stage of life, aggressive medical treatment can become sanctioned torture. "

I just went through a month with a 90 year old friend whose life ended almost exactly like the one in that article. He had one lucid 30 minutes when I was there and his family was there (he had been a general physician for almost 50 years). In his brief period when he had the energy to try and communicate whilst almost totally paralysed, it was clear he was telling the attending doctor that he wanted them to stop all medication and let him die. His own family could not face that fact, and said they'd ask him again the following day (unfortunately the cowardly doctor backed them up on this). He was never again lucid or strong enough to insist that treatment be stopped. He lived for another 10 days, struggling to breathe, almost totally paralysed, unable to control his bowels.

This was a man who when I last went on holiday with him at the age of 85, he insisted on carrying his own suitcase and refused a wheelchair, even though he had trouble walking and had blood pressure and angina problems.

I don't blame his family not being able to make that decision (it's so hard to let go of someone one loves). But his last weeks were undoubtedly torture, and they know they refused to follow his wishes. It was just terribly sad and an awful dilemma.

I was really glad of something else I read on HN about 6 months ago, where a doctor had a brain tumour (or something like that) and instead of treatment, he lived out the remainder of his life doing the things he loved. I think that idea was what meant I could come to terms with the need to respect my friend's last wishes. I just could not convince his family.

bradleyland 4 days ago 4 replies      
At first, I was confused by this statement, given the data:

> "...modern medicine may be doing more to complicate the end of life than to prolong or improve it"

65 -> +12
85 -> +4

65 -> +19
85 -> +6

The engineer in me said, but we've improved! But then I realized that evaluating life by measuring in years is like reviewing tech products by looking at spec sheets.

"But it has more megapixels!? Aren't megapixels what we want?"

Reality is far more subtle.

mistercow 4 days ago  replies      
> our culture has come to view death as a medical failure rather than life's natural conclusion.

Death is a medical failure, just like our inability to cure herpes is a medical failure. That there's no way to overcome the failure yet does not imply that it is not a failure.

scott_s 4 days ago 1 reply      
Related article from a few months ago, "How Doctors Die": https://news.ycombinator.com/item?id=3313570
Alex3917 4 days ago 2 replies      
I think the fundamental misunderstanding is that most people think that people die from diseases, but in reality it's much closer to the other way around; people get diseases when they are about to die. Even if we had the cure to every single cancer the average life expectancy would only go up by about 3 years, because most cancer patients would just die from something else anyway.
polyfractal 4 days ago 2 replies      
Two poignant quotes from the article really stood out to me:

"When their loved one does die, family members can tell themselves, “We did everything we could for Mom.” In my experience, this is a stronger inclination than the equally valid (and perhaps more honest) admission that “we sure put Dad through the wringer those last few months.”"

"A retired nurse once wrote to me: “I am so glad I don't have to hurt old people any more.”"

Makes you stop and think about how we treat end-of-life situations.

kiba 4 days ago 2 replies      
We shouldn't inflict unnecessary pain and suffering but we must fight death and continue medical research for the future generation of elders so that they will be able to see their great-great-great grandchildren and beyond and be healthy at the same time.
srean 4 days ago 0 replies      
I went straight to the HN discussion rather than the WP page and on my way was thinking to myself, hope some one brings up the issue of "right to die". To my pleasant surprise that is the central topic of the discussion.

Many have argued for prolonging the life span of humans to the point of immortality. Its a thought provoking idea to entertain. One comment ponders rhetorically, wouldn't it be nice to have Einstein and Tesla around.

Not only am I ok with right to die, I absolutely covet it. Not just for myself but the entire society. I am not so sure about "right to life", though it might seem such a no-brainer,

Though we use the word "right" they are often privileges and the important question to think over is who gets to exercise the privileges. That is never uniform, it always comes down to who has the wherewithal to secure that privilege. It is this that causes me to worry.

Sure it would be great to have Socrates and Einstein with us if they chose to, we don't know if they would have. Many assholes would, but more seriously, one can conjecture that the dictators who were not defeated but died naturally would probably choose to live. World history would be a lot different. Would authoritarian regimes live longer ? Difficult to answer.

Then there is the question of ideas. Ideas, both good and bad, they often die not because better ideas replace them, but because their champions die. I don't know whether this is an argument in favor of or against prolonging life, but the need to discuss it will only become more urgent over the years.

ThaddeusQuay2 4 days ago 0 replies      
This reminds me of Percy Bridgman, whose work with high pressure got him a Nobel Prize in physics, and which led to the creation of synthetic diamonds at GE.

"Bridgman committed suicide by gunshot [at age 79] after living with metastatic cancer for some time. His suicide note read in part, "It isn't decent for society to make a man do this thing himself. Probably this is the last day I will be able to do it myself." Bridgman's words have been quoted by many on both sides of the assisted suicide debate."



EDIT: I forgot to end with: Mr. SENS, FTW.


michaelbuckbee 4 days ago 0 replies      
My uncle recently passed away (pancreatic cancer) and the people from Hospice were all amazing in helping him and the family.

To my mind they embody the different way of handling end of life issues than standard medical treatment.

lizzard 3 days ago 0 replies      
When I am clearly near death I would prefer pain relief and hospice-style care at home rather than extreme intervention. And yet, as a wheelchair user (on and off) for the last 20 years and a person in constant pain, I've had people tell me to my face they'd rather be dead than be "disabled". I would like to at least mention the importance of listening to people with disabilities on this issue. Because of societal prejudice against people with disabilities, often people's judgement is that we would be better off dead than suffering or impaired. When a disabled person is depressed or suicidal, they are encouraged to die by fans of the likes of Peter Singer or Kevorkian, rather than treated for depression, and helped to have the medical and personal care, and societal infrastructure, that might improve their enjoyment of life.

So, please keep this in mind and perhaps read up on the complexity of the issues -- from the perspectives of disability rights activists as well as doctors or the caretakers of people who are extremely ill. Our slogan is "nothing about us without us" and yet unfortunately, this article is only from the perspectives of caretakers.

philwelch 4 days ago 0 replies      
I watched my father die last year, and I'm thankful I did the dignified thing and let him go when he was obviously at the end. The metaphor of "checkmate" is quite apt--once you're old enough and you've developed enough problems, every plausible means of escape from one problem is blocked by the next.

If you don't want to spend your last days being tortured to death in a hospital, tell someone you love and trust and write up the legal documents necessary to enforce that decision. It's an incredibly hard decision to make though, and it's sad that not everyone has someone they can trust to make that decision. My dad was lucky to have an only child who followed his wishes. I can't imagine what it would be like to have some hysterical sibling try to undermine my dad's wishes, but that's what a lot of families go through.

rdl 4 days ago 1 reply      
If I were in medicine, I'd sure rather be working on trauma on a healthy patient population (soldiers, young adults, etc.) than illnesses of the elderly or already sick -- vastly simpler, sort of like developing new software (even if it is doing something difficult) vs. working in a complex legacy system with no documentation and lots of hidden mines.
InclinedPlane 4 days ago 0 replies      
These sorts of things strike me as a bit off. One of the problems of being a doctor, much like being a police officer, is that you tend to get exposed to the worst experiences. Good folks may have the occasional run-in with police, but by far the bulk of encounters a police officer has will be with the worst parts of humanity, again and again. This can lead them to become jaded, depressed, etc.

The same sort of dynamics play out with doctors too. People who get better stop spending time in the hospital, and stop being seen by doctors so often. This can lead doctors to a false view of late stage medical care. They see every moment of the pain and suffering, the struggling that turns out to be in vain, the sheer cost of heroic measures, but they often miss out on seeing the benefits. The time given back to patients who then spend happy years with their family away from hospitals and doctors.

evo_9 4 days ago 0 replies      
Well worth reading and really considering before people assume 'death' is a fact of life:


jdavid 4 days ago 0 replies      
This is one of the reasons I find it hard to talk to my mother. She is beholden to grief about her father and is doing everything to prolong the life of my grandmother, even though she can't walk, needs help eating, and constantly falls asleep. On top of that, she hardly knows who is around her. It's sad.

Watching how my parents have dealt with my aging grandparents really gives me pause and has me thinking about creating a living will to make my own wishes very clear.

crag 3 days ago 0 replies      
When we are born, nature promises us nothing except death. It's just that simple.

It's unfortunate we've surrounded it with so much fear.

Our ideas on death must change. Because frankly, it's killing us. Death is the natural order.

I heard a great line from one of the few TV shows I watch, Supernatural:

"Who came first? The chicken or the egg? We're too old to remember. But I know this: I'll reap God too." - the aspect of Death.

berntb 4 days ago 1 reply      
I haven't studied medicine, but I have a smattering of chemistry, so I follow the subject a bit.

The article misses probably the most interesting aspect.

In fifteen years, there will almost certainly be organs grown from stem cells, which will allow transplants without immunity problems.

What would happen to the first example in the article when the heart and kidneys are replaced? When growth factors and (more or less) young stem cells start repairing the brain damage and the Parkinson's?

It might take twenty years. It will still be in time for most people on HN when they grow old.

Anyway, it is probably only of academic interest to me (old health problems will probably get me).

luser001 4 days ago 0 replies      
For a book-length treatment of this topic, see "How We Die", one of the most amazing books I've read.


ern 3 days ago 0 replies      
The article seems to be part of a trend where people are made aware of the value of letting go, rather than engaging in futile medical care.

However, before making end of life decisions, one should be aware that there is a large and growing hospice industry that benefits from people "giving up" (sometimes leading to greater suffering, as treatable conditions go untreated), and weigh one's decisions carefully: http://www.businessweek.com/news/2011-12-06/aunt-midge-not-d...

charlieok 3 days ago 0 replies      
"According to the Centers for Disease Control and Prevention, a person who made it to 65 in 1900 could expect to live an average of 12 more years; if she made it to 85, she could expect to go another four years. In 2007, a 65-year-old American could expect to live, on average, another 19 years; if he made it to 85, he could expect to go another six years."

I get that gender neutral language like this is the norm these days, but the way it's used in this particular statement, it isn't helping!

rayiner 4 days ago 1 reply      
I think one of the things he mentions, people not living with their elderly parents, is responsible for a lot of our bizarre attitudes towards old age and death. How much of the "just a couple of more years" attitude is to help assuage the guilt of ignoring the person for the last 10-20 years of their lives?
lizzard 3 days ago 0 replies      
For more on death and disability and "right to die" issues, here's a great post by Bad Cripple.


lunarscape 4 days ago 0 replies      
This brings to mind the following article, in which doctors place greater emphasis on quality of life when making decisions themselves: Physicians Recommend Different Treatments for Patients Than They Choose for Themselves, Study Finds, http://www.sciencedaily.com/releases/2011/04/110411163904.ht...
danbmil99 4 days ago 0 replies      
> very few elderly patients are lucky enough to die in their sleep.

That doesn't have to be a matter of luck. If we allowed people the dignity to choose, while still lucid, to end their life, or to set up a strict set of conditions upon which their life will end, we could drastically reduce the suffering involved in death, both for the dying and for their families and friends.

hendrix 4 days ago 0 replies      
There is no such thing as 'aging' in the traditional sense. As SENS founder Aubrey de Grey explains, it is "the set of accumulated side effects from metabolism that eventually kills us": http://en.wikipedia.org/wiki/Strategies_for_Engineered_Negli...

It might not work but it is a _much_ better use of our tax dollars than blowing up brown people in the middle east.

Craiggybear 4 days ago 0 replies      
There used to be a time when a good doctor knew that just because you can do something to prolong someone's life, it doesn't automatically mean you should.

I know my grandparents didn't fear or refuse to believe in death in the same way as people do now (and they had seen a lot of it). I hope I'm strong enough and happy enough with myself that I will gladly relinquish my hold on the reins of life when it's properly time to do so, and not fear it or be sad.

jtothapreston 4 days ago 0 replies      
I think this underestimates our most primal of instincts: survival. Those people who died at 48 a century ago had no less want or determination to live longer than those who die today at 78. In those times, their options were extremely limited, so as a culture, our norms prepared them better to accept death. Nowadays we have many options, and modern medicine has become the ultimate tool of our survival. Yet another profound ability that separates us from other animals.
ilaksh 4 days ago 1 reply      
This all sounds reasonable to many people.

Unfortunately, the motivation for these kinds of testimonials is not sentiment or reasonableness. It's profit. It's just not profitable for health insurance companies to treat old people -- that's where they lose all of their money.

joelrunyon 4 days ago 1 reply      
Honest Question

If we all lived forever, where would we put all the people?

javajosh 4 days ago 2 replies      
Yes. Now what doctors need is an effective, compassionate way to communicate the simple fact that we need to let old people die. Here's one way:

A family brings an elderly patient suffering from stroke, diabetes, etc. into the ER. The patient is in a coma. Before asking them what they want to do, take the family into the NICU and tell them: we have limited resources, and we can either save a baby in here, giving him a chance for a healthy productive life, or we can perform heroic measures on your loved one, almost certainly doing nothing but prolonging his suffering. What is your choice?

martinklein 4 days ago 0 replies      
My comment on article was deleted by washingtonpost moderator.

2/18/2012 9:16 PM GMT+0100
[Implying they removed a feeding tube]

My response:

You didn't feed him? He died of starvation? That could be painful. You should kill him using exit-bag with helium gas. I'm being serious.

My response was replaced with:

postmoderator responds:

This comment has been deleted by a moderator for violating the site's discussion policy.

jdavid 4 days ago 0 replies      
Ray Kurzweil should read this article.
delinquentme 4 days ago 0 replies      

We are all afraid of death, and the reason humans invented religion was to cope with this fear.

If you're going to downvote...

Instead say something so we can discuss!

taf2 4 days ago 0 replies      
I knew at 31 I should start feeling old... so 47 is the end, which means I have 16 good years left in me... here's to enjoying them and making something great of it... cheers!
Apache releases first major new version of popular Web server in six years zdnet.com
314 points by thenextcorner  2 days ago   91 comments top 10
jbarham 2 days ago 4 replies      
It may sound trivial, but the thing I appreciate most about Nginx is its lightweight config file syntax.

It's very easy to glance over and see what's been set up compared to Apache's verbose pseudo-XML syntax which is about the worst syntax you can come up with: the verbosity of XML but without the benefit of being able to generate or parse it using standard XML tools!
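For contrast, here is the same hypothetical virtual host in each syntax (the domain and paths are invented for illustration):

```nginx
# A minimal nginx virtual host: flat directives, one brace level per scope.
server {
    listen 80;
    server_name example.com;
    root /var/www/example;
}
```

```apache
# The equivalent in Apache's pseudo-XML container syntax.
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example
</VirtualHost>
```

Neither format is standard XML, which is exactly the complaint: Apache's angle-bracket containers look like XML but cannot be generated or parsed with XML tools.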

Garbage 2 days ago  replies      
Overview of new features in Apache HTTP Server 2.4


krmmalik 2 days ago 1 reply      
I guess real-world testing is going to be the best indicator of whether this is a worthy release or not, but I sure am very glad that Apache has at least attempted to up its game. Even if they are not able to deliver on their promise, the effort is noble enough - at least for now.

Personally, I'm very glad to see performance considerations being taken seriously, and even if nginx or node.js don't take over the world, it's nice to see that they're forcing others to sit up and think.

xpose2000 2 days ago 0 replies      
I'm excited. Even if it is only a 5% to 10% improvement in performance, then that buys me a little bit more headroom on my current server setup.

I look forward to testing it out down the road.

tutu55634 1 day ago 0 replies      
Micro Benchmark of Apache 2.4 vs Nginx 1.0: http://blog.causal.ch/2012/02/micro-benchmark-apache-24-vs-n...
cmaxwell 2 days ago 2 replies      
Expecting to see some meaningless benchmarks soon.
etomer 2 days ago 0 replies      
Get your faces off of your articles' headers!
rplnt 2 days ago 2 replies      
3.0 would be major, 2.4 is minor.
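Apache predates semver and doesn't claim to follow it, but under semver's rules the classification is mechanical. A sketch (not a full semver parser):

```python
# Under semantic versioning (major.minor.patch), only a change in the
# first component counts as a "major" release.

def bump_type(old, new):
    """Classify a version change per semver rules.
    Versions are plain 'major.minor.patch' strings."""
    o = [int(x) for x in old.split('.')]
    n = [int(x) for x in new.split('.')]
    if n[0] != o[0]:
        return 'major'
    if n[1] != o[1]:
        return 'minor'
    return 'patch'

print(bump_type('2.2.22', '2.4.1'))  # minor
print(bump_type('2.4.1', '3.0.0'))   # major
```

By that reading, 2.2 to 2.4 is a minor bump no matter how long it took to ship.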
chbrown 2 days ago 4 replies      
"While it seems unlikely that NGINX could overcome Apache's commanding lead ..." Oh zdnet, don't you realize that actions like this release from Apache, obviously under pressure from NGINX, are precisely the indicators that you will be eating your words in a year or two?
nirvdrum 2 days ago 0 replies      
I'm pretty sure the Apache httpd server doesn't follow semver at all, given it predates that document by 10 years or so. Any overlap is pretty coincidental or inspired in the reverse direction.
From the IE Team: Google Bypassing User Privacy Settings msdn.com
309 points by ecaron  3 days ago   174 comments top 33
nostromo 3 days ago 4 replies      
This seems to be a problem with the design of P3P more than anything.

Browsers: "3rd-party cookies are blocked unless you add a P3P header..."

Websites: "Ok. What should be in the header?"

Browsers: "Anything... it doesn't matter. Just add the header then 3rd-party cookies are fine"

Websites: "Ok, we'll just add a P3P header saying 'Ceci n'est pas une P3P header' then. Problem solved."
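The hollowness of that exchange is easy to model. Below is a toy sketch, not Microsoft's actual logic: IE does evaluate CP tokens against privacy settings, but unrecognized tokens are not treated as unsatisfactory, so in practice any plausible-looking CP clause passes. The "honest" policy here is invented from real P3P tokens; the Google string is the header Google actually serves.

```python
# Toy model of how a P3P-aware browser decides whether to accept a
# third-party cookie. Real IE parses the CP tokens, but unknown tokens
# are not penalized, so presence of a CP clause is effectively enough.

def allows_third_party_cookie(p3p_header):
    """Accept the cookie if the response carries any CP="..." clause."""
    if not p3p_header:
        return False
    return p3p_header.strip().upper().startswith('CP="')

# A legitimate compact policy built from defined P3P tokens:
honest = 'CP="NOI DSP COR NID CUR ADM DEV OUR BUS"'

# The header Google actually sends, which is not a policy at all:
googles = ('CP="This is not a P3P policy! See '
           'http://www.google.com/support/accounts/bin/answer.py'
           '?hl=en&answer=151657 for more info."')

# Both pass, because nothing verifies the tokens' meaning.
print(allows_third_party_cookie(honest))   # True
print(allows_third_party_cookie(googles))  # True
print(allows_third_party_cookie(''))       # False
```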

gyardley 3 days ago 3 replies      
Google really should have the cojones to stand up and state their actual position plainly, which as far as I can tell is this:

"If you haven't taken an active, positive step to block our +1 buttons, we're going to assume you don't really care and we'll do whatever we can to show them to you, no matter what your browser's default settings are. Why? Well, because we think the default settings are bullshit, and 99 times out of 100 they're only that way because they're the default. They don't reflect actual user preferences, they reflect other browsers messing with our business plans."

Not only is that an intellectually honest position, it's a lot more accurate than assuming all IE users who haven't changed their settings don't want +1 buttons.

yanw 3 days ago 3 replies      
So the WSJ publishes another one of its alarmist articles about Google and Safari during the weekend, and Microsoft wants to capitalize by pretending it just now discovered that P3P (a defunct and shitty protocol) is useless and no one uses it.

NYT September 17, 2010:

If you rely on Microsoft's Internet Explorer's privacy settings to control cookies on your computer, you may want to rethink that strategy.
Large numbers of Web sites, including giants like Facebook, appear to be using a loophole that circumvents I.E.'s ability to block cookies, according to researchers at CyLab at the Carnegie Mellon University School of Engineering.
A technical paper published by the researchers says that a third of the more than 33,000 sites they studied have technical errors that cause I.E. to allow cookies to install, even if the browser has been set to reject them. Of the 100 most visited destinations on the Internet, 21 sites had the errors, including Facebook, several of Microsoft's own sites, Amazon, IMDB, AOL, Mapquest, GoDaddy and Hulu.

Google doesn't support a broken feature that is exclusive to IE, and somehow it's their fault. If anyone ever doubted Microsoft's PR sleaziness and propaganda tactics, that blog post is proof.

jtchang 3 days ago 4 replies      
P3P is a load of garbage as it is implemented/written.

There is no real enforcement behind it, and it just causes lots of confusion. Seriously, I have to go look up what each of these acronyms means in order to figure out how my privacy is being violated? What guarantee do I even have that you are obeying P3P and not simply sending it to make me feel good?

Hell while we are at it we should implement P3P for phone apps. I'm sure Path (and others) will stop uploading your address book if the P3P says "ADDRBKNOUP"

kylemaxwell 3 days ago 4 replies      
I've been a Google fanboi for years and defended them in the public square when they've been accused of nefariousness. But these revelations of intentionally ignoring users' privacy settings have shaken me. Maybe it's time to put them into the Facebook category, where I removed my account years ago.
luser001 3 days ago 0 replies      
I saw this header a few days ago in curl, and I wondered why Google would send something like this. Now I know.

    P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."

capocani 3 days ago  replies      

In some situations, the cookies we use to secure and authenticate your Google Account and store your preferences may be served from a different domain than the website you're visiting. This may happen, for example, if you visit websites with Google +1 buttons, or if you sign into a Google gadget on iGoogle.

Some browsers require third party cookies to use the P3P protocol to state their privacy practices. However, the P3P protocol was not designed with situations like these in mind. As a result, we've inserted a link into our cookies that directs users to a page where they can learn more about the privacy practices associated with these cookies.

Information that Google collects in association with these cookies is subject to our Privacy Policy.

Doesn't seem nefarious.

slig 3 days ago 2 replies      
emu 3 days ago 0 replies      
It's a little disingenuous for the IE team to "discover" this just now. I'm pretty sure Google has been doing this for years, and it's well-known. (I certainly talked about it as part of a wider discussion about P3P policies with colleagues a year or so ago, and this isn't even my area of expertise.)

I also don't much mind what companies do with tracking cookies --- I recommend using the Vanilla Cookie extension to Chrome to create a whitelist of persistent cookies. It rather nicely avoids the problem.

raganwald 3 days ago 0 replies      
The fact that:

1. This is not new,

2. It's not just Google,

3. Microsoft did it (or even still does it),

4. A bunch of web experts don't take it seriously because a bunch of web sites don't take it seriously.

Does not in any way invalidate or make absurd the proposition that:

5. Google's actions are noxious.

Points 1 through 4 are, however, informative and provide valuable context for Google's noxious actions.

jsz0 3 days ago 0 replies      
Spitting in the face of user intent like this is really crossing the line of what is acceptable in my opinion. I'm not really concerned about my privacy but I do need to have some faith that the settings I am choosing are being respected. If I allow Google+ to use my webcam should I expect Google to turn it on and watch me all the time? That's not too far removed from what they are doing here.
scotty79 3 days ago 1 reply      
How can you bypass something that is not a barrier? P3P is useless.

P3P means "we would never..." in computer speech, which is unenforceable and therefore useless.

Google just makes no promises via P3P and places a link there explaining that it doesn't and why.

Fortunately, implementations of P3P do the right thing and fold.

jackalope 3 days ago 1 reply      
This is like teams of foxes selling chicken coops and accusing the other teams of the improper placement of "No Foxes Allowed" signs. If third party cookies are bad, disallowing them should be the default, regardless of some policy header no user ever heard of or can decipher.

If they really cared, they'd include a way to disallow any third party resource without having to install a plugin like RequestPolicy. That would go a long way towards fighting tracking (and multiple exploits).

lawnchair_larry 3 days ago 0 replies      
Privacy violations should be opt-in. That's what is wrong with privacy on the web.
nitrogen 3 days ago 0 replies      
I can't say I would side with either party in this case. P3P sounds about as robust a protocol as RFC 3514 (the evil bit), and Google could just as easily display a warning to any user whose browser rejects third-party cookies.


voidr 3 days ago 3 replies      
What about IE tracking what users are searching for?

Microsoft should admit that they only care about privacy when it's convenient for them.

Volpe 2 days ago 0 replies      
Google's explanation (set inside the P3P cookie):


joshfraser 3 days ago 0 replies      
Having battled with P3P in the past, I sympathize with Google. I don't have a problem breaking dumb rules. And P3P is dumb on multiple levels.
meow 2 days ago 0 replies      
Keeping Google's blunders aside, the P3P policy as described seems to be a joke. Do they really expect third-party sites to be honest with a browser? At least they had to do some magic on Safari; this seems too straightforward and begging to be abused...
shtylman 3 days ago 0 replies      
Can someone explain to me why it isn't the browsers responsibility to enforce this instead of relying on websites to "do the right thing"?
arebop 3 days ago  replies      
Oh look, Google fails to fully support features only MSIE has [http://en.wikipedia.org/wiki/P3P#Criticisms]! Shame on them.

P3P was standardized but it never got traction due to various practical problems. For example, privacy policies vary in many, sometimes-subtle, ways and nobody could figure out how to build simple software to decide automatically how to respond to these policies on behalf of users. Don't take Google's word for it, see what facebook says: http://www.facebook.com/help/?page=219494461411349. epic.org doesn't use it either.

There are some appealing ideas in P3P, but in real life it doesn't actually help users protect their privacy, even on sites that actually implement it (such as Bing). The P3P working group shut down long ago (http://www.w3.org/P3P/).

This article is just a cheap shot at a competitor.

dudus 2 days ago 0 replies      
I was going to post a comment with details of what Microsoft's P3P CP policy means. But it got so damn long that I had to write a blog post.
tete 2 days ago 0 replies      
Easy fix: use DuckDuckGo


Don't let them track and bubble you:



tlogan 3 days ago 0 replies      
There are some comments here that say: P3P is garbage as implemented, so it is OK for Google to invest some time and engineering effort to trick P3P and track users via cookies.

Not sure if that's a valid answer, but it has some merit if everybody is doing it (like downloading address books from iPhones).

Now, I have the following question: if a random website is caught doing this, is it going to be marked as unsafe by security scanners?

jdavid 3 days ago 0 replies      
I flagged this because it's just Microsoft trying to flame war with Google. These headers have been an annoyance every time I have had to consider them.

This is a clear example of Microsoft's 'embrace and extend' strategy that destroyed so many platforms. The MS series of browsers were the only ones to adopt this before it became a W3C Recommendation. The spec was never widely adopted.

Tichy 3 days ago 1 reply      
So I take it the MS ad network doesn't track users?
prtamil 2 days ago 0 replies      
It reminds me of a quote from Dark Knight "You either die a hero or you live long enough to see yourself become the villain" .
mrinterweb 2 days ago 0 replies      
For some reason, it feels too soon for the IE team to be calling foul on web standards implementation.
Stormbringer 2 days ago 1 reply      
Is "Google does evil shit and violates users privacy" even news-worthy anymore? Isn't that the default?

Yeah you heard me Google-tards. Down-vote me like you always do, I got karma to burn baby burn.

justinlau 3 days ago 0 replies      
It's a new low for Microsoft to use MSDN as a corporate mudslinging soapbox.
oakgrove 3 days ago 0 replies      
Episode 32432432 in the pissing match between MS and anybody who dares make a dollar in the computing industry. Yawn
devindotcom 3 days ago 0 replies      
Yeah, this isn't quite the same level as the Safari one, which was a bit of a tempest in a teacup to begin with. In both cases you can partially blame the browser, though more so in this case.
evmar 3 days ago 1 reply      
After Microsoft introduced cookie-blocking features in IE they've been dancing around how to get users to block ads without telling users to install the blockers directly.

Here's the blacklist they suggest in their post, which they recommend "as a protection":

The "-d" lines block domains entirely, which I believe means this has the consequence of blocking Google ads entirely.

Why Do Some People Learn Faster? wired.com
294 points by mikeleeorg  13 hours ago   71 comments top 20
tpatke 12 hours ago  replies      
I think about Spolsky's post on programmer productivity quite often [1]. I think the most interesting part of the post is when he says, "The mediocre talent just never hits the high notes that the top talent hits all the time." Obviously, I wonder if I am a programmer who can hit the high notes and, if not, what it will take to get there.

When I read a post like this, I try to apply it to making myself a better programmer in the "high note" sense. Trouble is - it just doesn't apply (and I am a huge fan of Dweck's work). This article is one for the masses - not people who are trying to create the next Google. Motivation, hard work and an ability to learn from mistakes are all necessary, but ultimately not sufficient for our craft.

If I had to guess what the missing ingredient is, I would say creativity.

Heck - we are programmers. We get immediate feedback on our mistakes all day long, and anyone reading this post has most likely gotten really, really good at learning from those mistakes. ...but how many of us are hitting those high notes?

[1] http://www.joelonsoftware.com/articles/HighNotes.html

trustfundbaby 12 hours ago 2 replies      
I wish I had happened on this article 10 years ago ... I was always very good at school, so I picked up what I thought were bad habits. Reading this now, it's clear that what I was doing was protecting myself from failure. When I'd do badly on a test in a class that I thought I should do well in, I'd stop taking the class seriously so that when I got a 'C' I could kid myself and say "I did that and wasn't even really trying"

Oddly enough, I never made much of the fact that if I really wanted to do well in a class (because I was in love with the subject teacher for example ... :D) I could actually put the work in to get to the top of the class ... I just put it down to being smart.

This had a lot of bad repercussions when I went to college. For the first time in my life I was not only competing with a shitton of people just as smart as me, but almost as many people who were WAAAAAY smarter. Trying really really hard to only make B's was a huge blow to my psyche and I was very demoralized and uninspired for a long time. It wasn't until I realized that not only did it take hard work, but for me, inspiration or passion was necessary to truly excel at something. That's when I realized that I just had to shun things I really wasn't interested in and devote myself to things that I really wanted to be good at.

Even nowadays, I still look at people in this field who I consider quasi geniuses (DHH, Yehuda, Resig) and wonder if I can ever get to that level, and whenever I doubt myself, I go back and read this very deeply inspiring article from John Nunemaker http://railstips.org/blog/archives/2010/01/12/i-have-no-tale...

wallflower 12 hours ago 4 replies      
"Most people say it's easier to pick up languages when you're younger," says David Green, of University College London, who specialises in bilingualism.

"But people can learn languages at any point in their lives. Being immersed in a language is important. Personality is a contributing factor too - not being able to tolerate feeling foolish from making inevitable errors will make learning a new language a difficult process.

"The cult of the hyperpolyglot"


espeed 11 hours ago 1 reply      
This is a perfect example of what Alan Kay meant when he said, "A change in perspective is worth 80 IQ points."

I had this epiphany three years ago, and it has been my manifesto ever since (http://jamesthornton.com/manifesto).

narrator 4 hours ago 0 replies      
Memory and learning is not just something that magically happens, there's a lot of electro-chemical machinery underlying all of it that is greatly influenced by gene variation and other factors:


ericHosick 12 hours ago 0 replies      
I agree that learning from mistakes is really important. I also think that memorizing and rote learning are a detriment to being fast at learning. You can't "get it" if you are memorizing.

If you have a firm grasp of something then you understand how to manipulate the concept in different contexts. Adding a new idea allows you to simply twist contexts to include the new knowledge and you've "learned" how to do that new thing.

As things get more complex, so does the contexts and thus it gets harder to "learn" if you've been doing rote learning all along. Rote learning is memorizing a new idea in a specific context and there is no room for morphing that context.

If you get good at morphing contexts to fit new ideas, then you can also get good at taking two different contexts, merging them, and getting new concepts out of it: this is the act of inventing.

crusso 12 hours ago 1 reply      
Really, this kind of research points to the folly of the "everyone gets a trophy", "sheltering ego is most important in learning" school of thought. The every-day evidence of that folly litters the corpse of the American public education system.

Praise your kids when they do well. Point it out when they've made mistakes (in a graceful way). Teach them that doing better next time is always within their power if they apply themselves.

Most of all, let them know that you make mistakes but that you learn from your own mistakes and are willing to work hard to do better. Be a good example of coping with life and its difficulties, including the difficulties of parenting.

zanek 13 hours ago 0 replies      
I totally agree with the points in the article. I have actively pushed myself to learn from my own mistakes since I was around 22-23 years old. For me, it was a natural, logical way to learn.

I looked at it like riding a bike. Almost no one says they can't ride a bike if they make a mistake and fall. They try to adjust their balance or think about what caused them to fall. The key is that they get back on the bike. However, in other areas of life, people don't think that what they are doing is like riding a bike when it really is.

With that approach I was able to teach myself programming, linguistics, etc. It's kind of awesome to read this article a decade or so later.

fuzzythinker 10 hours ago 0 replies      
Very similar to "The secret to Raising Smart Kids" from Scientifc America in 2007, bookmarked, but seems behind a pay wall now:
wtvanhest 4 hours ago 0 replies      
I'm going to apply this to my management style and make sure I praise people for working hard and trying hard, not for being smart or naturally good at anything. They know they are naturally good at stuff, but for the team to be most effective, the smartest people need to work the hardest.
orbitingpluto 9 hours ago 0 replies      
Why do I sometimes learn faster?

1.) When I've had a decent night's sleep.
2.) When I've made efforts to manage stress.
3.) Coffee!!!! (Dyn-a-mit-eh!)
4.) When I'm just interested in why I made a mistake.
5.) Active presence of mind that I should be focusing on the task at hand.
6.) When I'm interested in the material.
7.) When I don't give a damn how smart or dumb I might think I am.
8.) Removal of any negative influences at the moment. (PHB)

And most importantly,

9.) When I've exercised that same day.

tmh88j 12 hours ago 0 replies      
While I do agree that learning from your mistakes is a huge part of education, I think your interests are a close second. Generally, I'm intrigued by something that others deem to be difficult. I scored better grades in all of my engineering, math and science classes than my intro courses like psychology, economics and so on which I attribute to being more interested in science and thus being able to focus more.

For example, I was always very intrigued by calculus because most people talk of how difficult it is. I wanted to know why it's so difficult, so the first opportunity I had I took a calculus course. The subject itself wasn't necessarily interesting, I wanted to know what the fuss was about.

netmau5 9 hours ago 0 replies      
It's great to see a formalized study validate some of my long-held beliefs. For me, the biggest drawback of being self-critical has been "what if I draw the wrong conclusion from this mistake?" Some people call it tenacity: to try again and again after failure. But I think it's paranoia. The thought that I learned and will now stand by something completely wrong due to a flaw in reasoning is a frightening path towards ignorance.
shawndumas 10 hours ago 0 replies      
Fixed mindset -- “You have a certain amount of intelligence and cannot do much to change it.”

Growth mindset -- “[W]e can get better at almost anything, provided we invest the necessary time and energy.”

A fixed mindset and a growth mindset are not mutually exclusive beliefs.

As I see it there are three dependent factors in accomplishing a complex task:

1.) intelligence -- raw computational power (impacted minimally by sleep/diet/use)

2.) knowledge -- data stored regarding a given task (impacted immensely by attention/research/exposure)

3.) wisdom -- the application of 1 on 2 (impacted greatly by experience/composure/analysis)

Knowledge, and to a lesser extent wisdom, can come as a result of investing time and energy. Wisdom is an amplifier of intelligence and knowledge; it can produce an order of magnitude increase in efficacy with very little change in either of the other two.

Both can be true...

joering2 11 hours ago 0 replies      
Hope it's not too much off-topic, but if I had children right now, I would definitely use my time differently to teach them things today than 20 years ago. In schools they still _mostly_ teach you to remember useless stuff (no, I am not talking about the basics of history) instead of teaching you how to get necessary information, analyze it and come to conclusions. It's no miracle to anyone living in the age of the internet that most information you need can be found online in less than 30 seconds (you don't need to pack it all inside your brain). Ergo I would teach my children fast reading (really fast) so they can analyze data quickly, and would teach them how to analyze facts and put them together to reach solid, probative conclusions. Also, how to do basic PI work :) to find the data you're looking for by assembling a big puzzle from small chunks scattered over the net.
silentscope 10 hours ago 2 replies      
"Education is the wisdom wrung from failure."

I really couldn't disagree with this more. No doubt, failure is important in learning. However, to be educated or have an education, at the end of the day, you need to get something right.

Personally, I don't learn by failing fifty times and never succeeding. I don't learn how to lose weight if I try fifty diets and never lose a pound. I don't learn how to talk to a girl if I speak with a dozen of them and get shot down every time. I don't learn how to rock climb by falling off the rock and never gaining elevation.

That is negative reinforcement. I have learned what not to do--the opposite of what I set out to accomplish. Education is progressive. I learn by failing until I succeed, if only a little bit. Then I go back and try to succeed even more. That's the hard part: not calling it quits, finding inspiration and success in the smallest scraps of result.

Just don't sell education or yourself short.

dacilselig 9 hours ago 0 replies      
Having a BSc in psychology, I can remember that this was one of those concepts that I picked up in my classes and have used for myself. I believe that you could expand on this and say something similar about athletic abilities. Being from Canada, I've played a lot of hockey and was told that I was quite good. Having that thought in my head stagnated my ability as a player. It meant that when I had a bad game, I would put the blame on other factors and not myself. Reading this article made me realize that my self-perception of being a good hockey player was not a good way of seeing myself. After a bad game, I shouldn't feel bad but rather focus on finding a way to improve my weaknesses.
jakesandlund 11 hours ago 0 replies      
I know it has been mentioned before about this same study, but I can't remember where: what do you do when your kids are finding the work too easy? Praise them for their effort? That seems like it could backfire if they think they can get by in life with hardly any struggle, while still being complimented for their labor. Obviously, you could give them something challenging to learn, but what about when they're in school with kids that have a wide spread of natural talent? They will probably get good grades and be praised by their teachers for their intelligence, meanwhile getting bored of the same material over and over again. This seems to suggest the need for more individualized teaching, either by the parent, or by a teacher of some "gifted" class.
tintin 12 hours ago 1 reply      
Isn't this all about interest? When you are interested in a subject you are more likely to try again and learn from your mistakes. People with an almost unhealthy interest in a subject (people with Asperger's, for example; some think Newton had it) are the brilliant people in that subject.
horsehead 11 hours ago 0 replies      
This seems really related to the post from yesterday about how the one fellow learned from his daughter not to be afraid to fail (or maybe that was earlier today?).
How Exercise Fuels the Brain nytimes.com
274 points by danso  1 day ago   18 comments top 8
polyfractal 1 day ago 0 replies      
I expected this to be a fluff piece about exercise and learning, but it was in fact rather interesting.

It should be noted that while increased "glycogen supercompensation" in the brain correlates with the much-hyped exercise-induced cognition enhancement (aka running makes you smarter/healthier/happier), the authors don't provide any behavioral or cognitive tests. They acknowledge this, and since it wasn't part of their study they just offer the correlation as an interesting hypothesis.

I imagine they'll be testing cognition/behavior in their next paper.

The article they are referencing is:



maeon3 1 day ago 0 replies      
So the hypothesis is that the reason cardio exercise every other day makes you smarter is that the astrocytes next to the neurons in your frontal cortex and hippocampus become 60% more capable of fueling your neurons with glycogen during periods of glycogen shortage, increasing the brain's ability to keep glycogen at optimal levels at all times during the day.

I would like to see a study on how 5-Hour Energy, caffeine, sugary treats, No-Doz and other performance-altering drugs affect this astrocyte supercompensation process.

noobface 1 day ago 1 reply      
Great content. Scientific, only presents the facts, and doesn't take the typical "THIS CHANGES EVERYTHING" sensationalist approach. Good on the author and the NYT for reporting actual science.
Geee 1 day ago 3 replies      
I wonder if physical activity correlates with educational results. For example, in Finland kids typically walk or bicycle to school and are also actively running or playing during the 15-minute breaks every hour.
orky56 1 day ago 1 reply      
Here are a few key takeaways that I hope I'm not extrapolating too far:

1. Exercise without the proper "carbo-loading" can eliminate the cognitive benefits that astrocytes provide in restoring glycogen to neurons.
2. Exercising every now and then versus continuously will not provide these benefits. Intermittent exercise may just provide temporary (up to 24 hrs) cognitive benefits whereas continuous exercise may allow it to last longer (more than 24 hrs).

tl;dr Article should be renamed to: "Continuous Exercise with Proper Post-Workout Nutrition (Carbs) Fuels the Brain"

ajtaylor 20 hours ago 0 replies      
I started exercising regularly last week and I've noticed that my productivity has soared! Until I read this article I wasn't entirely sure if the two were correlated, but now I'm thinking there is something to it. My exercise time is in the early morning, and it's been a great way to start the day.
ryanmolden 1 day ago 2 replies      
I always get a kick out of articles that basically say 'hey, exercising is good for you!', as though that were some kind of new discovery :) This is a good article, and as others have mentioned it actually links to a scientific study, which is a rarity for the mainstream press. On the other hand, does anyone here think that getting regular exercise WOULDN'T be better all around, in a multitude of ways, than sitting on your ass all day?
bcowcher 1 day ago 0 replies      
I wonder what types of exercise trigger this effect and to what degree. Off the top of my head (going by their descriptions) team sports would be ideal not just for the exercise but the constant situational analysis, team work, running plays etc would increase the workload on the brain.
Google to Sell Heads-Up Display Glasses by Year's End nytimes.com
271 points by mtigas  2 days ago   132 comments top 41
mechanical_fish 2 days ago 3 replies      
Great, soon it won't be enough to block browser cookies. I'll have to avoid glancing at actual cookies lest my search results start filling up with Snackwells coupons and special offers from the local gym.
pshc 2 days ago 5 replies      
Not saying this is what they're selling, but: Real-time overlays are still impractical, right? My intuition is--without a really high speed camera and high fidelity environmental cues--the overlay will have a lag and high uncertainty wrt where you're actually looking... or has the state of the art moved on? Or are their labs advancing the state of the art?

Any idea how they're making the screen usable at that focal length? Don't VR headsets usually need optics much more bulky than sunglasses for comfortable viewing?

Anyway, even if it's smartphone-like functionality in a more convenient form factor, I'm looking forward to being an early adopter.

Steko 2 days ago 1 reply      
From the linked previous post about wearables:


"Kids will play virtual games with their friends, where they meet in a park and run around chasing virtual creatures for points."

Lucky kids. I suddenly feel like Old Man Luddite because I grew up playing with sticks and crap.



sbierwagen 2 days ago 0 replies      
Huh, two days ago Gabe Newell was talking about Valve maybe going into the hardware business to do wearable displays:


Well timed.

SoftwareMaven 2 days ago 3 replies      
I am horrible at remembering names. The day these things can do facial recognition against my address book, Facebook, and LinkedIn profiles and pop up a little box with the name of the person I'm looking at is the day I will never go without glasses again.
gfodor 2 days ago 4 replies      
My biggest concern is if this is executed poorly it could undermine future attempts to bring wearable computing to the mainstream. It still feels a bit early for this to work well. Google can pull off the technical challenges, surely, but the usability hurdles here are outside of their comfort zone.
msg 2 days ago 8 replies      
Here are some high-quality sf books you can use to imagine the possibilities of this technology.




reader5000 2 days ago 1 reply      
This is potentially awesome for two reasons:

1. I can lay in bed and read an ebook without having to hold a device.

2. Google can parse any situation I may be in and recommend the optimal action based on its data of every single other human on earth.

orofino 2 days ago 1 reply      
I've been waiting for this for a long time. I wear glasses currently and have always felt that some kind of overlay could provide tremendous value.

From simple reminders, augmenting my environment with meta data, or giving access to real time updates about almost anything the possibilities are almost endless.

Batteries are going to truly be the limiting factor IMO. Forget plastics, the future is batteries. I assume version 1 of the product will require some kind of external battery pack, I'd think it was certainly worth it if the tech is what it should be.

shalmanese 2 days ago 1 reply      
Have Google figured out a way to solve the accomodation/vergence problems inherent in screen based 3D displays (http://research.microsoft.com/apps/pubs/default.aspx?id=6402...)? Because if not, these things are going to suffer from the same dismal failure that every other 3D display product has over the last 30 years.
notatoad 2 days ago 2 replies      
i find this very hard to believe. it seems obvious to me that smartphones and tablets have completely filled the niche invented by the HUD glasses of yesteryear's science fiction. they will never be popular, because they are too invasive. people like to be able to put the internet in their pocket and take it out again when they need it.

i wouldn't be surprised if google has a bunch of engineers working on the technology that could be used in HUD glasses, but i would be very, very surprised to see anything ever come to market. google will not enter the hardware market with such a risky product.

ck2 2 days ago 3 replies      
Even the smallest smartphone chipsets still need big batteries, I suspect it's going to be a pocket device with a wire or bluetooth of some kind going to the glasses.

Still an amazingly aggressive technology product.

rkaplan 2 days ago 1 reply      
Does anyone see this actually catching on in the next five years and becoming something more than a gimmick?

I'm as excited about the prospect of viewing the world through a HUD as the next guy, but I can't imagine these glasses looking sleek in the slightest, and they will likely be rather bulky.

But beyond ugliness, it seems to me that someone would rather pull out their smartphone than put on a pair of HUD glasses for any given use case for this product. Augmented reality is "cool" but I've never used an AR app more than two days after I downloaded it.

I think for it to catch on, this technology would have to be baked into glasses that are designed to be worn all the time, not just put on when needed. People aren't going to carry around AR glasses in their back pocket with their phones and wallets. So a place to start might be enhancing the glasses used by people with vision trouble, rather than creating a whole new glasses product.

BadassFractal 2 days ago 1 reply      
Exciting news! Just yesterday I was watching John Carmack's interview from 2011's QuakeCon and at the end he mentioned that he was going to soon start playing with display glasses to see if anything interesting would come out of it (for id, I imagine).

Driverless car, now digital glasses. I'm positively impressed that Google keeps pushing the envelope in all these different fields.

libraryatnight 1 day ago 1 reply      
I wonder if this will eventually intersect with the gamification trends. This article had me imagining a crude augmented reality MMO: routine tasks are assigned point values and so are items for purchase. Walking through the store you could see that Sara Lee white bread nets you 100xp; pay your mortgage on time and Bank of America awards you 1000xp. A look at someone and the HUD displays their level and point value along with their various badges. Of course check-ins will be automatic.

I doubt this iteration will be very advanced, but the article definitely made me think.

robin_reala 1 day ago 0 replies      
I'm really surprised that no one's talking up the accessibility benefits of something like this. Imagine a version for blind users that dumped the screen in favour of a headset and mic. A little voice recognition and your glasses could tell you where you are, what you're facing, and even do basic hazard avoidance. That sounds invaluable.

Actually, it sounds like the 90s kids show Knightmare :)

jfoutz 2 days ago 0 replies      
I can't imagine them not looking hideous, but I want them. I've wanted them for pretty much two decades now. I'm sure they will suck. I will happily give Google a pile of money for the few weeks of usage I'm guaranteed to get out of them. It's just a bonus if they're actually useful.
berntb 2 days ago 0 replies      
How is the reading experience? Can you get 80 chars into it?

I hope there isn't a good one-hand keyboard interface (chording?) that lets you write email/code while bicycling/driving/etc -- with my simultaneous capacity, it would be my death...

theBobMcCormick 2 days ago 0 replies      
Sounds like a more generalized, urban version of the Mod Live ski goggles (http://www.reconinstruments.com/products/mod). The Mod Live googles look amazing, but pretty bulky (plus, I don't ski). If the Google glasses end up being half as cool I'd definitely be interested.
jobu 2 days ago 0 replies      
If anyone hasn't read Daemon & FreedomTM by Daniel Suarez I highly suggest it. His use of tech like this in the novels was pretty well thought out. Can't wait until Google uses these to integrate a MMORPG with RL.
JabavuAdams 1 day ago 0 replies      
I wonder what the field of view is? That's been my main problem with HMDs in the past. Maybe now I'll finally be able to build my wearable computer from COTS components.

Exercise: taking the hardware as a given, what would you do with it, what software do you need?

Steve Mann: 20 years ahead of the curve.

derrida 2 days ago 5 replies      
How do I opt out of all face recognition? I have a right for my personal life not to be exploited for advertising by a large company.
jakeonthemove 1 day ago 0 replies      
If this is true and if the display is good enough to let you browse websites, then it's awesome!
rasur 1 day ago 0 replies      
Cue the inevitable "what happened to Steve Mann's EyeTap glasses" comment. Now there was an opportunity 'missed'.
sopooneo 2 days ago 0 replies      
Is this article implying the glasses will work like a HUD, with the image displayed on a transparent surface that you can also see through? Or is it an opaque screen off to the side? Or it couldn't possibly be an opaque screen over your eyes that redisplays what you would have seen behind it?
hesdeadjim 2 days ago 0 replies      
All I have to say is: please, please, please let this actually happen and work even 50% as well as my imagination wants it to.
joshontheweb 2 days ago 0 replies      
But can they look good? Those oakleys linked to are awful. If they look stupid then I doubt they will work... Until apple steps in and does it right, that is ;)
bootload 2 days ago 0 replies      
"... One Google employee said the glasses would tap into a number of Google software products that are currently available and in use today, but will display the information in an augmented reality view ..."

wonder how many Steve Mann patents they are using? ~ http://www.eecg.toronto.edu/~mann/

neworbit 1 day ago 0 replies      
I am definitely going to write a "THEY LIVE" plugin
lucian1900 1 day ago 0 replies      
I think it should just use my Android phone. Why yet another Android device on me?
ctdonath 2 days ago 0 replies      
William Gibson, "Virtual Light".

'nuff said.

mbeswetherick 2 days ago 1 reply      
Google is pretty ballsy.

After their Safari tracking system fiasco they announce this: the ultimate form of tracking. It shouldn't come as a surprise that Google would make this considering the rest of their products.

I love Google and their products, but I feel this takes it over the line. At what point will Google be satisfied with the amount of information they're able to collect? Maybe the world will become some strange Utopia where products like these glasses are acceptable and Google made the right call, but Google should take a step back and realize they're making technology for humans, not robots.

If this trend continues I bet we'll see Google Children roaming our streets. OK, that's pretty hyperbolic, but you get my point.

granitepail 2 days ago 0 replies      
I definitely checked the date a few times, but this makes an awful lot of sense given Google's position. At this point it's really just hardware to bridge the accessibility gap.
mrbill 2 days ago 0 replies      
The first thing that came to mind is Manfred Macx in _Accelerando_, and how hopelessly lost he is without his AR goggles.
staringispolite 2 days ago 0 replies      
And Google Goggles is already taken. WHY?!
tehdanish 2 days ago 0 replies      
The article implies they are a completely separate Android device. Why not just interface with Android / iOS devices?
TechNewb 1 day ago 0 replies      
Will they be prescription glasses? They should team with Warby Parker.
rabidsnail 2 days ago 0 replies      
Google to Sell Heads-Up Display Next Time They Start Getting Bad Press
wavephorm 2 days ago 0 replies      
Yeah, so everything I look at is going to go through Google's servers, be processed, and display Google ads constantly no matter where I'm looking. Yeah right. Google is flat-out crazy if they think consumers want this level of invasion into their lives. They're taking the Orwellian computing idea, big brother always on everywhere, to the absolute extreme here.
Steko 2 days ago 2 replies      
Pro: Because it's Android we can hack an awesome DBZ power level widget into the HUD.

Con: The glasses hit you up to join G+ every 30 seconds.

Do Things, Tell People. flax.ie
261 points by hebejebelus  3 days ago   45 comments top 10
hebejebelus 3 days ago 6 replies      
Author here. This is why I think every programmer should have at least these two things: a twitter account, and a github account. A github account to publish what you do, and a twitter account to tell people. You can do this sort of thing from anywhere with an internet connection, and it's free and easy. (I also think every programmer should have business cards. Can't be contacted if nobody has your email address).
akkartik 3 days ago 2 replies      
It's easy to forget today, but this was how academia began 350 years ago.


tathagatadg 3 days ago 2 replies      
"Do Things, Tell People" == "Show HN: .*"

I'm sure most of the folks here enjoy the "Show HN:" posts the most - just like github has become the standard for putting your code out there, I wish there was a dotcloud/heroku/appengine-like standard that would provide a "Cool projects" feed to discover and get inspiration from. Ideally an integration of "github + dotcloud + Show HN feed" is what I long for - "Do things, Tell people" and get inspired by others in the network!

ladon86 2 days ago 1 reply      
Didn't you guys already visit Game Closure? I think I remember seeing you at AOL over the summer. Congrats to both of you on securing great positions - the way you've presented yourselves definitely makes you easy to hire.
shashashasha 2 days ago 0 replies      
Reminds me of Derek Powazek's strategy: "Make something great. Tell people about it. Do it again." http://powazek.com/posts/2090

Great to frame that more in the context of "personal brands" vs "not SEO".

karterk 3 days ago 0 replies      
I think what you are trying to say invariably translates to people learning to enjoy the process and journey rather than the destination.

You have to do things because you really like doing them and hence you somewhat abstract yourself away from the actual results.

peeps502 3 days ago 3 replies      
From my experience (non-technical), it's more about who you know rather than what you do. But, people don't know you unless you do something. It's a vicious circle.
halayli 3 days ago 3 replies      
in summary, toot your own horn
Kiro 2 days ago 2 replies      
This is what I don't like about the startup community. I love making things but hate talking about them.

Of course I want my startup to get publicity, I just don't want my name attached to it. Solution?

orlandob 2 days ago 1 reply      
This is massively inspiring to me. This article is now hanging on my cubicle wall. Thank you, Carl.
A neat way to fold a sheet of paper wheatpond.com
252 points by emeltzer  1 day ago   30 comments top 10
redthrowaway 1 day ago 8 replies      
I wonder why this hasn't been a standard for maps for years. By far the most frustrating thing about road trips (pre-iPhone) used to be dragging out the map then trying to put it away again.

Also, as a total aside, this: "over their conventionally folded*" should be punishable by death. Putting an asterisk next to something then not explaining it is pure evil.

aymeric 1 day ago 3 replies      
Another neat way to fold a sheet of paper: http://pocketmod.com/
CountHackulus 14 hours ago 0 replies      
This is actually very similar to a method of doing tessellated origami with a grid of folds. [1] I wish I could find a picture of it, but I've got an origami tessellation I folded that has this same 1 degree of freedom that allows it to be folded and unfolded by pushing and pulling. The only difference is that instead of going flat, it turns into a spiral when you compress it. Very similar to [2]. This is all based on the work of Taketoshi Nojima.

[1] http://spacesymmetrystructure.wordpress.com/2009/03/24/origa...
[2] http://www.flickr.com/photos/53416300@N00/1028178660

Argorak 15 hours ago 0 replies      
I am pretty sure that this is similar to how the German Falk maps have done it for ages. Sadly, I cannot find any English explanations.

Update: it's not. But if you are interested, the patent (in German) with a visual explanation can be found here:


bigethan 13 hours ago 0 replies      
This is the neatest map folding I've ever seen: http://www.thezoomablemap.com/the-map.html
raldi 1 day ago 0 replies      
This company has been doing it for years: http://www.zcard.com/
mise 17 hours ago 1 reply      
I give up.

I wasn't able to get it into the compact folded shape. As far as I can tell, I have the correct folds, but I did some guessing to follow each step.

guylhem 1 day ago 1 reply      
Brilliant! I'm eager to try that at work tomorrow for my todo list. I like it small (fitting in the pocket), but the way I fold it, it quickly falls into pieces.
miles_matthias 1 day ago 1 reply      
Has anyone implemented this digitally with CSS animations?
Splash screens == sloth asserttrue.blogspot.com
250 points by kylehansen  3 days ago   242 comments top 65
JS_startup 3 days ago  replies      
I don't understand what he's angry about. Photoshop, Microsoft Office, OpenOffice, these are all enormously complicated programs that require resources to load. That doesn't make them bloatware and it doesn't make the programmers lazy.

His proposed solution sucks too. Show the UI while it's loading so the user can impotently click around waiting for the program to "turn on". Windows does this when it boots up and it drives me insane -- if the OS or program isn't in a usable state when you show it to me, don't show it to me.

Loading speed is just one of a multitude of factors that come into play when making software. According to this Adobe employee it should be the chief concern, dominating even things like features, usability, UX, cost, and technical debt.
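[For readers curious about the middle ground being debated here: one common compromise between a splash screen and a fake-ready UI is to show a responsive shell immediately and load heavy state on a background thread, blocking only if the user touches a feature before it's ready. A minimal sketch in Python -- the class names and the simulated delay are hypothetical stand-ins, not anything from Photoshop or the article:]

```python
import time
import threading

class HeavyResource:
    """Stand-in for an expensive startup task (fonts, plugins, brushes)."""
    def __init__(self):
        time.sleep(0.2)  # simulate slow disk/parse work
        self.ready = True

class App:
    """Appear 'on' immediately; load heavy state in the background."""
    def __init__(self):
        self._resource = None
        self._loader = threading.Thread(target=self._load, daemon=True)
        self._loader.start()  # returns right away; UI thread stays free

    def _load(self):
        self._resource = HeavyResource()

    def resource(self):
        # Block only if the user needs the feature before it's ready.
        self._loader.join()
        return self._resource

app = App()          # "instant on": constructor returns immediately
r = app.resource()   # first use may wait briefly; later uses are free
print(r.ready)       # -> True
```

[The trade-off both sides above are arguing about is visible in `resource()`: the app looks ready instantly, but an eager click still waits -- it just waits at the moment of use instead of behind a splash screen.]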

Maro 3 days ago  replies      
This weekend the 3G mysteriously stopped working on my iPhone, so after months of use, I rebooted it. Rebooting the iPhone takes 1+ minute, and there _is_ a kind of splash screen shown during that time. The reason it appears instantaneous during normal use is that it doesn't actually boot or load, it just turns the display back on. Also, many iOS apps actually have splash screens, you just don't see them very often as they're usually already running in the background (big apps like Facebook, and even small ones like PCalc Lite or Quotes).

You can kind of achieve this with Windows + Adobe stuff too. Just don't quit Photoshop when you're done using it, and don't turn the computer off, put it to sleep. If you have an SSD (like your iPhone), then swapping Photoshop back into main memory will also be much faster.

Of course the OP is right, Adobe stuff is bloatware and sucks. Fortunately for non-pro designers, there are alternatives like Pixelmator and Paint.NET.

feralchimp 3 days ago 4 replies      
Boo and/or hiss. If you hate splash screens, use simpler tools or get more RAM. When Photoshop finishes loading, it's ready to kick ass. The Finder, on the other hand, taunts with its almost-readiness. Faking readiness is far worse than setting and honoring expectations.
alex_c 3 days ago 3 replies      
A splash screen basically tells me, in very clear-cut terms, that my time is worth nothing whatsoever. It's a fresh reminder that users' needs don't count as much as programmer convenience does.

So... how much IS your time worth? How much extra would you be willing to pay for instantly-responsive applications? Programmer time isn't free, either.

zoul 3 days ago 4 replies      
I hate Adobe software with passion.

I used to love Macromedia Fireworks, so I bought a copy of Fireworks after Adobe acquired Macromedia. I had to jump through some crazy hoops to prove that I did not steal the product, and that was nothing compared to what I had to do after buying a new computer -- it turns out that I was supposed to deactivate the product on the old one and then activate it on the new one. This is not what you do to your customers. I don't even want to start on the issue of software quality or customer service (I once made the mistake of trying to report an i18n issue with my copy of Fireworks).

I swore there would be no more Adobe software on my computer, and I even disabled Flash in my primary browser. I am lucky that I can get by with the new wave of Mac graphics editors like Pixelmator. I was so happy paying for that product on the Mac App Store, getting a copy, and having to do nothing else to make it work. I was so happy that it starts immediately, that it has a decent UI. It's not feature-complete, and it's got its own bugs, but it's software and an experience I am willing to pay for. Unlike Adobe. (Which is a company I once liked, being a typography geek who typeset our school magazine in an old copy of PageMaker.)

luser001 3 days ago 2 replies      
I ranted about this a few days ago. My favorite idiocy on Windows is their background "optimization" of .Net binaries.


This service appears to perform a large number of I/O requests (probably reading/writing a large number of small files). It will totally consume the disk seek capacity, massively slowing down every other program that is also doing disk I/O (e.g., freezing Firefox awesomebar searches).

The programmers at the linked blog congratulate themselves on how this mayhem lasts only 10 minutes on a typical computer.

Note: I do not have an SSD. :( This and other idiocies will force me to get one next time.

munin 3 days ago 2 replies      
>When I turn my computer on, it should just be on. Ready to go. Kind of like -- well, like my phone, for example. Which is, after all, my real computer.

what phone does he use? every phone that I know of takes at least two minutes to come up from a cold start, and has for the last ten years...

orthecreedence 3 days ago 1 reply      
Whether or not the programmers of Photoshop can make their app run faster, I have to say it's pretty sad that waiting 10 seconds is worth whining about. I understand this is unacceptable for a web app's page load time, but for a desktop app that loads an enormous amount of resources once before letting you use it seems acceptable to me.

I think the real problem is that we are all so incredibly spoiled that waiting 10s is a huge inconvenience. I'm not saying it's not annoying. I don't like it either... that squirming feeling you get when you expect something to be fast and it isn't. But, like, get over it.

On a scale of 1 to 10, 10 being the most important issue in the cosmos, waiting 10 seconds for an app to load would be about 0.0000000000000000000000000000000001 (if that). Perhaps there are more important things to write about.

MikeCapone 3 days ago 1 reply      
"Just don't quit Photoshop when you're done using it, and don't turn the computer off, put it to sleep. If you have an SSD (like your iPhone), then swapping Photoshop back into main memory will also be much faster."

My experience in OS X is that Photoshop has to be restarted very frequently or it just eats up more and more memory until it uses it all. Seems like bad memory management.

d4nt 3 days ago 0 replies      
The title made it seem like this was going to be more of a whistle-blowing blog post by an insider.

Nevertheless, this kind of rant is interesting in so far as it points to what I think is a growing momentum behind the UNIX philosophy. What I mean by that is: small programs that do one thing well and interoperate with others.

I wonder whether the underlying cause of this shift in thinking is the levelling off of CPU cycle speeds. Time was when the performance of something like Photoshop or Office would just prompt you to buy a better computer. People would assume their computer had gotten old or out of date. Now that getting new hardware doesn't magically fix things people are asking why certain things take so long and comparing programs' performance.

rauljara 3 days ago 5 replies      
Louis CK has a fantastic rant about people complaining about technology being too slow (and technology, in general). It helped to put my own rage in perspective. I find myself getting less pissed at technology since watching it, and I kind of hope that lasts.



zokier 3 days ago 0 replies      
I for one prefer seeing a splash screen for a couple of seconds[1] rather than having a laggy/half-functional UI appear instantly. Actually I'm more annoyed by the Windows login, which shows the desktop early while it's still starting up background apps and is thus unusably slow.

And the comparisons to mobile devices are just ridiculous. At least in my use, I'd estimate that on average apps on my desktop launch much more quickly than on my Android phone. And I challenge anyone to find a smartphone that boots up faster than a fresh Windows 7 install on a SSD (or even on a regular HDD).

[1] I just timed: Photoshop 5 seconds, Word 2 seconds, both from warm caches. And that's with a 4 year old budget laptop.

jarrett 3 days ago 3 replies      
The computers I use now are far more capable than the ones I used in 2005, in terms of CPU, RAM, disk space, GPU, and anything else that should affect an app's boot time. Today, Photoshop takes just as long to boot as it did in 2005. Yet the features I use today are almost identical to those I used in 2005, the only exception being the ability to import camera raw images. (And I'm sure that the binaries for importing camera raw images don't come anywhere close to accounting for all the bloat in that time.)

So, my computers have gotten better and better, while the demands I place on them have remained basically constant, yet boot time stays the same. The only thing I can think to blame is bloat.

jakeonthemove 3 days ago 1 reply      
What's the big deal? Showing a splash screen for less than 5 seconds (oh yeah, get an SSD) until everything initializes is better than opening a non-functioning UI ("the cloud" is not a solution - what are you going to do when your connection goes bad?). It would be nice to not have it, but for such a bloated program as Photoshop or After Effects, it's pretty much expected...
oofoe 3 days ago 1 reply      
A lot of people here seem to think that complaining about a five to six second start time is just whining. I believe you're wrong -- it is a legitimate and valid complaint and here's why:

Most software can't run for very long. A lot of high-end software can be /very/ unstable (hello, Maya! ;-). The user is not just starting that software once at the beginning of the day for a long productive day of work; they are all too often starting it again after a crash. Or after they shut it down to get enough RAM to run Excel. Or after rebooting Windows for /another/ mandatory update from IT.

I once had to restart a particular high-end compositing package thirty times in an hour. Twenty seconds of the average two minutes were spent staring at the splash screen. But I had to do it, because I needed thirty frames rendered and a particular plugin crashed after running one frame.

The user is not starting your software to admire your credits or the clever graphic design of your logo. They are starting your software to /use/ it, to get a job done, a job that they have in mind, and any distraction at all (even a harmless splash screen) is a new stumbling block on their way to resuming their task.

I heard of a study done back in the mainframe days that determined that anything that took less than two seconds felt instantaneous to the user. However, anything longer (even five seconds!) took "forever". In my years of programming, I have found nothing to disprove this (possibly) apocryphal conclusion.

Your user's time is valuable. Your user's mental state and flow are valuable. As programmers on Hacker News, we all know about concentration and flow, right? Extend your users the same courtesy you expect from your tools. Help them keep things going, don't break their rhythm and don't let them lose context.

stdbrouw 3 days ago 0 replies      
As much as I hate bloat and love svelte web apps that do just what I need and nothing more... I think we also need to accept that some people in some fields of work do pretty advanced stuff, and need pretty advanced software to do that stuff.

Adobe InDesign in particular is a huge piece of software with tons of config options and obscure features and what-not, but I've never ever found a feature that made me think "what kind of useless, bloated bullshit is this?" I think Adobe teams fight very hard against bloat, but their business is based around power users and that just leads to a different kind of software.

jsz0 3 days ago 0 replies      
With a SSD and plenty of RAM load-times are a non-issue for the most part. I often leave applications I use frequently (or even infrequently) open for weeks at a time. There's not much downside in doing this besides some extra clutter in your dock/taskbar. For any fairly modern machine an upgrade to SSD + 8GB of RAM will mostly solve the problem. That's about $250-$300 to avoid the problem almost entirely. I feel like this is a byproduct of the race to the bottom of PCs over the last 5 years or so. Just because you can buy a $500 computer with a spinning disk and 2GB of RAM doesn't mean you should buy it. There are trade-offs to consider.

Developers also have these trade-offs to consider. How much money can a developer justify spending to optimize load-times when there is an easy and relatively inexpensive fix available to all consumers? If they deliver 10 second load times on an average PC that is acceptable enough. If a user wants 2 second load-times they can buy a better computer. It's always been the case with PCs that your results vary depending on the hardware you purchase.

shocks 3 days ago 1 reply      
This is so true. My friend has been a games programmer for most of his life, where you have 16 milliseconds to do everything and it pays to be smart, use clever algorithms, etc. A while ago he took a break from the games industry (he was moving country a lot) and began working at a very large company producing software for the film industry.

It was less than six months before he got sick of the "if it takes a long time, don't optimise - stick up a progress bar" attitude and moved out of the country chasing a real programming job again...

yummyfajitas 3 days ago 0 replies      
I don't use photoshop, but my emacs takes 3-5 sec to boot up.

So let's see. Supposing I use emacs for 3 hours, 5 seconds is precisely 0.04% of the time I spent using it. I'd prefer the emacs dev team focus on new features for me to use during the other 99.96% of the time I spend using it.

iPhone/Android apps are fast because they need to be. Often you use them for 30 sec-5 min. On a 30 second use, 5 seconds is 16% of the time you spend using the app.

dgallagher 3 days ago 0 replies      
I was hoping Adobe would patch Photoshop CS4 on OSX Lion so it stops crashing when quitting, and then hanging indefinitely if you try to re-launch it again without rebooting first. It's the most expensive software I've ever purchased which doesn't know how to [NSApp terminate:nil]; properly.
tjoff 3 days ago 0 replies      
Imagine if your phone or iPad took as long to boot as a Windows laptop. Would you use it? Would it be usable?

My Android phone (Nexus S) takes longer than my 6-year-old Windows laptop does (sure, it's updated with an SSD, but even without it would be a close call), and my mother's tablet isn't faster either. Solution? I never turn either off. Problem solved.

Hibernate for longer sessions, sleep if battery isn't important and I know I will use it for the next few days.

On my workstation I never turn off heavy applications anyway. On my laptop I'm limited by my 32 bit Win7 OS so that's a burden, but I can work with it. The 8 GB I have in my workstation isn't ideal either, tempted to get a 24 or 32 GB machine but probably can't justify replacing the current machine just yet. I see no reason to get less RAM on a new setup though.

Also I really appreciate having the whole application ready. Having parts of it lazy-load can be way more agonizing than a slow boot. And with an SSD you don't have to wait long anyway; I always tend to fiddle with other windows during the boot, so I'm not that annoyed. The splash screen itself can be annoying if it claims window focus or is just in the way, but they exist for a reason. There should be something telling me that the application is booting (firing up a task manager doesn't count). I agree that there are less obtrusive ways to do this than a splash screen, though, and OS developers should be blamed for not realizing this.

rwmj 3 days ago 0 replies      
Every, I'd say, 6 months or so, someone on the qemu mailing list posts a patch to add a splash screen to qemu.

I don't understand why people persist in this:

(1) Most hypervisor start-up screens aren't even seen by most users.

(2) Spend that time and effort making the boot faster, not slowing it down with utterly useless stuff!

aycangulez 3 days ago 1 reply      
I still use Photoshop 6.0 circa 1999. It loads in a few seconds max, and it has all the core functionality necessary for web graphics work. In fact, even PS 5.5 would do fine because it was the first version that supported Save for Web.

If you are using the latest version of Photoshop for web work only, you are wasting time and money for tons of features you don't really need.

nitid_name 3 days ago 0 replies      
I'm not sure I'm a big fan of what it would take to fix this. Namely, "speed loaders" that sit in memory and eat up RAM. I've already seen what adobe's PDF speed loader can do on a system.

Frankly, I'll manage my applications myself.

Are you tired of waiting for Photoshop to launch? You might want to try leaving it open...

readme 3 days ago 0 replies      
I suppose this is a good place to mention that I wasted 2 hours of my day zeroing my MBR because of Adobe. Apparently Photoshop installs some DRM crap on it.

Proof: http://www.amazon.com/Adobe-Photoshop-Extended-CS4-VERSION/d...

Proof: http://ubuntuforums.org/showthread.php?t=1661254

I will not be using an Adobe product other than Flash again. I would ditch Flash as well, but last time I checked, gnash was not good enough to replace it yet. That's not an issue though. I'll just wait for HTML5 to kill it.

toast76 3 days ago 0 replies      
I hate it when people say "I switch my phone on and it starts instantly". No it doesn't. My iphone takes about a minute to start...of course I never switch it off, so it's not a problem.

If you never "turn off" Photoshop or Office, they too will appear the instant you click on them. Amazing!

davesims 3 days ago 1 reply      
OP has a point. Not a f*cking important point, but a point. UIs can and should be faster, Adobe and MS are slow to catch on to current user expectations, and splash screens are annoying relics of a bygone era of bloated desktop UIs.


But in the end, to me it's just one more instance of Everything's Amazing and Nobody's Happy.


I guess I just don't get the tone. Why the righteous indignation? This not Human Rights we're talking about, it's software.

peteboyd 3 days ago 2 replies      
The video game industry has the same issue. Initial bloat screens. A lot.

Skyrim, for instance, takes anywhere from 30 seconds to a minute to load the initial screen, which is just the logo of Bethesda. You know what is on the initial screen: a button for me to click that says Start. Once I click that button, then I have to hit Continue for it to load the last save point. Then another 30 seconds to 1 minute to load my game/map point.

So basically, two actions and over a minute later, I am finally in the game. A much easier thing would be to just load my last save/map point. I can turn on the console, and know to come back in a minute. Just have it paused until I am ready to play. If I wanted to say restart the entire game, I could go into the menu system from there.

I find this the case on almost all games. FIFA, Madden, Bad Company, Call of Duty. Just start the game I played last. If I regularly play online, go and find me a server automatically. Auto load my game, unless I hold down the start button or some other button that would then default to the menu.

I don't care that Bethesda or EA made my game, I just want to play.

johnohara 3 days ago 0 replies      
Splash screens are leftovers from the old-old days of text-based computing when you had to mask the fact that the program being loaded was going to take awhile (>30 secs).

Hardware, software, network, total users, i/o, it didn't matter, it was better to say something than let users sit there at a terminal thinking nothing's happening.

Or worse, tell their boss the "system's down again."

No excuse for them today. They just say "big app", it's gonna be awhile.

tambourine_man 3 days ago 0 replies      
In all fairness, Photoshop startup time did get a lot faster at version 10 (CS3), when they changed the type system to load only when needed.

But yes, there is still a long way to go. I don't think we need iOS-like hacks such as showing a PNG of the GUI to trick you into thinking loading is done. That solution has its own set of issues (responsiveness, not precisely matching your previous state in case of crash, etc). A desktop is 1000x faster than my phone; we can do better.

Modularizing, as shown above, is hard, but probably the best way to go. As programs grow to the size of a small OS, they should be treated as such.

butterfi 3 days ago 1 reply      
I used to adore Adobe, but it all changed over time. It started when I only needed Photoshop, and yet Adobe insisted I install several other chunks of software for services I didn't want or need. In the end, I finally found a solution that lets me get my work done without all the excess Adobe bloatware. Thank you Pixelmator!
jiggy2011 3 days ago 0 replies      
My computer is pretty responsive, at least as much as my smartphone is and I'm not running anything special (No SSD, 4GB of RAM and an older model of quad core).

Ok, I wait maybe 2-3 minutes for the computer to start (not even that if I just put it to sleep instead of turning it off). I can work for ~8-10 hours so those minutes aren't a big deal.

There are a few programs that are particularly slow to start (eclipse, steam, openoffice) but that's mostly just because there is a lot of code to load and I'm sure a comparable application for a smartphone would be just as slow.

I do run Linux most of the time though so there is probably a bias there towards smaller non-monolithic programs there and not having registry bloat helps.

However I still remember the days of Windows 98 and how horribly slow everything was back then on anything apart from a freshly installed machine and having to wait a full minute for Office 97 to start, we've come a long way since those days. I can't see it taking long before every PC comes with an SSD drive (which is probably part of the reason smartphones seem responsive as well as having a well warmed cache).

As for doing something like running a cloud instance of the program and then somehow syncing back to the desktop app seamlessly, that seems like it would add an insane level of extra complexity and problems, which is exactly what he seems to be against.

codesuela 3 days ago 0 replies      
when I read this I instantaneously thought about this rant from Louis CK [Everythings Amazing & Nobodys Happy] http://www.youtube.com/watch?v=8r1CZTLk-Gk

Linux Mint and Windows 7 take 10 seconds to boot on my laptop with an SSD. That's amazing. Just a few months ago I would go make a sandwich while my PC booted.
If you have a problem with splash screens, alt-tab to the browser of your choice. If you think your PC boots too slowly, take out your smartphone and play with it. This is not to say developers should be wasteful with users' time. Web developers especially know how important page speed is. But it's a complex program; just give it a few SECONDS.

Also one should mention Steam (the game distribution platform). If you are bored in between rounds you just bring up the Steam panel and use the embedded browser to browse the web or chat with friends. But those delays can at times be a minute long. Sure, you could embed an email client in the OS that comes up long before the main GUI is loaded, but you won't get to read much in 10 seconds.

Too 3 days ago 0 replies      
They could at least display a "tip of the day" or hotkeys while you are waiting, to make the waiting time at least a bit productive. Why don't programs have this today? Many old programs had this, but only after starting up. Displaying it while the program is loading would be much smarter.
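The mechanics are simple: run the real initialization on a background thread and cycle through a list of tips on the splash screen until the loader signals completion. A rough Python sketch of that pattern under stated assumptions (the tips, timings, and `slow_startup` stand-in are all invented for illustration):

```python
import itertools
import threading
import time

TIPS = [
    "Hold Shift while dragging to constrain proportions.",
    "Use the History panel to step back more than one undo.",
]

def slow_startup():
    # Stand-in for the expensive work: loading fonts, plugins, etc.
    time.sleep(0.5)

done = threading.Event()
worker = threading.Thread(target=lambda: (slow_startup(), done.set()))
worker.start()

shown = []
for tip in itertools.cycle(TIPS):
    if done.is_set():
        break
    shown.append(tip)   # a real splash screen would render this text
    done.wait(0.05)     # rotate to the next tip, or stop early when loading ends

worker.join()
print(f"showed {len(shown)} tip(s) while loading")
```

The key design point is that `done.wait(timeout)` doubles as both the tip-rotation delay and an early-exit signal, so the splash never lingers after loading finishes.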
marquis 3 days ago 0 replies      
Network-reliant apps should check for connectivity during loading; this can take a few seconds to verify everything. Would he rather the app not do any environment validation and present errors when trying to run specific tasks? Can you imagine working on a complex document only to find out when saving that you don't have enough disk space or your network connection isn't valid. Splash screens are an important time to determine whether everything you need is there.
vacri 3 days ago 0 replies      
What a troubled and complex world we live in, when large, complex programs take 5-10 seconds to load.

I'd much rather my devs work on useful features instead of trying to fool me into thinking my productivity suite is loaded a few seconds earlier (and also not dealing with all the bugs that that introduces).

easterisle 3 days ago 1 reply      
Many people here are talking about how iOS works vs a desktop OS, and that's the big difference - but it's not about the SSD. If you have a high-powered computer then it's no problem to leave the Adobe suite running and then just grab it from the task bar and get to work. The issue is that most people don't have high-powered computers, at least not ones powerful enough to run Photoshop constantly while doing other things (HD videos, Netflix, Chrome+Firefox, etc…. non-work things). Adobe and other big software companies are going to continue to push the envelope in terms of resource use no matter how fast our computers get.

Has Photoshop gotten faster in our lifetimes? No! It just wants more and more resources with every release. So what is the difference between how apps behave on iOS and how they behave on the desktop? Backgrounding. Very, very few desktop apps (Ableton Live, with its "freeze" type features, is one) have any way for you to shut down portions of the app or put the app to sleep so that you can start it up quickly. On iOS this is how apps are expected to behave, and it shows.

But, even if Apple did bring backgrounding to the desktop, I imagine Adobe would be one of the last to support it - and I only say this because of my previous personal experience with Adobe products like Flex, LCCS, etc…

I just thank code I'm not a designer...

gavanwoolery 3 days ago 2 replies      
Photoshop really is not that complex - you could write a photoshop clone that would load in under a second, easily. But it loads all sorts of stuff you do not need up front - many fonts, textures, drivers for scanners, cameras, and so forth. I think the real problem is that Adobe seems to keep patching an already bloated code base - I think if they started fresh they could redesign it more efficiently...just my humble opinion...
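The "don't load what you don't need up front" idea the comment above describes is usually implemented as lazy initialization: record how to load each heavy resource (fonts, scanner drivers, textures) and do the work only on first access. A minimal Python sketch of the idea; the `LazyResource` name, `scan_fonts` function, and font list are all hypothetical, not Photoshop's actual internals:

```python
class LazyResource:
    """Defers an expensive load until the resource is first used."""

    def __init__(self, loader):
        self._loader = loader    # callable that does the expensive work
        self._value = None
        self._loaded = False

    def get(self):
        if not self._loaded:
            self._value = self._loader()   # pay the cost only on first access
            self._loaded = True
        return self._value

calls = []

def scan_fonts():
    # Stand-in for a slow startup task, e.g. walking the system font directory.
    calls.append(1)
    return ["Helvetica", "Garamond"]

# At launch nothing expensive runs; we only record *how* to load things.
fonts = LazyResource(scan_fonts)

# The cost is paid on first use, not at startup, and only once.
print(fonts.get())  # → ['Helvetica', 'Garamond']
fonts.get()
print(len(calls))   # → 1
```

Startup then becomes proportional to what the user actually touches, at the cost of a small first-use delay per feature.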
Jasber 3 days ago 2 replies      
About loading screens, I've always wondered why games don't try to get rid of these (maybe they do?).

Anyone familiar with why you have to have loading screens in games? Would it be possible to pre-load a level while you're playing?
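Some engines do exactly that: start loading the next level's assets on a background thread while the current level is being played, so the loading screen only covers whatever hasn't finished yet. A toy Python sketch of the pattern; the `load_level` function, its contents, and its timing are made up for illustration:

```python
import threading
import time

def load_level(name):
    # Stand-in for expensive asset loading (decompress textures, parse geometry).
    time.sleep(0.05)
    return {"name": name, "assets": ["textures", "geometry"]}

class LevelPreloader:
    """Loads the next level on a background thread while the current one plays."""

    def __init__(self, name):
        self._result = None
        self._thread = threading.Thread(target=self._run, args=(name,))
        self._thread.start()

    def _run(self, name):
        self._result = load_level(name)

    def get(self):
        self._thread.join()   # usually instant: the load finished during gameplay
        return self._result

preloader = LevelPreloader("level-2")   # kicked off while "level-1" is playing
# ... gameplay happens here ...
next_level = preloader.get()
print(next_level["name"])  # → level-2
```

The hard part in real games isn't the threading but the memory: holding two levels' assets at once is often what forces the loading screen.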

jnorthrop 3 days ago 2 replies      
I wonder how Kas Thomas feels about the credit roll that occurs at the start of a movie? You know the one where they show the logo of the movie studio, then the production company, then the producer, editor, et al.

I see the software splash screen in the same light. It is giving credit to those who have put it together. Now, obviously entertainment is different than a work application, but many of us feel that software is art, just like a movie, and that if someone wants to do a credit roll then they should.

indiecore 3 days ago 0 replies      
>Show me a screenshot that looks like Photoshop.

They tried this, everyone got pissed off and asked for a splash screen so they knew it was loading.

podman 3 days ago 1 reply      
Photoshop CS5 took roughly 4 seconds to start on my MBA. Is this guy seriously complaining about 4 seconds? Several popular iOS apps (and I'm sure there are some on Android as well) take longer than that to load...
glfomfn 3 days ago 0 replies      
What is so hard for people to understand? Photoshop needs time to load; because that time is quite significant, they give you some kind of feedback to let you know 'hey, the damn thing is loading, please wait'. What's so bad about that? What would the alternative be?

The title starting with "Adobe employee" tries to make it sound like it's a significant opinion regarding the matter, but the author of the article doesn't seem to be a programmer or to hold a position that deals with the process of making a program. What's even worse is the fact that he is completely clueless regarding the matter; he suggests "e.g., show a UI right away and let an instance of the program in the cloud operate against my gestures, until the local copy boots fully and can re-sync with me". Seriously??? I started wondering if I am being trolled at that point.

It takes 4 seconds to do a cold start of Photoshop on my laptop (which isn't a top-notch laptop); on an older computer and with previous versions of Photoshop it would take 10-15 seconds, which would still be fine, since the process doesn't block me from doing something else in the meantime.

lignuist 3 days ago 0 replies      
Another point is file size. Is it really necessary that every version is twice as big (just estimating...) as its predecessor?

At least for many websites this trend has been stopped, due to mobile requirements.

motoford 3 days ago 2 replies      
Don't you think it's odd he is complaining about splash screens when Adobe is responsible for the worst thing to ever happen to the web -- those stupid Flash intros? Talk about a waste of time: you had to wait for the time-waster to load and then wait for it to play.
darrikmazey 3 days ago 1 reply      
As a programmer I understand that perhaps this load time could be improved, and maybe splash screens are a lazy solution, but I wonder if it warrants this level of anger. Even assuming a generous 5-minute load time for Photoshop (I don't know, as I don't use it, but I doubt it's that much), and that you reload twice a day (once at the beginning of the day and maybe once after lunch), and you use it every weekday of the year, that's still less than 0.5% of your year.

I'm not saying losing 43 hours of your life per year is insignificant, but a proportional response seems appropriate. You in all likelihood spend more time waiting at red lights.

varelse 3 days ago 0 replies      
5-10 seconds? This guy is complaining about 5-10 seconds?

I hate bloatware. I despise behavior like the sloth and neglect Microsoft inflicted on Windows XP once Vista shipped, and I loathe the abandonment of Gingerbread phones like my late Droid G1 (unaffectionately nicknamed "the brick" in its final days) by Google once they entered the crunch phase for ICS, but 5 to 10 seconds?!?!?!?

Get a grip... If an app has any mandatory online component whatsoever (not that Photoshop does), its boot time is unavoidably non-deterministic. Good luck fixing that (not that there's any excuse for latency that could be avoided)...

jarek-foksa 3 days ago 0 replies      
I remember there was a Photoshop clone called Pixel which did a cold startup in less than 1 second, so Photoshop's sluggishness is definitely not caused by technical limitations.
mmuro 3 days ago 0 replies      
I agree with him, in principle, that apps should be lighter and faster. However, Photoshop is just a giant program that does lots of complex things. It's a burden I'm willing to put up with (for now) because I have yet to make the transition to a similar, lighter program.
veyron 3 days ago 1 reply      
Figure this is the right time to ask about SSDs .. Any recommendations for MBP SSDs?
Lockyy 3 days ago 0 replies      
What I find interesting is a story about Firefox Mobile.
It used to get complaints about it being slow to load and that it needed to be sped up.
The solution to this was that during load the browser would display an image of the browser as it is when loaded instead of just a black screen as it previously had.
Complaints about slow loading screens decreased dramatically. So yeah, I'll be keeping this sort of thing in mind.
This was all relayed to me by someone who attended the Firefox Mobile talk at FOSDEM.
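The trick generalizes: save a cheap snapshot of the UI on shutdown, paint it immediately on the next launch, and build the real UI behind it. A simplified Python sketch of the idea, using a JSON file as a stand-in for the bitmap a real app would save; all names here are hypothetical, not Firefox's actual mechanism:

```python
import json
import os
import tempfile

SNAPSHOT = os.path.join(tempfile.gettempdir(), "ui_snapshot.json")

def save_snapshot(state):
    # On shutdown: persist a cheap picture of the last UI state.
    with open(SNAPSHOT, "w") as f:
        json.dump(state, f)

def launch():
    # On launch: show the snapshot instantly, then build the real UI behind it.
    first_frame = None
    if os.path.exists(SNAPSHOT):
        with open(SNAPSHOT) as f:
            first_frame = json.load(f)   # instant: just blit the old frame
    # The slow part runs after something is already on screen.
    real_ui = {"toolbars": ["brush", "layers"], "ready": True}
    return first_frame, real_ui

save_snapshot({"toolbars": ["brush", "layers"], "ready": False})
first_frame, ui = launch()
print(first_frame is not None, ui["ready"])  # → True True
```

Nothing actually loads faster; the user just sees something familiar immediately, which is often what the complaint was really about.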
ec429 3 days ago 0 replies      
Solution: don't use monolithic applications with bloated GUIs. Instead, use small, simple tools, driven from the command line where possible. If the problem domain is naturally graphical, have a lightweight graphical frontend driving the small simple tools through shellouts or a plugin or library interface.

In other words, use UNIX.

This isn't difficult to understand, guys.

asherjb 3 days ago 0 replies      
A splash screen seems like a pretty innocuous way of letting you know that the app is not ignoring you. I imagine that not too many people take such violent aesthetic offense to them as this guy. Still, they probably allow a longer load time than users would otherwise find acceptable in their absence.
prtamil 2 days ago 0 replies      
Seeing the splash screen once is not a problem. But if, for example, Visual Studio crashes often and makes you see the splash screen again and again, then, as the author mentioned, I would feel like committing seppuku. They should try caching or something.
dhm116 3 days ago 0 replies      
At least they aren't following the movie industry's model of forcing you to watch paid advertisements first...
dustingetz 3 days ago 0 replies      
if gimp can insta-load, and photoshop gives me a 60 second splash screen, designers are still going to use photoshop. i would prefer Adobe continues to invest in the features that matter, not startup times.
beedogs 3 days ago 0 replies      
hey, I have a solution. Buy a damn SSD if you want to load a 1GB program faster.
steele 3 days ago 0 replies      
The WCMS he works on also has significant startup time, even beyond JVM warm-up. Ah well.
jakejake 3 days ago 0 replies      
With an SSD drive photoshop loads very fast. It only takes about 2 seconds.
karolist 3 days ago 0 replies      
I don't think I've even seen a splash screen since SSDs became affordable in the past 2 years. Also, I'm a "give credit where it's due" type of guy, and the surnames of the people who made the great project I'm launching don't bother me.

This whole blog post reminds me of the "first world problems" meme.

jwarzech 3 days ago 0 replies      
I stopped reading at "Run my gestures against an image in the cloud"
kamjam 3 days ago 0 replies      
I await his next blog post, "I got fired for throwing all my toys out of the pram," with great anticipation, where he explains how he threw a massive hissy fit instead of taking his (some good) points up with the team and management.

I don't agree with this faking rubbish though; if it's not usable then I don't care. There's plenty of bloatware to get rid of in these programs; I doubt the splash screen is the biggest of worries!

techdog 3 days ago 2 replies      
I don't think the guy will lose his job. Do you? He is addressing a very real issue. Adobe should read this.
mariusmg 3 days ago 0 replies      
Adobe is the definition of bloat.
nixle 3 days ago 0 replies      
SSD, problem solved.
New version of Apache HTTP Server released apache.org
241 points by wyclif  2 days ago   35 comments top 8
atuladhar 2 days ago 1 reply      
From the announcement: "Performance on par, or better, than pure event-driven Web servers."

Is there any more information on how they arrived at this conclusion?

selectnull 2 days ago 0 replies      
With mod_lua, embedded lua interpreter.

I just couldn't resist, I had to try it at once. It works! (No pun intended, for those who get the reference.)

edit: fixed typo

ck2 2 days ago 2 replies      
Between this and PHP5.4, LAMP performance is finally progressing.
nextparadigms 2 days ago 1 reply      
So no SPDY implementation then? Too bad.
ZenDan 2 days ago 1 reply      
It would be interesting to see benchmarks compared to nginx
tutu55634 1 day ago 0 replies      
Micro Benchmark of Apache 2.4 vs Nginx 1.0: http://blog.causal.ch/2012/02/micro-benchmark-apache-24-vs-n...

Event MPM seems unreliable :-(

madaxe 2 days ago 4 replies      
I thought I saw a squadron of pigs soar by this morning. Too little, too late, I think, though, as nginx & friends have devoured their market share over the years, and it's going to take a good year or two for distros to start supporting 2.4 - probably 7 or 8 years for RHEL!
How Forbes Stole a New York Times Article and Got All The Traffic nickoneill.com
242 points by cnolden  4 days ago   60 comments top 26
danso 4 days ago 1 reply      
Here's why the Forbes blog post worked: it was short and to the (most interesting) point.

I'm sure Duhigg (the author of the NYT piece) would agree that neuropsychology research largely shows that readers' attention spans are short and easily influenced by the first few grafs of a story.

In fact, any HN user probably has seen the phenomenon where link-bait-titled stories get hugely upvoted despite the actual body text lacking adequate corroboration.

What the NYT should do next time is have one of its army of prominent site bloggers recap the interesting facets of the story. It's a testament to Duhigg's work that there are many pieces of it that by themselves could make for captivating posts. It's up to the NYT to capitalize on it.

maratd 4 days ago 4 replies      
This illustrates the business model NYT is pursuing. The NYT simply cannot put out an article with a headline like that. They would lose subscribers. People who subscribe to the NYT expect a higher level of discourse than "Target knows you're pregnant". Interestingly, Forbes pursues a dual strategy. I doubt they would publish a title like that in their magazine. On the web though, they might as well be The New York Post.
rweba 4 days ago 2 replies      
(1) I object to the use of the term "stole" - there is no indication that the Forbes blogger did anything unethical. She gave links and full attribution to the NYT article and therefore helped to promote it. YES, the quotes from it are more extensive and lengthy than you would normally see, but on the other hand it IS a NINE page article, so I am pretty sure the excerpts still fall under Fair Use.

(2) The article in question is a feature article in the NYT Sunday magazine, which is where they put the long in-depth articles that took months to investigate. These are meant to be Pulitzer Prize-level pieces that will get people talking and make a big splash in the news cycle. This explains both the length and the title. There is NO WAY the NYT Sunday Magazine is going to lead with a sensationalist headline like "How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did." That smacks of The National Enquirer or something. The title they actually used ("How Companies Learn Your Secrets") is not THAT bad either - it got me to click on it when I saw it.

(3) Most importantly, I don't see how Facebook Likes can be the sole metric of an article's "success." This week's NYT Sunday Magazine is just officially coming out today. This is a front-page article. Lots of people are going to be reading, talking about, and emailing it all week. And when they do, they are going to send the link to the source, and the NYT will get the credit.

(4) Lastly, no evidence is given that the TITLE was the SOLE reason why the Forbes post went viral. It is an interesting topic and the Forbes blogger adequately summarized it, making the post very "shareable." The promotion and SEO strategies of Forbes may also have helped.

So in summary the Forbes blog and the NYT magazine are very different types of publications and it looks like they both succeeded in what they were trying to do.

jawns 4 days ago 3 replies      
Oh man -- he got the title of the magazine wrong! It's Forbes, not Fortune.

Anybody care to edit the title of the HN submission for accuracy?

forrestthewoods 4 days ago 1 reply      
The title was better for Forbes, but the article length was more relevant I think. Forbes article was two short pages while the NY Times was 7 long pages. I have half a dozen long NY Times articles bookmarked to read later. I don't know if I'll ever get around to it.
hellosamdwyer 4 days ago 0 replies      
Let's see... the NYT article was the cover story for the New York Times magazine, a high-quality long-form read with a circulation of 1,623,697/week [wikipedia]. The high quality of these long-form articles is one reason why people pay to read the Times. Presumably, the Forbes post will not be in print.

The NYtimes.com has 16.3m monthly US visits, Forbes has 10.5m. [Compete] The NYT article has 435 comments (sign of high engagement) v. Forbes' 155.

I'm not sure how he pulled the total FB share data for the NYT article - they don't display that sharing information in the same way Forbes does.
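One plausible way to pull comparable share numbers: Facebook's public Graph API exposed an aggregate share count per URL at the time. The endpoint and response shape below are assumptions modeled on that era's API, and the sample payload is made up for illustration:

```python
import json

def share_count(graph_response: str) -> int:
    """Extract the aggregate share count from a Graph API-style
    JSON payload (e.g. a response to graph.facebook.com/?id=<url>).
    Returns 0 when the field is absent."""
    data = json.loads(graph_response)
    return int(data.get("shares", 0))

# Made-up sample response, for illustration only:
sample = '{"id": "http://example.com/article", "shares": 12345}'
print(share_count(sample))  # -> 12345
```

Comparing two URLs this way only measures engagement with the exact URL queried, which is one more reason a single sharing metric can mislead.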

In short, despite the validity of Nick O'Neil's main point - that a more descriptive title and a synoptic treatment can travel well - his rhetoric is more than a little overblown. Details matter, and what the Times article includes is deep context, originality, and above all, diligence.

As far as a regurgitative blog post making anyone's career... ha, I guess? Only if you want your career to be limited to that activity. The Forbes writer knows it - that's why she includes 6(!) links to the original article, as well as a plug of the original writer's upcoming book. Careers are built on respect, and the most valuable quality a writer or article can have is credibility. Otherwise, it's rubbish, no matter how many people buy it.

And, ultimately, the people who you want to respect you will know you make rubbish.

jbellis 4 days ago 0 replies      
Personally, I tried to share the original NYT article, but it was behind a registration-wall so I shared the Forbes one instead.
antoncohen 4 days ago 0 replies      
I don't see how Forbes "stole" the New York Times article. If anything they helped drive traffic to it, anyone interested in reading the 9 page article will do it. The 9 page NYT article itself was based off a 400 page book that is about to be released. The NYT article isn't stealing the book, anyone interested in reading 400 pages on the subject will. The NYT has a blog about news articles (http://thelede.blogs.nytimes.com/), if they wanted to blog about their own article they could have.

There are good reasons to have the same information published with different levels of detail, they target different people, and can help lead people who are interested into reading the more detailed versions. For example the NYT wrote a 2 page article entitled "Flaw Found in an Online Encryption Method," which was based on a 17 page research paper. The NYT didn't steal the research paper. I personally think the NYT article on encryption was a scare story lacking in almost all technical detail, but it helped publicize research so people interested in the subject could read the full paper.

kmfrk 4 days ago 2 replies      
Are Forbes writers paid per post/traffic like Gawker writers are? I can only imagine this is the incentive that led to this kind of crap.
wingo 4 days ago 3 replies      
It's definitely relevant here on HN -- a good article with a bad title probably won't go anywhere. A good article with a great title will stay on the front page for hours.
keithvan 4 days ago 0 replies      
If anything, the Forbes reblog shows how oversensationalized media is expected to be. Even in academia and many peer-reviewed journals, there is a shift towards witty or funny titles in the form of "attention grabber: what this paper really is about". The colon is an imperative. I give an example: "Looking for My Penis: The Eroticized Asian in Gay Video Porn". You can find these examples all throughout peer-reviewed journals and books -- this one came from a textbook: A companion to Asian American studies, published by Wiley.

Even academics need to grab attention, too, and it's part of the product of the information economy (where there is a surplus of information) and a scarcity of time (i.e, attention).

jc123 3 days ago 0 replies      
Tangential, but lede is an interesting word and I always thought it was just "lead".
Example: In the era of Linotype and hard copy, an editor might rework a paragraph (or graf) in a story, circle it and give instructions to composing to have it moved to the top by using the word "lede." If the editor wrote "lead," the typographer might think the editor simply wanted "leading" or spacing inserted.
executive 4 days ago 0 replies      
How Nick O'Neill Stole An Article Forbes Stole From The New York Times And Got All The Traffic
benologist 4 days ago 0 replies      
This can't be the NYT's first brush with this business model; it powers every major blog these days, since The AOL Way leaked and everyone outside of AOL realized they were doing it wrong.

Engadget have tags for "New York Times", "NewYorkTimes", "NYTimes", "NYT", "The New York Times" and "TheNewYorkTimes" for the articles they hijack.

jrockway 4 days ago 0 replies      
I liked the NYT article a lot more. I'd rather read one article for 10 minutes than 10 articles in the same amount of time.

In the end, I think the market agrees: people pay real money to read the New York Times, but nobody pays real money to read Forbes blogs.

jaredmck 4 days ago 0 replies      
The NYT article also wove together several different themes, somewhat related to each other. The Forbes re-blog stuck to the single most link-bait theme.
veyron 4 days ago 0 replies      
Shows who is more interested in intellectually stimulating journalism (honestly, I found that tidbit a honeypot -- other parts of the article are far more interesting)
dmsinger 3 days ago 0 replies      
I wasn't confident that Forbes benefited more from the article than the NYT which is why I looked up the Likes myself as that was the most unbalanced comparison metric, and just didn't seem correct. After the look-up, the Likes are close in number (I prompted the update), and the comments on the NYT are greater.

Forbes lists the page views, but it's a metric against nothing, as the Times does not publish theirs.

There's some pretty heavy quoting in the Forbes post (9 paragraphs from the NYT), but it's all sourced, linked and even encouraged to be clicked-through to.

While Forbes did well with the post, I'm not convinced they did any better than the Times (on the web) with it.

TomGullen 4 days ago 1 reply      
Can 600k page views really make a writer's career? Is that how they measure success in online writing?
Tichy 3 days ago 0 replies      
Also, while I would theoretically be interested in the NYT version, it is 9 pages long. So it sits in my "to be read" tab indefinitely. The Forbes article was doable in a short time frame.
adengman 4 days ago 0 replies      
One of my Philosophy professors was extremely adamant about paper titles as he argued that a paper will be read only if it has a clever or thought provoking title. After each assignment we critically reviewed all class paper titles while he provided feedback.
foreverbanned 4 days ago 0 replies      
The NYT has a registration wall that seems to pop up at random. At the moment with FF I can't get through unless I add r_=1 to the article url, but with Chrome it's not there at all. Strange...
8ig8 4 days ago 3 replies      
With so much on the line, you'd think the NYT would A/B test article titles. Anyone know if this is done in modern journalism? Is it feasible?
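For what it's worth, scoring such a test is straightforward once you have impression and click counts per title. This is a minimal sketch with hypothetical numbers (a two-proportion z-test, not any newsroom's actual method):

```python
from math import sqrt

def z_score(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is title B's click-through rate
    significantly different from title A's?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    p = (clicks_a + clicks_b) / (views_a + views_b)  # pooled rate
    se = sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Hypothetical numbers: title B nearly doubles the click-through rate.
z = z_score(120, 10000, 210, 10000)
print(round(z, 2))  # |z| > 1.96 => significant at the 5% level
```

With |z| above roughly 1.96, the difference in click-through rates would be significant at the 5% level; the hard part for a newsroom is serving the two titles at random, not the arithmetic.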
lwhi 3 days ago 0 replies      
With this sensationalist headline, the article is a case in point.
Karellen 3 days ago 1 reply      
Forbes got my traffic because they didn't require me to register, log in, pay them money, or whatever it was that NYT was trying to get me to do instead of just showing me the damn article.
spacestation 4 days ago 0 replies      
The NYT's article has nine (9!) pages.
Are they looking for page views or what.

Forbes' article is one page.

US Appeals Court: Forced Decryption Is Self-Incrimination volokh.com
241 points by zach  5 hours ago   55 comments top 9
newbusox 2 hours ago 2 replies      
Realize that this is a fairly narrow opinion, and, in my opinion, not a particularly well-reasoned one.

The issue here is child pornography: the would-be defendant was suspected of having child pornography on various hard drives which were encrypted. The court states that the actual contents of the hard drive are themselves not testimonial -- that is, they are not covered by the Fifth Amendment, and, if the government had access to the hard drives, they could present whatever was incriminating on them into evidence. So the issue is whether the act of producing the documents is a testimonial act and therefore covered by the Fifth Amendment.

The court concludes that the act of production is a testimonial act because, one, the testimony was not a "foregone conclusion." This holding is based on a case called Fisher v. United States, in which the Supreme Court stated that it was not testimonial to hand over certain papers that might have incriminating evidence because conceding that documents existed, that you had control over the documents, or that they were in your possession was not incriminatory given the circumstances of that case. Under the "foregone conclusion" doctrine, the government knew of the existence and location of these papers so the production of the papers added nothing or little to the government's information. If the government did NOT know that documents existed, they could not compel a would-be defendant to reveal the documents.

Second, and most importantly, the court concluded that decrypting the documents would "use the contents of [the would-be defendant]'s mind" because "the decryption and production would be tantamount to testimony by Doe of his knowledge of the existence and location of potentially incriminating files; of his possession, control, and access to the encrypted portions of the drives; and of his capability to decrypt the files." It's again important to note that this is a child pornography case: possession of child pornography is a crime, so if the would-be defendant here provided a decryption key, this would be tantamount to him admitting that he possessed the hard drive and had access to the files within it -- that alone would constitute a crime if the files were found to be child pornography. This is therefore what the court later refers to as an "implied factual statement," and the Fifth Amendment protects this. Although the court also suggests that providing a decryption key might be like providing a combination (and therefore qualify for Fifth Amendment protection on other grounds), it unfortunately devotes very little space to this discussion -- and this seems to be the really big issue here.

The case therefore leaves several unanswered questions: this is a child pornography case, where mere possession alone is a crime: what if that wasn't the case? What if this was a murder case and the defendant had stored notes about his murder on the computer? What if the foregone conclusion doctrine wasn't applicable -- would the conclusion here be the same (most of the opinion is actually devoted to this discussion, which is less broadly applicable because, if the police know of the existence of specific files on the hard drive, this doctrine is inapplicable)?

Anyhow: it still is possible to get access to these documents if the government gives him sufficient immunity, as the court notes. This would be pretty important because if no one could ever access these documents (which presumably would be possible if the would-be defendant doesn't decrypt them) that would be an enormous problem for our justice system.

In conclusion: the applicability of this case to future cases is unclear, so, for those that want this result, I don't really think this is a "slam dunk." There will likely be many future cases further developing this doctrine. As such, right now, it's very difficult to discuss the merits of the court's holding on the "decryption is testimony" argument (which, in my mind, is the most important) in a general sense, since the reasoning here seems very specific to the facts of case.

ChuckMcM 3 hours ago 1 reply      
Wow, I think they got one right. It will be interesting to see how the government continues, since currently it's only a decision in the 11th circuit. If the government appeals, it goes to the supremes, and if they hold that it's a violation of your fifth amendment, then everyone in the country gets to claim the fifth rather than give up the key.

While I hate evil doers just as much as the next person I dislike the loss of civil liberties even more.

rosser 4 hours ago 1 reply      
Am I correctly understanding this decision to mean that, if the government already knows there's incriminating data on the drive, a compelled decryption would not be testimonial (as in the referenced cases from the 5th Circuit); but that the gov't can't compel decryption in order to go on a "fishing expedition", as any evidence found would be self-incriminating, and thus incur 5th Amendment protection?

If so, that sounds spot-on correct to my (admittedly, lay and NAL) understanding of the issues.

ctdonath 2 hours ago 3 replies      
Two analogies I use: rag doll and sawz-all.

Rag doll: so long as they can manipulate your uncooperative, unresisting body to do something (apply a thumbprint, get a DNA sample), they can order you to cooperate. They cannot, however, compel you to do something without which they otherwise have no case.

Sawz-all: so long as getting into a safe (or whatever) is just a normal matter of time and money, they can order you to open it. If, however, "opening" an encrypted volume or some such by brute force will take something on the order of the heat death of the universe, and otherwise they have no case, you can stay silent.

joering2 1 hour ago 0 replies      
this is the key to this case and the reasoning behind such a response from the court:

> The Government attempts to avoid the analogy by arguing that it does not seek the combination or the key, but rather the contents.

The Government had an inexperienced prosecutor building the case, and the judge rightfully responded to the prosecutor's request: in order to get the content the prosecutor wants, they need the keys. By not revealing the keys, the defendant is using the 5th. Everything seems fine, other than that I am sure this case will come back, and this time the prosecutor will want the keys, not the content. This mistake, rest assured, will not be made on the prosecutor's part again in this or any other case.


Below is what I started typing, but when I read the case again it struck me why we are dealing with such a decision. I decided to leave it in instead of deleting it, in case you want to read it anyway:

First and foremost: I use TrueCrypt. It's amazing, simple, and it works. Just make sure, when converting an existing partition, you use at least "3-pass wipe" mode, since today's hard disk drives can keep a "second layer" of the magnetized data you were converting for months, giving law enforcement access to your pure pre-encryption data. In my example, I have about 10TB across 8 HDDs with my CAD/3DStudio Max work. I also have hours of digital-cam material from 2003, where 15 minutes of recording took 2TB of avi files, and I never cared to convert.
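The multi-pass overwrite idea above can be illustrated in miniature. This is a toy Python sketch, not a substitute for TrueCrypt's wipe mode, and on a real disk it says nothing about filesystem journaling, wear leveling, or residual magnetization:

```python
import os

def wipe(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes `passes` times,
    then unlink it. Illustrates the multi-pass idea only; real disk
    wiping must account for journaling, wear leveling, etc."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # fresh random data each pass
            f.flush()
            os.fsync(f.fileno())       # force the pass to hit the disk
    os.remove(path)

# Demo on a throwaway file:
with open("demo.bin", "wb") as f:
    f.write(b"sensitive data")
wipe("demo.bin")
print(os.path.exists("demo.bin"))  # -> False
```

Running it leaves no file behind; whether the underlying sectors are truly unrecoverable is a hardware question the sketch cannot answer.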

That said, I think in this case the court was terribly wrong, and the defendant should play the lottery first thing when he leaves jail.

> First, the decryption and production of the hard drives would require the use of the contents of Doe's mind

say what? They asked him for a password he knows. He doesn't want to give it out. The court agrees, saying that this would require forcing the defendant to use his mind and reveal information he keeps there and that no one else can access. May I know of any court case, or any case at all, where the defendant's brain would not be used?? I don't honestly find a difference between asking him for a password and asking him for anything else in any case proceedings. He is unwilling to comply with the court, bottom line.

> Just as a vault is capable of storing mountains of incriminating documents, that alone does not mean that it contains incriminating documents, or anything at all.

sure, but if the Government has any other evidence against the defendant, the burden of proof should clearly shift to the defendant. If, for example, the Government has ISP logs of tons of torrent data downloaded through the defendant's router, one can fairly assume that illegal files are stored there. If the defendant is not willing to "open the safe" by releasing the key, he should be found guilty of withholding the evidence. -- Just open the damn vault and show those idiots from the Govt & Co how stupid they really are!

jrs235 3 hours ago 1 reply      
If they don't have a case without the contents on the drive, then they don't have a case! I scoff at the prosecutors that claim "but if we can't get the encrypted contents then criminals will get away!" Hey dipsticks! Maybe you should collect other evidence and make sure you build up a case that doesn't strictly rely on the contents of a hard drive that you don't even have possession of. Okay, so let's say you get the drive decrypted... how are you gonna prove who/how the contents got on the drive?
macrael 4 hours ago 2 replies      
The analogy to a combination for a safe seems very apt, I've forgotten: what is the precedent in such situations? Have people been forced to give up the combination for a safe in court before?
darxius 4 hours ago 1 reply      
In plain English: Does this mean an encrypted hard drive CANNOT be decrypted by law enforcement and the contents of the drive cannot be used in court to convict?
shingen 4 hours ago 0 replies      
I guess it's better late than never. A critical finding.
Flash For Linux Will Only Be Available For Chrome adobe.com
235 points by hotice  1 day ago   177 comments top 36
radarsat1 1 day ago 8 replies      
You know, it's strange that Adobe hasn't considered, at this point in time, open-sourcing the Flash player. Please, hear me out, because I don't just mean this as an HH (Hopeful Hacker), but also as a well-thought-out IBD (Intelligent Business Decision):

Flash has obviously been very beneficial to them in the long run. It has given them the only remaining well-controlled proprietary piece of the web. This helps them sell their IDE, and more importantly, gets their brand out there.

Now, I'd argue that these goals have been accomplished. Adobe is well-entrenched in web history, and everyone knows what Flash is. However, the relevance of Flash is clearly declining due to HTML5, and stigma and disgruntlement are increasing. This means they will get fewer and fewer sales of their IDE, and their name will fizzle out.

Imagine for a second that they open sourced the Flash player. Just the player. Suddenly it would no longer carry such a stigma with Linux, it would be easy to include in distros, developers would contribute fixes and make it more efficient on hard-to-support systems. It would literally stretch out its life-time as a product, and keep Adobe's name on the web.

I argue that Flash has played out its role for Adobe, and if they open source it now it could only benefit them. I did not think this was true in the past, and I think it will not be true in 5 to 10 years when HTML5 has surpassed Flash adoption in the most important venues. However, right now I think it would benefit them immensely.

There also seems to be a sentiment from some of the comments here that they are losing interest in maintaining Flash, so opening it to the community would seem to make some sense. If the "standard" ends up evolving in any way, they'd always have a head-start in their IDE support, since it will easily remain ahead of the curve.

ootachi 1 day ago 2 replies      
For reference, here are the thoughts of Robert O'Callahan from Mozilla (and those of Simon Fraser from Apple) on Pepper: https://mail.mozilla.org/pipermail/plugin-futures/2010-April...

The thread (which continues into May) goes over pretty clearly why they felt Pepper was a bad idea.

gcp 1 day ago 6 replies      
I can't make much sense of this. Adobe declared Flash dead. Apple declared Flash dead. Google declared Flash dead in Chrome for Android.

Now, they're going to continue working on Flash, but only on a new API that is implemented in only a single browser on Linux (and, from statements from Apple and Mozilla, will stay that way), while keeping it compatible with the old NPAPI on Windows?

What I don't even....

Edit: Could it be that Google is planning to release (or has released) some Linux-based appliance where Flash support is a must?

k33n 1 day ago 3 replies      
Correct me if I'm wrong, but it looks like other browsers are free to implement the "pepper" API as well. It seems as though they are not saying "we're only supporting Chrome". They are saying "Chrome is the only browser that has implemented Pepper so far, and we're only supporting Pepper on Linux".
bwarp 1 day ago 6 replies      
I get the feeling this isn't going to be that much of a problem. I've not got the flash plugin installed in Firefox and I'm not finding any great hardship these days.

Perhaps it'll kill Flash a bit quicker, considering the number of kiosks and Internet cafes running Firefox+Flash on Linux.

slowpoke 1 day ago 3 replies      
Dear Adobe: just kill Flash already, for good. The world (wide web) will be a better place.
yabai 1 day ago 1 reply      
Wow. I remember waiting for flash to come to 64 bit Linux systems...

Perhaps Adobe has to continue supporting Chrome to support Google's Chromebooks.

In a perfect world, we would have open standards and would never need to rely on a company. Hopefully flash will die quickly (I wish I had a dollar for every time I have heard this).

coffeeaddicted 1 day ago 2 replies      
The part I'm not yet getting about Pepper - does it only support those native client objects as described here? https://developers.google.com/native-client/overview

Or do plugins like flash have the choice between native client and just using a shared library as they did and Pepper also supports that?

In the first case it would basically mean that flash would run sandboxed (and maybe on every system supported by Pepper, so once ARM support is added it could run there as well again). But probably with some speed-hit (~5% according to the documentation)

raphinou 1 day ago 4 replies      
Let's hope this is one more step to Flash irrelevance.
I don't count the times that Flash caused me trouble, and I'll be happy to see it go away.
melling 1 day ago 1 reply      
Why is there so much noise over this? How many times does Flash have to die? Yes, the Flash plugin will be with us for another decade, but shouldn't most of us have moved on? I uninstalled Flash on my two Macs in December. I'm doing fine so far. Sometimes, I need to switch over to Chrome for video, but so far I'm not missing it.
nakkiel 1 day ago 1 reply      
I think it's reasonable to assume that Adobe wants to kill Flash on GNU/Linux, but can't yet do it for Chrome due to some engagement with Google. If this is really their intent, they are going to have much more trouble justifying a kill operation on other platforms.

I foresee a slow and painful death for Flash.

tikhonj 1 day ago 1 reply      
This comes at a fine time--I've not been using flash at all. The only site that I regularly used flash for in the past was YouTube, and then only for some videos (the ones with ads). The open source Gnash plugin can play YouTube videos that require flash (it's useless for almost everything else--it can't even play YouTube's ads :P). All the videos that work with HTML5 are better that way. (In a pinch, Gnash would work there too.)

So really, the only things I'm missing are flash games I don't play and ads I don't watch. (Some flash games actually sort of work, but it's not dependable.)

zmmmmm 1 day ago 1 reply      
Is it really Chrome or is it the new "pepper" plugin API driving this?

ie. if other browsers decide to support the new plugin API, will Adobe also support them too?

moondowner 1 day ago 2 replies      
So, Google agreed to make Flash on Linux available only via Chrome? Damn...

But if major Linux browsers implement the Pepper API, on the other hand, it will mean that we (the users) won't have to bother installing (deb/rpm/etc) packages every now and then. Maybe it will turn out better.

scythe 1 day ago 2 replies      
The one thing where Flash is still apparently unavoidable is something like tinychat.com (or chatroulette) which does web-based videoconferencing. The last time I checked, it isn't possible to replicate that without Flash.
enkrs 1 day ago 1 reply      
In that case I really hope to see this site change soon: https://wiki.mozilla.org/NPAPI:Pepper
paulrouget 1 day ago 1 reply      

> Shumway is an HTML5 technology experiment that explores building a faithful and efficient renderer for the SWF file format without native code assistance.

> Shumway is community-driven and supported by Mozilla. Our goal is to create a general-purpose, web standards-based platform for parsing and rendering SWFs. Integration with Firefox is a possibility if the experiment proves successful.

figital 1 day ago 1 reply      
To my fellow developers, please don't develop anything else for Flash. Thanks! (Within the next year Google will have you go full screen in GTK inside Chrome/Debian ... then that's your desktop ... Adobe is hedging this decent bet .... BARF!)
hub_ 1 day ago 0 replies      
Now it is time that Google do the right thing and drop Flash, as well as honor their over-a-year-old promise to drop H264 in Chrome.

Not holding my breath though.

drivebyacct2 1 day ago 1 reply      
I'm surprised that everyone seems to think that this is some sort of exclusive Chrome thing. I'm willing to bet that this is more about Adobe's inability to understand how to do auto-updates correctly, and the fact that Chrome is the only browser to support Pepper.

Adobe gets free auto-updates and there is no hassle or extra steps for users, since there is only one way for them to effectively use it. I'm sure if Firefox were to support Pepper that they would make it available in a PPA or something.

mapleoin 1 day ago 0 replies      
Just reminded me to uninstall my flash-plugin.
lwhi 1 day ago 1 reply      
This announcement further convinces me that Flash is gearing down to obsolescence.
donniezazen 1 day ago 0 replies      
Good riddance. Flash has never been anything but trouble for Linux. It consumes a huge amount of power on Linux and they are not inclined to fix it. Hopefully we will see better HTML5 support in the future.
mrbill 1 day ago 0 replies      
All I see lately is that Flash is becoming less and less relevant. It's not on iOS; I purposefully didn't install it on my latest Android devices.
unabridged 1 day ago 0 replies      
I've been living flash-free for about 6 months; now that youtube autoloads html5 video, the only things pissing me off on a regular basis are the charts on Google Finance and Yahoo Finance.
brudgers 1 day ago 0 replies      
Unsurprising, both products are provided free in order to facilitate the tracking of users, and I suspect that the vast majority of users do not turn off Flash cookies.

So long as Google's Youtube defaults to Flash, it's a case of mutual interests.

alanh 1 day ago 0 replies      
I laughed out loud -- shocking how different Adobe's headline and the HN submission title are!

"Adobe and Google Partnering for Flash Player on Linux" -- zzZZZ, good for them

"Flash For Linux Will Only Be Available For Chrome" -- Holy balls, Flash is really dying, isn't it?

trevorgerhardt 1 day ago 0 replies      
Anyone else shocked to see that they were still using a "Netscape plugin API"? How many years old is that?
pi18n 1 day ago 0 replies      
That's good. In Chrome, when Flash crashes, it only takes out a couple of tabs.
hmart 1 day ago 0 replies      
Flash player is the worst CPU hog on my Linux and Mac computers. But could we live without it? I doubt it.
njharman 23 hours ago 0 replies      
Works for me.
jejones3141 1 day ago 0 replies      
Trying to keep the tinfoil hat off, but... when I tried to post a comment on the Adobe blog asking whether Linux is being singled out in this respect and if so, why, I got a database error.
silon3 1 day ago 0 replies      
Bye flash. Welcome flash alternatives. And downloading/torrenting even more videos.
tosseraccount 1 day ago 0 replies      
Adios flash ads!
Hello html5 ads.
nodata 1 day ago 0 replies      
Good. So now I don't have to use it anymore.
dave_sullivan 1 day ago 0 replies      
Btw, really? Downvote me but you've got nothing to say?

To weigh in on the pro flash side: say you're developing applications for large enterprises, many of which still run ie7 or 8 (mind you, not websites, but applications that run in a browser and are delivered over the Internet). Since HTML5 (by which, none of you actually mean html5 in that case, it's mostly some kind of JavaScript front end with frameworks far from mature (though I like both backbone and ember/sproutcore, they've got a ways to go before being comparable to flex w/ robot legs, and js will never be as3)) will not work well in this situation, what do you propose?

For Adobe's part, I wish they'd be a bit more transparent, but regardless, I think I'm good to go with a pretty wide and stable cross-browser feature set today and will be that way for a while while JS frameworks play catch up. And meanwhile, good luck getting an IT dept at a fortune 500 to upgrade all their browsers to the latest version of firefox or chrome and to make that a requirement to use your software. And what would you gain by doing that today exactly if that's your target market?

What can HTML5/JS do today for RIAs that Flash can't do better, faster, and cheaper?

The US recording industry is stealing from me aardvark.co.nz
233 points by mitchie_luna  2 days ago   53 comments top 11
tsunamifury 2 days ago 3 replies      
Several companies have falsely filed claims on the stock sounds in Final Cut Pro and GarageBand as well. It irritates me to no end.

More importantly though, it illustrates how Google likes to make sweeping solutions based on generalizations. They created a system which allows content owners to claim they own any and all content, regardless of the veracity of their claim.

billybob 2 days ago 2 replies      
TL;DR - "They flagged my video as containing their content, even though it doesn't, which means they get my ad revenue."
alan_cx 2 days ago 2 replies      
It's likely I have missed something, but if this guy's video matched another video, why is his the offender and not the other one? Does this mean that videos uploaded by the record companies are flagged as legitimate by default? If so, who decides that? Must every "legit" publisher register as somehow legitimate?

If it were me, I would want to find out which video was supposed to be the original whose IP was allegedly stolen, and accuse them of taking my material. Surely by these standards one of them must be illegal, right?

Or does this lark only work one way for the benefit of the dear old mega-corps?

ck2 2 days ago 3 replies      
Can non-US-citizens sue US corporations in small-claims court?

Because at $5k or $10k a pop, you might find that they stop this behavior.

They do it now because there is no downside for them for false DMCA claims.

$10k a pop might become a downside.

bengl3rt 2 days ago 2 replies      
Interesting to hear about the company pursuing royalties for the use of the default sounds that ship with Sony Vegas. I've always wondered what the legal specifics of using the included loops and samples in Logic, Reason etc are, since I hear them in pop songs and commercials and so on - definitely could trigger a pattern match.
maeon3 2 days ago 0 replies      
The only language the RIAA understands is money. Fighting back will take far more than whining about morality, fairness, or justice. They need to be destroyed, and the only way that will be possible is to starve the corruption engine of its fuel: money.
ecounysis 2 days ago 2 replies      
Is there a viable alternative to YouTube that isn't as persuaded by the powers that be? If so, and if this problem were pervasive enough, wouldn't independent content providers prefer that service over YouTube? Then most independent content would eventually end up on the other service, which would drastically reduce the value of YouTube.
showkhill 2 days ago 1 reply      
Just on youtube's copy checking, I got a FB message from a friend who stumbled on a vid that uses one of my songs, it has 280k + views and advertises a motorcycle shop. No mention of the song or my band. Normally we license stuff through jamendo pro (all our music is cc). Anyone know how youtube polices this kind of thing? It's obviously a commercial usage but we got ne'er a penny nor even an attribution.
amatus 2 days ago 0 replies      
I'd be interested to know what the YouTube ToS says about this. It's likely that they can take away your ad revenue for no reason at all. Instead of complaining that someone stole your ad revenue, you should be thanking the all mighty Google every time they are gracious enough to give you a single cent of their ad revenue.
da_n 2 days ago 0 replies      
Wow, all I can say is fuck Google; three passes in as many weeks is too much for me. The golden child has surely now turned to rust.
TazeTSchnitzel 2 days ago 0 replies      
Oh, the solution's simple: YouTube should just not check for rap music!
Lord of the Files: How GitHub Tamed Free Software (And More) wired.com
232 points by jurre  2 days ago   66 comments top 10
newman314 2 days ago 9 replies      
Ryan Blair, a technologist with the New York State Senate, thinks it could even give citizens a way to fork the law - proposing their own amendments to elected officials. A tool like GitHub could also make it easier for constituents to track and even voice their opinions on changes to complex legal code. “When you really think about it, a bill is a branch of the law,” he says. “I'm just in love with the idea of a constituent being able to send their state senator a pull request.”


I find this quote fascinating. This would be fantastic if it actually gained traction. It would democratize the process of actually writing a bill. People could actually vote for/against sections for inclusion.

leftnode 2 days ago 4 replies      
Is it just me or are there a lot of errors in this article? Scott Chabon?

I wonder if they intentionally did that so people would fork the article and fix it.

4ad 2 days ago 4 replies      
Oh my god:

> GitHub.com is best thought of as Facebook for geeks


psquid 2 days ago 0 replies      
Gah, why would they use a bar chart for the "Getting Into Github" diagram and then not stick to anything resembling a consistent x-axis - at first it looked like it might be exponential, but it's not even that. The bar sizes are nearly meaningless, because different segments of them are to wildly different scales. All it actually conveys is "this one is more than this one".
hkarthik 2 days ago 1 reply      
Interesting hearing about how they want to tackle the Microsoft ecosystem.

As someone who spent years in that ecosystem and recently left it for OSS, I'll be curious to see what kind of success they see and who they are targeting.

randall 2 days ago 0 replies      
"The old regime “makes it very hard to start radical new branches because you generally need to convince the people involved in the status quo up-front about their need to support that radical branch,” Torvalds says. “In contrast, Git makes it easy to just ‘do it' without asking for permission, and then come back later and show the end result off " telling people ‘look what I did, and I have the numbers to show that my approach is much better.'”"

Substitute "business" for branch and "startup" for git.

Startups are git branching for the economy.
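
A minimal sketch of the permissionless branching Torvalds describes, using a throwaway repository and hypothetical file and branch names:

```shell
# Sketch of the "just do it" workflow: branch locally without asking
# anyone, then show the end result off afterwards.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
git config user.email dev@example.com
git config user.name "Dev"

echo "status quo" > core.txt
git add core.txt
git commit -qm "mainline work"
base=$(git rev-parse --abbrev-ref HEAD)

# No permission needed: start a radical branch locally...
git checkout -qb radical-experiment
echo "wild new approach" > core.txt
git commit -qam "rewrite the core"

# ...and only later show what you did:
git log --oneline "$base"..radical-experiment
```

On GitHub, that last "show it off" step is what a pull request packages up for the maintainers.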

jacques_chester 2 days ago 1 reply      
I was really quite surprised at the way this article was pitched. I'd expect to read this airy, hand-wavy level of detail in a news magazine, not Wired.
yabai 2 days ago 4 replies      
Wow! Linus is using a Macbook Air!
brown9-2 2 days ago 2 replies      
Quadrupling the number of employees in a year is damn impressive.
ComputerGuru 2 days ago 1 reply      
I did a double-take at the title and am left wondering whether or not the one-letter difference between this and William Golding's Nobel Prize-winning "Lord of the Flies" is purposeful or not.