hacker news with inline top comments (28 Nov 2012)
Hacking my Vagina scanlime.org
930 points by kogir  3 days ago   251 comments top 45
jtchang 3 days ago 3 replies      
Really cool!

Is reverse engineering the wireless protocol easy? I imagine hacking hardware involves a lot more work than software.

I also love how she 3D printed some plastic cases for her toy. I can see cheap 3D printers eventually becoming so ubiquitous that hacking up a quick hardware prototype is just as easy as hacking up a working software program.

qdot76367 3 days ago 4 replies      
For anyone interested in more stuff like this, I run a site about sex and technology and track open source sex projects:


zeteo 3 days ago 15 replies      
I wonder if an article called "Hacking my Penis" would ever last long on the HN front page.
swalsh 3 days ago 2 replies      
I love the breadth of skill that went into this project. It shows good knowledge of software, hardware, and reverse engineering, and it's even tied together in a really neat package.
lifeisstillgood 3 days ago 4 replies      
She is inventing an industry for something every human on the planet does pretty much every day. Big market, disruptive technology. Yet somehow I don't see it on techcrunch.

I am reminded of pg's discussion on finding the taboos in society.

We still have plenty.

DanBC 3 days ago 1 reply      
I'm so pleased there's no use of the word "dildonics" on that article.

And I'm sad there's no awesome open source version of Rez to go with her device. Probably NSFW for language and underwear photo (http://www.gamegirladvance.com/2002/10/sex-in-games-rezvibra...)

scanlime 2 days ago 2 replies      
Hey, this is beth from scanlime.org. I was doing my best to keep the site up, but it looks like Dreamhost just pulled the plug. I'm pretty annoyed with this.
victorhn 3 days ago 6 replies      
This is not meant to be a troll post, but just sincere curiosity.

From the site "My full name is Micah Elizabeth Scott, but I used to be Micah Dowty prior to Fall 2010. My friends call me Beth."

How can a woman who used to be a man have a vagina? Is there some kind of surgery? Do you keep your sensitivity to be able to use vibrators?

jiggy2011 3 days ago 3 replies      
In all seriousness, somebody needs to disrupt internet porn again.

If you try to look for it on Google, all you get is shitty "tube" websites full of autoplaying livejasmin ads and links that go round in circles.

The content is terrible too: either staged "reality" BS, stuff designed to shock more than titillate, unwatchable crap made with a smartphone, or weird softcore stuff that tries to be "arty" or "feminist".

jiggy2011 3 days ago 1 reply      
Anyone want to fund my kickstarter for a vibrating fleshlight that runs nodejs?
guylhem 3 days ago 0 replies      
The title is misleading. But the idea of haptic sex toys, and the approach of the hack, especially identifying that the remote was the problem, is great.

We will see if haptic sex toys become mainstream. I didn't even know they existed in the first place!

It seems like a real innovation!

georgeorwell 3 days ago 6 replies      
Warning: this is not a politically correct viewpoint, but nevertheless it's my perspective.

The actual best thing about this post being in first place on Hacker News is not that it's a woman posting but that she has a Y chromosome and most people don't realize it.

It's like her recently-acquired vagina is a new laptop to be hacked. It's still objectification of women if you try to turn into one and then objectify yourself.

The real goal of this post is to get a bunch of men to fantasize about "her" and glorify how cool "she" is for being a geeky hacker.

Not all trans(vestite|gender|sexual) people are like this.

tzury 2 days ago 1 reply      
This hacker-girl has an amazing portfolio (note she's 26 years old).

See her resume at http://scanlime.org/resume/

dmschulman 3 days ago 0 replies      
Cool hack! And maybe I don't read enough, but I thought it was especially interesting since it was one of the first builds where I saw someone create a custom circuit AND a custom enclosure using a 3D printer.

Though it involves a sex toy, I think the build was straightforward, technical, and decidedly un-sexy. I don't know where the feather ruffling is coming from besides the discussion of the link's title.

lobotryas 3 days ago 3 replies      
Fun article and an excellent hack!

Any takers to found a startup in order to dive in and disrupt the sex toy industry with some cutting edge innovation? Imagine the millions you'd rake in if you re-invent sex.

I'm looking forward to at least one ero-toy applicant in the next round of YC apps.

dematio 2 days ago 0 replies      
This is awesome. It's cool to see more people hacking sex toys. Hopefully all these cool hacks will remove the stigma. People have a stigma against vibrators because they always imagine a vibrator as the huge penis-shaped kind they've seen in porn movies.
The fact is, many studies have shown that size doesn't matter.
The first electrical vibrator was invented as a medical device to stimulate the clitoris, not the inside.

I believe the world will be a better place when women can have orgasms as often as men do.

My startup, www.vibease.com, is helping couples stay intimate even from a distance. We have a mobile app with a long-distance vibrator, using Bluetooth and an internet connection. Currently we are taking pre-orders.
We are trying to bring it to the mainstream market, and it's not easy:

ciriarte 3 days ago 3 replies      
I cannot express how much I admire this post. This is how I think women should address gender equality: not by antagonizing men, but through sound and assertive work like this.
ghjm 3 days ago 4 replies      
This is a really cool hack, but my problem is that I don't understand the sex part. What's better about waving your hands around to control the motor speed, vs. using a dial?

This is not out of prurient interest. I just can't understand the engineering without understanding the use case. Maybe you have to be female to get this?

stcredzero 3 days ago 0 replies      
After everyone was feeling stuffed and mellow in the house, I brought out the old PS2 and hooked it up to the projector and the stereo, then put in Rez. (Has been called "Tron on Ecstasy.")


I think there's a lot of hacking potential. I think it would be cool to have a back room in a club where you have Rez on a game console on a big HD screen with a nice sound system. One could also implement a wireless protocol for the trance vibe info and publish the protocol, so spectators would casually walk in and wirelessly experience the "synaesthesia."

hcarvalhoalves 3 days ago 0 replies      
Sex in the future. It's going to be weird.
JeremyMorgan 2 days ago 0 replies      
I clicked on this only because of the extreme curiosity the headline generated. The article did not disappoint.

Easily one of the smartest things I've read in quite a while. Much of the hardware stuff is over my head, but wow, color me extremely impressed.

askothecoder 3 days ago 1 reply      
This is weird. Or not. I have a hard time even telling the difference between weird and not weird these days. Either way, carry on.
rhplus 2 days ago 0 replies      
Link is down. Here's a Coral Cache version: http://scanlime.org.NYUD.NET/2012/11/hacking-my-vagina/
polarcuke 3 days ago 0 replies      
Wow, this is easily the strangest top post I have ever seen on hacker news. I actually can't stop laughing. I guess it's because most people just don't think about technology and sex in the same thought unless you are thinking about internet porn. An interesting read no less.
cindygallop 2 days ago 1 reply      
cough Have already urged @scanlime to do this, but anyone else, as and when you hack your vagina - or penis, we're totally equal-opportunity :) - please do submit your (or your friends') #realworldsex video demo to https://makelovenotporn.tv/ with our revenue-sharing business model, you could make a nice chunk of change :)
personlurking 3 days ago 0 replies      
"Power Exchange" section title. Must be a resident of SF.
egypturnash 2 days ago 0 replies      
Here's a functioning cache link, since the site is hammered for rather obvious reasons. http://scanlime.org.nyud.net/2012/11/hacking-my-vagina/
Empro 3 days ago 1 reply      
I'm amused this is the top story.
robbles 2 days ago 1 reply      
Really cool project, and looks like it was executed really well in many ways.

However, I couldn't help being reminded of this comic: http://xkcd.com/196/
I feel like there are a lot of social factors that would make using an invention like this a little awkward. Maybe that's just me?

nickik 2 days ago 0 replies      

Going a step further would be to give the vibrator some AI. There is of course some need to monitor pleasure; I don't know if that is easily possible, but one could implement some kind of AI that would do that.

The next step would be to sell it to millions of women and analyse all the data. It would be interesting what could be figured out that way.

smagch 1 day ago 0 replies      
The product seems like a really innovative sex toy for women. I've seen an interview with Tenga's founder before. He thought there was much room for innovation in the sex toy industry. I was quite impressed that he spent a year building his first product. I've been impressed again.


jzurawell 2 days ago 0 replies      
Your vagina has a 500 error.
khmel 2 days ago 0 replies      
Vagina rocks! This topic would never get so much attention if it were 'Hacking my ... nose, knee, hand, etc.'
Some star topics are easy to predict.

Nerds have their own weaknesses

tathagatadg 2 days ago 0 replies      
Secure communication of Vstorker (see qdot76367's comments) + this hack with video == Sex over IP.

Target customers: long-distance couples, the virtual sex business.
Taking it further: build an Airtime-like social network around it.

Revenue stream: from selling the hardware, membership.

brennenHN 3 days ago 0 replies      
Super clever title and article for getting HN attention, but also pretty interesting.
tomnardone206 2 days ago 0 replies      
There is a project out there to share 3D printer files for sex toys. MakerLove.com provides files you can download and print for free.
dillon 2 days ago 0 replies      
Science that gives me an erection. Very very good read.
agumonkey 3 days ago 0 replies      
Wonderful website, full of gems.
felipelalli 3 days ago 0 replies      
iframe 3 days ago 0 replies      
Why this? Aren't there thousands of virgin nerds ... ?
_W_o_W_ 2 days ago 1 reply      
...and how does it feel to your soul to be slave of your body? (instead of master...)
marktronic 3 days ago 0 replies      
What's a vagina?

Never Touched A Boob

the1 3 days ago 0 replies      
If you have smart enough AI, do you get pregnant by it? That's how babies are born, right?
cwb71 2 days ago 0 replies      
500 Internal Server Error

Best linkbait title ever?

Leaping Brain's "Virtually Uncrackable" DRM is just an XOR with "RANDOM_STRING" plus.google.com
670 points by asherlangton  1 day ago   242 comments top 35
Eliezer 1 day ago 7 replies      
Maybe there's a scheme here to prevent good DRM by flooding the market with highly inflated impressive-sounding claims attached to laughable security. The Old Media crowd won't be able to solve the Design Paradox (http://www.paulgraham.com/gh.html) well enough to tell who's lying, good designs won't be able to charge more than laughable competition, and the DRM field will slowly die.
mturmon 1 day ago 2 replies      
From http://leapingbrain.com/:

"Video content is protected with our BrainTrust™ DRM, and is unplayable except by a legitimate owner. All aspects of the platform feature a near-ridiculous level of security."

Near-ridiculous security seems about right.

toyg 1 day ago 9 replies      
I am awed by the chutzpah of whoever is behind Leaping Brain, selling snake oil to clueless media people.

This is why I'll never be rich: I am utterly unable to sell crappy non-solutions to people with more money than knowledge.

radarsat1 8 hours ago 1 reply      
I would like to propose that DRM is not intended to be uncrackable. It's easy to convince yourself that DRM is flawed, because fundamentally it is a flawed tool. Companies know this, they're not stupid. However, DRM is actually not a technical tool to prevent piracy. Rather, DRM is a legal tool to provide stronger legal arguments that theft has occurred.

I'm not saying this is right, necessarily, but I think companies know full well that their DRM scheme will be broken, so it's not really worth investing in an "uncrackable" and costly solution. Instead, the role that DRM plays is purely legal -- when the company does decide to go after someone for piracy, the DRM scheme, no matter how simple, provides them with the ability to say that the accused person "broke a lock," rather than simply walking in through an unlocked door. "Entering" vs. "breaking and entering." It's nothing but legal leverage, and effective in that role even if it's not a very strong lock.

Of course, for this argument to hold, a company could never admit that it purposefully implemented weak security -- this would be akin to admitting that the door was unlocked after all, and would weaken the legal argument. Therefore, there remains a niche in the market for solutions that look secure even if they fundamentally aren't. It's all about lip service.

pilif 19 hours ago 2 replies      
This could very well be a simple bug where it's supposed to XOR with some really random string generated on the server, but some replacement of a template string isn't happening, which is why it XORs with RANDOM_STRING.

Of course this is only marginally better and should really have been caught, but there's a huge difference between saying that XORing 12 bytes with RANDOM_STRING is kick-ass DRM and actually having a kick-ass DRM infrastructure that then doesn't work right because of a bug.

If this was any really random looking string, I would be more inclined to assume that this was intentional. By the string being this token, I would guess it's a bug somewhere.

Remember. If RANDOM_STRING was truly random, unique per file and account and only transmitted from the server before playing, then this would be as good an encryption as any.
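pilif's closing point is easy to see in code. The sketch below is a hypothetical reconstruction (mine, not Leaping Brain's actual implementation) of repeating-key XOR with the un-substituted template token as the key; since XOR is its own inverse, the same function both scrambles and unscrambles.

```python
from itertools import cycle

KEY = b"RANDOM_STRING"  # the un-replaced template placeholder (assumed)

def xor_with_key(data: bytes, key: bytes = KEY) -> bytes:
    """XOR each byte of data against the repeating key.
    XOR is its own inverse, so the same call encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

movie = b"sample movie bytes"
scrambled = xor_with_key(movie)
assert scrambled != movie
assert xor_with_key(scrambled) == movie  # round-trips with the same key
```

Swap `KEY` for a fresh per-file random string delivered from the server at play time and you get the scheme pilif describes; leave the literal token in place and every file on every machine is masked with the same 13 bytes.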

hosay123 1 day ago 6 replies      
You cannot simultaneously crow "hurr, DRM is broken!" and act all smug about this discovery. Perhaps the original developer, like you, understood this, and did the absolute bare minimum necessary to fulfil commercial obligations, all the while making it easier for people like himself (i.e. you) to get what they want, and making a few bucks from the old and dying media industry all at the same time.

Given the evidence (complex integration with a non-standard set of open source libs, complex industry area in general), I'd say it's almost certainly an insult to imagine the developer could not have made your life harder if he'd chosen to.

Please, if anything commend the dear fellow, and shame on whoever considered a momentary glimpse of Google Plus limelight worth making this guy's Tuesday morning and ongoing professional reputation much harder earned than it otherwise might have been.

"No good deed goes unpunished"

mahmoudimus 1 day ago 1 reply      
I did a lot of reverse engineering back in the day -- you'd be surprised how many "virtually uncrackable" DRM protections used by companies like Adobe (at the time, Macromedia) were just stupid XORs of magic strings.

Ahh..the good old days of SoftICE and w32disassm.

Oh man, the worst was the md5 of some salt + whatever you put in.

If you ever want to see some gems of misuse of cryptography for DRM management, let me know - email's in my profile.

Some examples: Using RSA 1024 bit keys, with exponent of 3...

marshray 1 day ago 3 replies      
This is apparently why the DMCA anti-circumvention provisions only apply to bypassing "effective copy protection" systems.

Of course, if a copy protection system was "effective" it wouldn't need a law prohibiting its circumvention. Conversely, if a copy protection system is circumventable, it's not effective.

yk 18 hours ago 0 replies      
This is roughly the level of programming I expect from DRM software. After all, the content needs to be in unencrypted form at some point to view it.[1] Therefore there are two kinds of programmers working on DRM: idiots and liars. One kind does not understand the futility of their efforts; the other kind wagers that their superiors do not understand the futility of their efforts.

[1] Assuming a general computation device, not a dedicated hardware player.

ataggart 1 day ago 2 replies      
Judging by the headline, it sounded like they tried to implement a one-time pad, but had only heard of them by rough description.
jiggy2011 51 minutes ago 0 replies      
Question: Can anybody name a DRM scheme that hasn't been cracked?
joezydeco 1 day ago 1 reply      
How do we know this wasn't a non-English-speaking subcontractor who took the spec too literally?
asherlangton 38 minutes ago 0 replies      
The CEO of Leaping Brain (or someone pretending to be him) has now joined the Google Plus thread, implying that the "DRM" was intended as satire...
asdfaoeu 1 day ago 3 replies      
Someone want to explain why this is less secure than other DRM methods?
jcromartie 12 hours ago 0 replies      
You know what's absolutely terrifying? This guy could conceivably go to jail for this. Looks like he has kids, presumably a wife... hoping it goes well for him.
danso 1 day ago 1 reply      
Ha, so the key really was "RANDOM_STRING", in the literal sense...was that just the programmer giving up, or was that pseudocode that was missed during shipping?
anonymous 19 hours ago 1 reply      
facepalm Come on, people!

First rule of weak DRM, you do not talk when you find weak DRM.

Second rule of weak DRM, you DO NOT talk when you find weak DRM.

Third rule of weak DRM, upload to pastebin, then walk away.

pav3l 1 day ago 9 replies      
Can someone explain how he got a hold of the decrypted .mov files that he compared the encrypted ones with? It's not very clear to me from the post, and I'm not familiar with Leaping Brain.

Either way... wow... XOR encryption with such a short repeating string! I bet it wouldn't be too hard to decrypt it even without the original file, since the file signature alone would probably be longer than the string. DISCLAIMER: I'm just speculating; I don't know the .mov spec.
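The known-plaintext attack pav3l is speculating about can be sketched concretely. Everything below is illustrative: the ciphertext is generated locally, and the header bytes are a plausible QuickTime file prefix chosen for the example, not values from the post. Because the repeating key is about as short as the predictable header, XORing ciphertext against the expected plaintext yields the key directly.

```python
from itertools import cycle

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR, as in the scheme under discussion.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"RANDOM_STRING"  # 13 bytes
# A plausible QuickTime header prefix (illustrative assumption): a 4-byte
# atom size, the 'ftyp' tag, and the 'qt  ' brand.
known_header = b"\x00\x00\x00\x14ftypqt  "
ciphertext = xor_stream(known_header + b"rest of the movie...", key)

# Attacker step: XOR the ciphertext prefix against the guessed header bytes.
recovered = bytes(c ^ p for c, p in zip(ciphertext, known_header))
assert key.startswith(recovered)  # 12 of the 13 key bytes fall out at once
```

A slightly longer known prefix, or a second file encrypted with the same key, pins down the full key period the same way.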

photorized 22 hours ago 0 replies      
The business goal behind most of these "protection" methods is to make unauthorized (unpaid) copying/sharing inconvenient. That's it. There are no commercially feasible methods to protect video or audio content against "a determined hacker", but that's not what these barriers are for. You can make fun of these laughable encryption methods all you want, but they serve their purpose by providing the desired purchase to piracy ratio.

The problem is marketing folks getting carried away when describing these "technology solutions" to the content owner, because that's what they (as well as VCs) want to hear.

Disclaimer: cofounded a video CDN+DRM provider more than a decade ago, developed many content protection methods over the years.

sigkill 14 hours ago 1 reply      
To be fair, when I read the title I thought that if the string is truly random then it's actually a very good technique. This is the core operating principle behind the one-time pad which is provably secure.

Now that I've read the article twice, I literally got a panic attack when I realized that it wasn't a random string they were XORing their data with, but a string called "RANDOM_STRING". Although it sounds bad, one must realize that this is not security by obscurity, since the key has been leaked, and nobody guarantees encryption against a leaked key.

iandanforth 1 day ago 5 replies      
Could someone (OP?) provide more of the steps that went into discovering it was an XOR operation, and the original string? Seems like an impressive intuitive leap to me!
tlrobinson 19 hours ago 1 reply      
"It turned out the actual player, launched from their compiled app, was a Python wrapper around some VLC libraries"

Isn't VLC licensed under the GPL? Or at least was until very recently? http://www.jbkempf.com/blog/post/2012/How-to-properly-relice...

Is/was Leaping Brain violating the license?

EDIT: the wrapper script is apparently released under the GPL too: http://news.ycombinator.com/item?id=4834834

shocks 17 hours ago 0 replies      
"All aspects of the platform feature a near-ridiculous level of security."

Well... They weren't lying...

nnq 16 hours ago 0 replies      
...I find it extremely funny when people use the word "virtually" to mean "practically" or "nearly" or "almost" and they turn out to be wrong but are excused by the fact that they added the magic word "virtually" :) ...and conversely, if someone uses the word when talking to me, I label everything the person says afterwards as 99% weasel words...
loup-vaillant 5 hours ago 0 replies      
The obligatory xkcd: http://xkcd.com/221/
cafard 15 hours ago 0 replies      
Back in the 1990s, the revolutionary organization Sendero Luminoso was naive enough to believe in WordPerfect's encryption. This was a grave mistake, for that encryption (for 4.2 and 5.1 at least) was a simple XOR of the password against the text--and in 5.1 you had 10 or so bytes of known text to compare against in the header. The decryption of the files was not the only thing that worked against Sendero Luminoso, but it must have hurt them.
javajosh 1 day ago 0 replies      
This should be lauded just as much for being a solid little piece of citizen, even activist, journalism. The specific issues about DRM are important, but I think the greater willingness to really look into things and publish the results should be encouraged.
damian2000 1 day ago 0 replies      
There's two software engineers and a product architect listed on the about page - http://leapingbrain.com/about/

It might be a good idea to remove their names, to protect their reputation. ;-)

stcredzero 23 hours ago 1 reply      
Breaking repeated XOR with a string is a variant of the Vigenère cipher or the Vernam cipher, depending on how you think of it. Either way, breaking it is a freshman cryptography exercise.
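For the curious, that freshman exercise looks roughly like this, assuming the key length is already known and the plaintext is space-heavy ASCII text (the function name and sample data are my own illustrative choices, not from the thread): split the ciphertext into columns by key position and assume the most frequent byte in each column encrypts a space.

```python
from collections import Counter
from itertools import cycle

def recover_key(ct: bytes, key_len: int) -> bytes:
    """Per-column frequency attack on repeating-key XOR: in ASCII text the
    space character dominates, so the most frequent ciphertext byte in each
    column is probably ' ' XOR key[col]."""
    key = bytearray()
    for col in range(key_len):
        column = ct[col::key_len]          # bytes masked by the same key byte
        most_common_byte = Counter(column).most_common(1)[0][0]
        key.append(most_common_byte ^ ord(" "))
    return bytes(key)

key = b"RANDOM_STRING"
plaintext = b"the quick brown fox jumps over the lazy dog " * 30
ct = bytes(b ^ k for b, k in zip(plaintext, cycle(key)))
print(recover_key(ct, len(key)))  # b'RANDOM_STRING'
```

Guessing `key_len` in the first place is the standard Kasiski / normalized-Hamming-distance trick; with a 13-byte key and megabytes of video there is plenty of ciphertext to make both steps reliable.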
i0exception 1 day ago 6 replies      
Anyone who has taken Computer Security 101 knows that security through obscurity is not the smartest thing to do. Calling it a "near-ridiculous level of security" is downright blasphemy.
samuellevy 23 hours ago 0 replies      
Tomorrow on HN: "Legislation passed to embed DRM chips into people's heads, which automatically shut down visual input if un-authorized content is detected playing in their vicinity. Three strikes policy before permanent blindness."
Syssiphus 16 hours ago 0 replies      
Hm, anybody remember Dmitry Sklyarov? http://en.wikipedia.org/wiki/Dmitry_Sklyarov

As far as I recall the Adobe PDF encryption was also just some XOR with a simple passphrase. Got him into serious trouble.

And WTH is 'virtually uncrackable'?

ballfrog 12 hours ago 0 replies      
From their website:

Fort Knox-level security.

Video content is protected with our BrainTrust™ DRM, and is unplayable except by a legitimate owner. All aspects of the platform feature a near-ridiculous level of security.

seanhandley 15 hours ago 1 reply      
XOR isn't insecure per se. What I'd like to know is how this "random string" was created in the first place.
px43 1 day ago 0 replies      
This is what they call a 1024-bit Vernam cipher in the movie "Swordfish".
The YC VC Program ycombinator.com
394 points by pg  1 day ago   164 comments top 38
cs702 1 day ago 4 replies      
YC is perhaps the most innovative angel/accelerator/VC firm on the planet. Despite all the success the firm has had, it is still acting like a startup -- constantly questioning, experimenting, tinkering, and gradually optimizing via careful trial and error, based on the real-world results achieved with each batch of companies. Clearly, it's run by hackers.
jedberg 1 day ago 2 replies      
This is great! I always thought the $150k was too much of a runway. It allowed the poor startups to limp along for too long.

And besides, the real value from these investments isn't the money, it is the mindshare you get with the VCs. This will help make that mindshare greater.

evansolomon 1 day ago 2 replies      
Is there any significance to Ron Conway not being in the group anymore?
debacle 1 day ago 5 replies      
What kind of messy things happen with 150k that don't happen with 80k?
dmmalam 1 day ago 6 replies      
80k may be a little tight for international founders, as flights and legal fees for visas can be very expensive. Also silly things like paying extra for deposits on rent/bills as you have no US credit history. But otherwise seems reasonable.
khangtoh 1 day ago 0 replies      
Not everything scales horizontally and shouldn't have to. YC should just stick to a lower number of startups for each cycle.

1) Maintains the prestige of a team being selected.

2) Keeps investment level at previous level.

3) Spend quality time with each team.

rdl 1 day ago 1 reply      
I've never heard of Maverick Capital -- are they the hedge fund? They seem to be med-tech focused, if that's them.
rdl 1 day ago 0 replies      
It's sad to see SV Angel isn't part of the W13 VC program.
josh2600 1 day ago 0 replies      
I think YC has more of a problem dealing with the bad startups than it does dealing with the successful ones. My impression from Paul's writing is that his goal is to find the Airbnbs as quickly as possible while relegating the things that aren't going to be mammoth. It makes sense when you're playing a volume/numbers game like YC has been.

Quite the investor lineup; I think they had their choice of anyone in the world to be honest.

I like reading PG's stuff because these sort of decisions are counterintuitive, but very fun academic/didactic examples.

argumentum 1 day ago 1 reply      
THANK YOU! This makes 250% sense, particularly for "no idea" companies. The 150k led to several bad decisions at the beginning, and to our breakup later on. 80k will be a lot more valuable than 150k, and will make startups think more carefully about how to spend money.

Perhaps for capital intensive startups that ought to be funded, there can be a separate arrangement.

This shows why YC is YC.. it constantly innovates and is not afraid to change things up as needed.

mrkmcknz 1 day ago 0 replies      
Is there a particular reason that Maverick were involved? I don't know the guys well enough but they seem like your traditional Wall Street hedge fund.

I'm probably completely wrong but guessing Sam Wyly was perhaps one of the major reasons?

ssebro 1 day ago 1 reply      
What stops other investment firms from making identical blanket investments, or how do you guys plan to stop that?
genuine 1 day ago 0 replies      
As alluded to in the post, I think that the 80k/startup rule is probably not a good one either.

80k might not be enough and "doing more with less" can just as often lead to failure. Case in point is the recent post about the lean startup that only provided the minimum viable product (MVP), but failed to really provide what was needed. If pressure is put on startups to provide a product faster for less, then less of a product will be provided. Granted, YC will provide more support and try not to let such things happen.

I think the funding should be decided on a case-by-case basis instead. Possibly more funding could be given to those with a better idea or a greater need through some sort of point system, either providing a range of funding from 40-120k without requiring more from the VCs, or 80-150k if the VCs are able.

kmcloughlin 1 day ago 0 replies      
Really interesting change.

Not sure what metrics YC collects on each of its cohorts, but if "Cycle Length" and "Total Cycles" are two of them (i.e. end-to-end time through the build-measure-learn loop), then it would be valuable to see whether January's teams deviate, in a meaningful way, from the established averages.

Hypothetically, you'd think that "Cycle Length" would decrease as less cash in hand forces teams without product-market fit to tap into a higher gear (or sustain that same high level of output), rather than spend valuable attention wondering about how to get a piece of the "carcass" should things crash and burn. "Total Cycles" would then move up and to the right, as teams get a few more in near the end of their runway.

That said, there are at least two additional potentialities.

1. "Cycle Length" and "Total Cycles" are already running along their respective natural asymptotes.

Consequence(s): A. Awesomely cool discovery re: limits of productivity! B. The number of founders able to remain highly productive while simultaneously squabbling (and providing PG w/ massive headaches) decreases.

2. The "Starting Date" for raising an additional round of financing shifts to the left (i.e. becomes earlier). With less cash in the bank, and hence a shorter runway, the average starting date for teams w/o product-market fit to begin seriously committing time to fundraising shifts "earlier".

Consequence(s): A. If more time fundraising = less time building, then the average total output of a team, in the 12 months post-YC, could decrease.

Mitigation: Personal access to the best VCs in the world could mean that the length of a "Funding Cycle" (i.e. total time spent raising a round) decreases, thus offsetting consequence 2a.

It'd be great if these numbers were something YC actually collected. If there's a science to building companies, you could certainly glean great insight from those digits.

grandalf 1 day ago 0 replies      
Think about it this way, nobody applies to YC for the money, so why part with it?
Rezal 1 day ago 1 reply      
Here are my thoughts about this:

1. You have an idea, a plan to validate it, and a plan to execute upon different outcomes, and different startups require different financing. I am just wondering if YC has clear metrics showing there is a single optimum financing level for early-stage startups across all domains (healthcare, edu, consumer web, etc.). I am just curious about this!
But then I read the following: " it sometimes caused messy disputes in the unsuccessful ones. Switching from $150k to $80k may not completely eliminate such problems, but it will make them at most half as bad."

2. The entire funding decision and amount was changed based on the negative thought of a dispute among the founders. I believe this is fundamentally wrong and against the entrepreneurial spirit. I wouldn't feel good as a person and an entrepreneur if my investor came up with this. It basically means: you don't know what you are doing, and I am going to protect you against yourself!

3. Although a 70K difference is a small number, it will have a huge impact on your startup:
- Validating your idea and pivoting: startups will show a tendency toward low-hanging fruit instead of seeking the bigger-picture solution.
- A shorter runway impacts your flexibility, and thus deal negotiations for the next rounds, considering a 3-6 month funding period.
- I am just wondering how this will impact valuations of companies post-YC. I would think the valuations would go down.

What are your thoughts on this?

waderoush 1 day ago 0 replies      
pg briefed Xconomy on the changes this morning, and the interesting part to me was the feeling that too many failing YC startups were feuding internally over the extra cash. He said such disputes have been taking up more than half of Jessica's time recently. Now startups will have less money to argue over.


thinkdevcode 1 day ago 1 reply      
Does this mean that the outside investment of 150k is no longer available? I don't see what's to stop them from investing (150k) anyway, considering it wasn't part of YC in the first place.
In my opinion 80k is a little too low, especially for international founders or those with international teams. I agree, though, that advice from VC would be better in terms of building a business versus just throwing cash at the startup.
hippich 1 day ago 0 replies      
The new version involves less money and more engagement. The VCs will invest $80k in each startup instead of $150k, and we'll organize sessions of office hours in which partners from the VC firms advise the startups in each batch. As before, the investments will be done as convertible notes with no valuation cap and no discount.
bsims 1 day ago 0 replies      
It is worth reading PG's email from 5 months back about his vision of what the fund raising environment might look like.


philwelch 1 day ago 1 reply      
Is this in addition to the $12k-20k invested by YC itself, for a total of $92k-100k?
realrocker 1 day ago 0 replies      
So it has become easier to fail fast. The inertia to prevent death has been halved. Good or Bad?
semerda 1 day ago 0 replies      
This is what will make this work: "and we'll organize sessions of office hours in which partners from the VC firms advise the startups in each batch".

That's the difference between "smart money", where investor(s) contribute more than cash into startups, and "dumb money", where investors do not.

Great move YC!

dreamdu5t 1 day ago 2 replies      
$80k or $150k seem like paltry amounts of money to fund any sort of company. What am I missing?

That's one good person's salary for a year. One.

cyphersanctus 1 day ago 1 reply      
Pg, are there really many ugly ducklings on demo day, despite the rigorous selection and the subsequent teaching/incubation process?
tyang 1 day ago 0 replies      
Another way to fix the founder breakup problem is to have a pre-nup with a few menu options.

Require founders to choose one and sign on to it prior to receiving any of the $150K (or maybe prior to acceptance to YC).

ajaymehta 1 day ago 0 replies      
I think this is wonderful... the expansion of VC office hours will be a huge help for future batches. Having investor allies is always a great thing for a young startup.
johnrgrace 1 day ago 1 reply      
Good call, guaranteeing $150K plus the seed money for the given inputs was far in excess of what you could raise elsewhere for comparable inputs. Simply the seed money, social capital, and demo day was attractive enough for me to apply.
polshaw 1 day ago 1 reply      
What equity is given for this (/how has it changed?) and is this pre defined or 'negotiated' ?
dogan 1 day ago 0 replies      
This will definitely reduce the number of startups with H1B holder founders, even if you transfer the visas, the runway will be comparatively short.
metra 1 day ago 1 reply      
Off topic: did Yuri Milner lose money on his Zynga and Groupon investments?
namank 1 day ago 0 replies      
I'm not sure if one amount will fit all startups.

Because categorization is important for organization, how about having startup brackets?

base: 80k
international: 100k

bmohlenhoff 1 day ago 0 replies      
My first thought at seeing the submission title was that YC was offering a program teaching people about how to be VCs. Then I realized that this was simultaneously awesome and nonsensical, since Joe Random probably isn't going to have the scratch necessary to fund much of anything. Still, it would probably be fascinating from an academic perspective.
andrewhillman 1 day ago 0 replies      
YC is treated just like software. Time to scale back and move forward. Those bottlenecks were obviously causing a problem.
littlegiantcap 1 day ago 0 replies      
I like the 20k from each. It's always good to have a diversity of opinions.
raheemm 1 day ago 0 replies      
It just keeps getting better
kevingibbon 1 day ago 0 replies      
love this move. I've seen first-hand the 150k lets startups limp around for far too long.
n00b101 1 day ago 5 replies      
$80k is peanuts. Apparently these folks haven't learned the lesson that you get what you pay for!

Maybe $80k is enough for some ridiculous $2 iPhone game or some startup for making a website for adding a single puny feature to an existing social network and having 15 minutes of fame.

But $80k is nowhere near enough money to create an Enterprise solutions startup. I laugh at your $80k and the hundreds of fly-by-night, gimmicky, bullshit small businesses that you will burn your money on.

It takes at least $5 million over 5 years to start up a serious business that has any chance in hell of becoming a real billion dollar company. That is the future.

$80k ... it's insulting to people with real experience, real ideas, real businesses and real explosive growth potential. You will never fund the next Apple, Microsoft, Google, Oracle, NVIDIA, etc ... You're just going to fund a whole lot of infantile business projects and a whole lot of fail.

Apple's Module proposal to replace headers for C-based languages [pdf] llvm.org
369 points by _djo_  1 day ago   169 comments top 34
haberman 1 day ago 4 replies      
Overall I like it. I like how they are treating both C and C++ as first-class citizens of this new feature (instead of, for example, inextricably tying its design to classes and namespaces). I like that they have a plausible migration story for how to interoperate with existing header files. And the overall design really looks like something that would fit into all of the C and C++ work that I do without getting in the way.

Sure it's non-standard and no one who cares about portability will use this (yet). But this is exactly the way that good ideas get refined and eventually standardized. You surely wouldn't want to standardize a module system that hadn't been already implemented and tested -- that would just leave you with surprises when theory meets reality.

C and C++ are here to stay -- we should be open to improvements in them.

They don't explicitly mention this, but I'm sure that they have no plans to remove existing #include functionality -- it is a near certainty that someone, somewhere depends on having the preprocessor state affect how an include file is processed. There are probably even cases where you can look at the design rationale for this choice and say "yep, that really is the best solution for what you are trying to do."

greggman 1 day ago 4 replies      
It's possible I don't understand the proposal, and I'm probably going to get egg on my face, but I'm not sure I really want this. If I wanted Objective C or C# or Java or Python I'd use Objective C or C# or Java or Python.

I actually like the preprocessor. I like that I can write code like this

    #ifdef DEBUG
    #define DEBUG_BLOCK(code) code
    #else
    #define DEBUG_BLOCK(code)
    #endif

    void SomeFunction(int a, float b) {
      DEBUG_BLOCK(LOG_IF_ENABLED("Called SomeFunction(%d, %f)\n", a, b);)
      ... do whatever it was SomeFunction does ..
    }

In other languages that I'm used to there's no way to selectively compile stuff in/out.

I like that I can change the behavior of a file for a single include unit

   -- foo.cc --
   #define MYLIB_EXTRA_CHECKS   // (illustrative) alters mylib.h for this unit only
   #include "mylib.h"

   -- bar.cc --
   #include "mylib.h"

   -- baz.cc --
   #include "mylib.h"

because enabling it globally would run too slow

I like that I can code generate

    // --command.h--
    #define COMMAND_LIST \
        COMMAND_OP(Open) \
        COMMAND_OP(Close) \
        COMMAND_OP(Draw)    // (example commands)

    // make enum for commands
    #define COMMAND_OP(id) k##id,
    enum CommandId { COMMAND_LIST };
    #undef COMMAND_OP

    // --command.cc--
    // Make command strings
    const char* GetCommandString(CommandId id) {
      static const char* command_names[] = {
    #define COMMAND_OP(id) #id,
        COMMAND_LIST
    #undef COMMAND_OP
      };
      return command_names[id];
    }

    // make a jump table for the commands
    typedef bool (*CommandFunc)(Context*);
    bool FunctionDispatch(CommandId id, Context* ctx) {
      static CommandFunc s_command_table[] = {
    #define COMMAND_OP(id) id##Proc,
        COMMAND_LIST
    #undef COMMAND_OP
      };
      return s_command_table[id](ctx);
    }

Or this

    class Thing {
      void DoSomething();

      // needs access to Thing's internals.
      void EmulateOldSlowLegacyFeature();
    };
Yes, I can try to hide the implementation but again, the reason I'm using C++ is because I want the optimal code. Not a double indirected pimpl. If I wanted the indirection I'd be using another language.

I love C/C++ and its quirks. I use its quirks to make my life easier in ways some other languages don't. Modules seem to be ignoring some of what makes C/C++ unique and trying to turn it into Java/C#.

People saying the preprocessor has issues are ignoring the benefits. I miss the preprocessor in languages that don't have one because I miss those benefits.

You could say, "well, just don't use this feature then" but I can easily see once a project goes down this path, all those benefits of the preprocessor will be lost. You can't easily switch your code between module and include, especially if it's a large project like WebKit, Chrome, Linux, etc.

Leave my C++ alone! Get off my lawn!

_djo_ 1 day ago 2 replies      
This looks promising, aside from being long overdue. Header files have always been one of the more annoying parts of C/C++/Obj-C development.

The important bit is that the proposal's ideas for making the transition easier are good and make it seem like this may get traction where similar efforts have failed before. That Doug Gregor and other LLVM/Clang/LLDB developers are already working on the Clang implementation is even better. At the very least we may see this in Objective-C.

nkurz 1 day ago 8 replies      
While LLVM authors probably know best, I don't understand some of his criticisms on the "Inherently Non-Scalable" slide.

  • M headers with N source files ->  M x N build cost

It's only MxN if there is no use of the "#ifndef _HEADER_H" workaround that he mentioned earlier. Wouldn't adding a preprocessor directive like "#include_once <header.h>" solve this? Alternatively, these guards could be added to the headers themselves without changing the preprocessor. This probably should be a parallel set of headers (#include <std/stdio.h>) to avoid breaking the rare cases that depend on multiple inclusions, but creating that set would be a simple mechanical translation.

  • C++ templates exacerbate the problem

I'm mostly a C programmer, so I have no argument here.

  • Precompiled headers are a terrible solution

Why is this? It likely would break the ability of headers to be conditional on previous #define statements, but since the proposal does this anyway it doesn't seem insurmountable. Along those lines, how does this proposal handle cases where one needs/wants conditional behavior in the header, such as "#ifdef WINDOWS" or the like? And is caching headers during the same compilation also "terrible"?

dpark 1 day ago 0 replies      
Overall this seems good. The lack of modules in C/C++ is a huge pain. The "link" section seems like a really leaky abstraction, though.

  module ClangAST {
    umbrella header "AST/AST.h"
    module * { }
    link "-lclangAST"
  }

This hardcodes an implementation-specific syntax and yet says nothing meaningful. Drop the "-l" and you're just restating the name of the module.
What value is there in baking a command-line flag into the module definition?

P.S. I also find it strange that something intended to blend with C/C++ doesn't use semi-colons. This is just stylistic, though.

SeoxyS 1 day ago 8 replies      
I think I'm the only person who likes headers. I'm not overly concerned with compilation times and big-o notation. Computers can compile things really fast nowadays.

I'm more concerned with the developer usability benefits & drawbacks of the feature. As somebody who is a polyglot, but spends a large amount of time writing Objective-C, I have come to absolutely love header files.

I see header files almost as documentation. To me, a header file is a description of everything that's public about an API. My header files tend to be very well commented, and very sparse, only containing public methods and typedefs.

When the need arises to make internally-includable headers (say I'm writing a static library, and have methods that are private to the library, but public to other classes within the library), I will usually write a `MyApi+Internal.h` header for internal use, which doesn't ship with the library.

A developer should never have to dig into implementation files, or into documentation, in order to use a library. Its headers ought to be sufficient. Things like private instance variables or anything private does not belong in a header file.

FWIW, here's the public header for the library I spend most of my time working on:


meaty 1 day ago 4 replies      
At the risk of starting a fight, I really don't want this. I'm quite happy with headers and know how to effectively manage them without shooting myself.

Granted there is some compiler overhead for importing large header files but I don't really notice it at all.

Also, we already have an Apple/NeXT non-standard C extension (Objective-C). I don't think we want anything else added without proper standardisation, regardless of the motivation. I'd rather they forked the language.

thwest 1 day ago 2 replies      
"Apple's Module proposal..." Is Apple really who deserves credit here? Is there something I missed about Apple's management driving this, and not Gregor or the C++ standards committee?
albertzeyer 10 hours ago 0 replies      
I once thought about an automatic bullet-proof precompiled-header system. I think this is actually not impossible to implement.

When first parsing a header, the parser can keep track of all the macros it depends on in the preprocessor state. E.g. a very common macro is the one it reads first, like `#ifndef __MYFILE_H`.

Then, including a header becomes a function (<macro1, macro2, ...>) -> (parser state update, i.e. list of added stuff). This can be cached.

shadowmint 1 day ago 0 replies      
Sounds interesting, but I've got to admit my initial response is cautious.

How will this avoid the LD_LIBRARY_PATH hell of trying to depend on a local module (ie. conflicts)?

How will it work at all with local submodules inside a single project? (ie. I have 200 local c files, each with a header. Now what? A module each? How do we handle simple dependencies between classes and functions?)

How will we import actual macros?

My guess is that the answers are:

1) include path style --module-path=blah

Seems fair, but this is going to be as messy as include paths already are.

2, 3) don't use modules except at a system level.

That's a shame as far as I'm concerned, but perhaps I'm wrong.
Can anyone else see how these might work?

vilya 1 day ago 3 replies      
Anyone know where I can read more detail about this proposal? It looks really interesting, but there are a couple of things I'm not clear on from the pdf:

How do you get away from creating a header file for a closed source module? Without a header, how would users of your module know what they can call? Can you perform reflection on a module to inspect it? Is there some kind of tool proposed, like javadoc or pydoc, to generate documentation for a module?

How does this work with C++ templates? If you don't know in advance what types the template will be instantiated with, how can you pre-compile the code?

I'm sure the authors have thought through all these issues and more; I'd love to read about their solutions.

Nursie 16 hours ago 1 reply      
Anyone saying 'inherently unscalable' about a feature of a language that's used in as many places as C needs to really think about things...

I understand some of the objections, and the import mechanism doesn't sound like a bad thing, though some of the objections are weird -

Import only imports the public API and everything else can be hidden - who wasn't doing this for libraries or large code modules anyway? Have static functions at the code-module level, 'private' headers for sharing functions within a larger logical module, and public headers at the logical module or library level. Is this too cumbersome?

crncosta 1 day ago 3 replies      
Almost like the way the D programming language handles it.


pubby 1 day ago 2 replies      
Everyone wants modules but nobody can agree on how they should behave. This is why they weren't included in C(++)11.
cjensen 1 day ago 1 reply      
I didn't see any mention of the recursive-usage problem. How do they plan on handling co-dependent files?

For example, class A's code makes use of class B. Class B's code makes use of class A.

a.h looks like:

  class B;
  class A {
    public: void foo (B *);
  };

a.cc looks like:

  #include "a.h"
  #include "b.h"
  void A::foo (B *b) { b->narf (); }

and b.h and b.cc use A in the same way.

vinayan3 9 hours ago 0 replies      
My concern is portability, because that is the largest benefit of C, at least. C++ is less portable, but still more portable than most languages.
cperciva 1 day ago 1 reply      
I don't buy the performance argument: NxM -> N + M only works if every one of the N .c files is including every one of the M .h files.

If you're spamming #includes like that, you need to fix your #includes, not redefine the language.

rjzzleep 1 day ago 3 replies      
long overdue indeed, reminds me a lot of google go?
rootedbox 1 day ago 0 replies      
Maybe all the things they figured out in pascal aren't that bad.
angersock 1 day ago 0 replies      
I may come off as a bit nutty here, but do we really want to add another mechanism to C/C++ for something that has been worked with for decades?

Most of these issues are things you learn really fast how to avoid in production systems, using things like #pragma once, include guards, and proper symbol and header exposure when writing C libraries.

This doesn't really seem like much other than feature bloat for something that works, works well enough, and which probably won't be implemented in a timely fashion by at least 1 major compiler vendor (Hey folks! There's life outside clang and gcc and icc!).

dchichkov 19 hours ago 1 reply      
It breaks "there should be one-- and preferably only one --obvious way to do it". And quite a few others.

But as a stand-alone feature, stateless preprocessor includes could be a nice feature to have.

jeremyx 1 day ago 2 replies      
Since these are not in C++11, we'll have to wait 10 years...
jonhohle 1 day ago 0 replies      
I like the idea, but hopefully they'll shorten the std submodules to `std.io`, `std.lib`, etc.
sev 1 day ago 1 reply      
They mention:

> "'import' ignores preprocessor state within the source file"

I wonder if that rules out the specific use-cases where you wouldn't want the import to ignore the pre-processor state within the source file?

Overall, I like it!

pserwylo 1 day ago 0 replies      
This is cool, should make Rusty Russell's CCAN [0] a whole lot more interesting. Instead of just snippets of useful code, it could contain full modules like CPAN/PyPI/PEAR/CRAN/CTAN and various other repositories for other languages.

[0] - http://ccodearchive.net/index.html

acomjean 1 day ago 1 reply      
I thought one of the points of headers system was you could use the code without having to slog through all the source.

Thinking about my trips to /usr/include, those headers weren't that useful for coding with but you could get constants and function names at least.

mtdev 1 day ago 2 replies      
Looks like the core issue is a poor preprocessor implementation. It's a good idea in principle; however, we would be adding new features to address shortcomings in existing features instead of fixing the problems in the existing code.
georgeg 1 day ago 0 replies      
Is D not doing this sort of thing already or am I wrong?
Executor 1 day ago 0 replies      
Reading this PDF made my day! I can't wait for headerless c/c++. When will this be implemented???
optymizer 1 day ago 1 reply      
I hear D is backwards compatible with C (and C++?). They already have modules: http://dlang.org/module.html . I should use D more often.
jfaucett 1 day ago 0 replies      
He's basically just describing a clunkier version of the Go package model
zopticity 1 day ago 0 replies      
Looks like they are trying to rewrite Python.
jheriko 1 day ago 1 reply      
the one problem i agree with him on is performance - from what i can see his proposal does something to potentially improve that, but its not clear. i worry that caching pre-processed files is a red herring - is it really faster than re-including? what about preprocessor states? what about macros in include files? etc.

i feel that the preprocessor ultimately ends up with the same amount of work, just an extra pass for each included header to build a version to be cached… not to mention the complexity required to handle the multiplicity of pre-processor states required for this. maybe i am being dim and missing the obvious.

tbh, i'd rather they made their compiler work properly, like respecting alignment on copies with optimisation turned on, or implementing the full C++ 11, before adding language features to fix problems that nobody really has.

msbarnett 1 day ago 0 replies      
> The article is disingenuous. Alongside its oh-so-sassy table of file sizes for helloworld, it needs a table of runtimes for helloworld. Turning stdio into an API instead of preprocessor soup is going to blow that up, unless the guy means something very unusual by "API".

You seem to be deeply confused here. This is simply a proposal for persisting the AST for eg) stdio across compiler invocations instead of reparsing it on every textual substitution of a #include, and isolating source AST changes from erroneously corrupting header ASTs. It has no runtime implications because the outputs of the linking stage will be identical.

stdio's preprocessor soup is its API. You seem to be the one with an unusual and much narrower meaning of "API".

I'm writing my own OS gusc.lv
286 points by maidenhead  1 day ago   197 comments top 55
ChuckMcM 1 day ago 10 replies      
Sigh, folks give the guy a break.

Sure he doesn't know what he doesn't know, but he has decided to fix that. Which, if you know your history, is not a whole lot different than Linus back when he was calling out Minix for being crap.

The challenge here is that the barrier to speaking on the interwebs is quite low so you can make a fool of yourself if you're not careful.

Jean Labrosse, who wrote uC/OS (which everyone called mucos), in his original thesis statement made many of these exact same arguments. And like your author he made some choices that he felt were reasonable, only to learn through experience that perhaps they weren't as well thought out as he had hoped.

I am a huge fan of people just saying "How hard can it be?" and jumping in. Enjoy the ride; they can surprise you if you underestimate them.

So assuming this person notes that they are getting a ton of traffic from HN, and comes here to investigate, my three suggested books are :

Operating System Concepts [1], Operating System Implementation [2], and The Design of UNIX [3]. Preferably in that order. Any decent college library should have all three in the stacks.

[1] www.amazon.com/Operating-System-Concepts-Seventh-Edition/dp/0471694665/

[2] www.amazon.com/Operating-Systems-Design-Implementation-3rd/dp/0131429388/

[3] www.amazon.com/Design-Operating-System-Prentice-Hall-Software/dp/0132017997/

zanny 21 hours ago 2 replies      
I love little thought experiments like this, so heres my 2 cents:

1. Targeting a modern architecture is good, but if I were being this ambitious I wouldn't take on an architecture as burdened by backwards compatibility as x86_64 (even when it is highly performant just through raw funding dollars). I would rather start at square 1 on some RISC, 64-bit-virtual / 48-bit-physical word system. Go even further, and design such a hardware ecosystem with heterogeneous computing built into the foundations - have arbitrary numbers of ALUs and FPUs, and have different pipeline structures allowing for various degrees of SIMD parallelism across some tightly integrated weak cores and more heavily pipelined and bulkier serial cores, and have an intelligent enough instruction set to allow scheduling (or even better, the hardware itself) to recognize parallel tasks and execute them with varying degrees of parallelism. Take AMD Fusion or Tegra to the next level and instead of having a discrete GPU and CPU on one die, mash them together and share all the resources.

2. I'd kick C out. If I'm going with a new architecture, I need to write the compiler from scratch anyway. I might consider LLVM for such a system, just because the intermediary assembly layer is intentionally lossless and allows for backwards language compatibility with everything under the sun right now. But ditch C, take modern language concepts from C++, Python etc, cut down on the glyphic syntax, and try rethinking the distribution of special characters (I think pointer<int> c makes more sense than int *c, for example - go even further, and provide 3 levels of verbosity for each concept, like pointer<int32> c, ptr<int32> c, and &:i32 c). I would definitely want to fix standard type sizes at the least, having things like i32 integers instead of the int type being 16 or 32 bit, etc, with some more modern niceties like the D-style real float that uses the architecture's maximum FPU register size.

3. Screw UEFI, mainly because it is a design-by-consortium concept - it is inherently cumbersome because it was a committee project between industry giants rather than a revolution in booting. I do like cutting down on legacy interfaces; I'd go even further and try to minimize my system to (in theory) one serial transport and one digital, maybe 4, with unidirectional and bidirectional versions of both, and maybe support for some classic analog buses (like audio, which doesn't make much sense to transport in digital format, although I haven't looked into it much). Everything plug and play, everything attempting to provide power over a channel so you don't need additional power connectivity if you can avoid it. For the BIOS, I would replace it with something like: scan buses for profiles -> initiate some kind of device self-test -> provide device information in memory to the payload binary, to allow memory mapping and all the other goodness. Maybe even have the BIOS itself act as a sub-kernel and provide the mapping itself. Maybe even fork the kernel, and treat it like some kind of paravirtualized device environment where the BIOS never overrides itself with the payload but instead stays active as a device interface. Saves a lot of code redundancy between the two then. It would of course have an integrated bootloader and the ability to parse storage device trees for some bootable material. Maybe have file system standards where each partition has a table of pointers to loadable binaries somewhere, or maybe stick them in some partition table entry (obviously not an FS expert here).

4. Screw URIs, go straight for a kernel-wide VFS that can reference everything. I'd love to see /net/<IP address>/ referencing the top level of some remote server's public resources. You could have a universal network protocol where each connection is treated as a virtual mount, and individual files (and everything is a file, of course) can dictate if they use streamed or packet-based data on top of the base protocol. So instead of having http://google.com, you could use /net/google.com/ which, when opened, does DNS resolution in the VFS to (well, IPv6, obviously - we are talking about a new OS here, so 2001:4860:8006::62 - and as a side note, I would never try to get rid of IP as the underlying transport protocol - as insane as I might be about redesigning hardware and rethinking stuff people much smarter than myself came up with, I know you will never usurp IP as the network transport everyone uses to connect the world, ever). And then when you open google.com/search, you open a remote file that interprets the "arguments" of ?q=baconatorextreme on the extension into the returned page file that you access.

I agree with getting rid of Unix directories; they are outdated and crappy, and their names make no sense. However, /bin is meant to be system-required binaries to boot, where sbin is root utility binaries, /usr/bin is general-purpose executables that might not be local to the machine and might be a remote mount, and /usr/local/bin is the local machine's installed binaries. Of course these policies are never abided by, and they still have /etc, /usr/games, and a bunch of other folders to make life a hassle.

That's enough rants for a HN comment thread though, I'll stop and spare y'all :P

readme 1 day ago 0 replies      
If any of you naysayer arsehats (you know who you are) bothered reading to the bottom of his article, you'd have seen that he has a section where you (if you do indeed know about OS development) could help him by answering his questions. I pasted it below for reference:


My research

What I've found out so far:

Boot sequence:

Master Boot Record (MBR);

Bootloader - the program that takes over from the MBR and loads your Kernel;

How to write your own MBR and write it to Disk on windows.

I've written a small utility in Visual C++ that allows you to read/write directly from disk (download here, source included for Visual Studio 2010 Express);

How to write bare bones C kernel entry point.

How to write “naked” functions on Windows in Visual Studio

Missing link - I still don't know how to properly step from MBR to Bootloader to Kernel, that is - write your own MBR code that would load the bootloader, pass execution to the bootloader, which would load and pass execution to a bare bones C kernel:

What exactly is Global Descriptor Table (GDT) and Interrupt Descriptor Table (IDT), and how it looks in C and Assembly?
How and when, and again how, if when is later (for example in Long Mode or Protected Mode) to set up all this GDT and IDT stuff. They say you have to set it up before the kernel. Then they say you can set it up with dummy values and set it up later in kernel. Then they say, that to set it up, you have to be in Real Mode, so your kernel (which might be way over 1Mb of real mode space), needs to switch between modes. And then if your kernel is over 1Mb, you can't access memory locations after 1Mb, and so on… It's confusing, but I'm going to find it out and post it here later on.

How to handle Interrupts in C?
Will they perform as callbacks that await some return values or do I have to use inline assembly to process them correctly;

Is it possible to write MBR in C?
I do understand that you still have to set ORG to 7c00h, and use some specific assembly instructions, but if they could be wrapped in C using inline assembly and C entry point can be glued with few lines of assembly code, why not?

robomartin 21 hours ago 1 reply      
OK, if you don't have any real experience in low-level embedded coding (relevant to device drivers), RTOS or OS design in general, file systems, data structures, algorithms, interfaces, etc. And, if you have "hobby level" experience with Assembler, C and C++. And, if your intent is to write a desktop OS, from the ground up, without making use of existing technologies, drivers, file systems, memory management, POSIX, etc. Here's a list of books that could be considered required reading before you can really start to write specifications and code. Pick twenty of these and that might be a good start.

In no particular order:

1- http://www.amazon.com/C-Programming-Language-2nd-Edition/dp/...

2- http://www.amazon.com/The-Answer-Book-Solutions-Programming/...

3- http://www.amazon.com/The-Standard-Library-P-J-Plauger/dp/01...

4- http://www.amazon.com/C-Traps-Pitfalls-Andrew-Koenig/dp/0201...

5- http://www.amazon.com/Expert-Programming-Peter-van-Linden/dp...

6- http://www.amazon.com/Data-Structures-In-Noel-Kalicharan/dp/...

7- http://www.amazon.com/Data-Structures-Using-Aaron-Tenenbaum/...

8- http://www.amazon.com/Mastering-Algorithms-C-Kyle-Loudon/dp/...

9- http://www.amazon.com/Code-Complete-Practical-Handbook-Const...

10- http://www.amazon.com/Design-Patterns-Elements-Reusable-Obje...

11- http://www.amazon.com/The-Mythical-Man-Month-Engineering-Ann...

12- http://www.amazon.com/The-Programming-Language-4th-Edition/d...

13- http://www.amazon.com/The-Standard-Library-Tutorial-Referenc...

14- http://www.amazon.com/API-Design-C-Martin-Reddy/dp/012385003...

15- http://www.amazon.com/The-Linux-Programming-Interface-Handbo...

16- http://www.amazon.com/Computer-Systems-Programmers-Perspecti...

17- http://www.amazon.com/System-Programming-Unix-Adam-Hoover/dp...

18- http://www.amazon.com/Memory-Programming-Concept-Frantisek-F...

19- http://www.amazon.com/Memory-Management-Implementations-Prog...

20- http://www.amazon.com/UNIX-Filesystems-Evolution-Design-Impl...

21- http://www.amazon.com/PCI-System-Architecture-4th-Edition/dp...

22- http://www.amazon.com/Universal-Serial-System-Architecture-E...

23- http://www.amazon.com/Introduction-PCI-Express-Hardware-Deve...

24- http://www.amazon.com/Serial-Storage-Architecture-Applicatio...

25- http://www.amazon.com/SATA-Storage-Technology-Serial-ATA/dp/...

26- http://www.amazon.com/Beyond-BIOS-Developing-Extensible-Inte...

27- http://www.amazon.com/Professional-Assembly-Language-Program...

28- http://www.amazon.com/Linux-Kernel-Development-3rd-Edition/d...

29- http://www.amazon.com/Version-Control-Git-collaborative-deve...

30- http://www.amazon.com/Embedded-Software-Primer-David-Simon/d...

31- http://www.amazon.com/Programming-Embedded-Systems-C/dp/1565...

32- http://www.amazon.com/Making-Embedded-Systems-Patterns-Softw...

33- http://www.amazon.com/Operating-System-Concepts-Abraham-Silb...

34- http://www.amazon.com/Performance-Preemptive-Multitasking-Mi...

35- http://www.amazon.com/Design-Operating-System-Prentice-Hall-...

36- http://www.amazon.com/Unix-Network-Programming-Sockets-Netwo...

37- http://www.amazon.com/TCP-Illustrated-Volume-Addison-Wesley-...

38- http://www.amazon.com/TCP-IP-Illustrated-Vol-Implementation/...

39- http://www.amazon.com/TCP-Illustrated-Vol-Transactions-Proto...

40- http://www.amazon.com/User-Interface-Design-Programmers-Spol...

41- http://www.amazon.com/Designing-Interfaces-Jenifer-Tidwell/d...

42- http://www.amazon.com/Designing-Interfaces-Jenifer-Tidwell/d...

43- http://www.amazon.com/Programming-POSIX-Threads-David-Butenh...

44- http://www.intel.com/p/en_US/embedded/hwsw/software/hd-gma#d...

45- http://www.intel.com/content/www/us/en/processors/architectu...

46- http://www.intel.com/p/en_US/embedded/hwsw/hardware/core-b75...

47- http://www.hdmi.org/index.aspx

48- http://en.wikipedia.org/wiki/Digital_Visual_Interface

49- http://www.amazon.com/Essential-Device-Drivers-Sreekrishnan-...

50- http://www.amazon.com/Making-Embedded-Systems-Patterns-Softw...

51- http://www.amazon.com/Python-Programming-Introduction-Comput...

52- http://www.amazon.com/Practical-System-Design-Dominic-Giampa...

53- http://www.amazon.com/File-Systems-Structures-Thomas-Harbron...

54- ...well, I'll stop here.

Of course, the equivalent knowledge can be obtained by trial and error, which would take longer and might result in costly errors and imperfect design. The greater danger here is that a sole developer, without the feedback and interaction of even a small group of capable and experienced programmers, could simply burn a lot of time repeating the mistakes made by those who have already been through that territory.

If the goal is to write a small RTOS on a small but nicely-featured microcontroller, then the C books and the uC/OS book might be a good shove in the right direction. Things start getting complicated if you need to write such things as a full USB stack, PCIe subsystem, graphics drivers, etc.

exDM69 18 hours ago 0 replies      
This guy is slightly clueless but he has the spirit. I've written my own hobby operating system skeleton and it was a very good learning experience.

Here's a few notes about his plans:

  > Target modern architecture
> Avoid legacy, drop it as fast as you can. You can even skip the Protected mode and jump directly to Long mode

I went on and wrote my hobby OS on x86_64 too. Unfortunately, working in x86_64 long mode is a little bit more difficult than using 32 bit protected mode. You can go direct to long mode, but you'll have to write that from scratch. GRUB and other multiboot protocol capable bootloaders set up 32-bit protected mode for you but not long mode. You cannot be in long mode without paging enabled (unlike in protected mode).

So if you want to "skip" protected mode, you'll have to write a pile of assembly code to get there. x86_64 is a lot more work than 32bit x86.
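As one concrete piece of that pile of setup: long mode cannot run without page tables, and a minimal bring-up typically identity-maps memory with 2 MiB pages before flipping CR4.PAE, EFER.LME and CR0.PG. The entry encoding itself is plain bit arithmetic - an illustrative sketch (not exDM69's actual code):

```c
#include <assert.h>
#include <stdint.h>

/* x86_64 page-table entry flag bits. */
#define PTE_PRESENT (1ull << 0)
#define PTE_WRITE   (1ull << 1)
#define PTE_HUGE    (1ull << 7)  /* in a PD entry: maps one 2 MiB page */

/* Identity-map one 2 MiB page: the entry is just the 2 MiB-aligned
   physical address with a few flag bits OR'd in. */
uint64_t pd_entry_2mb(uint64_t phys)
{
    return (phys & ~0x1FFFFFull) | PTE_PRESENT | PTE_WRITE | PTE_HUGE;
}
```

A minimal boot path chains one PML4 entry to one PDPT entry to a page directory filled with entries like these, which is most of the "pile of assembly" expressed as data.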

  > Jump to C as soon as possible

This is most definitely the right thing to do. Jump into C code as soon as possible. Getting shit done in Assembly is so much slower.

You only need a few pieces of assembly code to get an operating system running: the boot code and the interrupt handler code. The boot code and the interrupt handler are just small trampolines that go to C code as soon as possible.

In addition to the boot and interrupt handler code, you occasionally need to use some privileged mode CPU instructions (disable interrupts or change page table, etc). Use inline assembler for that.

Anyone in this thread who suggested using something other than C seemed to be fairly clueless about it. Of the choices you have available, C is the simplest way to go. Everything else is either more work or more difficult.

  > Forget old interfaces like PCI, IDE, PS/2, Serial/Parallel ports.

Not so fast. You most likely want to implement a serial console for your operating system. Maybe even add a serial port debugging interface (GDB stubs).

You're most likely going to have to deal with the PCI bus at some point too: although many devices no longer sit on a physical PCI bus on the motherboard, they still show up as PCI devices. Look at the output of "lspci" on Linux - all of those devices are accessed through PCI. This includes USB, PCIe, SATA, IDE, network interfaces, etc.
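To give a flavour of what "dealing with PCI" means at the lowest level: the legacy configuration mechanism (#1) addresses a register by packing bus/device/function/offset into one dword written to I/O port 0xCF8, then reading the data from 0xCFC. The port I/O needs kernel privilege, but the address encoding is simple - an illustrative sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Build the CONFIG_ADDRESS dword for PCI configuration mechanism #1.
   bit 31 = enable, bits 16-23 = bus, 11-15 = device, 8-10 = function,
   2-7 = dword-aligned register offset. */
uint32_t pci_config_addr(uint8_t bus, uint8_t dev, uint8_t func, uint8_t offset)
{
    return 0x80000000u
         | ((uint32_t)bus << 16)
         | ((uint32_t)(dev & 0x1Fu) << 11)
         | ((uint32_t)(func & 0x07u) << 8)
         | (offset & 0xFCu);
}
```

Enumerating the bus is then a loop over bus/device/function, writing each address with `outl` and checking whether the vendor ID read back is 0xFFFF (no device).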

Again, using the modern buses is a lot more work than using the old ones and it partially builds upon the old things.

  > Why does every tutorial still use such an ancient device as Floppy?

Because when doing a bootloader from scratch for a tutorial, it's a lot easier to use the floppy disk than it is to use a real hard disk or any other media.

  > Avoid the use of GRUB or any other multiboot bootloader - make my own and allow only my own OS on the system

No no no. If you want to build an operating system, do not build a bootloader. Use the multiboot protocol and things will be a lot easier. You'll get started so much faster and get to the real stuff sooner. (NOTE: I don't know how UEFI devices boot, it might contain something like multiboot).

Most hobby operating systems are just half-assed stage 1 bootloaders. Just get over the fact that you'll have to use code written by others and get booted.

Popular emulators (bochs, qemu) can boot multiboot kernels directly so you'll save a lot of time there too.
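Getting GRUB or QEMU to boot the kernel mostly means embedding a valid Multiboot (v1) header in the first 8 KiB of the image. It is usually done in assembly, but here is an illustrative C version (the section name ".multiboot" is a convention you'd pick in your own linker script, not something fixed):

```c
#include <assert.h>
#include <stdint.h>

#define MB_MAGIC 0x1BADB002u
#define MB_FLAGS 0x00000003u  /* page-align modules, request memory map */

struct multiboot_header {
    uint32_t magic, flags, checksum;
};

/* The bootloader scans the first 8 KiB of the image for this header;
   placing it in a dedicated section keeps it near the start.
   The spec requires magic + flags + checksum to wrap to zero. */
const struct multiboot_header mb_header
    __attribute__((section(".multiboot"), aligned(4))) =
    { MB_MAGIC, MB_FLAGS, (uint32_t)-(MB_MAGIC + MB_FLAGS) };
```

With that in place, `qemu -kernel yourkernel.elf` can load the ELF directly - no bootloader of your own required.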

You need to get booted in an emulator and running under a debugger as quickly as possible. Operating system development is so much easier to do with a debugger at hand. Failures generally cause a boot loop or hang the device so there won't be a lot of diagnostics to help with issues.

So my advice is: set up Qemu + GDB + multiboot, and get your kernel booted in a debugger as early as you can.

I won't go into commenting his wacky ideas about VFS structure or APIs. It's nice to make great plans up front but by the time you're booted to your own kernel, a lot of the naïve ideas you started with will be "corrected".

Happy hacking and do not listen to the naysayers.

PS. here's my hobby OS: http://github.com/rikusalminen/danjeros

untog 1 day ago 8 replies      
I almost didn't post this comment because it makes me sound like such a killjoy, but:

"I spend most of my days in the world of PHP, JavaScript (I love jQuery) and a little bit of HTML, CSS and ActionScript 3.0. I've done some hobby software development and also wrapped my head arround languages like C, C++ (except for templates), C# and Java."

It sounds like you don't have the experience required to make an OS. I certainly don't either (I'm no C-head) so I am in no position to snark, but you're going to fail in this endeavour.

That doesn't mean it's pointless, though- I think it'll be a tremendous learning experience in getting to grips with the core of how computers actually work. So, good luck. Just don't go thinking you're going to make the next Linux out of this.

EDIT: It's also important to note that the author didn't submit this to HN. He didn't say "take this, HN amateurs!", he just posted something on his blog that someone else picked up.

Xcelerate 1 day ago 2 replies      
I think people are being overly pessimistic. It's not that ambitious of a project (particularly a hobby project at that). Why? Well, he doesn't have to support thousands of different drivers and hardware configurations -- he only needs code that supports his own. That eliminates a good chunk of the OS code.

Second, a lot of grunt-code can be found in open source projects, so most of the tedious/time-consuming programming can be eliminated if he chooses to follow this option.

Drop preemptive multitasking, cache-optimization, modes, virtual memory, and networking and there's not too much left.

And finally, it doesn't take that long to understand the GDT. Mine became corrupted once so I took a day to learn how it worked -- fixed that crap in a hex editor.

So no, he's probably not going to invent the next highly polished OS that handles every edge case and has been rigorously tested against bugs, but then again I don't think it's unreasonable to see a simple little functional OS.

steve8918 1 day ago 1 reply      
I don't think writing your own OS is as difficult as people are saying. You're not trying to compete against Windows or Linux, you just want to get something up and running. Bugs and crashes will surely be prevalent, but I'm sure it would be a great project.

A really, really good book for this that I've read is "Developing Your Own 32-Bit Operating System" by Richard Burgess. It starts you from the beginning, and walks you through all the steps in writing a really basic OS.

It's old and out of print, but it's definitely the best one I've seen.

Edit: I just found the website, they are offering the book free here:


robomartin 1 day ago 10 replies      
Anyone who has ever written a small RTOS on a small 8 bit embedded processor will only laugh at the OP. And, I hate to say, it would be justified. There are about twenty or thirty books between where he is now and where he'd have to be in order to even start talking about designing an OS for a desktop platform. Add to that 10,000 hours of coding low to high-level projects across embedded to desktop platforms.

A quick read of the "About" page is probably in order:


What to say?

"Someone holding a cat by the tail learns something he can learn in no other way" --Mark Twain.

Here's the tip of the tail:





Have fun.

kamme 1 day ago 0 replies      
Most of the comments seem so negative... When I was 16 I was also interested in OS development and actually wrote a boot sector, a very small kernel and FAT12 support in assembler. Previous experience? Basic and QBasic. It's quite a good way to learn, and it is possible when you take one step at a time and are willing to spend some time reading... Your mind is a great tool, have some faith in it.
forgottenpaswrd 1 day ago 0 replies      

Too ambitious. Doing that requires millions of dollars and tens of thousands of man-hours. How do I know? I do electronics and low-level programming work and I am really good at it. Just understanding the bugs that manufacturers put in hardware and then solve in software (because it is way cheaper) takes a ton of work.

As I suppose he is not super rich, he will have to convince people to join his project, a la Linus.

Good luck with that!! I really wish a clean, non-backwards-compatible OS were real - I would add native OpenCL, OpenVG and OpenGL to the list - but my arachnid sense tells me a person who does not use Unix will have a hard time getting traction with geeks.

munin 1 day ago 1 reply      
Implementing an OS is hard work, even if you build on the very hard work of those who have tried before you in terms of APIs, abstraction layers, etc. It doesn't sound like you have a lot of familiarity with low-level software development, or software development in general, so you just took a "hard mode" project and cranked it up to "nightmare".

Generally this is a bad idea because without any external motivation, you lose interest and stop working. With external motivation it's worse, because you can burn out and become a catatonic shell of a person, staring absently into space for the rest of your life.

Just some FYIs:

> On the side note - It's 21st century, but our PCs are still booting up as old-fart Intel 8086.

You should read about EFI (http://www.intel.com/content/www/us/en/architecture-and-tech...)

You should also read all of the lecture materials from good universities OS classes. In those classes, you basically do this. Some classes are more guided than others. Some places to start:

- CMU: http://www.cs.cmu.edu/~410/

- UMD: https://www.cs.umd.edu/~shankar/412-F12/

UMD uses a toy operating system called GeekOS that the students extend. You might find browsing its source code useful (http://code.google.com/p/geekos/)

Good luck!

wreckimnaked 1 day ago 1 reply      
> How to handle Interrupts in C? Will they perform as callbacks that await some return values or do I have to use inline assembly to process them correctly;

Not to disappoint you, but you should try doing some more low-level programming or dabbling with some existing OS code to get an idea of what this kind of program looks like. Maybe have a look at Minix as a reference for a simple OS?

Have you thought about targeting ARM? Its architecture may be way less tricky than most Intel CPUs.

Well, good luck with that. Worst case scenario, you'll end up reading lots of interesting resources.

robomartin 1 day ago 0 replies      
On threads such as this one it is easy --and even popular-- to dump on those, such as myself, who come down on the "nay" side, even when this is firmly based on experience and having made plenty of mistakes in the field.

The linked article does NOT talk about a one-semester school project or a quick-and-simple learning OS.

No, the article talks about a web developer with no real experience writing low-level code not only wanting to bootstrap every single device driver but also ignoring years of accumulated knowledge and code libraries to write an OS that boots directly into graphical mode, does not take advantage of POSIX and more.

There's nothing wrong with the "How hard can it be?" approach to learning. I've done this many times. And almost every single time I came away with "I sure learned a lot, but what the fuck was I thinking?". The last time I pulled one of those was about fifteen years ago and the "three month project" took nearly two years.

What he is talking about is more complex than writing the Linux kernel from scratch because he wants to re-invent everything. Here are some stats on the Linux kernel:


Even if his project were 10% of this it would still be a grotesque miscalculation for a single developer, otherwise employed and without the experience to back up some of what he is proposing.

If, on the other hand, the post had suggested something like this it would have been far more reasonable an idea:

"Hey, I just spent a year implementing everything in the Tanenbaum book. Now I would like to start from that base and enhance the OS to make it do this...".

Let's compare notes in a year and see how far he got.

robinh 1 day ago 6 replies      
Oh dear... overly ambitious plans, changing everything for the sake of change, fearing assembly, using VS for OS development, and not even knowing how the GDT and IDT work. There's so much wrong with this I don't even know where to begin. I sense a doomed project.
mvzink 1 day ago 0 replies      
Looks like you haven't even started thinking about the problems you'll run into architecting the actual mechanisms and policies of the operating system: process scheduling, virtual memory, etc. That's probably for the better - one thing at a time. For when you do get to that stage, I recommend Operating Systems: Three Easy Pieces. http://pages.cs.wisc.edu/~remzi/OSTEP/
readymade 1 day ago 0 replies      
I could have guessed this would turn into a massive flamefest, but c'mon people. So what if he's green? This will be a learning experience for him. And I'll hazard that in the end, even if he never ends up writing a whole OS from scratch, he will have gained more valuable low-level experience than the vast majority of those here.
endlessvoid94 1 day ago 0 replies      
I'll be following this. Don't listen to the hate -- dive in, I'm sure you'll learn a ton, and maybe build something useful for yourself and others.

This is the exact definition of hacking, if you ask me.

drewmck 1 day ago 0 replies      
Please read Linus Torvald's book "Just for Fun: The Story of an Accidental Revolution" http://www.amazon.com/Just-Fun-Story-Accidental-Revolutionar.... It mostly deals with his experience building Linux and the insane amount of work it took (he was a student at the time he wrote the first version, with the help of hundreds of other people via distributed development). It might give you some additional insight into the effort involved.
chewxy 1 day ago 2 replies      
There is a surprising amount of negativity coming from what I expect to be a 'hacker' crowd. This kid (he's 30, can I call him a kid?) has ambition and what appears to be the drive to create his own OS - I mean, if you read the bottom of his page, he did some research (sure, wiki-ing them is not equal to actually understanding them), but it at least shows willingness to learn, and we should not be putting him down at all.

Sometimes the HN crowd surprises me. We pride ourselves in being hackers, most often idealistic (bitcoins and patent law change anyone?) but when a singular person shows idealistic ambition, we immediately engage in poppy cutting.


nnq 13 hours ago 0 replies      
Short advice: find a compiler that supports modern, decent C (C99), NOT VS (http://www.infoq.com/news/2012/05/vs_c99_support/) - it may not matter that much for your kernel code, but at least at the end of your adventure you will have learned how to write good modern C (and no, C is not dead and replaced by C++; they are languages with different philosophies used by different kinds of programmers, and they are both evolving on their own routes, despite FUD originating from Microsoft and other sources)

...and when you reach the GUI part, do the same for C++: use the latest version and language features. I've heard that VS2012's latest upgrade got closer to it, but google around before settling on it

...or to keep it simpler: better to use GCC (since the Linux kernel is built with it, you should find enough compiler-specific docs and related tools too)

polymathist 1 day ago 0 replies      
UPDATE: The original author has posted a Part 2. Looks like he's already started writing code and hitting milestones. http://gusc.lv/2012/11/im-writing-my-own-os-p2/
sourc3 1 day ago 1 reply      
This is pretty interesting. I find this type of behavior everywhere in the software world: if I cannot figure something out with a platform, let me re-write it. An interesting observation is that the number of these "re-writes" is inversely proportional to the experience of the person proposing them.

Good luck to the author, nonetheless it will be a good learning experience for him.

pshc 1 day ago 3 replies      
I'd like to see someone try re-inventing a minimalist userspace. Create an OS running on the Linux kernel without coreutils, binutils etc., and see how far you can go.

If you strip out loadable module support and such, is it possible to boot without the usual POSIX support structure? Without filesystems?

agumonkey 1 day ago 0 replies      
I hope you know plan9 or other OSes beside *nix/POSIX so you have a larger field of view.

If I had crossed the desire threshold to start that project (the #1 project in my mind since I left college) I'd leave the C ecosystem altogether and design a typed, functional, binary-friendly, modular subset of C (and probably be forever alone). Something in the groove of http://en.wikipedia.org/wiki/BitC, even though its talented author concluded it wasn't a successful path.

olalonde 23 hours ago 0 replies      
I am reminded of http://www.sparrowos.com/ (aka losethos).
happywolf 1 day ago 0 replies      
The Unix/Linux systems were designed by a lot of very smart people and have gone through many iterations. No doubt they have some historical baggage, but there are good reasons why the current design is as it is. Those who don't try to learn the history are doomed to repeat it.
mimog 1 day ago 0 replies      
Nope. If you want to know how to make a small OS that can run on a PC, take a look at xv6, a modern re-implementation of the sixth edition of Unix, which is on GitHub. You can compile it and then run it in qemu. Fork it, read and understand the source, and then expand upon it to your heart's content. That would at least give you a very good starting point.
desireco42 1 day ago 0 replies      
I think his critique of the Linux file layout and the other points made are completely spot on. I, and I'm sure others, have often thought how great it would be if things were different. I happen to know why those folders are named as they are, and the names are completely arbitrary. He also started on it and made some initial progress. So he did PHP before, boo-hoo.

I would prefer he had decided to fork Linux and change the things he didn't like rather than start from scratch. However, there is great value in starting from scratch. I wish I had a life :) to join him and figure out things together, it would be a blast - how many times in your life do you have a chance to work on an actual modern OS?

I believe it is totally possible for him to accomplish what he started, if knowledgeable people join him and work on the project together. Today, with amazing tools, it is a good time to create a new OS with modern tooling.

I wrote recently on my blog about a need for developer distribution of linux. Strangely this is still missing. http://softwaredevelopmentinchicago.com/2012/10/17/ubuntu-al...

It is great that we are discussing this. That is how things start.

guilloche 17 hours ago 0 replies      
As many people here said, this guy may not know what he doesn't know. But I admire his bravery, and it is a good thing for someone to shrug off all the legacy burdens and start a fresh OS.

As a developer, I have similar feelings about software, including OSes, and I started a fresh vector editor project (Torapp guilloche online designer http://www.torapp.info). I know a vector editor is much simpler than an OS, but it is also pretty complicated. When designing the editor I learned a lot and changed designs multiple times. I am sure that guy will learn too, and even if he cannot complete an OS, he may leave a well-designed code base for other people to start from.

jff 1 day ago 0 replies      
Take care when writing your own bootloader, or you may find yourself essentially maintaining two separate kernel trees.
jiggy2011 1 day ago 0 replies      
Won't be big and professional like gnu.

Seriously though, good luck.

bitteralmond 1 day ago 0 replies      
I agree that this project is hugely ambitious for one man. So was Linux. Linus recruited a bunch of other hobby programmers to help him make it. Although he may be jumping the gun on announcing it, he's got a lot of good ideas about stripping back all the obfuscation that's resulted from 30 years of stacking things onto the same old conventions and wiggling them around until they fit.

I'm sure the idea of building a modern OS that is straightforward and written in a simple, popular language like C (and possibly Python later for higher-level stuff) will appeal to a wide range of people who will all want to help. I'd love to see this project happen, and if the day comes where Gusts is calling for help, I'll be right there in line to help him make this.

mrng 1 day ago 0 replies      
"I still don't understand the meaning and differences between /bin, /usr/bin and /usr/local/bin"

Oh. OK, then.

dindresto 8 hours ago 0 replies      
OS development can be fun. I'm writing a hobby OS myself, but currently using GRUB. GDT and interrupts are working, so I can already e.g. get keyboard input. But there's nothing like usermode programs yet.
I'm 16, so what bcantrill said might be true ("there is a certain arrogance of youth here").
DannyBee 22 hours ago 1 reply      
Today's modern interfaces are tomorrow's obsolete ones.
It's not like PCIe will last any longer than PCI, PCI-X, ISA, Vesa Local Bus, EISA, etc.
olgeni 1 day ago 1 reply      
> No Unix directory tree. I hate it - it does not say anything to a user

Actually it has a lot to say, but in this case it just pleaded the Fifth Amendment.

drivebyacct2 1 day ago 0 replies      
[deleted] ChuckMcM is right, let him find out what he doesn't know. Props for the ambition.
ww520 1 day ago 0 replies      
Sigh. I don't know what to say. I admire OP's desire to dive into OS development but I hope he has the perseverance to carry it through, because he has a long way to fill in the huge gaps in his knowledge to build an OS.
capkutay 1 day ago 0 replies      
I would avoid the project unless you truly understand the scope of what you're doing and are dying to get your hands dirty with VM and filesystem implementation despite little reward (other than the satisfaction of learning). OS dev is quite low level and infamously hard to debug.
capkutay 1 day ago 0 replies      
I would look at the Design of Unix. That's what Linus used to make Linux. Individually study everything you don't understand...

Also, prepare for about 6 months of hard yet rewarding work, given that you put in about 50 hours a week ;)

Geee 1 day ago 0 replies      
Good luck, I certainly hope you don't end up like losethos.
gusc 12 hours ago 0 replies      
Holy jumping Jesus … I just got my 15 minutes of shame/fame over the internet. I linked my “just write down what you're thinking” blog post on dzone.com and somehow it got posted on Hacker News and from there … Shit just hit the fan. It seems that writing “I'm writing my own OS” as a blog title can be translated from “I have spare time, I want to try out new things” into “Fuck this shit, I'm going for a revolution!!!” It's time consuming to answer all the comments I've received, so I'm writing this post as an answer to all of you.


And thank you again for inspiration (even the cynicism is inspirational ;)

djhworld 13 hours ago 0 replies      
This is cool.
Big project for one person, but cool nonetheless
husam212 1 day ago 1 reply      
Writing a new OS entirely from scratch, without any intention of relying on the useful parts of previous projects?
This is literally what we call "reinventing the wheel", and if you go through human history you will find that this is absolutely NOT the best way to develop things.
dysoco 1 day ago 0 replies      
I like the idea of the directory tree, good luck!
I have been interested in OS development lately; I have read some of the James tutorial, the OSDev wiki, and Tanenbaum's book... but still have no idea what I'm doing.
ommunist 1 day ago 0 replies      
Read books, Guncha! Good luck. If Stallman did that, so can you.
monochromatic 21 hours ago 0 replies      
Write your own OS: great!

Say you're writing your own OS: ok, sure...

grundprinzip 18 hours ago 0 replies      
Good luck and thumbs up for the idea of writing a new OS. I think an inspiration would be to look at BareMetal OS, perhaps you can find some ideas there.


merlish 1 day ago 0 replies      
Good luck, have fun.
emeidi 15 hours ago 0 replies      
I remember a guy saying the same a few years ago ... what's his name? Linus?
frozenport 1 day ago 0 replies      
3rd Year ECE + Hope. How did this make HN?
luxxx 22 hours ago 1 reply      
Why not just improve upon Linux rather than act like an asshat and post to HN?

The path to Linux kernel contribution is simple.

Learn C -> Master C -> Contribute -> Stop posting to HN

christina_b 12 hours ago 0 replies      
Good lord ...
aeip 1 day ago 1 reply      
An ABC proof too tough even for mathematicians bostonglobe.com
277 points by ot  2 days ago   141 comments top 15
dsrguru 2 days ago 0 replies      
The more mathematically-inclined HNers might be interested in Brian Conrad and Terence Tao's comments at the bottom of this previous HN article:


Edit: Minhyong Kim's initial thoughts seem very interesting as well!


And for the less mathematically-inclined:


codeulike 2 days ago 12 replies      
If a programmer locked himself away for 14 years and then emerged and announced he'd written a completely bug-free OS, there would be skepticism. Code needs to be battle-tested by other people to find the bugs.

Mathematics is the same, to an extent; one guy working alone for 14 years is likely to have missed ideas and perspectives that could illuminate flaws in his reasoning. Maths bugs. If he's produced hundreds of pages of complex reasoning, on his own, however smart he is I'd say there's a high chance he's missed something.

Humans need to collaborate in areas of high complexity. With a single brain, there's too high a chance of bias hiding the problems.

sek 1 day ago 0 replies      
Just read his Wikipedia entry:

> Mochizuki attended Phillips Exeter Academy and graduated in 2 years. He entered Princeton University as an undergraduate at age 16 and received a Ph.D. under the supervision of Gerd Faltings at age 23.

He is 43 years old now, and I assume he is 100% committed to mathematics. These people fascinate me, having a feedback loop that is unbreakable - especially in topics where you have knowledge of something and almost nobody else in the world is capable of understanding you. It's like Star Trek for the mind.

dbaupp 2 days ago 1 reply      
Another article with slightly more background on the ABC problem itself (and possibly slightly less sensationalist). http://www.nature.com/news/proof-claimed-for-deep-connection...

And the MathOverflow discussion referenced: http://mathoverflow.net/questions/106560/what-is-the-underly...

Xcelerate 2 days ago 8 replies      
This article seems to suggest that mathematicians are all too eager to drop his work at the slightest whiff of a flaw. Could someone more knowledgeable on the subject explain to me why this is?

It is clear that he has already done some very great things in mathematics, so even if there were a flaw in his proof, I would think his papers would still have many deep insights that no one else had thought of. I mean, it's not like mathematicians are pressed for time - if I were one I would certainly dedicate a lot of time to studying something interesting like this.

sek 1 day ago 0 replies      
A Youtube video with a pretty accessible explanation.
elliptic 2 days ago 0 replies      
Is this situation similar to that of Louis de Branges and the Riemann Hypothesis a few years back? I.e., a well-respected mathematician (de Branges had settled the Bieberbach conjecture in the 80s) releases a proof of an important unsolved problem using his own poorly understood mathematical technology?

Edit - lest this sound too negative, one should realize that the Bieberbach proof took a long time to be accepted.

bnegreve 2 days ago 4 replies      
Would it be possible to use proof assistants like Coq [1] to verify this kind of proof? If not, does anyone know why?

[1] http://en.wikipedia.org/wiki/Coq

ph0rque 2 days ago 0 replies      
...the proof itself is written in an entirely different branch of mathematics called “inter-universal geometry” that Mochizuki (who refers to himself as an “inter-universal Geometer”) invented and of which, at least so far, he is the sole practitioner.

In this universe, at least...

dbz 2 days ago 2 replies      
Can anyone explain what "inter-universal geometry" is?
ArtB 2 days ago 1 reply      
Wouldn't the easiest way to check this proof be to enter it into something like Coq? That way you'd only have to understand how to translate each step rather than learn each field.
atas 1 day ago 1 reply      
"Release early release often" applies to Math as well. Wouldn't it be better for everyone if he hadn't been so secluded and published some of his work in the meantime?
pfanner 1 day ago 0 replies      
I'm a physics student. Sometimes I think about whether I should change my path completely to math. I always sucked at it, but it seems so huge, exciting and powerful.
mememememememe 2 days ago 5 replies      
Will a proof of the ABC conjecture be a nightmare for all security protocols relying on prime number factorization, such as RSA?
Redis crashes - a small rant about software reliability antirez.com
269 points by hnbascht  14 hours ago   80 comments top 16
jgrahamc 13 hours ago 2 replies      
His point about logging registers and stack is interesting. Many years ago I worked on some software that ran on Windows NT 4.0 and we had a weird crash from a customer who sent in a screen shot of a GPF like this: http://pisoft.ru/verstak/insider/cwfgpf1.gif

From it I was able to figure out what was wrong with the C++ program. Notice that the GPF lists the instructions at CS:EIP (the instruction pointer of the running program) and so it was possible by generating assembler output from the C++ program to identify the function/method being executed. From the registers it was possible to identify that one of the parameters was a null pointer (something like ECX being 00000000) and from that information work back up the code to figure out under what conditions that pointer could be null.

Just from that screenshot the bug was identified and fixed.

dap 10 hours ago 2 replies      
Great post, showing admirable dedication to software reliability and a solid understanding of memory issues.

One of the suggestions was that the kernel could do more. Solaris-based systems (illumos, SmartOS, OmniOS, etc.) do detect both correctable and uncorrectable memory issues. Errors may still cause a process to crash, but they also raise faults to notify system administrators what's happened. You don't have to guess whether you experienced a DIMM failure. After such errors, the OS then removes faulty pages from service. Of course, none of this has any performance impact until an error occurs, and then the impact is pretty minimal.

There's a fuller explanation here:

CrLf 7 hours ago 2 replies      
I find this idea of a lack of ECC memory on servers disturbing... ECC is the default on almost all rack-mountable servers from the likes of HP or IBM. Of course, people use all kinds of sub-standard hardware for "servers" on the cheap, and they get what they pay for.

I haven't seen a server without ECC memory for years. I don't even consider running anything in production without ECC memory, let alone VM hypervisors. I find it pretty hard to believe that EC2 instances run on non-ECC memory hosts, risking serious data loss for their clients.

Memory errors can be catastrophic. Just imagine a single bit flip in some in-memory filesystem data structure: the OS just happily goes on corrupting your files, assuming everything's OK, until you notice it and half your data is already lost.

Been there (on a development box, but nevertheless).

shin_lao 13 hours ago 3 replies      
This is an interesting post, especially the part about memory testing.

We have a simple policy: ECC memory is required to run our software in production. Failure to do so voids the warranty.

js2 8 hours ago 0 replies      
It's crazy that an application should have to test memory. It should simply be handled by the HW and OS. e.g. Some details about how Sun/Solaris deal with memory errors:


Note the section on DRAM scrubbing, which I was reminded of from the original article's suggestion on having the kernel scan for memory errors. (I remember when Sun implemented scrubbing, I believe in response to a manufacturing issue that compromised the reliability of some DIMMs.)

apaprocki 12 hours ago 0 replies      
Can't agree with this more. And he is just talking about logging crashes. One of the best debugging tools you have at your disposal in a large system (a lot of programmers contributing code -- bugs can be anywhere) is logging the same stack information in a quick fashion under normal operation in strange circumstances, so as not to slow down the production software.

The slowest part of printing that information out is the symbol resolution in the binary of the stack addresses to symbol names. This part of the debugging output can be done "offline" in a helper viewer binary and does not need to be done in the critical path. We frequently output stack traces as strings of hex addresses, detectable by a regex, appended to a log message. The log viewer transforms this back into an actual symbolic stack trace at viewing time to avoid the hit of resolving all the symbols in the hot path.
codeflo 13 hours ago 0 replies      
In theory, there's nothing stopping the OS from remapping the pages of your address space to different physical RAM locations at any point during your test. So even if you have a reproducible bit error that caused the crash, there's a chance that the defect memory region is not actually touched during the memory test.

Now, this may not be such a huge problem in practice because the OS is unlikely to move pages around unless it's forced to swap. But that depends on details of the OS paging algorithm and your server load.

jimwhitson 11 hours ago 1 reply      
At IBM, we were very keen on what we called 'FFDC' - 'first-failure data capture'. This meant having enough layers of error-detection, ideally all the way down to the metal, so that failures could be detected cleanly and logged before (possibly) going down, allowing our devs to reproduce and fix customer bugs. Naturally it wasn't perfect, and it depended on lots of very tedious planning meetings, but on the stuff I worked with (storage devices mainly) it was remarkably effective.

In my experience in more 'agile' firms - startups, web dev shops and so on - it would be very hard to make a scheme like this work well, because of all the grinding bureaucracy, fiddly spec-matching and endless manual testing required, as well as the importance of controlling - and deeply understanding - the whole stack. Nonetheless, for infrastructure projects like Redis, I can see value in having engineering effort put explicitly into making 'prettier crashes'.

nicpottier 9 hours ago 0 replies      
This kind of attention to detail is all too rare these days. I love Redis, because I have never, not once, ever had to wonder whether it was doing its job. It is like a constant, always running, always doing a good job and getting out of the way.

It only does a few things, but it does them exceedingly well. Just like nginx, I know it will be fast and reliable, and it is this kind of crazed attention to detail that gets it there.

erichocean 10 hours ago 1 reply      
Although we use ECC in our servers already, I've recently been experimenting with hashing object contents in memory using a CityHash variant. The hash is checked when the object moves on chip (into cache), and re-computed before the object is stored back into RAM when it's been updated.

Although our production code is written in C, I'm not particularly worried about detecting wild writes, because we use pointer checking algorithms to detect/prevent them in the compiler. (Of course, that could be buggy too...)

What I'm trying to catch are wild writes from other devices that have access to RAM. Anyway, this is far from production code so far, but hashing has already been very successful at keeping data structures on disk consistent (a la ZFS, git), so applying the same approach to memory seems like the next step.

The speed hit is surprisingly low, 10-20%, and when you put it that way, it's like running your software on a 6 month old computer. So much of the safety stuff we refuse to do "for performance" would be like running on top-of-the-line hardware three years ago, but safely. That seems like a worthwhile trade to me...

P.s. Are people really not burning in their server hardware with memtest86? We run it for 7 days on all new hardware, and I figured that was pretty standard...
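The hash-check-on-access idea above can be sketched in a few lines. This is only an illustration, not anyone's real code: FNV-1a stands in for the CityHash variant mentioned, and the `put`/`get` names are made up.

```javascript
// Store a checksum next to each record and re-verify it on every read,
// so silent corruption turns into a loud error instead of bad data.

// 32-bit FNV-1a over a string's UTF-16 code units.
function fnv1a(str) {
  var h = 0x811c9dc5;
  for (var i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    // Multiply by the FNV prime 0x01000193 without overflowing
    // JavaScript's doubles: the prime decomposes into these shifts.
    h = (h + (h << 1) + (h << 4) + (h << 7) + (h << 8) + (h << 24)) >>> 0;
  }
  return h;
}

function put(store, key, value) {
  store[key] = { value: value, sum: fnv1a(value) };
}

function get(store, key) {
  var rec = store[key];
  if (fnv1a(rec.value) !== rec.sum) {
    throw new Error('checksum mismatch for "' + key + '"');
  }
  return rec.value;
}
```

Tampering with `store[key].value` between `put` and `get` makes `get` throw instead of silently returning bad data, which is the whole point: silent corruption becomes a loud failure.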

BoredAstronaut 3 hours ago 0 replies      
This post reminded me of my time as a consulting systems support specialist. Lots of weird problems turned out to be bad hardware. Usually memory or disk, sometimes bad logic boards. For end users, this would often lead to complete freezing of the computer, so it was less likely to be blamed on broken software, but there were still many times it was hard to be sure. Desktop OS software can flake out in strange ways due to memory problems. I used to run a lot of memory tests as a matter of course.

I think the title of the article could be more accurate, considering how much is devoted not to issues about software reliability per se, but to distinguishing between unreliable software and unreliable hardware. I think an implicit assumption in most discussions about software reliability is that the hardware has been verified.

I personally do not think that it is the responsibility of a database to perform diagnostics on its host system, although I can sympathize with the pragmatic requirement.

When I am determining the cause of a software failure or crash, the very first thing I always want to know is: is the problem reproducible? If not, the bug report is automatically classified as suspect. It's usually not feasible to investigate a failure that only happened once and cannot be reproduced. Ideally, the problem can be reproduced on two different machines.

What we're always looking for when investigating a bug are ways to increase our confidence that we know the situation (or class of situation) in which the bug arises. And one way to do this is to eliminate as many variables as possible. As a support specialist trying to fix a faulty computer or program, I followed the same course: isolate the cause by a process of elimination. When everything else has been eliminated, whatever you are left with is the cause.

I'm still all jonesed up for a good discussion about software reliability. antirez raised interesting questions about how to define software that is working properly or not. While I'm all for testing, there are ways to design and architect software that makes it more or less amenable to testing. Or more specifically, to make it easier or harder to provide full coverage.

I've always been intrigued by the idea that the most reliable software programs are usually compilers. I believe that is because computer languages are amongst the most carefully specified kind of program input. Whereas so many computer programs accept very poorly specified kinds of input, like user interface actions mixed with text and network traffic, which is at higher risk of having ambiguous elements. (For all their complexity, compilers have it easier in some regards: they have a very specific job to do, and they only run briefly in batch operations, producing a single output from a single input. Any data mutations originate from within the compiler itself, not from the inputs they are processing.)

In any case, I believe that the key to reliable programs depends upon a complete and unambiguous definition of any and all data types used by those programs, as well as complete and unambiguous definitions of the legitimate mutations that can be made to those data types. If we can guarantee that only valid data is provided to an operation, and guarantee that each such operation produces only legitimate data, then we reduce the chances of corrupting our data. (Transactional memory is such an awesome thing. I only wish it was available in C family languages.)

One of my crazy ideas is that all programs should have a "pure" kernel with a single interface, either a text or binary language interface, and this kernel is the only part that can access user data. Any other tool has to be built on top of this. So this would include any application built with a database back-end.

I suppose that a lot of Hacker News readers, being web developers, already work on products featuring such partitioning. But for desktop software developers who work with their own in-memory data structures and their own disk file formats, it's not so common or self-evident. Then again, even programs that do rely on a dedicated external data store also keep a lot of other kinds of data around, which may not be true user data, but can still be corrupted and cause either crashes or program misbehaviour.

In any case, I suspect that this is going to be an inevitable side-effect of various security initiatives for desktop software, like Apple's XPC. The same techniques used to partition different parts of a program to restrict their access to different resources often lead to also partitioning operations on different kinds of data, including transient representations in the user interface.

Can a program like Redis be further decomposed into layers to handle tasks focussed on different kinds of data to achieve even better operational isolation, and thereby make it easier to find and fix bugs?

ComputerGuru 13 hours ago 4 replies      
Page is down. Here is a formatted copy: https://gist.github.com/4154289
grundprinzip 12 hours ago 0 replies      
I totally like this post, because main-memory based software systems will become the future for all kinds of applications. Thus, handling errors on this side will become more important as well.

Here are my additional two cents: at least on x86 systems, checking small memory regions without disturbing the CPU cache can be implemented using non-temporal writes, which force the CPU to write the data directly to memory, bypassing the cache. The instruction required for this is called movntdq and is generated by the SSE2 intrinsic _mm_stream_si128().

chewxy 4 hours ago 0 replies      
And people wonder why I recommend redis. Having run redis for over 1.5 years on production systems as a heavy cache, a named queue and memoization tool (on the same machine), I have never once had it fail me. Antirez's blog post makes his attention to detail clear.

This post is fantastic.

lucian1900 13 hours ago 8 replies      
Perhaps using safer languages (and languages with better error reporting) would be a solution to these kinds of problems.
pnathan 8 hours ago 0 replies      
There is an approach to hard real-time software where something like antirez's idea for a memory checker is already done.
News about Mark Crispin (author of the original IMAP specification) ietf.org
258 points by muriithi  3 days ago   50 comments top 16
saurik 3 days ago 4 replies      
A few months ago, I started working on an IMAP server, and as part of that process I decided to read, as best I could, "the collected works of Mark Crispin". Of course this meant that I read through the latest IMAP specification (in its entirety, "cover to cover") but it also meant that I read through all of the old ones as well (if nothing else: Mark actually often stated one should).

"""It's instructive to read the IMAP3 document (RFC 1203), if only to see a dead-end branch in IMAP's evolution.""" -- http://mailman2.u.washington.edu/pipermail/imap-protocol/200...

However, I honestly found this person fascinating: the more I read, the more I wanted to read; I thereby continued from the specifications, and have been reading everything I could get my hands on, scouring mailing lists old and new. I imagine that this is similar to how many might feel about their favorite author, only for me my favorite author is not Tolstoy, Dickens, or Shakespeare: it is Mark Crispin.

Obviously, like with most authors, I don't pretend to know anything about him as a person, but I look up to him as a writer. I thereby don't really know what I would say to him (nor even feel it terribly appropriate to do so at all); I do think, however, I can at least help some people here on Hacker News who might not know much about him appreciate what Mark Crispin has been doing for us in his life.

This man has been working, nearly constantly, on the IMAP protocol specification now for decades of his life; he has seen numerous challenges to compatibility and has had to make countless tradeoffs and compromises to both his vision for the protocol and his wording in specifications to keep making forward progress. Much of this is actually documented in years of mailing list archives.

"""This was a mistake. We all acknowledge it to have been a mistake. However, the discussion about naming that took place in the early 1990s wasted at least 18 months of everybody's time (and probably reduced all of our lifespans by a few years due to high blood pressure). What came up was a wretched compromise, but at least it let us do our work.""" -- http://mailman2.u.washington.edu/pipermail/imap-protocol/200...

From all of this, I would like to say: I believe he was actually a visionary. Many people who use IMAP do not realize this, but Mark did not (from my reading) ever believe in the offline e-mail that Google and Microsoft are slowly obsoleting, even at the benign level of IMAP synchronization; in fact, his own client (alpine) doesn't even support that mode of operation: it is purely an "online" IMAP client with a tiny memory cache.

"""Email synchronization is a fool's errand; but there seem to be an abundant supply of fools that undertake it. Thus we have miserable mobile device email clients such as Mail.app on the iToy, BlackBerry, and the default Mail app on Android. At least Android has k9mail which - just barely - steps over the line into "usability".""" -- http://mailman2.u.washington.edu/pipermail/imap-protocol/201...

If you go back to the early IMAP specifications, this is actually laid out in the rationale section: the argument is that in an age where users have too many devices to easily manage and network connectivity is nearly universal--or as I will call it, "Mark Crispin's 1988" (yes: 1988, and this is already IMAP2)--it no longer makes sense to store e-mail on the client; he then lays out a strategy for an efficient mail-specific thin-client protocol, with everything from server-side search to server-side parsing.

"""Consequently, while the workstation may be viewed as an Internet host in the sense that it implements IP, it should not be viewed as the entity which contains the user's mailbox. Rather, a mail server machine (sometimes called a "repository") should hold the mailbox, and the workstation (hereafter referred to as a "client") should access the mailbox via mail transactions.""" -- http://tools.ietf.org/html/rfc1064

It is only, however, when one delves into the mailing lists where you truly get a sense for this: on various occasions, Mark has even looked at modern webmail systems as having more in common with IMAP than the alternatives people normally compare IMAP to (such as POP).

"""It's easy to dismiss all this, because only a few IMAP clients are sophisticated enough to take advantage of this state. The vast majority are glorified POP clients that babble IMAP protocol. This came about because of the long-obsolete notion that Internet access is a difficult and expensive commodity that requires that the client must keep a mirror of what's on the server. The success of webmail (which transforms the browser into the ultimate thin client) proves that this notion is complete nonsense today. Yet people persist in claiming it. Webmail won the war for the hearts and minds of users, not because webmail is so much better than IMAP, but rather because webmail is so much better than POP.""" -- http://mailman2.u.washington.edu/pipermail/imap-protocol/200...

What struck me the most, though, is just how often people refused to see this: assuming that IMAP was something that it was not, or simply not giving Mark the respect he deserved from the history he has thinking about this problem; people oft would approach claiming they knew better, and wanted to start over. This meant that he often had to spend his time attempting to herd people towards a common goal, and defending what existed against misconceptions; even having to teach people what it meant to have a protocol at all.

"""Before assuming that you are smarter than the old guy, you ought to make sure that you really understand the problem.""" -- http://mailman2.u.washington.edu/pipermail/imap-protocol/200...

He didn't just sit back and heckle, though: he provided long and detailed critiques; he imparted his knowledge to others, even as he saw people often ignore what he had learned. His explanations usually also gave you a history lesson, illuminating part of the process and showing not only why something works the way it does, but why it worked the way it did, and how that notion had to be stretched into what we are currently using today: you can learn a lot about not just IMAP, but protocols in general, from his writings.

"""Furthermore, if you design for the stupid, you must also design for the defiant. If you fail to do that, you have learned absolutely nothing from my experience in the past 22 years.""" -- http://www.ietf.org/mail-archive/web/imap5/current/msg00005....

There was a continual sobering undercurrent, however, in relation to how long it has taken IMAP to come to fruition (technically, it is still only a proposal). Hearing today's news brings back to mind one e-mail in particular from 2007, which I will now end this comment with (it's a long one, but I consider it quite powerful, and in this context, I think it is important to include in its entirety).

"""
RFC 3501, like all human endeavors, is not perfect. We have spent about 20 years in trying to get IMAP beyond Proposed Standard status. We are probably going to fall back yet again with another Proposed Standard RFC for IMAP.

You can't assume that the specification is going to tell you everything that you need to know. It will never happen. We can address this particular question, but I can guarantee that someone will find another one after the publication of the new RFC.

Each IMAP specification update consumes a couple of years of my time. Invariably, there are months of "last calls" and inactivity, only to have someone call a "wait, we need to do this" at the last minute that pulls everything back. Requests to review drafts don't work.

And, with the addition of more expository text to say (what is obvious to some people), we get a larger and more bloated document that people won't read. There are already many IMAP implementations written by people who looked at the examples but never the formal syntax or the expository text, because their implementations blatantly violate both.

I understand -- and sympathize -- with the desire to remove reliance upon folklore and common sense. I see no hope of that ever being accomplished.

The sad fact is that we are running out of time. Given past history, there is little hope that it will reach full standard status under my tenure.

I don't think that it's a good use of the next decade or so to make a futile attempt to perfect the base specification. It needs to be made good enough, and there needs to be general understanding of the architecture so that people don't blame their silly decisions on the base specification.

-- Mark --

""" -- http://mailman2.u.washington.edu/pipermail/imap-protocol/200...

cromwellian 3 days ago 0 replies      
This is pretty sad, I had the pleasure of meeting Mark a few times when I was part of the Lemonade working group, he seemed like a very nice guy, energetic, unfazed by commercial interests, someone who stuck to his guns.

It reminds me of Jon Postel, in the sense that many of the core IETF people, those responsible for building the world as we know it, are getting old now, and some of them have already passed away. Everyone remembers Steve Jobs, but the greater public at large is oblivious to people who have built even more important infrastructure.

I hope the history books of the future won't just jump from Edison and Westinghouse directly to Steve Jobs, but also remember those who did the massively important work done in between.

kabdib 3 days ago 0 replies      
I didn't have any real working relationship with Mark, but I remember him well.

I met him over the Arpanet; we shared a common interest in Atari computers, and when I moved to the Bay Area to work for Atari, I met him (and Mabry Tyson) at a local users group meeting. He was kind of intense. He knew a /lot/ about mailers (how much, I didn't appreciate for years).

He had a DEC-20 in his spare bedroom. I saw it once, years later. It was orange. It's safe to say that not many people had DEC-20s in their houses.

Anyway, he introduced me to the Silly Valley hacker culture, and decent chinese food, and I'll never forget that.

ChuckMcM 3 days ago 0 replies      
This makes me sad. I'm glad I got a chance to send him the PDP8 programming manuals while he could still enjoy them (about 10 years ago). I only met Mark at a conference on DEC hardware but engaged in several discussions on the INFO-MICRO mailing list. He was the only person I knew of who had a DEC2020 system in his garage.
Evbn 3 days ago 1 reply      
Mark and his team at UW wrote pine and alpine, the mail client used by many (most?) US college students in the late 1990s.

They also wrote pico, the pine composer, predecessor of nano, and the most newbie-user-friendly text editor commonly found on Unix text console systems.

lispm 3 days ago 0 replies      
Another Lisp Hacker.

Mark wrote the first IMAP client in Interlisp for the Xerox Interlisp-D Machine. He also wrote the first server, though not in Lisp.

javanix 3 days ago 0 replies      
Dealing with someone close to death's door is one of the most painful things that I have ever gone through. My thoughts and prayers are with everyone who knew Mark.
primatology 3 days ago 0 replies      
Very sorry to hear this. Wikipedia informs me Crispin was born in 1956, which would put him at 55 or 56 years old. Far too young.
colinyoung 3 days ago 0 replies      
I don't know the man, and while I know what IMAP is, I'm not exactly part of that newsgroup.

However, it's awesome to think that I will now think of his legacy whenever I refresh my email, because he made something that I use every day. I think that's what we all hope for here at Hacker News.

unreal37 3 days ago 0 replies      
It's sad to hear that such a young man is in the last stages of life. Clearly he has earned the title of visionary and internet pioneer.

If you read the messages being sent to him on that list, it's interesting that so many of them start with "We had our differences but...". One person even wrote "I found discourse with him to be insufferable at best."

I can imagine he spent a lot of his time arguing with people.

swampthing 3 days ago 0 replies      
Hope he's holding up as well as he can be. I worked in the same department as him at the UW and remember getting a geeky thrill every time I saw one of his emails. The man's contributions to life as we know it are tremendous.
bane 3 days ago 0 replies      
I think it's worth thanking Mr. Crispin for all he's done to help enable us to overcome the barriers we all face trying to get our thoughts communicated to another person.
ronnier 3 days ago 2 replies      
I have no idea if this is related, but I'm really starting to think about the studies on long sitting sessions and the sedentary lifestyle that comes with being an impassioned software developer/engineer. Our love for what we do might be killing us.
guan 3 days ago 1 reply      
So sad. But shouldn't we post well-wishing messages using IMAP?
rietta 3 days ago 0 replies      
Very sad. I've always loved to use IMAP instead of POP for e-mail. Though I have never met Mr. Crispin, I feel honored to be able to use his invention.
neonscribe 2 days ago 0 replies      
Never mind IMAP. My first mail reader was TOPS-20 MM, back in 1979. That combination of command line editing, command completion and context-sensitive help has not been improved upon.
Why we can't process Emoji anymore github.com
248 points by tpinto  1 day ago   148 comments top 30
ender7 1 day ago 2 replies      
Apropos: http://mathiasbynens.be/notes/javascript-encoding


- Javascript engines are free to internally represent strings as either UCS-2 or UTF-16. Engines that choose to go UCS-2 tend to replace all glyphs outside of the BMP with the replacement char (U+FFFD). Firefox, IE, Opera, and Safari all do this (with some inconsistencies).

- However, from the point of view of the actual JS code that gets executed, strings are always UCS-2 (sort of). In UTF-16, code points outside the BMP are encoded as surrogate pairs (4 bytes). But -- if you have a Javascript string that contains such a character, it will be treated as two consecutive 2-byte characters.

  var x = '𝌆';
x.length; // 2
x[0]; // \uD834
x[1]; // \uDF06

Note that if you insert said string into the DOM, it will still render correctly (you'll see a single character instead of two ?s).
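The arithmetic behind a surrogate pair is simple enough to check by hand. A quick sketch (the constants are the ones defined by the Unicode standard; the function name is made up):

```javascript
// Combine a UTF-16 surrogate pair back into the code point it encodes.
// The high surrogate carries the top 10 bits, the low surrogate the
// bottom 10, offset by 0x10000 (the start of the astral planes).
function codePointFromPair(hi, lo) {
  return (hi - 0xD800) * 0x400 + (lo - 0xDC00) + 0x10000;
}

var tetra = '\uD834\uDF06'; // the same U+1D306 as the example above
codePointFromPair(tetra.charCodeAt(0), tetra.charCodeAt(1)).toString(16); // "1d306"
```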

oofabz 1 day ago 5 replies      
This is why UTF-8 is great. If it works for any Unicode character it will work for them all. Surrogate pairs are rare enough that they are poorly tested. With UTF-8, if there are issues with multi-byte characters, they are obvious enough to get fixed.

UTF-16 is not a very good encoding. It only exists for legacy reasons. It has the same major drawback as UTF-8 (variable-length encoding) but none of the benefits (ASCII compatibility, size efficiency).

praptak 1 day ago 2 replies      
Sometimes you need to know about encodings, even if you're just a consumer. Putting just one non 7-bit character in your SMS message will silently change its encoding from 7-bit (160 chars) to 8-bit (140 chars) or even 16 bit (70 chars) which might make the phone split it into many chunks. The resulting chunks are billed as separate messages.
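The capacity cliff described above can be sketched as a check like this. It only covers the 7-bit vs 16-bit case, the character class is an abbreviated stand-in for the real GSM 03.38 table, and multipart overhead (which further shrinks each chunk) is ignored:

```javascript
// One character outside the GSM 7-bit set silently drops the whole
// message to UCS-2, shrinking capacity from 160 to 70 characters.
// NOTE: abbreviated charset for illustration, not the full GSM table.
var GSM7 = /^[A-Za-z0-9 @£$¥èéùìòÇØøÅåÆæßÉ!"#¤%&'()*+,\-.\/:;<=>?¡ÄÖÑܧ¿äöñüà\n\r]*$/;

function smsSegments(text) {
  var perMessage = GSM7.test(text) ? 160 : 70;
  return Math.max(1, Math.ceil(text.length / perMessage));
}

var msg = new Array(101).join('a'); // 100 chars: one 7-bit message
smsSegments(msg);        // 1
smsSegments(msg + 'ő');  // 2 -- one non-GSM char doubles the bill
```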
pjscott 1 day ago 1 reply      
The quick summary, for people who don't like ignoring all those = signs, is that V8 uses UCS-2 internally to represent strings, and therefore can't handle Unicode characters which lie outside the Basic Multilingual Plane -- including Emoji.
driverdan 1 day ago 3 replies      
If you search for V8 UCS-2 you'll find a lot of discussion on this issue dating back at least a few years. There are ways to work around V8's lack of support for surrogate pairs. See this V8 issue for ideas: https://code.google.com/p/v8/issues/detail?id=761
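One common shape for such workarounds is to scan for surrogate pairs yourself before trusting the engine to preserve them. A sketch (not any particular library's API):

```javascript
// Walk a string as UTF-16 code units and count characters outside the
// BMP (surrogate pairs), plus any lone surrogates that a UCS-2
// pipeline would mangle.
function scanSurrogates(s) {
  var pairs = 0, lone = 0;
  for (var i = 0; i < s.length; i++) {
    var c = s.charCodeAt(i);
    if (c >= 0xD800 && c <= 0xDBFF) {          // high surrogate
      var next = s.charCodeAt(i + 1);          // NaN past the end
      if (next >= 0xDC00 && next <= 0xDFFF) {  // matching low surrogate
        pairs++;
        i++;                                   // skip the low half
      } else {
        lone++;
      }
    } else if (c >= 0xDC00 && c <= 0xDFFF) {   // low surrogate, no high
      lone++;
    }
  }
  return { pairs: pairs, lone: lone };
}

scanSurrogates('hi \uD83D\uDE04'); // { pairs: 1, lone: 0 }
```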

My question is why does V8 (or anything else) still use UCS-2?

gkoberger 1 day ago 1 reply      
Took me a bit to realize that this is talking about the Voxer iOS app (http://voxer.com/), not Github (https://github.com/blog/816-emoji).
hkmurakami 1 day ago 1 reply      
>Wow, you read through all of that? You rock. I'm humbled that you gave me so much of your attention.

That was actually really fun to read, even as a now non-technical guy. I can't put a finger on it, but there was something about his style that gave off a really friendly vibe even through all the technical jargon. That's a definite skill!

beaumartinez 1 day ago 1 reply      
This is dated January 2012. By the looks of things, this was fixed in March 2012[1].

[1] https://code.google.com/p/v8/issues/detail?id=761#c33

pbiggar 1 day ago 1 reply      
A couple of reasons why it makes sense for V8 and other vendors to use UCS2:

- The spec says UCS2 or UTF16. Those are the only options.

- UCS2 allows random access to characters, UTF-16 does not.

- Remember how the JS engines were fighting for speed on arbitrary benchmarks, and nobody cared about anything else for 5 years? UCS2 helps string benchmarks be fast!

- Changing from UCS2 to UTF-16 might "break the web", something browser vendors hate (and so do web developers)

- Java was UCS2. Then Java 5 changed to UTF-16. Why didn't JS change to UTF-16? Because a Java VM only has to run one program at once! In JS, you can't specify a version, an encoding, and one engine has to run everything on the web. No migration path to other encodings!
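The random-access point above is easy to demonstrate: with a fixed-width encoding the n-th character is just `s[n]`, but once surrogate pairs exist, finding the n-th *code point* takes a linear scan. A sketch (the helper `nthCodePoint` is illustrative, not a standard API):

```javascript
// Finding the n-th code point requires walking the string, because any
// earlier character might occupy two UTF-16 code units.
function nthCodePoint(s, n) {
  let i = 0;
  for (const ch of s) {      // the string iterator yields code points
    if (i === n) return ch;
    i++;
  }
  return undefined;
}

const s = 'a\u{1F604}b';
console.log(s[1]);               // '\ud83d' -- a lone surrogate, not a character
console.log(nthCodePoint(s, 1)); // the full smiley
console.log(nthCodePoint(s, 2)); // 'b'
```

Indexing by code unit stays O(1) either way; it just stops meaning "the n-th character" once astral-plane text appears.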

ricardobeat 1 day ago 1 reply      
Please, if you're going to post text to a Gist at least use the .md extension:


eps 1 day ago 0 replies      
They control their clients, so they could've just re-encoded emojis with a custom 16-bit escaping scheme, made the backend transparently relay them in escaped form, and decoded them back to 17 bits at the other end.

Or am I missing something obvious here?

kstenerud 1 day ago 0 replies      
Small nitpick, but Objective-C does not require a particular string encoding internally. In Mac OS and iOS, NSString uses one of the cfinfo flags to specify whether the internal representation is UTF-16 or ASCII (as a space-saving mechanism).
dgreensp 1 day ago 1 reply      
The specific problems the author describes don't seem to be present today; perhaps they were fixed. That's not to say these conversions aren't a source of issues, just that I don't see any show-stopper problems currently in Node, V8, or JavaScript.

In JavaScript, a string is a series of UTF-16 code units, so the smiley face is written '\ud83d\ude04'. This string has length 2, not 1, and behaves like a length-2 string as far as regexes, etc., which is too bad. But even though you don't get the character-counting APIs you might want, the JavaScript engine knows this is a surrogate pair and represents a single code point (character). (It just doesn't do much with this knowledge.)

You can assign '\ud83d\ude04' to document.body.innerHTML in modern Chrome, Firefox, or Safari. In Safari you get a nice Emoji; in stock Chrome and Firefox, you don't, but the empty space is selectable and even copy-and-pastable as a smiley! So the character is actually there, it just doesn't render as a smiley.

The bug that may have been present in V8 or Node is: what happens if you take this length-2 string and write it to a UTF8 buffer, does it get translated correctly? Today, it does.

What if you put the smiley directly into a string literal in JS source code, not \u-escaped? Does that work? Yes, in Chrome, Firefox, and Safari.
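The roundtrip check described above is quick to reproduce in current Node: write the surrogate pair into a UTF-8 buffer and decode it again.

```javascript
// Roundtrip the surrogate pair through a UTF-8 buffer.
const smiley = '\ud83d\ude04'; // U+1F604 as a surrogate pair

const buf = Buffer.from(smiley, 'utf8');
console.log(buf.toString('hex')); // 'f09f9884' -- the correct 4-byte UTF-8

const back = buf.toString('utf8');
console.log(back === smiley);     // true: the pair survived the trip
console.log(back.length);         // still 2 code units
```

A broken implementation would instead emit two 3-byte sequences for the individual surrogates (CESU-8 style), which is the failure mode the original post is about.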

languagehacker 1 day ago 1 reply      
We seem to be seeing this more and more with Node-based applications. It's a symptom of the platform being too immature. This is why you shouldn't adopt these sorts of stacks unless there's some feature they provide that none of the more mature stacks support yet. And even then, you should probably ask yourself if you really need that feature.
eloisant 17 hours ago 0 replies      
Maybe nitpicking, but I don't think Softbank came up with emoji. Emoji existed way before Softbank bought the Japanese Vodafone, and even before Vodafone bought J-Phone.

So emoji were probably invented by J-Phone, while Softbank was mostly taking care of Yahoo Japan.

freedrull 1 day ago 1 reply      
Why on earth would the people who wrote V8 use UCS-2? What about alternative JS runtimes?
cjensen 1 day ago 1 reply      
UCS-2 is only used by programs which jumped the gun and implemented Unicode before it was all done. (It was 16 bits for a while, with Asian languages sharing code points so that the font in use determined whether the text was displayed as Chinese vs. Japanese vs. etc.) What century was V8 written in that they thought UCS-2 was an acceptable thing to implement?

Good rule of thumb for implementers: get over it and use 32 bits internally. Always use UTF-8 when encoding into a byte stream. Add UTF-16 encoding if you must interface with archaic libraries.

adrianpike 1 day ago 1 reply      
Here's the thread in the v8 bug tracker about this issue: http://code.google.com/p/v8/issues/detail?id=761

Is there a reason that the workaround in comment 8 won't address some of these issues?

dale-cooper 17 hours ago 1 reply      
The UCS-2 heritage is kind of annoying. In Java, for example, chars (the primitive type, which the Character class just wraps) are 16 bits. So one instance of a Character may not be a full "character" but rather part of a surrogate pair. This creates a small gotcha where the length of a string might not be the same as the number of characters it has, and you can't split/splice a Character array naively (because you might split it at a surrogate pair).
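The same gotcha reproduces in JavaScript, which inherits the identical 16-bit code-unit model: slicing at a code-unit boundary can cut a surrogate pair in half.

```javascript
// Naive slicing vs. code-point-aware slicing.
const s = 'hi\u{1F604}';   // 'hi' plus an astral-plane smiley

console.log(s.length);      // 4 code units, but only 3 characters
console.log([...s].length); // 3 -- spreading iterates by code point

const broken = s.slice(0, 3);           // cuts between the two surrogates
console.log(broken.endsWith('\ud83d')); // true: ends in a lone, invalid surrogate

// Safe version: slice the array of code points, then rejoin.
const safe = [...s].slice(0, 3).join('');
console.log(safe === s);                // true
```

Anything that truncates strings by index (previews, fixed-width columns, protocol limits) needs the code-point-aware version or it will occasionally emit invalid text.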
clebio 1 day ago 1 reply      
Somewhat meta, but this is one where showing the subdomain on HN submissions would be nice. The title is vague enough that I assumed it was something to do with _GitHub_ not processing emoji (which would be sort of a strange state of affairs...).
evincarofautumn 1 day ago 0 replies      
Failures in Unicode support seem usually to result from the standard's persistently shortsighted design, well intentioned and carefully considered though it undoubtedly is. It's a "good enough" solution to a very difficult problem, but I wonder if we won't see Unicode supplanted in the next decade.

All that aside: emoji should not be in Unicode. Full stop.

FredericJ 1 day ago 0 replies      
How about this npm module: https://npmjs.org/package/emoji ?
masklinn 17 hours ago 1 reply      
Wow, the first half of the text is basically full of crap and claims which don't even remotely match reality, and now I'm reaching the technical section which can only get even more wrong.
pla3rhat3r 1 day ago 0 replies      
I love this article. So often it has been difficult to explain to people why one set of characters can work while others will not. This lays out some great historical info that will be helpful going forward.
shocks 1 day ago 0 replies      
Very informative, great read. Thanks!
xn 23 hours ago 0 replies      
Here's the message decoded from quoted-printable:
alexbosworth 1 day ago 0 replies      
Fixed a good while ago for node.js
mranney 15 hours ago 0 replies      
Note that this message is almost a year old now. The issue has been addressed by the node and V8 teams.
sneak 1 day ago 3 replies      
TLDR: node sucks
csense 23 hours ago 0 replies      
A two-character sequence for a smiley face that should be compatible with everything in existence:


Problem solved. Why is this front page material (#6 as of this writing)?

Skills Don't Pay the Bills nytimes.com
218 points by timr  2 days ago   137 comments top 17
tokenadult 2 days ago 3 replies      
From the article: "The secret behind this skills gap is that it's not a skills gap at all. I spoke to several other factory managers who also confessed that they had a hard time recruiting in-demand workers for $10-an-hour jobs. 'It's hard not to break out laughing,' says Mark Price, a labor economist at the Keystone Research Center, referring to manufacturers complaining about the shortage of skilled workers. 'If there's a skill shortage, there has to be rises in wages,' he says. 'It's basic economics.'"

Agreed. That is the basic problem. If a worker can produce hundreds of widgets a day after specialized training with new computer-controlled machinery, but the worker could make just as much money per hour right after high school flipping hamburgers at the local fast-food restaurant, there is no reason for the worker to go through two years or more of specialized training, especially at the worker's own expense.

Much of the rest of the article discusses the overall rationality of workers seeking jobs that they can obtain with the least investment of their own time in training for a given income. Of course. Part of the problem is that if companies hire on the basis of course completion certificates rather than on the basis of demonstrated competence, they will miss out on good workers, and yet hire some lousy workers, and thus be reluctant to offer competitive starting wages. (It's expensive to hire a worker who can't do the job, and it's also expensive to let go workers who don't learn on the job and to hire their replacements.)

In what I think has become my best-liked comment on HN, I've collected references other participants here helped me find about company hiring procedures. Companies need to hire on the basis of actually being able to do the job, not on the basis of what classes workers have attended. The review article by Frank L. Schmidt and John E. Hunter, "The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings," Psychological Bulletin, Vol. 124, No. 2, 262-274


sums up, current to 1998, a meta-analysis of much of the HUGE peer-reviewed professional literature in industrial and organizational psychology devoted to business hiring procedures. There are many kinds of hiring criteria, such as in-person interviews, telephone interviews, resume reviews for job experience, checks for academic credentials, personality tests, and so on. There is much published research on how job applicants perform after they are hired in a wide variety of occupations.


EXECUTIVE SUMMARY: If you are hiring for any kind of job in the United States, prefer a work-sample test as your hiring procedure. If you are hiring in most other parts of the world, use a work-sample test in combination with a general mental ability test.

The overall summary of the industrial psychology research in reliable secondary sources is that two kinds of job screening procedures work reasonably well. One is a general mental ability (GMA) test (an IQ-like test, such as the Wonderlic personnel screening test). Another is a work-sample test, where the applicant does an actual task or group of tasks like what the applicant will do on the job if hired. (But the calculated validity of each of the two best kinds of procedures, standing alone, is only 0.54 for work sample tests and 0.51 for general mental ability tests.) Each of these kinds of tests has about the same validity in screening applicants for jobs, with the general mental ability test better predicting success for applicants who will be trained into a new job. Neither is perfect (both miss some good performers on the job, and select some bad performers on the job), but both are better than any other single-factor hiring procedure that has been tested in rigorous research, across a wide variety of occupations. So if you are hiring for your company, it's a good idea to think about how to build a work-sample test into all of your hiring processes. If the job you are hiring for involves use of a computer-controlled machine tool, have the candidate put the machine to use making sample parts (advertise the job in a way that makes clear a work-sample test is required, to screen out people who have no clue how to operate such machines). Hire the able, and pay them what they are worth.

Ask yourself about any hiring process you have ever been in, as boss or as applicant: did the applicant have to do a work-sample test based on actual work results expected in the company? Why not?

zmmmmm 2 days ago 7 replies      
I see a similar problem even in software jobs. Employers frequently advertise for highly specific skill sets that almost nobody has. Then when nobody or only fraudulent people apply, they reject them all and claim a skills shortage. The problem seems to be a basic misconception about how transferable software skills are. An excellent programmer with no experience in Python will be outperforming a poor Python programmer in a matter of weeks, even though they have had to learn an entirely new language. This idea, however, seems entirely lost on most HR departments, and the result is an almost entirely "fake" skills shortage.
lifeisstillgood 2 days ago 2 replies      
This is fairly speculative, but I have been thinking this paradox over for a while.

A high demand for scarce highly skilled workers must drive salaries up in competition.

Unless the skills are not actually high skills, but obscure skills. I predict that "high skilled manufacturing" is a morass of proprietary solutions to highly specific process needs - that is, the metal fabrication plant alluded to at $10 per hour has highly specific machinery doing a fixed task, and the skill is mostly one of repairing the proprietary code.

From a software perspective, that mostly means the machinery is "legacy" - and impossible to refactor. So it can never be improved upon, only replaced.

I am not a 3d-printer fanboi - the advantages of additive manufacturing are hugely over-hyped (at the moment), and likely to remain elusive for, let's say, a generation, before it becomes obvious to all that we should throw away factories and their jobs and build millions of thing-o-matics.

However in that generation there will be huge opportunity for semi-general manufacturing - robots with sufficient flexibility in parts and software that they can be re-purposed easily as part of a (virtual) conveyor belt.

This sounds truly skilled work - flexibly adapting as processes and customers change.

If providing semi-general manufacturing machines is uneconomic compared to proprietary, simpler, but "obscure skill" machines, then we cannot expect a productivity premium for actual highly skilled workers, and should expect $10-per-hour jobs to limp along till Shanghai takes their lunch.

If however semi-general machines can be made to adapt to different manufacturing requirements, then our whole manufacturing base may be in want of replacing.

Again something emerging economies will have an advantage in.

So, overall, the West should view itself as Great Britain was at the end of the 19th Century - a pioneer whose advantages had run out, and without wholesale massive investment will simply enter a managed decline.

Disruption opportunities - development of semi-general robotic manufacturing solutions that can quickly be re-purposed, and proving that it is economic both for a greenfield site and for an installed base.

chrismealy 2 days ago 2 replies      
Economist Dean Baker comments on this: That Shortage of Skilled Manufacturing Workers is Really a Shortage of Employers Willing to Pay the Market Wage

News stories have been filled with reports of managers of manufacturing companies insisting that they have jobs open that they can't fill because there are no qualified workers. Adam Davidson at the NYT looked at this more closely and found that the real problem is that the managers don't seem to be interested in paying for the high level of skills that they claim they need.

Many of the positions that are going unfilled pay in the range of $15-$20 an hour. This is not a pay level that would be associated with a job that requires a high degree of skill. As Davidson points out, low level managers at a fast-food restaurant can make comparable pay.

It should not be surprising that the workers who have these skills expect higher pay and workers without the skills will not invest the time and money to acquire them for such a small reward. If these factories want to get highly skilled workers, they will have to offer a wage that is in line with the skill level that they expect.


anigbrowl 2 days ago 2 replies      
Part of Isbister's pickiness, he says, comes from an avoidance of workers with experience in a “union-type job.” Isbister, after all, doesn't abide by strict work rules and $30-an-hour salaries. At GenMet, the starting pay is $10 an hour. Those with an associate degree can make $15, which can rise to $18 an hour after several years of good performance.

I'm not a fan of unions or work rules, but those are terrible wages. How does the guy expect to get 21st century skills for 20th century wages?

molsongolden 2 days ago 7 replies      
"The so-called skills gap is really a gap in education, and that affects all of us."

That closing line doesn't really capture the gist of the piece at all. It sounds like manufacturing employers just don't want to, or don't feel they can afford to, pay workers a fair wage.

opendna 2 days ago 0 replies      
Classic: according to the article, the definition of a qualified new hire is one with the qualifications to get accepted into university engineering programs.

Y'all have probably read some of Prof. Peter Cappelli's editorials (he's been making the rounds to promote his book "Why Good People Can't Get Jobs"). One of the things he's pointed out is that HR use of resume databases encourages people with poor search habits to believe there is no choice. Basically, it's a failure to consider the Bayesian math of nested filters.

Consider a company, in a city of a million people (don't want to pay for relocation), which wants someone with a bachelor's degree and five years of experience. The US unemployment rate for people with bachelor's degrees or higher is ~3.8% and about ~7% of the population is between 25 and 29 years old. Rough and tumble numbers put the pool of candidates around 2500-2700 before including any subject-specific knowledge. Ask for a specific discipline, like "Computers, mathematics, and statistics" for which 4.2% of bachelor's degrees were awarded, and you can cut your pool of candidates down to about 100 (1/10,000). If you also filter to require mastery of PHP, javascript, and Marqui, 10 years experience with Server 2008 and fluency in a Romance language...

tl;dr: the absence of time travelers in the applicant pool is not evidence of a skills shortage.
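The nested-filter arithmetic above can be sketched directly; treating each filter as an independent multiplier (which is itself the simplification being criticized), the comment's own figures reproduce its numbers:

```javascript
// Each screening criterion multiplies the candidate pool down.
const city = 1_000_000;
const unemployedWithDegree = 0.038; // US unemployment, bachelor's or higher
const aged25to29 = 0.07;            // rough share of the population
const csDegreeShare = 0.042;        // share of bachelor's degrees in the field

const broadPool = city * unemployedWithDegree * aged25to29;
console.log(Math.round(broadPool)); // ~2660 -- the "2500-2700" figure

const fieldPool = broadPool * csDegreeShare;
console.log(Math.round(fieldPool)); // ~112 -- "about 100 (1/10,000)"
```

Every additional requirement (a specific stack, a specific tool version, a language) multiplies again, which is how a "shortage" is manufactured out of a filter chain.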

kirillzubovsky 2 days ago 1 reply      
Okay, so if there is a problem, what's the solution? Sure enough, highly skilled labor in Taiwan, for instance, that is willing to do the same work for a fraction of the cost, isn't going to stop working just because US employees don't find $10/h attractive. It's a free market, and if we want to make things in the US, we have to keep the costs comparable.

Now, assuming blue collar is done, as robots can take care of most of the tasks, could someone explain why we need workers to oversee the computers?

The way I see it, you need a really expensive robot, and you need a really expensive programmer to make it work; everything in the middle is cheap. Software should be able to distill anything that is happening on the robotic side and present it in such a way that a trained mechanic would understand and be able to fix it. If that's not the case, then we need better developers (and in this case, perhaps, better designers and human-factors engineers) to write better software.

Now, a few folks mentioned that what you need is really a mixture of experience in the mechanical side of things and understanding of the software - presumably that allows you to react and solve problems whenever something goes astray. Well, that could probably be solved by having a few engineers on staff who would help when needed.

That leaves us with, yet again, fairly simple mechanical labor. Perhaps then the article is right: we have a serious education problem. Those who attain enough education leap forward and presumably learn too much to subject themselves to mundane mechanical tasks, while those who would be a great fit for the jobs actually don't have enough education to understand even the basics.

Someone suggested comparing the math needed at McDonald's with the math needed at one of these factories. I'd be curious to know too, although I suspect that at McD's all the calculations are done by a computer and all humans need to do is simply not f-up. Even then, when humans fail to add 2+2, all they lose is an occasional McFlurry, while at a factory they could impact tens of thousands of dollars at once.

So, here we have a conundrum. We need labor to work the $10 jobs, but the pool of employees is simply atrocious. At the same time, qualified labor has better things to do with their time. Now, back to my questions - what's the solution?

Soarez 2 days ago 1 reply      
If an entrepreneur can't pay for the resources needed to build a product or service at their market prices, he should, and eventually will, close the business.

There is nothing wrong with that. That is the economic system at work.

If no one is willing to pay for his product/service at the price he needs to charge, it is because either people do not need it, or they can get it cheaper somewhere else. In business, if you can't compete, you shouldn't.

In this specific case, these jobs are getting outsourced to China or replaced by machines. This is wonderful. Not for the small number of people who lose their jobs, but for the huge number of people who will be able to afford that product or service cheaper.

I can't set up an ice-cream kiosk in Antarctica and whine about how business is hard and what will become of the Antarctic ice-cream industry.

jaggederest 2 days ago 0 replies      
This is the transition between manufacturing as a blue collar occupation and manufacturing as a white collar occupation.
sharkweek 2 days ago 4 replies      
...highly skilled manufacturing jobs - the ones that require people who know how to run the computer that runs the machine.

Thinking meta for a moment here: how far does this concept of machines running processes go? Could we program machines that can run the computers that run the machines? And how about machines that program those machines?

One of my favorite Twilight Zone episodes: http://en.wikipedia.org/wiki/The_Brain_Center_at_Whipple%27s

kosei 2 days ago 0 replies      
I cannot fathom how the author takes the side of the employers at the end of this article. So workers should reduce their expectations so that certain manufacturers can succeed while paying them below-expected rates? Does the author legitimately expect individual workers to look out for the country's employment best interests above their own?

The candidates smart enough to realize that they could earn 2x-5x as much after college won't take these jobs, and most of the people who can't earn those wages elsewhere aren't qualified to do these jobs.

frozenport 2 days ago 2 replies      
I wonder what kind of skills are lacking. Claims of deficiencies are qualitative.

For example, how much math does somebody need to know for this kind of job? Is it trigonometry to find hypotenuses? Are they writing programs or entering in data to notepad?

This is an important distinction because we need to compare these skills to those of a shift manager at McDonald's.

For example, at a hotel the shift manager must supervise the employees, handle money, and keep journal entries. The latter two involve rudimentary mathematics and computer skills. There is a good chance that this is a fair wage when compared to similar jobs.

andrew_wc_brown 2 days ago 0 replies      
When applying for programming jobs, I'll refuse to hand in a résumé without being able to demonstrate my skills and provide sample work first.

The hiring process that most companies use is backwards, and I think that's mostly due to people who go to school to learn HR and are taught this impractical approach.

ommunist 2 days ago 0 replies      
The name of the article is misleading. It is all about the greedy fat cats that want mice to go to the mousetrap, but moaning about high prices for cheese, so they are unable to charge the mousetrap properly. Good for mice, bad for cats.
gtirloni 2 days ago 0 replies      
Companies are simply offering wages comparable to what a worker in China would get. It only makes sense in a flat world. Question is: do we want that?

In the IT industry, especially IT support and break-fix development, the same thing happens. Local people don't want to accept low salaries? Let's look at what India or the Philippines are charging. Problem solved.

erikb 2 days ago 0 replies      
I read about this so often that I can no longer believe the problem really is just low pay for high-skilled jobs. There must be a reason why, in so many countries and for so many years, nobody raises their hourly rates yet still hopes to get well-educated people at the current rates.
Stripe's 22-Year-Old Irish-Born Founder Is Just Getting Started inc.com
217 points by jkuria  2 days ago   89 comments top 13
jacquesm 1 day ago 2 replies      
Patrick and his brother John are two of the smartest cookies on the planet that I know about. These guys will go very far, and I'll be cheering them on all the way. They are not only successful and smart, they're genuinely nice, and that's a pretty rare combination.
madaxe 1 day ago 5 replies      
Ok, stripe's wonderful, but... why is stripe wonderful?

It's a payment service provider. An expensive payment service provider. They have an API. It's nicely written - but a nicely written API doesn't make a product. They're available in the US and Canada. OK, but there's a whole planet out there.

So, yeah. What exactly is it that's so amazing about stripe?

I work in eCommerce, run and developed a little platform that does a few £bn a year through it, so know my PSPs, and fail to see the differentiation.

sudhirj 1 day ago 0 replies      
What's also impressive is the choice of project: pg has written about this (I think he calls it schlep blindness or something). Payments is one of the most difficult, red-tape-entangled, regulation-choked domains to start a company in. Not to mention the extreme security requirements and general unsexiness of it all.

Think they've managed to do an awesome job through all the hurdles - if I ever decide to follow their example and start a payments company in India, it will simply be "Stripe-API compliant". That's pretty much all the marketing / features it will need.

ronnier 1 day ago 2 replies      
Wow, this guy is amazing. Read his wikipedia page:
wiradikusuma 1 day ago 3 replies      
Loosely related to the article: do you need to give a significant share to "locals" if you're starting a company in the US? For example, in Malaysia and Singapore (CMIIW), unless you put a huge amount of money up front, foreigners are required to partner with locals and give >50% of the shares to them. I.e. foreigners cannot have a majority share if they start a "normal" company (not big).
kyro 1 day ago 1 reply      
That's it. I'm doing a startup after med school.

...so what's the policy on using investment money to pay back loans?

vancouverite 1 day ago 1 reply      
Very impressive achievement for his age, though there is of course also a lot of spin-doctoring involved[1]. Stripe seems to be heading into the right direction.

[1] The whole "millionaire at 17" bit decodes to getting a big stock-based exit for Auctomatic with modest cash. Said stock then dropped to pennies shortly thereafter (~40x drop in stock price). Add in the merger and $600k in financing and there probably wasn't much left other than a bailout for the investors. http://www.otcmarkets.com/stock/LIVC/chart

vecinu 1 day ago 2 replies      
It's interesting to see people put their studies on hold to pursue something they loved.

I don't quite understand why they moved to San Francisco; the article didn't really make that clear.

I'd love to have a cup of coffee with either of them though. Great people!

safetyscissors 1 day ago 1 reply      
Really amazing guy. Kinda makes me feel inadequate all of a sudden :(
MojoJolo 1 day ago 2 replies      
This guy inspired me. I was just reading about Stripe a while ago and I didn't know the founder is just 22!

I'm 20, and I think I need to rush some things. I don't want to be late.

bimozx 1 day ago 0 replies      
The experience of reading young success stories is both exhilarating and depressing to me. On one hand it expands my vision of what is really possible at such a young age; on the other, I realize that I am at the point in my youth by which people like Patrick and John had already achieved so much. It shows that a little ingenuity and complete persistence can take you really far.

Kudos to both of them, and godspeed on their current endeavor.

antihero 1 day ago 1 reply      
For the hiring/visa issue, would a solution be, say if two of their hires were from the EU, to start a company in an EU country, employ them with said company, and then have the US company "contract" the EU company for their part of the work?
rsmaniak 1 day ago 0 replies      
Amazing guy, inspiring and at the same time, depressing...
Cosmo: A free Metro-inspired theme for Bootstrap bootswatch.com
203 points by thomaspark  1 day ago   57 comments top 31
drivebyacct2 1 day ago 4 replies      
Not all that Metroy, as someone who spent all summer straight immersed in Metro.

Also, I wish Foundation would get 1% of the attention Bootstrap does. Once you figure out how to get past their docs-light into the full docs, I've found it to be FAR faster to work with than Bootstrap.

nlh 1 day ago 0 replies      
Say what you will about Windows 8 / MSFT, but this is a damn fine-looking theme.
irahul 1 day ago 0 replies      
There is a link to a demo in there: http://bootswatch.com/cosmo/

I like the look, and will definitely be using it in internal projects and prototypes.

I found it a bit hard to read: super light gray on a somewhat light gray or plain white background (quotations, dropdown). I think a little more contrast is required. The default button, especially when used on a similar-colored background (forms), has the same issue.

I understand they are going for the Metro look, but adding a drop shadow to actionable items (buttons) while non-clickable items remain flat (in the alert, the element isn't clickable but the cross is) would help mitigate some end-user confusion.

nikcub 16 hours ago 0 replies      
related is Metro UI:


It is a complete CSS+JavaScript library to apply the Metro design and theme to a web app, and includes things like icons, etc.

It is very popular amongst Microsoft web developers and looks really good.

thehodge 1 day ago 1 reply      
I love Bootswatch and would happily pay $5-$10 a month for regular swatches like this, uploaded every few days / week. I'm not a designer, nor do I want to spend time trying to be; Bootstrap + Bootswatch give me the ability to get something up, running, and looking pretty damn good very quickly.
zoop 1 day ago 3 replies      
It is a nice looking theme but it has the same problem that Metro has: there isn't anything indicating what is clickable and what is not.
codewright 1 day ago 0 replies      
I like Metro but for one thing...

...the buttons aren't apparently clickable. I embrace the shift away from excessive skeuomorphism as much as anybody but failing to provide affordances is inexcusable.

RobAley 1 day ago 2 replies      
Is it just me, or do elements of it remind anyone of the latest gmail / google groups etc. interface?
troymc 23 hours ago 0 replies      
It's interesting that people are still calling it "Metro", even after Microsoft was told to stop (because of a trademark issue). Microsoft stopped, but the rest of the world didn't, apparently. I know I didn't. It's a great name.

Can the company that went after Microsoft (Metro AG) go after other people besides Microsoft?


fredsted 1 day ago 1 reply      

   * { border-radius: 0 !important; }

DrinkWater 1 day ago 0 replies      
One of the fewer beautiful Bootstrap Themes.
dkersten 12 hours ago 0 replies      
I don't understand why everyone loves and wants to emulate the metro look so much. I find metro to be really really ugly. Am I missing something?
Avalaxy 1 day ago 0 replies      
Very nice! I've been waiting for something like this. There are other metro themes for bootstrap, but they're a bit over the top and try to replicate the metro GUI on the PC/tablet.
indiecore 1 day ago 0 replies      
I like it. I like it a lot, very stylish work.
leak 1 day ago 1 reply      
I love this bootstrap Metro theme http://wrapbootstrap.com/preview/WB0HT4KX4
kmfrk 1 day ago 0 replies      
Probably the best Bootstrap theme yet. This is also why Bootswatch needs some kind of "favourite" button so people can store the best themes for later use.
RaphiePS 1 day ago 0 replies      
I like it, but the dropdown menus look strangely out of place.
jtreminio 1 day ago 0 replies      
Funny enough, the form elements look better on Win8/IE10
evv 1 day ago 0 replies      
It's beautiful. For better readability on the demo page, I recommend the following:

section { margin-top: 60px; }

eungyu 1 day ago 0 replies      
Sweet, going flat will reduce the gradient bloat that's currently in the default Bootstrap css.
fkaminski 1 day ago 2 replies      
Is it only me, or am I the only one here who thinks that Metro UI is corny, like the '80s?
I think that as time goes by, people will be ashamed of using this, the way they're ashamed of their hairstyle in pictures from the '80s.
digitalmerc 1 day ago 0 replies      
You know when we built [PDFzen](https://pdfzen.com), we used Bootstrap and made it Metro-esque. I wish we'd have had something like this for the homepage. It certainly would've sped things up.
mikegioia 1 day ago 0 replies      
Info is a purple button and a blue alert/badge/label.
dnyanesh 1 day ago 0 replies      
It's good, but it doesn't look like Metro. In fact, it looks similar to Google's web apps UI (Gmail, GCal, etc).
dev360 1 day ago 0 replies      
I love the look - reminds me of spongebob squarepants.
lotso 1 day ago 0 replies      
Looks great! Although, why does the search box have rounded corners?
mmhd 1 day ago 0 replies      
This style is what Bootstrap should have been from the start. Well done.
newsreader 1 day ago 0 replies      
Nice. I've been looking for something like this: clean and simple...
level09 1 day ago 0 replies      
Is it just me, or has just the border radius been removed?
jesusj 1 day ago 0 replies      
Looking good! :-)
viciousplant 1 day ago 0 replies      
The colors aren't bold enough; anyway, it's good.
Inside Google Spanner, the Largest Single Database on Earth wired.com
197 points by Libertatea  1 day ago   68 comments top 14
ghshephard 20 hours ago 1 reply      
I had to chuckle when I read this:

"As Fikes points out, Google had to install GPS antennas on the roofs of its data centers and connect them to the hardware below."

This is usually one of the first things an enterprising sysadmin does at companies when they first start thinking about time - drop a GPS receiver on the roof (and they usually come up with a bunch of cool graphs showing where all the satellites are over time).

Soon thereafter, and a bit of reading about the NTP protocol, they realize that just adding:

  server 0.pool.ntp.org
  server 1.pool.ntp.org
  server 2.pool.ntp.org
  server 3.pool.ntp.org

to their ntp.conf is sufficient for 99.99% of all endeavors which require accurate time, outside of big physics, and, apparently Google's Spanner Database.

This part was a bit incomplete:

"Typically, data-center operators keep their servers in sync using what's called the Network Time Protocol, or NTP. This is essentially an online service that connects machines to the official atomic clocks that keep time for organizations across the world. But because it takes time to move information across a network, this method is never completely accurate,"

Much of the purpose (and math) behind the NTP protocol is to deal with network lag. And it does a pretty good job doing so.

Reading about the True Time Api at: http://static.googleusercontent.com/external_content/untrust...

"This implementation keeps uncertainty small (generally less than 10ms) by using multiple modern clock references (GPS and atomic clocks)"

So - apparently 10ms is their breakpoint - 10ms is about the limit of what you can expect out of NTP, so I guess it makes sense that if Google needs to do 10ms or better, something of their own invention would be required. Cool graph on the paper showing that 99.9% of variance across data centers thousands of kilometers apart are < 10ms deviation.
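For what it's worth, the paper's TrueTime API exposes exactly that uncertainty: `now()` returns an interval wide enough to contain real time, and a commit isn't reported until even the earliest possible current time is past the commit timestamp ("commit wait"). A hedged sketch of the idea — names and numbers are illustrative, not Google's actual API:

```python
DRIFT_MS = 5  # assumed clock-uncertainty bound (the paper reports < 10 ms)

def tt_now(clock_ms):
    """TrueTime-style now(): an interval guaranteed to contain real time."""
    return (clock_ms - DRIFT_MS, clock_ms + DRIFT_MS)

def commit_wait_done(commit_ts, clock_ms):
    """A commit at commit_ts may only be reported once even the earliest
    possible current time has passed the commit timestamp."""
    earliest, _latest = tt_now(clock_ms)
    return earliest > commit_ts

# A write stamped at t=100 must wait out the uncertainty window:
assert not commit_wait_done(100, 104)  # 104 - 5 = 99, might still be before 100
assert commit_wait_done(100, 106)      # 106 - 5 = 101 > 100, safe to report
```

The smaller the uncertainty bound, the shorter the commit wait — which is presumably why 10ms-grade NTP wasn't good enough and the GPS/atomic-clock setup was worth the trouble.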

flyinglizard 1 day ago 4 replies      
What kind of accuracy exists between servers inside the same data center? I assume there are some internal delays (OS stacks, switches, etc) when synchronizing time inside a server group.

I mean, even if you had a picosecond accurate clock available for use inside a server farm, you would still need a way to query it with a known (not necessarily zero; just known) latency to synchronize several machines. Servers are not known latency machines (unless specialized hardware is involved).

How is that accomplished?

And what happens when two transactions happen below the system accuracy limit? (like two transactions pertaining to the same data, 20ns apart, in different servers; impossible to order).

Surely they have solved this, I just wonder how.

nlavezzo 1 day ago 3 replies      
It's interesting to see that the creators of BigTable and the early proponents of eventual consistency have invested the last 4.5 years building a system that adds back strong consistency guarantees.

If the Spanner paper is as important as BigTable, ACID may become the new goal for those building distributed systems.

Full disclosure: I'm with FoundationDB, which is a distributed NoSQL database with high performance cross-node ACID transactions. http://www.foundationdb.com

NelsonMinar 1 day ago 0 replies      
If you want more detail, this article links a research paper that describes the system in detail. Very clever focussing on the timebase as a way to improve distributed consistency; I'd always assumed NTP was sufficient. http://static.googleusercontent.com/external_content/untrust...

The part of the article that stood out to me is that Spanner is used in F1, the new backend datastore for AdWords. That's a significant vote of confidence.

Bakkot 1 day ago 2 replies      
View all pages: http://www.wired.com/wiredenterprise/2012/11/google-spanner-...

> “We can commit data at two different locations, say the West Coast [of the United States] and Europe, and still have some agreed-upon ordering between them,” Fikes says. “So, if the West Coast write happens first and then the one in Europe happens, the whole system knows that, and there's no possibility of them being viewed in a different order.”

That's a large enough scale that you have to deal with relativity (light takes almost precisely 0.03 seconds to go from Palo Alto to Paris, eg). So in some sense there is no correct ordering. Anyone know how they deal with this? Have they just chosen some arbitrary point to make their reference frame, for purposes of ordering commits?

sneak 1 day ago 5 replies      
Does this mean that Google datacenters are vulnerable to GPS jamming and/or spoofing now?
Too 17 hours ago 1 reply      
“We can commit data at two different locations, say the West Coast [of the United States] and Europe, and still have some agreed upon ordering between them,”

I'm a bit confused by this. How will this solve the situation when the first transaction renders a second transaction forbidden. To keep it simple, say an account with only $10 and two transactions trying to withdraw $10 each.
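Timestamps alone don't resolve that case; per the Spanner paper, read-write transactions also take pessimistic locks, so the two withdrawals serialize and the second one sees the first one's result. A toy sketch of that serialization — purely illustrative, nothing like Spanner's actual machinery:

```python
import threading

balance = 10
lock = threading.Lock()  # stand-in for a per-row write lock
results = []

def withdraw(amount):
    """Each withdrawal runs as a locked read-modify-write."""
    global balance
    with lock:  # the second transaction blocks until the first commits
        if balance >= amount:
            balance -= amount
            results.append("ok")
        else:
            results.append("insufficient funds")

threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly one withdrawal succeeds, regardless of thread timing:
assert sorted(results) == ["insufficient funds", "ok"]
assert balance == 0
```

The TrueTime timestamps then give that serialized order a globally meaningful position relative to transactions in other data centers.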

philip1209 1 day ago 1 reply      
"[. . .] the company's online ad system, the system that makes its millions [. . .]"

That is an understatement.

Charlesmigli 1 day ago 0 replies      
Interesting article. Mainly on the timing aspect though. tl;dr version here http://tldr.io/tldrs/50b375dd52b89ec3440000df
sargun 1 day ago 0 replies      
Argh, "And, yes, you do need two separate types of time keepers" - No, you must establish a quorum of time keepers. Almost everyone's advice when setting up high reliability time keeping systems is to use 1 clock, or >3. 2 is no better than 1.
eze 1 day ago 1 reply      
Past and current Googlers that frequent HN are notoriously absent from this thread. Come on, guys! Surely your NDA must allow some vague commentary...
abhijat 17 hours ago 0 replies      
> VC is Google shorthand for video conference

That is the case nearly everywhere, I think :-)

Sarien 1 day ago 3 replies      
m( "Spanner" is the colloquial German word for voyeur. Not the best name for a database. :)
X-editable: In-place editing with Twitter Bootstrap, jQuery UI or pure jQuery github.com
188 points by sohamsankaran  2 days ago   48 comments top 18
masklinn 2 days ago 4 replies      
I find it sad (to annoying) that these projects still use actual form elements instead of `contenteditable`, even though they're quite obviously full-JS (and probably not going to be submitted through HTML forms), given the difficulty of correctly styling, integrating and interacting with form elements.

The web is in dire need of a library correctly reimplementing "form behaviors" (events, mostly) on top of contenteditable, and allowing those behaviors to be applied to arbitrary (to the extent that browser implementations allow) elements on the fly.

edit: just in case, don't take me wrong, HTML forms should stay and I'm an advocate of less JavaScript everywhere, as it tends to be mandated in places where it has no reason to be (and to ultimately decrease usability rather than enhance it). But if you're going to do "edit in place" and rich web applications which require JavaScript to run (and using the library linked above would probably qualify, as you can't do anything without JS enabled and a fairly recent browser running it), then in those cases HTML form elements tend to be a hindrance more than a help.

rhplus 2 days ago 6 replies      
I don't really understand why this interaction model is a good thing. The inline and popup versions require me to confirm every single action. It's forcing me to perform two actions (select, confirm) when a regular form uses just one (select).
rbcb 2 days ago 2 replies      
It took me a minute to figure out how to edit the field.

You're creating new behavior which is counter to how we all understand links. Underlined words take us to other places. And dashed words unfortunately have the stigma of opening up some crappy ad referencing the word.

Consider instead a little edit caret next to the word, with clicking anywhere in the cell making it editable.

elchief 2 days ago 0 replies      
The only improvement I would suggest would be to change the height of the form element so that it does not alter the height of the row (shifting later elements downward). Nice job.
revetkn 2 days ago 0 replies      
I can't find any info on browser support (maybe I'm looking in the wrong place). Anyone know?
nateweiss 2 days ago 0 replies      
Useful, thank you for posting. Obviously, this isn't a user interaction that you want to use everywhere, but there are plenty of situations where editing individual fields via mini-modals like this will make lots of sense.

For one thing, I can see where having this in the toolbox might enable one to bang out a fast admin-type view quickly--and sometimes being able to implement some relatively "utilitarian" view quickly is what lets you spend the proper amount of time on views that you want to implement more traditionally/fancily.

The use of "links" to launch the little dialogs doesn't personally bother me for whatever reason. (Though I get that sometimes it's just preference--for instance I have always disliked the use of drop-downs as navigation controls, but re-purposing the link element as shown in these x-editable demos doesn't--not sure I can defend why).

cocoflunchy 2 days ago 1 reply      
This doesn't work for me: http://vitalets.github.com/x-editable/demo.html (Chrome 23.0.1271.64 m), whereas Editable works just fine. Am I the only one?
sarbogast 2 days ago 0 replies      
Awesome stuff! Now the next step is to integrate that with AngularJS and it will be perfect for my current project :P
dotmanish 2 days ago 0 replies      
This is a lazy comment. Is it possible to include both the popup and inline JS of this at once? (The demos are mutually exclusive.)
asher_ 2 days ago 0 replies      
This is quite neat, thanks a lot!
zerovox 2 days ago 1 reply      
I would definitely use this if it had Foundation tooltip support. How easy would it be to adapt this for other tooltip systems?


ppadron 2 days ago 0 replies      
This is really good. A great addition would be Aviary support for images. If I come up with something good I'll send a PR.
bookcasey 2 days ago 0 replies      
Doesn't work with tab.
chmike 2 days ago 0 replies      
Doesn't work on ipad
BaconJuice 2 days ago 0 replies      
This is very cool, thanks for sharing.
marcamillion 2 days ago 1 reply      
Love this...can't wait for a Rubygem.
TommyDANGerous 2 days ago 0 replies      
I like a lot.
TommyDANGerous 2 days ago 1 reply      
This is fun, HTML5 is so powerful.
Kickstarter, Trademarks and Lies arduino.cc
183 points by lucatironi  1 day ago   75 comments top 11
robomartin 1 day ago 2 replies      
Reading through some of the comments is interesting in that my experience with Kickstarter seems to have been --so far-- nearly polar opposites to that of others.

I have almost exclusively backed technology projects. Out of those, not one of them has failed to deliver. And, not one of them has delivered on time. Out of all the projects I have supported, only one has ended up in the trash can. However, that was not because the widget was poorly executed or junk. It was simply a case of my idea of the gizmo's utility failing to align with reality once I got it. No issues on my part. I've done that plenty of other times, even buying stuff from brick-and-mortar stores.

If I allow myself to presume about the reasons for my "success", I'll have to say that the only thing I can reasonably point to is that I have a lot of experience actually designing, manufacturing and, yes, shipping technology products. I am intimately familiar with the design, sourcing and manufacturing process (and issues) of most products that entail software, electronics and mechanical components using various technologies.

This, to me, means that I have a fairly decent "bullshit" filter when it applies to these kinds of projects. Not to pick on them, but my most recent "this is bullshit" call was the LIFX light bulb:


Why did I call this BS?

You have to rewind to when the project first posted. Their pledge goal was set to $100K. A project like that could easily burn-up half a million dollars just in engineering, NRE's and regulatory testing. Very easily. In fact, my immediate thought after reviewing the project was that the project needed somewhere in the order of two million dollars.

When I see something like that I have to ask: Are the project originators truly clueless about what it might take to get the project done? I don't like to think that fraud is involved. I am one of those saps who believe that the vast majority of people are basically good. So, no fraud. Yet, $100K?

What would have happened with LIFX had they received funding just above their requested $100K goal? Say, $150K. Well, in my world that would have meant that there was no way to complete the project. No way. At least not with anything that I'd want to plug into a light-bulb socket at my house, for a myriad of safety reasons.

I avoid such projects.

As it turns out, they have raised about $1.3 million. This may or may not be enough to get this done. Keep in mind that bringing in partners isn't free. COGS must include all costs.

With regards to the ARDUINO issue. I saw that project come up and immediately went to the known Arduino sites. I saw nothing promoting the project or making this connection of having an ex-Arduino manufacturer behind the Kickstarter project. So, I stayed clear.

I see the relationship between Kickstarter and their vendors very much like that of a shopping mall owner and the stores it might house. Imagine that one of the stores decides to sell counterfeit Gucci bags or defraud people in some other way. I can't see the mall owner as being guilty of the crime being committed. If a direct nexus is established, well, then, that's a different story.

I also see the buyer as having to be responsible for the decision they make. If someone sells you a perpetual motion machine and you were not smart or informed enough to realize that this can't possibly work, well, in many ways, it's your fault. Be an informed buyer. That's the only way to protect yourself.

EDIT: I should say that I like Kickstarter very much. I don't have a problem with the service. If you know what you are doing both as either a project originator or a project backer, it's wonderful.

deelowe 1 day ago 10 replies      
I love kickstarter, I really do. There's a lot of issues with the way capital is currently generated that puts the small guy at a disadvantage.

However, I really think the kickstarter team needs to take a closer look at how things are going today. This is but one example. Out of the many kickstarters I've contributed to, only one has shipped and only one more appears to have had any progress in the past 6 months. My friends have had very similar experiences and all of us have said we probably won't be contributing as much for a while. It kind of ruins the idea if a very large percentage of the projects fail or are halted (e.g. trademark disputes).

yock 1 day ago 1 reply      
Woah, wait a minute? Someone contacts Kickstarter and alleges that a paying customer of theirs is using the Kickstarter platform to violate the law and Kickstarter's only response is to contact the customer directly? This has to be some kind of joke, right?
jmole 1 day ago 1 reply      
Crossposting my comment from the blog post here, because the Arduino servers are apparently under heavy load, and most people will TLDR the post and just look at the comments here.


Hi All,

I recently launched a Kickstarter project as well, that ended about a month and a half ago: http://www.kickstarter.com/projects/18182218/freesoc-and-fre...

Dimitri, the creator of the smARtDUINO project got in touch with me a few weeks ago to collaborate on creating a smARtBUS adaptor for our platform.

We've exchanged over 40 emails since then, and my impression is that he's an earnest, upstanding man, who really cares a lot about the electronics community and making it easier and less expensive for people to do creative things.

We had a discussion about Kickstarter in particular, and one of the things he mentioned to me in our conversation was that he felt his campaign placed too much emphasis on the “Arduinoness” of the project, and not enough emphasis on the true innovation, which is the smARtBUS interconnect system.

We discussed the perils of successfully marketing a Kickstarter project, and he said in retrospect, “If we communicate better the project, I have no idea were we can be now that it should be clear this is not just another Arduino.” (This was about a week ago, near the end of his campaign.)

I think if you can honestly blame him for anything, it's not effectively conveying the message of the product in an concise way. If his workers were indeed manufacturing Arduino in Italy, there's nothing wrong with claiming it. I'll admit that the claim might look deceptively grand at first glance, but my intuition tells me this has more to do with Dimitri's English ability, rather than a malevolent attempt at hijacking the Arduino brand.

mbanzi 1 day ago 2 replies      
I'm the author of the blog post. The objective of the post is to ask Kickstarter to provide a more direct way to report either trademark violations or lies (like this guy claiming to be a former manufacturer of Arduino) that might affect the people who fund a project.
kfury 23 hours ago 0 replies      
(my comment on the Arduino blog, cross-posted here)

Massimo, you asked the Arduino community for comments, so here's mine.

It's obvious that Dimitri isn't trying to do you or Arduino harm. Did he over-represent his team's association with Arduino? Perhaps, though not to the degree that you claim, and your repeated accusations that he's a liar go over the top.

While you've made it very clear you don't like the term ‘Smartduino' this is a slope that Arduino has navigated before, trying to find the happy medium between building an open platform and community of developers and manufacturers and protecting your own intellectual property.

Regardless of where Smartduino falls on that line it clearly doesn't fall very far away from it. So again, whether Dimitri is right or wrong he certainly doesn't deserve the lambasting bordering on libel that you're dishing on him.

For someone who has such a leadership role in the Arduino community, your actions today have done a very poor job of promoting it. At the heart of the Arduino movement is the idea that hardware development doesn't have to be about huge companies that build walls around their IP with lawyers manning the battlements.

This is how THEY solve their conflicts. This is not how WE solve our conflicts.

Got issues with how the Smartduino project represents itself? Talk to them. Work with them. Don't try to whip us into a frenzy of torches and pitchforks because every single person in this community is trying to expand it and move it forward, a sentiment I see throughout these comments, but not in your own words, where I would most expect to see them.

sschueller 1 day ago 1 reply      
Well I fell for it. I thought the smARtDUINO was from the official team.
dimitrialbino 1 day ago 0 replies      
There is much that anyone can say about Kickstarter.

The problem is that all this discussion started from a much different topic.

Here we are talking about someone, Massimo Banzi from Team Arduino, who wrote on the public Arduino blog, which surely has hundreds of thousands of followers, if not more, that a company doesn't exist and that another person, named with first and last name, was claiming something that he didn't.

Anyone reading the Kickstarter page, including the bio, the FAQs, the comments and the huge quantity of updates, can see very easily that this person always made it very clear that he's living in China. So why publish something so untrue?

When it then came out that the problem had been on the table since October 29th, a few hours after the launch of the Kickstarter project, why wait almost one month to complain?

The registration certificate went public, so the company exists and this is proven. Nobody has heard any apology for the false statements.

The only thing Massimo Banzi tried to do, here as well, was to change the topic.

The question is: is it right that the owner of such a popular blog writes false statements, but when it becomes clear to everybody that he was wrong, he tries to change the topic instead of issuing the apology he owes?

This reminds me of when Steve Jobs (R.I.P.) supposedly stated: "there is nothing wrong with iPhone 4, they are holding it not properly".

Massimo Banzi is for sure a member of a group of very clever people who created such a good thing as Arduino, but he doesn't have the right to attribute false statements to others and pretend that nothing happened here.

shardling 1 day ago 2 replies      
It would be nice if anyone speculating on Kickstarter's legal obligations laid out their own credentials.

Because just maybe Kickstarter has already considered the legal, logistic and ethical aspects of their own business, and possibly even consulted with lawyers and other experts.

belgianguy 1 day ago 2 replies      
While this "smARtDUINO" (lame!) copycat behaviour is indeed appalling, the alternative can be a real hassle, too. Trademarks and copyrights deserve protection, but as YouTube has shown, DMCA abuse can halt projects just as well.

Perhaps they could implement a 'Flag' system that allows IP owners to signal illegal use/infringement, after which a review (in which both the complainant and the project starter are involved) can decide whether or not the project can proceed. This could easily sail into DMCA hell or lawyer heaven if a big corporation feels threatened and files complaint after complaint just to stall the competition.

But where do you draw the line?

scottymac 1 day ago 0 replies      
While I understand the desire for Kickstarter to police this kind of thing, the burden ultimately sits with the owner of the trademark being infringed upon to pursue legal action or risk losing the trademark. Frankly I'm a little surprised at the response here on HN given that Kickstarter is an early stage company with limited resources. Why should they focus on something where there are already laws in place and avenues to pursue infringement? Who's to say they should be the arbiter on what constitutes infringement?
Ninja IDE: written in Python for Pythonists ninja-ide.org
182 points by mmariani  9 hours ago   100 comments top 35
kghose 8 hours ago 5 replies      
It is FOSS (GPLv3). The license information was a wee bit hard to find (wayyy down on the about page http://ninja-ide.org/about/) and I first thought it was some Frankenstein freemium product where you had to apply for a free license if you were an OSS devel (like PyCharm), etc.

I gave it a whirl:

1. Snappy, which is nice, since PyCharm can be sluggish on my Mac
2. No VCS integration
3. By default very strict code checking is turned on, which turns my (functional) code into a sea of underlines, which is not so pretty

It looks to be an interesting start, but it will need VCS integration before it looks suitable as a PyCharm replacement.

I didn't look in detail at code completion/code assist, which PyCharm does very well.

sho_hn 8 hours ago 3 replies      
Can someone explain to me why this is at the top of the front page despite a website devoid of useful detail, while this completely fails to catch on: http://scummos.blogspot.de/2012/11/kdev-python-14-stable-rel...

(Seriously, check it out - KDevelop's Python plugin and Microsoft's PTVS are currently the two projects doing serious work on static analysis of Python for live editing purposes. Here's a nice subthread comparing the two: http://news.ycombinator.com/item?id=4725634)

ketralnis 7 hours ago 0 replies      
I realise these are first impressions, but:

* Scrolling is way too slow. This isn't nitpicking, this is really very important to me

* I like PEP8 warnings and use them in other editors, but I don't like not being able to pick which style stuff I care about

* I don't like the PEP8 tooltips. They cover up my code and that's the worst possible place to put them. Even if I do plan to "fix" the issue, coming up over the code that I'm typing right now is never okay.

* It's really quite a lot of work through some confusing terminology to get a test run of the IDE going on an existing project. I don't want to move my code into your workspace. I don't want to import my existing project (that sounds scary)

* Some glaring bugs seem to indicate that this is younger than the very flashy project site suggests. For instance, if I try to import a project but cancel the "select a directory" popup, it inconsistently either removes my previous selection or crashes the whole IDE

kstenerud 8 hours ago 1 reply      
Pretty cool all around, but it needs a lot more stability work. It crashed a few times just scrolling around in some of my python projects, and there are quirks such as complaining "This font can not be used in this editor" if I open the font selector and then click "Cancel".

Also, changing the margin line doesn't seem to take effect unless you quit and restart the IDE.

unohoo 8 hours ago 2 replies      
What would really help is a small demo video just to get a whiff of what the IDE feels like. The description and screenshots are somehow not enough for me to download and install an entire IDE and take a test drive. If there is a demo video somewhere, my apologies - I was not able to find it.
jra101 8 hours ago 1 reply      
Would be nice to be able to selectively disable some PEP 8 rules in the style checker. I don't care about lines longer than 80 characters and I don't like separating functions by two empty lines.
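Outside the IDE, the standalone pep8/flake8 checkers do let you silence individual rules via `ignore` in a project's setup.cfg; whether Ninja-IDE's built-in checker honors such a config is an open question (I haven't verified it). For reference, the two rules mentioned above are E501 (line longer than 79/80 characters) and E302 (expected two blank lines between top-level definitions):

```ini
[pep8]
# E501: line too long; E302: expected 2 blank lines between top-level defs
ignore = E501,E302
```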
spindritf 7 hours ago 0 replies      
"For Ubuntu Users: You can add the NINJA-IDE PPA and install it from there (you will get automatic updates!)"


Thank you.

gatox 7 hours ago 0 replies      
Hello, I'm part of the NINJA-IDE team, and first of all, I would like to thank everyone for the feedback (the good as much as the bad).
Currently we are working to make NINJA-IDE compatible with Python 3 (among other features) and taking care of several issues to ensure better stability (and guide the development process with tests).

I hope we can find the time to take care of some of the stuff mentioned here as videos, screenshots, user guide, etc.

It's a lot of work, but we are proud of what we can achieve with a free software project.

Thx everyone!

yuvadam 8 hours ago 0 replies      
Don't know about the IDE but that font is horrendous.
hoka 8 hours ago 1 reply      
I'll definitely give it a shot.

From a usability perspective, your download button could be better. It doesn't download right away (which is fine), but redirects to downloads/win for me. Might be nice to have it auto-scroll to the win downloads since it took me a while to figure out what was going on.

Here's a screenshot from Win7 32-bit: http://i.imgur.com/2RT6u.png

That random pink line makes it unusable for me.

mikle 5 hours ago 0 replies      
I hate to be that guy, but after almost a decade doing Python, one thing I learned is that we prefer Pythonista, not Pythonist.
recuter 8 hours ago 0 replies      
Something something second system syndrome, just use vim/emacs/sublime. 'etc.
veeti 4 hours ago 1 reply      
Although vim has almost completely sucked me in already, does this thing have support for 1) separate indentation settings for different file formats and 2) separate indentation settings for different "projects"?

I've been looking forever for a text editor that does this and surprisingly few do.

buster 8 hours ago 0 replies      
Wow.. how did this not make it to HN before? Already version 2.1.1 and never heard of it?
gruuby 1 hour ago 0 replies      
I cannot use an IDE that doesn't feature a vi mode for the editor. I'd be very, very lost. I'm yet to find an IDE that doesn't get in my way, vi mode or not.
stevoski 8 hours ago 1 reply      
How does this compare to PyCharm?
shill 7 hours ago 0 replies      
I am already extremely satisfied with PyCharm. I'll keep an eye on this though. Being able to write plugins in Python is promising.
endtime 8 hours ago 2 replies      
Having very recently switched to Sublime Text 2 (from Komodo Edit), I'm curious if this offers anything that can't be done with Sublime + mature existing plugins...?
zlapper 8 hours ago 1 reply      
As others have already mentioned, PEP8 validation is enabled by default, which is a little excessive in my opinion (especially the line < 80 chars rule). It would be great to be able to disable individual rules, à la Eclipse/Netbeans.

All in all it looks very nice, thanks for sharing.

masukomi 7 hours ago 0 replies      
Am I the only one who's really wishing there were some real screenshots to check out before downloading the thing?
jlujan 8 hours ago 0 replies      
On Mountain Lion, it requires X11. Not sure why, as my PyQt apps do not.
rxc178 8 hours ago 2 replies      
This is nice, but one quick question: why's the Windows installer in Spanish?
dmd 2 hours ago 0 replies      
Crashes on launch for me.
nirvanatikku 8 hours ago 0 replies      
Crashed while scrolling =( Was curious, but can't see myself moving away from PyCharm/Sublime.
azinman2 4 hours ago 1 reply      
Tried it out on existing code. Was complaining that spacing wasn't a multiple of 4, when I set it to 2 spaces in the prefs. I even reloaded it and verified the setting.

Back to Sublime!

pablosanta 5 hours ago 0 replies      
It keeps crashing on me. I'm on Lion. :(

Looks good though. I thought it was going to be YET ANOTHER ECLIPSE distribution, but apparently it's not. It seems to be pretty fast. Hope they fix the crashing issue on Lion soon.

btipling 7 hours ago 0 replies      
It can't seem to create or open JavaScript files. How does one use it with Django?
indiecore 8 hours ago 0 replies      
Nice, it would be good to have some screenshots and stuff though, I'll definitely check it out.
neil_s 4 hours ago 0 replies      
The name of the IDE emphasizes that it's not just yet another IDE, and yet I don't see anything new here, or any difference from existing IDEs, other than heavy Python support.
jotaass 3 hours ago 0 replies      
Just tried it. Looks nice, but a bit lacking on the code completion, I think. Maybe I need to give it another chance.

Also, I think it would be nice if there was a way to interact with the console after running a script. I realize this may be sort of an odd request, but it is very convenient when you're not quite sure how you want to solve a problem and you need to try out some solutions interactively. I greatly enjoy this in Spyder, my current Python IDE of choice.

DodgyEggplant 8 hours ago 1 reply      
Wing IDE is great
ninetax 8 hours ago 0 replies      
It would be great to see some screen shots.
silasb 8 hours ago 0 replies      
Is this based on QT Creator?
zdanozdan 8 hours ago 4 replies      
What's wrong with emacs?
gfosco 9 hours ago 1 reply      
As soon as I see the words "cross-platform" on an IDE, I'm no longer interested. Looks really nice though, they did a good job with branding.
Avoiding "the stupid hour" rachelbythebay.com
181 points by greenyoda  3 days ago   47 comments top 14
cstross 3 days ago 4 replies      
Some folks can pull all-nighters; not only can I not stay awake for more than 24 hours (I literally keel over sideways and faceplant on the floor, snoring) but if I try working more than 10 hours, I run into stupid hour syndrome. And in my later post-programmer life as a writer, if I write past a certain point (roughly 4500 words of fiction or 6000 words of non-fiction) in a day, then for every 1000 words past that point, I end up having to bin and re-write about 1500 words the next day. Because it's unmitigated crap.

Limits, folks: we have them. Learn and respect them and don't try to be macho about it, because it doesn't help.

chrisacky 3 days ago 2 replies      
It's really hard to emphasise how important it actually is to identify those moments when your productivity starts to wane...

I'm sure so many of us have this mindset where we all think we are indestructible, mentally and physically, and burning through the nights to be that guy who "gets shit done" is as important as shipping your first line of code.

When I was a lot younger, I couldn't identify poor code even during my best hours, so I could happily burn through an all-nighter, but after years of experience you get the wisdom to be able to notice when you aren't switched on. For me, this is usually at about 8pm at night (after working for a full 12 hours with minimal breaks). You start to notice that your concentration flickers and something that should have taken 15 minutes has actually taken you 2 hours and it's now 10pm (and you have fifteen tabs of Hacker News open).

Do yourself a favour and just stop. Come back refreshed. Whether that is in an hour, or even a full night. Unless you have some insane deadline that doesn't depend on code quality, it's inadvisable to ever burn through it... because ultimately you are wasting time that could be better used on recuperating your faculties!

chaz 3 days ago 0 replies      
There's also the "hero night." It's pulling off an all nighter to accomplish a seemingly impossible task with a hard deadline. This happens about once a year, and it was in April for me this year. The company had 24 hours for an opportunity to be on national TV, but we needed a landing page built, with a contest, and a Facebook integration. Small company means you gotta take the chance.

Grabbed my earbuds, fired up my Spotify, and got to work. Got it done and deployed a couple of hours before airtime in the morning. It worked great. Unfortunately, the TV spot didn't pan out quite the way we were told, and it didn't provide much value after all, but everything I built worked. I would have felt terrible if the opportunity was in fact huge, but what I had built was subpar.

It's a nice feeling to be the hero, even for yourself. But it's also easy to overvalue a success like that and assume that it will always be that way, and always be the hero. Unfortunately, it turns into "stupid hour" most of the time.

naner 3 days ago 0 replies      
The worst part about the self-inflicted "stupid hour" is that your decision-making faculties are already working poorly by the time you decide whether to keep going or not.

It reminds me of the tragic comedy of trying to overcome a bad habit. The stress from abstaining from the vice causes a strong impulse to seek solace in the very vice you're trying to quit.

Plan ahead for moments of weakness. I actually have a "stop-hacking" alarm on my computer... gives me a brief warning to finish what I'm doing then it locks the screen 60 seconds later.
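A sketch of such an alarm (this is an illustration, not naner's actual setup; the desktop commands used here, notify-send and loginctl, are assumptions that vary by platform):

```python
import subprocess
import time

def stop_hacking_alarm(delay_seconds=60, notify=None, lock=None, sleep=time.sleep):
    """Warn the user, wait out a grace period, then lock the screen.

    The default notify/lock commands assume a systemd-based Linux desktop;
    both are injectable so the sequencing can be exercised without a GUI.
    """
    if notify is None:
        notify = lambda msg: subprocess.run(["notify-send", "Stop hacking", msg])
    if lock is None:
        lock = lambda: subprocess.run(["loginctl", "lock-session"])
    notify("Screen locks in %d seconds -- wrap up." % delay_seconds)
    sleep(delay_seconds)  # brief warning to finish the current thought
    lock()
```

Scheduling something like this from cron keeps the decision out of the hands of your already-tired evening self.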

jackcviers3 3 days ago 6 replies      
There's a lot of talk about this subject as if the practitioners of coding long past the point of optimal productivity have a choice not to code when they are at that point. I don't think most of us actually have a choice - it is often a case of ship or lose customer x. Promises are made that can't be kept with humane working hours. Things break days before a huge demo. Someone gets sick. You are in an arms race with a well-funded competitor.

Something I would like the programming world to discuss is that it isn't the best-coded product that wins - technical quality rarely matters. Usually the first product to market wins, or the programmer who kicks out the most features, as long as the features work. Shipping isn't just a feature. It is the only code quality metric that matters to anyone who isn't a coder. It is also easier to ask forgiveness for refactoring after the ship date than it is to ask permission to push the date back. All-nighters are ingrained in the practice of programming for a living. The code may not always be robust or elegant, but in most cases what matters is that it works and gets done faster than the other guys' high-quality code.

derwiki 3 days ago 0 replies      
I never understood why people treated working through sleep deprivation as a badge of honor. Glad to see that others agree!
pcl 3 days ago 5 replies      
> Have you ever come back to a project and been unsure of where to get started? If you had left off just one item sooner the day or week before, you'd already have a known starting point.

I can't find a cite for this right now, but I've heard this phrased as "park facing downhill." Towards the end of the day, I actively try to get myself into a position where I've put together the beginning of an idea and gotten some failing test cases written, or at least some non-compiling pseudocode into a buffer somewhere. This sets me up for success at the beginning of the next day -- I work right into a state of flow while tying together the loose ends from the day before.

jayferd 3 days ago 1 reply      
Just realized that I'm in "stupid hour" right now. Going to bed.
jasonjackson 3 days ago 1 reply      
These articles pop up all the time, as if people hadn't had the foresight to realize their brain function decreases when they don't sleep. Sleep deprivation is a tool which allows you to sacrifice some degree of brain functioning (different for each person) to gain in other areas like meeting hard deadlines, or taking advantage of your programmer flow state, or the positive feeling you get knowing you grinded away at a task non-stop until completion.
lnanek2 3 days ago 0 replies      
A lot of times, with a no-sleep weekend hackathon or a launch or something, I can work a long time on the tough parts first, clearing up all the technical risk in a design with little test screens/pages/activities for example, and then the next day, running on no sleep, I'm still OK for testing on a dozen different phones/emulators/browsers, making forms return nice-looking error messages, making all the buttons on the site look consistent, etc. Basically, you just have to delegate the lousy slog work to your stupid-hour self. :)
gurkendoktor 3 days ago 2 replies      
I have this, but a 20 minute nap or a walk to 7/11 usually fixes it. What happens to anyone in this thread when they try that?
bitteralmond 3 days ago 0 replies      
The last bit about the "subconscious processing" is spot on. I read a study once that found that people daydream/drift off around 30% of the entire day, and the people who do not resist doing this and allow themselves to dream are more creative and productive as a result.
lostnet 2 days ago 0 replies      
I think a large problem is in our psychology of not wanting to scrap our previous labor even if it was substandard.

At this point I hope most coders are checking in code regularly enough that they could identify a point close to where quality declined and could throw everything after it out.

Personally, I usually start a day with a review of the previous changes, but I rarely back out low-quality changes. I often realize a continuation/rework has taken longer than a full backout and redo would have (and I have virtually never been disappointed by a redo), yet there is a psychological barrier to overcome before backing out.

kiba 3 days ago 1 reply      
I thought it's "Pulling all nighters"
McAfee's Third World Travel Guide whoismcafee.com
178 points by Tombar  3 days ago   104 comments top 21
melling 3 days ago 1 reply      
I spent 9 months backpacking from Guatemala to Buenos Aires. There are thousands of people who do this sort of thing every year, and you'll meet lots of expats. In fact, until you hit Colombia, you probably can get by without knowing much Spanish. Personally, I would just skip this article, buy the Lonely Planet and live a little. Some places that I'd recommend seeing:




http://wikitravel.org/en/Cusco -- You hang out here when going to Machu Picchu

If I were going today, I'd probably stop in Santiago a see what's going on with StartUp Chile: http://startupchile.org/

nlh 3 days ago 5 replies      
I find reading articles like this exhausting. I've got to imagine being a criminal/sketchball while on the road in whatever country you're in is equally exhausting.

Is it not just possible to travel abroad, carry proper documentation and a bit of cash, and enjoy yourself? Does every situation really require constant vigilance: knowing when to run or not, how to make eye contact, when to make excuses, etc.?

Perhaps I'm superbly naive. And perhaps I've just not seen enough of the world, but I've gotten along just fine without having to resort to cloak-and-dagger behavior everywhere I go. Sure, checkpoints happen in some places. If you're pulled over, you should have a legit passport and a few dollars if you're asked to pay. But only if you're asked.

I feel like some people ask for trouble wherever they go. McAfee seems like one of those people.

Am I nuts?

stevoski 3 days ago 11 replies      
I've been to 93 countries. All continents. Places travellers would normally not contemplate visiting. All independently. I've _never_ had to bribe someone. In all the years of doing this, I've had an official try to shake me down maybe 5 times.

I think McAfee's advice is way off.

geekfactor 3 days ago 1 reply      
It may just be that I don't have McAfee's cojones, but much of his advice herein seems like a surefire way to, at best, end up locked up in some South or Central American prison for the rest of your life; at worst, end up face down in a ditch someplace.
ghshephard 3 days ago 11 replies      
Is there anyone who has lived in one of these South American areas like Belize for a long time able to confirm any of this system of paying bribes to police officers at traffic stops?

Also, is his statement about police 'planting drugs' just so much self serving nonsense, or has anyone ever had a police officer actually do that?

The entire essay sounds somewhat specious to me...

dmmalam 3 days ago 0 replies      
Most of this is irrelevant if you're a tourist; I've backpacked through 40+ countries and have only a few times needed to provide any 'documentation'. However the word is needed, you can take the initiative and get away with things you're not supposed to do, like bringing alcohol into Colombia's national parks, or bumping long ticket lines!

Where the OP is completely accurate is on doing any business (illicit or not). I have much family in India who own several large businesses, and the level of corruption needed just to run a company is insane. After a certain size, you pretty much need to be a little socialite, keeping several dozen relationships well greased. It's completely pervasive and everybody knows about it; to western eyes it's insane.

mahmud 3 days ago 2 replies      
Do this if you want to be an abrasive dipshit who the host community rejects. This guy is a colonialist-tourist, not a traveller. You can feel his contempt for the people and the lands he is "visiting" seething through.
mcdowall 2 days ago 1 reply      
I spent a month across the Yucatan in Mexico, Belize and around Costa Rica 5 weeks ago, I didn't experience anything like this at all, I sense an element of desperation, anger and blatant bullshit amongst this post.

Belize was a really warm welcoming country, I've travelled every continent and its up there in my top 5, so to read this is so contrary to my image of a wonderful country.

sergiotapia 3 days ago 3 replies      
"If your contraband is drugs, offer them a small hit while talking. It reinforces, subconsciously, the idea that the dope is your possession and that they are partaking due entirely to your good will. If you are transporting sex slaves, then I must say first that I cannot possibly condone your chosen occupation, but offering each one of the policemen a taste of the goods may well seal the deal without any additional cash thrown in."


What a piece of shit.

eli 3 days ago 0 replies      
Flashing bogus press credentials is not cool. It makes it that much harder for actual members of the press to do their job when there are fake reporters running around working on self-serving fake stories.
jacquesm 2 days ago 0 replies      
There is a very simple rule to travelling in unsafe places: don't attract attention to yourself. McAfee failed that rule from the second he set foot in Belize.
vlokshin 2 days ago 2 replies      
The domain was registered 11/16

No one, even in the slightest, is doubting the validity of this blog?

b6 3 days ago 0 replies      
> As all of my close friends know, I have not always been a teetotalling, drug fighting citizen.

He's pretending he wasn't talking about plugging MDPV on bluelight.ru recently?

amtodd 2 days ago 0 replies      
I have lived in Mexico for the past year and a half and have just moved to Guatemala. During my time here I have driven my Mexican plated car across Mexico three times, across Belize once, and Guatemala twice.

Depending on the area you can either go a whole day driving without being stopped or be pulled over ten times in an afternoon.

Paying bribes has mostly been for things I have done wrong: no seatbelt, no insurance (Belize), not having my license on me etc...

When I first moved to Central America I hated the idea of paying bribes. I hated the idea of such obvious corruption. Now, if I'm in the wrong, I welcome having the ability to pay a small amount of money to avoid what would, in my own country (Canada), be a certain large fine and possibly having my car towed and impounded.

I have had yelling matches with Mexican border guards at the Belizean border demanding an exit fee which doesn't exist, taking my passport and threatening not to return it if I don't pay. The majority of tourists that cross the border just pay the $20 without questioning it.

I've had an M16 shoved into my body and been surrounded by a group of cartel members with threats of cutting out my tongue. (Which turned out to be their way of playing a joke to scare me, before cooking my girlfriend and me dinner and getting us drunk, sitting around on a beach at night while they balanced automatic rifles on their laps.)

I've spent an hour on the side of a desolate highway at 2 in the morning in Belize, smoking cigarettes and working out a bribe with drunk police who had pulled us over for not having insurance in their country (we crossed the border at 8 at night, their insurance office at the border had closed at 7, and we tried to make it across the country overnight). We ended up talking them down from $400usd to $20 to hire their services for a police escort to Orange Walk and for helping us find a hotel to stay in until we could purchase insurance in the morning.

The majority of people visiting these countries will never have a negative experience. If you decide to spend any time living in one of these countries like John, then you will most likely, eventually, run into some sketchy situations.

Margh 3 days ago 0 replies      
Without giving a second thought to whether or not the events described are commonplace I thought the article gave some great insights into the psychology involved if/when you get shaken down, both for you and the officers.
Evbn 2 days ago 0 replies      
After reading this thread, I don't see what value there is in South America or SE Asia that can't be bettered by a big-screen TV and some National Geographic, Food Channel, and BBC DVDs.
contingencies 2 days ago 0 replies      
Maybe this guy has just done way too much coke.

For a better travel guide, see http://www.artoftravel.net/

hn-miw-i 2 days ago 1 reply      
Absolutely fascinating. Very helpful advice that you wouldn't read in a mainstream travel guide. Unfortunately corruption is everywhere and knowing how to respond and knowing the local customs is very important if you wish to keep your skin.

John's tale grows more epic every day, and I am really looking forward to the comic/graphic novel. McAfee is a true adventurer, and I hope the injustice of his ordeal is brought to light.

wavesounds 3 days ago 1 reply      
Love this, someone needs to make a movie about this guy.
swah 2 days ago 0 replies      
Reading this and trying to decide if Brazil is third world.
littledot5566 2 days ago 0 replies      
What is the deal with Taiwanese sex slaves? Why the special mention?
Color schemes for Sublime Text 2 and Textmate github.com
176 points by usaphp  4 days ago   80 comments top 24
flyosity 3 days ago 1 reply      
Nice themes! It's pretty cool to see all these screenshots with the background color leaking into the tabs... I'm the designer of the Sublime Text UI theme, and Jon was adamant about making the tab background use the theme's background color. It was really, really hard (lots of PNGs, lots of switching between PNGs depending on the luminosity of the background color, lots of pixel tweaking) and it's not perfect for all luminosity values, but these screenshots make it look pretty snazzy :)
msluyter 3 days ago 4 replies      
I think these are all quite nice, however... am I the only person out there who likes comments to show up in a nice bright color (my preference is usually a bright terminal green)? The faded-comment look that most of these share is really hard for me to read.
danberger 3 days ago 1 reply      
Open Terminal...

  cd "/Users/dan/Library/Application Support/Sublime Text 2/Packages"
  git clone https://github.com/daylerees/colour-schemes

In Sublime -> Preferences -> Color Scheme

Brajeshwar 3 days ago 2 replies      
I really love the Tomorrow Theme - https://github.com/chriskempson/tomorrow-theme
Luyt 3 days ago 0 replies      
I use a very simple color scheme for Textmate. Basically black text on a light-yellowish background, with only string literals and comments in a subdued color. I have found that syntax highlighting with many colors distracts from the code. http://www.michielovertoom.com/incoming/textmate-colorscheme...
giu 3 days ago 1 reply      
By changing the general theme, the background color of the build panel won't change automatically (screenshot: http://i.imgur.com/35N6Z.png).

You can change the build panel's background color (in Ubuntu) by editing the ~/.config/sublime-text-2/Packages/Theme - Default/Widget.sublime-settings file and replacing the current line in the file with

   "color_scheme": "Packages/Color Scheme - Default/<insert-your-current-theme>.tmTheme"
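For context, that line lives inside a small JSON settings file; the whole file would look something like this (the scheme path is illustrative; point it at whichever .tmTheme you're actually using):

```json
{
    "color_scheme": "Packages/colour-schemes/YourScheme.tmTheme"
}
```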

leak 3 days ago 1 reply      
Does anyone know of a theme for Sublime that looks exactly like Chrome?
daylerees 3 days ago 1 reply      
Hi all,

Dayle Rees here, glad you like my themes! If you do, please remember to star the repo! Also if you like some of the themes, but would prefer them slightly different, please let me know using the github issues feature and I will create an alternate version! Also taking requests the same way!


sirn 3 days ago 0 replies      
Seems like this is now available in Sublime Package Control via the "Dayle Rees Color Schemes" package name.
antihero 3 days ago 0 replies      
Obligatory plug for my own Dogs colour scheme: https://github.com/radiosilence/dogs-colour-scheme
metastew 3 days ago 0 replies      
I'm a huge fan of Phoenix Theme and Color Schemes. https://github.com/netatoo/phoenix-theme

It's based off the Soda Theme. Currently I'm using the Tomorrow-Night color scheme, but I'm digging the Dark Green and Dark Blue too. You can also configure the color of folders and tabs.

safetyscissors 4 days ago 2 replies      
Any vim ports?
nnq 3 days ago 0 replies      
Sweet... but every time I find a new favorite color scheme, it only lasts for a week max... after this, some part of my brain pulls me into changing back to my good ol' Visual Studio-like color scheme... wonder if this happens to anyone else too.
niyazpk 3 days ago 1 reply      
What is the font used in the screenshots?
hayksaakian 4 days ago 3 replies      
Install instructions would be helpful.
mihaifm 3 days ago 0 replies      
wildranter 3 days ago 0 replies      
Very well balanced color palettes, beautiful. Thanks for this!

Does anyone know how to convert them to Xcode? I googled around but didn't find anything to get the job done.

kentwistle 3 days ago 0 replies      
Awesome selection, I especially like the font!

There are plenty of themes listed here as well: http://textmatetheme.com/

Benferhat 4 days ago 0 replies      
The font color for comments is too close to the background color -- could use some more contrast. This goes for almost all of the included themes.
photorized 3 days ago 1 reply      
Looking at these makes me want to write code again. Clean, elegant code. Weird.
joria 3 days ago 0 replies      
At http://wbond.net/sublime_packages/community (the Package Control website), there are tons of color schemes (just search for "scheme"), with their corresponding GitHub repo (most of them with screenshots to preview).
madprops 3 days ago 0 replies      
i'm using carbonight now because i'm a boring person
azat_co 3 days ago 0 replies      
Gorgeous, many thanks!
magg 4 days ago 0 replies      
can you add the default theme from smultron/fraise??
Gmail and Drive - a new way to send files gmailblog.blogspot.com
174 points by neya  9 hours ago   69 comments top 20
guelo 7 hours ago 9 replies      
Totally off topic, but Blogspot is just awful. Why does everything have to be a complicated, buggy JavaScript app? There's nothing wrong with serving up good ol' HTML pages, especially for simple text-and-image content like a blog.
munin 8 hours ago 1 reply      
> Have you ever tried to attach a file to an email only to find out it's too large to send?

Yeah! Some jerk who runs my MTA set the size of acceptable attachments really low! I wonder who did that...

$ host -t mx mydomain.com

mydomain.com mail is handled by 0 aspmx.l.google.com.

Oh... I see.

simonsarris 8 hours ago 0 replies      
This is lovely. Very welcome.

Sending and sharing files are two of those things that are just now sluggishly rolling over to discover that it's a new millennium.

Dropbox and Drive are making great strides lately and I'm really thankful for it. Using Dropbox to have the same "folder" across three computers is the first time synced sharing ever felt intuitive enough for my (71 year old) father to regularly use, and now he can use this to reliably send larger files to people without any worry of fouling up permissions (that would otherwise be difficult for him to understand).

WayneDB 7 hours ago 2 replies      
I never liked the idea of hosting my own files on someone else's server (Dropbox) or sending them through a middle-man.

That's why I just run my own "cloud" on my own premises. If I want to give someone access to a file, I just throw it on my Synology DiskStation and the receiver can get at it via an FTP or HTTP client.

revelation 6 hours ago 1 reply      
So can we use that to send binaries to people? Because Gmail will absolutely not allow you to do that. They will go as far as inspecting archives to look for binaries, and will ban you from sending them.
stephenhuey 8 hours ago 0 replies      
This is long overdue. I've been inserting links to Google Docs (the old name for Drive files) into emails forever, but plenty of people I know don't realize how easily they can do that and give up if a large file cannot be attached to an email. I'm also surprised by how many Gmail-using friends of mine don't even know there's some hefty free file storage a click away even though the link to it has been at the top of their Gmail for years.
tedmiston 7 hours ago 2 replies      
A welcome feature, but we can't ignore the paradigm shift's tiny repercussion: once the sender deletes the file, the receiver will no longer be able to access it (assuming they've lost, deleted, or not yet downloaded their own copy). Lately I've used shared Dropbox folder links for larger attachments, but the same problem seems to persist with any hosted solution. A solution that pleases both the sender having control over their files and the receiver having long-term access is tough to imagine.
danbarker 7 hours ago 3 replies      
I've been paying for Google Drive for several months because I really, really want it to work, but it's actually kinda useless, as it causes constant instability and 120%+ CPU load on my 2012 MacBook Pro. This means that I frequently close the application down, so it's not actually covering me, and if I lost my computer, the most recent files probably wouldn't be covered. There's been an open issue about this in the support forums for months, and there's no news on when they're going to fix it...
csmatt 7 hours ago 1 reply      
It's about time!

I use Google's cloud-based services for as much as I can, but it's still not seamless, and it's annoying when I have to open a new window to access a service run by the same company providing the one on the page I'm on.

Next step: Please allow me to easily save PDFs and other documents directly to Drive from a URL. I shouldn't have to download a file to my device and then upload it to Drive.

goronbjorn 5 hours ago 0 replies      
There is a really good third-party Chrome extension that effectively does this already and also works with Box and Dropbox: https://attachments.me/
yason 8 hours ago 2 replies      
This is how email could work too. The sender would host it (by himself or in cloud) and the recipients go fetch it when they want to read it. Updates and comment threads all collect into the same place. No spam either since nobody would be pushing tens of megabytes of messages to your inbox.
ivanb 6 hours ago 0 replies      
Is this minuscule feature worth the front page?
kamakazizuru 7 hours ago 0 replies      
This is awesome! It might also just tip the scales from Dropbox over to Drive. I can't believe something so obviously powerful took so long! I do hope it will allow me to share files with non-Gmail users as well!
kissickas 6 hours ago 1 reply      
> Now with Drive, you can insert files up to 10GB

Hmm, how much space do I have in there now?

0% of 5 GB used... Now it makes sense.

fudged71 7 hours ago 1 reply      
Question: so with this, I can send an attachment and change the file before the recipient opens it? Will they see if it has been modified? Will I see when they have accessed it?
agumonkey 8 hours ago 0 replies      
I wonder if this will cause storage optimisations on their data centers.
benaiah 5 hours ago 1 reply      
So, in other words, Gmail just added a feature that Hotmail/Outlook.com have had for years.

*golf clapping*

mitko 8 hours ago 0 replies      
Plug: my friend built a Chrome extension that does a superset of that. It's called Cloudy and integrates with filepicker.io, which lets you choose files from multiple cloud storages:

disclaimer: I work for a Google competitor

facorreia 9 hours ago 0 replies      
Seems very useful. I bet I'll be using that a lot.
stephengillie 8 hours ago 1 reply      
Sorry for being pessimistic, but any speculation on the vulnerabilities this connection opens?
Raided 9-Year-Old Pirate Bay Girl Came To Save Us All torrentfreak.com
172 points by cyphersanctus  2 days ago   71 comments top 10
mtgx 2 days ago 3 replies      
This "piracy war" is starting to look more and more like the drug war. I could see how, in the US especially, if marijuana is legalized, all those agencies which would then be left out of work could refocus on raiding "pirates", barging in and shooting people's dogs, and whatnot. Then this parody might become a reality (the part with the girl at the end):


baddox 2 days ago 1 reply      
> But in what kind of parallel universe does a professional, western police force think it's appropriate, proportionate and a good use of tax-payers' money to send officers to a citizen's home for a petty file-sharing issue, one involving the downloading of a single music album?

That's just it. It's not a usage of taxpayers' money; it's a usage of government money that just so happens to have been taken forcefully from taxpayers. And when you phrase it as "government money", it's not at all surprising that it's used this way. Just look at the relationships between anti-piracy groups and government.

robryan 1 day ago 2 replies      
I think for most of us we have gone past the issues with semantics: file sharing is not the same as stealing.
What I am more interested in is how content producers can be fairly rewarded in this new world, and what the future of content production looks like if there is less money for producers. Technology destroying some of the middlemen should leave producers better off, even with a smaller pie, which will help a bit.
File sharing has been around a long time, and yet more quality content than ever seems to be produced; maybe the incentives are less of a problem than I think.
pcote 2 days ago 0 replies      
The problem with this matter is that it's been done before. In 2000, pictures of Elian Gonzalez face to face with SWAT forces caused only limited outrage. The public was exposed to that case so much that they pretty much got numb to the situation and were downright sick of hearing about it before the raid happened. If anything in overall US-Cuba relations changed, it probably had nothing to do with the kid.

It's not that different with regard to file sharing. We've been hearing horror stories over extreme anti-piracy tactics for close to 15 years now. Your average 20 year old doesn't know of a world where this sort of thing doesn't happen. So in this kind of environment, I just don't see how one little girl is going to change anything.

yason 1 day ago 0 replies      
Eventually this will just make people transfer to, for example, I2P torrents or something.

It would take one good effort to bundle the I2P codebase, the required plugins, and an I2P BitTorrent client such as Robert into a single application: one that launches with one click of the mouse, needs no further configuration, and provides a browser view to the I2P torrent trackers as well as the BitTorrent client itself (or the equivalent hops for some other onion-style network). Then you're pretty much set for genuinely anonymous BitTorrent masses.

These systems, such as I2P and Tor, are designed to be resilient against oppressive governments, so the MAFIAA just doesn't have a chance if the traffic goes underground. What next? Would the MAFIAA try to make it illegal to use your computer for anything other than connecting to pre-approved websites with MAFIAA-approved browsers? Gimme a break.

GoRevan 2 days ago 4 replies      
Hopefully this girl will create a paradigm shift. All of this anti-piracy prosecution makes me feel like I'm in a dystopian future where listening to music and watching movies is forbidden. :(
cyphersanctus 2 days ago 1 reply      
"Because the public are angry, politicians will be nervous too, and uncooperative politicians are bad news for tougher copyright law. But in the short term anyone sent a “pay-up-or-else” letter from CIAPC (if they even dare to send any more) will be thinking long and hard about paying. The chances of the police coming next time must be slimmer than last week.

And the fact that they will be able to thank a child for that is why this is some of the best news all year."

guard-of-terra 2 days ago 0 replies      
It's not good to have a nine-year-old girl used in propaganda.
We can't prevent this from happening, but we should not force it.
It's halfway as bad as using child-porn fear to censor "pirates." Even half of that is still very bad.
madao 2 days ago 4 replies      
Here is the thing: someone has gone ahead and spent time and effort to create and sell something, and someone else has then gone out of their way and attempted to steal it.

That being said, any form of piracy is in effect stealing. If you were to go down to the local store, steal a product from the shelves, and make a run for the doors, you would be caught, handed over to the police, charged, and taken before the courts.

Now, are the methods being used by the record companies correct? Probably not. But do they have a right to try to protect their profits from the looters and moochers of the world? They sure do.

I think digital media is the way of the future, especially being able to access it from anywhere in the world with little or no effort.

I just think that piracy in this sense has been taken for granted for much too long. We should work towards naming it as it should be named, stop getting as up in arms about it as we do, and just pay for what we use instead of running off to the local torrent site and downloading the shit out of it.

scotty79 1 day ago 1 reply      

I read this comic when I was a child. I was appalled by the cruelty of this scene. It was about illegal artifacts from a different time, but it really comes to mind when I'm reading the story of that girl today.

D3, Conceptually - Lesson 1 hazzens.com
168 points by hazzen  2 days ago   32 comments top 12
natch 2 days ago 2 replies      
It looks great, but like so many websites focused on a tech project, it lacks a "what is D3" section. I read the preface, and I see that "D3 is a powerful tool." OK. And "D3 is really a beautiful little library," and it solves the problem of turning data into documents. Yes, but how?

I'm not saying I can't figure it out from the code; I can. But still, a "what is D3" section would help.

The site looks like a really nice mix of prose explanation and code examples. And D3 looks very intriguing, whatever it is. OK, I'm off to Google now.
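For what it's worth, the short answer is that D3 binds arrays of data to document elements, creating, updating, or removing elements so that they match the data. Here is a rough plain-JavaScript mock of that core "data join" idea; the names are illustrative stand-ins, not the real D3 API:

```javascript
// Hypothetical mock of D3's data join: reuse existing elements for data
// that already has them, and create elements for the surplus data.
// Element objects here are simplified stand-ins, not real DOM nodes.
function join(existing, data, create) {
  const update = existing.slice(0, data.length);          // elements that receive new data
  const enter = data.slice(existing.length).map(create);  // data with no element yet
  return update.concat(enter);
}

const divs = join(
  [{ tag: 'div', id: 'a' }],               // one element already "in the document"
  [4, 8, 15],                               // three data points
  (d) => ({ tag: 'div', height: d * 10 })   // how to build an element from a datum
);
// divs → the existing element plus two new ones with heights 80 and 150
```

The real library does this against the DOM (or SVG) and adds transitions, scales, and layouts on top, but the pairing of data to elements is the heart of it.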

grabastic 2 days ago 0 replies      
(I might have missed it in your tutorial, but if not...) It might be worth noting for your readers that attr, style, property and similar methods can accept an object as the argument. Might save people a tiny bit of typing...

x: function (d) { return d.x; },
y: function (d) { return d.y; },
width: function (d) { return d.w; }

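To make the shape of that concrete, here is a rough standalone sketch of how such an object argument could be resolved against a bound datum. `applyAttrs` is a made-up helper for illustration, not part of D3:

```javascript
// Hypothetical helper: resolve an object of attributes against one datum.
// Each value may be a constant or a function of the bound datum.
// Illustrative only; not a real D3 API.
function applyAttrs(datum, attrs) {
  const resolved = {};
  for (const name of Object.keys(attrs)) {
    const value = attrs[name];
    resolved[name] = typeof value === 'function' ? value(datum) : value;
  }
  return resolved;
}

const rect = applyAttrs({ x: 1, y: 2, w: 3 }, {
  x: function (d) { return d.x; },
  y: function (d) { return d.y; },
  width: function (d) { return d.w; },
  fill: 'steelblue',
});
// rect → { x: 1, y: 2, width: 3, fill: 'steelblue' }
```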

vuknje 1 day ago 0 replies      
I like the tutorial. One thing that catches my eye though is the unnecessarily complex example from the beginning:

.text(function() {
  var index = Array.prototype.indexOf.call(this.parentElement.children, this);
  return (index % 2) == 0 ? 'Even' : 'Odd';
});

No need for calculating the index variable - it's already provided as the second argument of the function passed to the text() method:

.text(function(d, index) {
  return (index % 2) == 0 ? 'Even' : 'Odd';
});
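That `(datum, index)` signature holds across D3's accessor callbacks. A tiny standalone mock of how a selection might invoke such a callback; `selectionText` is illustrative, not the real `selection.text()` implementation:

```javascript
// Hypothetical mock of how D3 invokes an accessor: once per bound datum,
// passing (datum, index). Not the real selection.text() implementation.
function selectionText(data, accessor) {
  return data.map((d, i) => accessor(d, i));
}

const labels = selectionText([10, 20, 30, 40], function (d, index) {
  return (index % 2) === 0 ? 'Even' : 'Odd';
});
// labels → ['Even', 'Odd', 'Even', 'Odd']
```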

bslatkin 2 days ago 2 replies      
While learning D3, I was surprised to find that it's a good general system for manipulating the DOM. Using SVG is just the prettiest application of it.
shashashasha 1 day ago 1 reply      
Scott Murray has published some great tutorials on D3.js (http://alignedleft.com/tutorials/d3/) and has written an O'Reilly book about it as well, "Interactive Data Visualization for the Web": http://shop.oreilly.com/product/0636920026938.do
enjalot 2 days ago 2 replies      
This is a nice tutorial. I like the order and accessible way it introduces selections and data binding, which are some of the harder things to get into when starting with d3.
Groxx 2 days ago 0 replies      
This is exceptionally readable, many thanks! Partly because the language is succinct and well written, partly because it doesn't try to teach me JavaScript :| Please please please continue, and consider writing others; you're good at this.
rustc 2 days ago 2 replies      
Borderline off-topic:

What are the best open-source charting libraries available right now (in terms of looks)? D3-based or not, although I'd prefer D3, as it would be easier to extend.

I'm looking for something as pretty as Highcharts, to use in open-source apps.

path411 9 hours ago 0 replies      
Looks nice. I'm not sure if I like the semantics of the .data selector, especially since, coming from jQuery, I would expect it to have more to do with data attributes.
denzil_correa 1 day ago 0 replies      
One topic I wish you would cover, in a subsection of your articles, is offline saving of graphs generated with D3.js [0].

[0] http://stackoverflow.com/questions/12719486/d3-js-graphs-out...
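A common approach, as the linked question discusses, is serializing the generated `<svg>` node and saving the markup. One fiddly part is making that markup open as a standalone file; here is a rough sketch of that step, where `standaloneSvg` is a made-up helper and the serialization itself would use the browser's `XMLSerializer`:

```javascript
// Hypothetical helper: given SVG markup (e.g. produced in the browser by
// new XMLSerializer().serializeToString(svgNode)), make it a standalone
// .svg file by ensuring the SVG namespace and prepending an XML prolog.
function standaloneSvg(svgMarkup) {
  const withNs = svgMarkup.indexOf('xmlns=') !== -1
    ? svgMarkup
    : svgMarkup.replace('<svg', '<svg xmlns="http://www.w3.org/2000/svg"');
  return '<?xml version="1.0" encoding="utf-8"?>\n' + withNs;
}

const file = standaloneSvg(
  '<svg width="10" height="10"><rect width="10" height="10"/></svg>'
);
// "file" can now be written to disk and opened on its own in a browser
```

Inline styles are another wrinkle: D3 charts styled via external CSS lose their styling when saved this way unless the styles are copied onto the elements first.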

rdudekul 1 day ago 0 replies      
D3 provides awesome data-driven visualizations. However, the code samples on the d3js.org site, though good, have no real explanations. I am hoping these tutorials will fill that gap.
dysoco 2 days ago 1 reply      
I was expecting some concepts for version 3 of the D language.
Changing times for web developers amazedsaint.com
168 points by amazedsaint  3 days ago   140 comments top 26
johnyzee 3 days ago 10 replies      
This is against the grain, but it is obvious to me that the way forward for web development is to rise above JavaScript and simple DOM mangling, which is what most of these popular tools assist with. JavaScript does not scale with complexity, and manipulating DOM elements directly is both error-prone and a lousy programming paradigm.

We need something in between that offers a sane development model and deals with the complexity and anachronism of the underlying platform. GWT cross-compilation is an excellent example. It has enabled the painless development of complex Javascript-based web UIs[1], with the tool support of any other software development project. This is what I'm going to look to for the future of web application development, not patchy solutions to the complete mess that is barebones Javascript development.

For examples of what I have done with GWT: TeampostgreSQL (http://www.teampostgresql.com), a rich PostgreSQL web interface, and my HTML5 game engine (http://www.webworks.dk/enginetest).

EDIT: By the way, it is only a matter of time until we have complete canvas-based UI libraries, frameworks, and tool suites akin to Flex (probably from Adobe, too). When that happens, the web will really have arrived as a rich client platform. I would be very surprised if there aren't a few projects in this space nearing completion at this point, since the underlying technology is basically ready.

PommeDeTerre 3 days ago 3 replies      
"Changing times"? What exactly is he talking about? Many of the things he mentioned have been pretty standard, even among the least-knowledgeable web developers, for years now.

jQuery has had pretty significant traction for 4 or so years now.

Crockford's work is extremely well-know, as well, and has been for some time now.

Minifying JavaScript and CSS files isn't new, nor are REST and HTML5.

The times did change, but it looks like he's still just catching up with where the rest of us were years ago.

mmaunder 3 days ago 5 replies      
Agree with half, but don't worry about:

JS MVC frameworks: MVC in JS is almost always overkill.

HTML5: Most of the web doesn't have support for it yet.

Optimization: Sure, but don't pre-optimize; go looking for the tools once your app tells you it's slow. Also, minified JS is great for saving a tiny bit of bandwidth and for obfuscation, but damn, it's a pain to debug your live site.

ceautery 3 days ago 2 replies      
A couple of reactions: You guys seem to hate each other a lot, and love javascript frameworks. Me, I've tried to snipe at my fellows a lot less on sites like this, which has improved my online experience, and I prefer to learn standards over frameworks.

The kind of discussion going on here is reminiscent of old timey C vs. java vs. perl, or maybe vi vs. emacs slashdot discussions from the late 90s: pointless. Focus on the code, not the tool, it will make you a better engineer.

For the comments about the web not being ready for HTML5 yet because it is too young: nonsense. Every phone supports HTML5, and so does every Apple computer. Every time someone goes to Google, they are offered an HTML5 browser. My non-tech friends are mainly on Chrome and Firefox on their Windows machines, and only my older relatives who want to mash a button to get pictures of their grandkids are using IE... of course, your mileage may vary.

As for the comments about humility being the same as getting overlooked in a competitive world, I disagree. Focus on the code, not on developing a cult of personality. If your work stands out, and you can solve problems other people can't, you need to beat your chest a lot less.

mddw 3 days ago 7 replies      
I'm always amazed to see how people who give advice (good advice, in this case) are totally unable to follow it on their own websites.

296 HTTP requests. 1.85 MB transferred. YSlow grade D.

So yeah, this is good advice. In fact, the OP should follow it if he wants to "survive".

Swizec 3 days ago 3 replies      
Is CoffeeScript a higher level of abstraction than JavaScript? Whenever I've taken a casual look at CoffeeScript, I've come away with the impression that it was just syntactic sugar.
edanm 3 days ago 3 replies      
Does anyone have any recommendations for good REST books? For example, is the book cited in the article good?

I understand the basics of REST, but I want a deeper understanding. Also, I still regularly encounter situations where I'm not sure what the "best" thing to do is (collections of items, linked items, etc.: how do you represent these with REST?).

Toshio 3 days ago 2 replies      
I couldn't help but notice the self-aggrandizing "most valuable professional evah" logo front-and-center on this guy's blog, so I feel compelled to add the 7th tip to his list. Here goes.

#7 - Learn the value of humility.

jopt 3 days ago 0 replies      
This almost feels dated, like many recent articles making the same point about web development trends. Unfortunately, a lot of this is still news to a lot of practicing professionals.

When it comes to coding in general, and especially the web, I find (admittedly anecdotally) that many people who appear to be in the know are living roughly five years in the past. In a recent discussion, a friend explained that JavaScript is an example of a strictly client-side language.

I suspect many developers are delayed by books and classes, paradoxically, even though all the information on the new sexy things is theoretically a click away.

danso 3 days ago 1 reply      
As others have pointed out, it's hard to take this post seriously because of how poorly the site is implemented...but beyond that, the advice seems either painfully obvious or outright counterproductive.

Moreover, it depends what kind of web development you want to do.

If you want to work as part of a team in a top shop, then sure, know frameworks and write perfectly linted code. However, if you're a freelancer who survives by making small commercial sites, then you're working with people who don't care about a third of a second of difference in load times. Or they'll care way more that you get a button hover animation to look slick than they will about downloading jQuery uncompressed. And if you're a freelancer or outside party, you're not going to be able to insist that their IT use your deploy processes anyway, so minifying/jammiting in a productive way may not even be an option.

gexla 3 days ago 2 replies      
Pretty good list. Some comments here mention this is obvious or dated, but I think that in the wild a lot of devs aren't doing these things.

I think there are still a lot of back-end devs this very much applies to. If you are a back-end dev who has been able to get away with not knowing CSS (and possibly even JS) well, then you need to fix that deficiency. For example, I typically work with a team of developers where I rarely have to touch CSS issues, but on my own projects I get a lot of enjoyment out of it, maybe just because it's a change.

Client side MVC is overkill in a lot of cases, but when doing client work you will come across these, so this is a good suggestion.

Optimization is something that can get left out if you have a dev team in which nobody picks up that piece. For me, I don't do things the client hasn't authorized payment for, and I'm generally busy enough that I hit those paid items and then immediately switch to another project. Often my client is another developer who has pieced together a team. Nobody gets paid to do the optimization, maybe because the main developer is sloppy, lazy, or just doesn't know. It's just one of many details that should be covered but is often left out because of deadlines or a tight budget.

louischatriot 3 days ago 1 reply      
Good guidelines overall, although I find that learning five client-side MVC frameworks may be a bit too many.
speg 3 days ago 2 replies      
#1 has a bunch of links for JS but none for the CSS part of its title. What are some good CSS resources for a developer who typically isn't suited for design?
tangue 3 days ago 0 replies      
I remember when I was coding ASP sites with tables. We were happy because the CSS hover property was implemented for the first time in IE4. Web development is always changing. We are professional sandcastle builders.
rizzom5000 3 days ago 0 replies      
Tips to learn to survive? I don't know how anyone can even begin without familiarity with most of these. Some of them are somewhat mind boggling though. "...familiarize yourself with at least five..." MVC frameworks? What?
se85 3 days ago 1 reply      
This reads to me like a "6 steps to becoming a better web developer" article because the author completely fails to talk about anything "new".
Volpe 3 days ago 1 reply      
7. Progressive Enhancement.

It should be mentioned more. The current trend in tech is leaving it behind, for no good reason... There should at least be a debate on it, but it seems it's in the "too hard" basket right now.

Legion 3 days ago 0 replies      
Interesting that learning something besides .NET didn't make the list. (Or anything back-end at all, for that matter)
wildranter 3 days ago 1 reply      
The web is a mess. Where's the news in that?

Code once, run on all platforms is pretty much a myth no matter which framework you use, including the web ecosystem.

I've lost count of how many times I've wished we could run a decent language like Python or Ruby in the browser. Or describe the data of documents in something more meaningful, like JSON, instead of HTML. And then there's the DOM, CSS, and all the browser-specific nonsense.

Can I just ignore this crap and code my applications already?

amazedsaint 3 days ago 0 replies      
Since there are a lot of comments here, I thought I'd clarify a few points in that article.

1) About clean separation of concerns.

A lot of customers expect you to cleanly separate your client-side JavaScript/CSS/artifacts from your server-side implementation, even to the extent that you can just take them and repackage them with minor modifications using a container like PhoneGap, and distribute them for mobile devices later. HTML5's significance goes beyond the web: it can take your app beyond the browser.

2) About the REST Layer

You are investing in building a web application anyway, so you need to ensure the plumbing portion is reusable beyond your traditional 'website'. If you want to build a native phone application or a Chrome plug-in tomorrow, you should be able to use the same service layer.

znowi 3 days ago 1 reply      
What I like about web development is that it is always changing. Each day something new to learn and try. Those are nice tips, but hardly a revelation for the HN crowd. A bit surprised it's the top story.
aberratio 3 days ago 1 reply      
Nice list of skills that web developers should have. But the general advice "Learn Your Craft Well" has nothing to do with changing times. (The times are always changing, aren't they?)

On the content: Anoop seems to work more on the back-end side of the web, and as a software architect. It is not uncommon for people who specialize in the back end to be unfamiliar with current front-end standards, so his advice might be addressed to those folks?

pcl 3 days ago 0 replies      
It's a good list. One nit:

> websites are expected to work in different form factors by default

Ironically, I found the font size of the article to be on the small side when browsing on my iPhone.

pjbrunet 3 days ago 0 replies      
Bling for your LinkedIn profile.
_bear_ 3 days ago 0 replies      
Here are some things you should probably learn. And in 2 or 3 years time, you'll have to learn more things.
rietta 3 days ago 0 replies      
Overall, I think the OP's article is excellent.