Hacker News with inline top comments - Best - 28 Jun 2015
Same-Sex Marriage Is a Right, Supreme Court Rules nytimes.com
1885 points by imd23  2 days ago   1200 comments top 57
1
curiousgeorgio 1 day ago 63 replies      
Well, I know I'm probably a minority in saying this, but I'm disappointed - not because I don't think everyone should have access to the government rights attached to marriage, but because it seems our country doesn't actually want to fix problems at the root.

What is the root problem? People on both sides of the debate agree (if given the option) that the government probably never should have messed with marriage, at least not as the cultural/religious thing that it is.

In a nation where we care so much about the separation of church (broadly defined to include ideologies that may not be formal religions) and state, I don't understand why we're seeking to only expand that connection.

What should happen is the government should stop defining marriage of any form (leave that to religion or personal tradition), and simply define all these rights under civil union (or a similar phrase with no significant religious/cultural attachment).

2
mkr-hn 2 days ago 5 replies      
For me and a lot of friends and family, marriage equality. Yay.

"It is now clear that the challenged laws burden the liberty of same-sex couples, and it must be further acknowledged that they abridge central precepts of equality . . . Especially against a long history of disapproval of their relationships, this denial to same-sex couples of the right to marry works a grave and continuing harm. The imposition of this disability on gays and lesbians serves to disrespect and subordinate them. And the Equal Protection Clause, like the Due Process Clause, prohibits this unjustified infringement of the fundamental right to marry." (page 22, from the coverage on SCOTUSBlog)

For others, the opinion also reasserted that people who are really mad about this can continue to be mad and vocal about it, as guaranteed by the First Amendment.

"Finally, it must be emphasized that religions, and those who adhere to religious doctrines, may continue to advocate with utmost, sincere conviction that, by divine precepts, same-sex marriage should not be condoned. The First Amendment ensures that religious organizations and persons are given proper protection as they seek to teach the principles that are so fulfilling and so central to their lives and faiths, and to their own deep aspirations to continue the family structure they have long revered. The same is true of those who oppose same-sex marriage for other reasons. In turn, those who believe allowing same-sex marriage is proper or indeed essential, whether as a matter of religious conviction or secular belief, may engage those who disagree with their view in an open and searching debate." (page ~32)

The full majority opinion [PDF]: http://www.supremecourt.gov/opinions/14pdf/14-556_3204.pdf

3
samatman 2 days ago 20 replies      
While this is excellent news for my gay and lesbian friends, I see no progress on polygamy.

Which, unlike same-sex marriage, is an institution with deep roots both in America (the Mormons were forced to give up this sacrament as a condition of statehood) and in the majority of world cultures, where it ranges from condoned to celebrated.

Without getting unduly personal, let's say that I have a stake in that question being resolved. I know several triples living quietly among us; they face the same kind of problems (child custody, hospital visitation, inheritance rights) as same-sex couples faced prior to this decision.

What the polygamists of the nation lack is a powerful lobby. <shrug> One may hope that nonetheless, reason and freedom will prevail here as well.

EDIT: nation, not world. Worldwide the situation is different. America is suffering from its Christian legacy here. Most Christian countries are adamant about denying this right to their citizens.

4
BurningFrog 1 day ago 7 replies      
I'm glad we'll have full gay marriage now.

But the idea that "the Constitution guarantees a right to same-sex marriage" is pretty laughable.

Does anyone really believe this right was in the Constitution for 250 years, only to be discovered recently? In reality public opinion and culture changed, and 5 justices decided to change the law.

5
baakss 2 days ago 7 replies      
In my lifetime, I think this is the social issue that has seen the most positive change.

As a kid, gay people were practically lepers.

Eleven years ago, Dave Chappelle just outright said "gay sex is just gross, sorry it just is", and it was considered funny and acceptable. (Not harping on him specifically, just pointing out what it was like in 2004)

Seven years ago Prop 8 passed, if barely, with some caveats about lack of understanding.

And now? SCOTUS upholds gay marriage and it's socially reprehensible to mock homosexuality. It's a strange and very positive feeling watching a country's world view shift like this.

6
agd 2 days ago 4 replies      
Great decision. It's amazing how quickly gay rights have advanced in the western world.

Unfortunately it's still legal in many states to discriminate against employees on the basis of sexual orientation. Hopefully that's next to be fixed.

7
ajays 1 day ago 0 replies      
Rant time.

This talk about "church's definition of marriage", etc. is a red herring, and just a couched way of saying "we don't like homosexuality and homosexual behavior".

I got married in India. In a ceremony presided over by a local priest. There was no "church" involved. But guess what? No Christian here (in the US) has ever doubted the authenticity of my marriage.

And then I got divorced in the US. The courts here had no problem recognizing my marriage, even though it was performed in some other country, by some unknown religious authority. The officials had no hesitation in breaking up this marriage. Why don't we require the Church's blessing to break up a marriage (I am aware that Catholics have a certain process of appealing to the Pope, but not all churches do)?

If you don't support the idea of the government getting involved in marriage, you shouldn't support the idea of government-approved divorces either! Go to your church and get a divorce!

8
jscheel 2 days ago 4 replies      
I've never really given credence to the people that suggested a ruling would be a slippery slope. However, after reading the opinion for myself, I can see how the court's stance on marriage (opposite-sex and same-sex) can now be extended to polygamy and incest. I understand the need to define it as a fundamental right within the context of this ruling, but it seems that some of the wording opens the way for other marriage relationships that are not explicitly defined in the court's opinion.
9
sytelus 1 day ago 0 replies      
One of the interesting things is how severe the disagreements between the Supreme Court justices are[1]. You might think these judges debate with cold logical arguments and finally either understand the other person's argument or manage to convince the others of theirs. Instead, what we are seeing is judges literally and personally attacking other judges on the same panel and accusing them of derailing the very constitution and democracy they are expected to protect. Can they make any more serious accusations? It's also very interesting that the justices' votes were highly predictable based on their political leanings and which administration appointed them. This just boggles my mind. First, why should we allow any person strongly conforming to a political ideology to be a Supreme Court justice? Why should a political leader, who has almost always represented a certain political ideology, even be able to appoint a Supreme Court justice?

1. http://www.nationaljournal.com/domesticpolicy/marriage-same-...

10
malkia 2 days ago 2 replies      
"No union is more profound than marriage, for it embodiesthe highest ideals of love, fidelity, devotion, sacrifice,and family. In forming a marital union, two people becomesomething greater than once they were. As some ofthe petitioners in these cases demonstrate, marriageembodies a love that may endure even past death. Itwould misunderstand these men and women to say theydisrespect the idea of marriage. Their plea is that they dorespect it, respect it so deeply that they seek to find itsfulfillment for themselves. Their hope is not to be condemnedto live in loneliness, excluded from one of civilizationsoldest institutions. They ask for equal dignity in theeyes of the law. The Constitution grants them that right."
11
slayed0 2 days ago 4 replies      
Great news. Glad this won't be an issue going into the next presidential election. There's been so much time wasted on this issue when, at the end of the day, any 2 consenting adults should have all of the same rights as any other 2 consenting adults (barring felony conviction, poor mental health eval, etc).
12
tosseraccount 2 days ago 8 replies      
If love is about ignoring race, class, gender, and age, shouldn't it be about ignoring numbers, too?

Polygamy is the next hurdle for society.

The judges should declare all consenting marriage legal!

13
pavanky 2 days ago 4 replies      
As a non-American, does the US government have an official definition of marriage that is going to change today?

I understand the social consequences. But what legal rights have been granted now that did not exist previously?

14
tptacek 2 days ago 0 replies      
Congratulations to everyone who worked on this. A truly epic win.
15
jfaucett 1 day ago 0 replies      
I'm sad to see this happen, not because LGBTs can marry; in my personal opinion, I think they should be able to. I just dislike systems in general that force any global mandate, and prefer those in which entities can freely compete against one another to decide between themselves what works and what doesn't. The two-party, winner-takes-all system the USA has right now is already too totalitarian IMHO, and this just seems like another step in that direction.

I think in the long term all less efficient, less happiness-producing systems will lose out anyway. All you need to do is ensure that your system protects the minority enough so that they can freely compete and coexist with all the rest.

But I'm a libertarian, so I think all laws regarding sexual relations between consenting adults should be repealed whether gay or straight...

16
troycarlson 2 days ago 5 replies      
Hopefully this whole "issue" will just go away soon...so many more important things that deserve the public's attention.

Edit: To clarify, I support marriage equality and believe this is great news that deserves celebrating.

17
amyjess 1 day ago 0 replies      
I'm really happy about this.

Not just because I'm a lesbian (I'm happily single, so no marriage for me regardless), but because it's a fundamental right that shouldn't be denied to anyone. It really warms my heart that everyone can finally marry the people they love.

18
Lawtonfogle 2 days ago 4 replies      
Is there a general summary of the arguments made by the dissenting 4?
19
braythwayt 2 days ago 1 reply      

> Lawyers for the four states said their bans were justified by tradition and the distinctive characteristics of opposite-sex unions.
Reads like they dusted off the old arguments in favour of slavery and/or segregation and remixed the recipe for marriage.

Take a dollop of tradition says it should be so, and sprinkle on distinctive characteristics to taste.

20
chippy 1 day ago 0 replies      
I seem to find it funny when people talk of things killing "marriage". Divorce kills marriages, not more marriages! Let there be more happy marriages!
21
mjhoy 2 days ago 0 replies      
The oral arguments are worth a listen, from back in April.

http://www.oyez.org/cases/2010-2019/2014/2014_14_556

Mary Bonauto argued it perfectly, I thought.

22
habosa 1 day ago 0 replies      
Pretty crazy to think how far America has come on this and related issues. America isn't so good at changing as soon as we should but we've made a lot of social progress in our short ~300 years.

The other day I was watching Eddie Murphy's "Delirious" special [0]. It's widely considered one of the best standup specials ever and it's Eddie in his prime. But he spends the first ~5 minutes just spewing anti-gay jokes. Not hateful stuff, but just saying over and over how he's scared of gay people, etc. And he was probably the biggest star in the country at the time and at the peak of his abilities as a comic. I don't think that would go too well now (even if it is comedy).

Anyway, congratulations to anyone who was previously unable to get married and can now do so. It's a real victory for the good guys.

0 - https://en.wikipedia.org/wiki/Eddie_Murphy_Delirious

23
PebblesHD 1 day ago 0 replies      
I have nothing to say other than good. This is how it should always have been. To all the people who disagree, you are entitled to your opinion, and to all those who can now celebrate their love with all the peace and respect of any other person, congratulations and I wish you all the best.
24
bcheung 1 day ago 0 replies      
Just my 2 cents.

The government should not enforce marriage contracts; they should enforce legal contracts. Marriage is a private matter, not a government matter.

They shouldn't officially recognize marriage at all, nor should they discriminate on marital status for tax purposes. All people should be equal in the eyes of the law.

People should be free to enter into legal contracts with whomever they want.

25
empressplay 2 days ago 1 reply      
Hopefully Australia's next!
26
cpursley 1 day ago 0 replies      
Fantastic. Now the next step is to get government out of the marriage business completely (which was the case before the 1900s). Free people should not need to obtain a license from governments to enter a voluntary social contract.
27
trhway 1 day ago 0 replies      
As this is a technical nerds' forum, let's throw in a bit of the technology aspect of it: today's ruling lays the groundwork for protecting the interests of children who will be "born", or more specifically created, using something like a DNA mix of the two same-sex parents (with, at least in the initial stages of the technology, perhaps a biologically minimal amount of bio-material from an opposite-sex donor thrown into the mix).
28
moioci 1 day ago 0 replies      
Just wanted to point out that certain counties in Alabama have announced that they are getting out of the marriage license business. To avoid sanctioning same-sex unions and avoid legal trouble, they won't issue licenses to any couples. That way they can't be accused of discrimination.
29
WorldWideWayne 2 days ago 1 reply      
Gay marriage is a controversy that is constantly used to distract people from much bigger problems.

The ruling that had vastly more wide-reaching effects this week is that they upheld the terrible "Affordable" Care Act. This act is a capitalist abomination of much more well-thought-out socialist single-payer plans. Now that health insurance companies have a state-granted monopoly, there's no reason to bring prices down or change anything.

30
Pxtl 1 day ago 0 replies      
As ecstatic as I am about this, I'm disappointed that it had to come from the Supreme Court instead of the American voters. I do believe that gay marriage is an obvious extension of the non-discrimination clause, so it's perfectly appropriate for the Supremes to act on this. I guess I'm just more disappointed in the voters than anything else.
31
jayess 2 days ago 6 replies      
My partner and I are discussing getting married, but the federal income tax "marriage penalty" is giving us serious pause.
32
MrZongle2 2 days ago 3 replies      
I'm a straight conservative (more fiscal than social, and sure as hell not Republican any more), and my reaction to this is: "meh". The writing has been on the wall for a while now, and anybody surprised by this hasn't been paying attention.

Any social conservatives on HN -- that is to say, both of you -- should keep in mind that if you're worried about how this affects the sanctity of marriage, that institution has long since been sullied by a) allowing government to get involved with it, b) easily-obtainable divorces and c) that whole Henry VIII business. Same-sex couples can't possibly do any more damage than that.

Of the threats to American culture or even Western Civilization as a whole, the SSM boogeyman pales in comparison to a feckless electorate, unaccountable government with Big Brother aspirations, crushing debt and even Islamic extremists.

That's why my reaction is "meh": as a "problem", SSM isn't even on the radar.

33
fweespeech 2 days ago 3 replies      
Yeah, I'm still slightly pissed they basically waited until the majority was clear before they'd rule on the matter. It's another clear sign our Judiciary is really just as political as the politicians are, even if no one says so openly.

> As late as October, the justices ducked the issue, refusing to hear appeals from rulings allowing same-sex marriage in five states. That decision delivered a tacit victory for gay rights, immediately expanding the number of states with same-sex marriage to 24, along with the District of Columbia, up from 19.

> Largely as a consequence of the Supreme Court's decision not to act, the number of states allowing same-sex marriage has since grown to 36, and more than 70 percent of Americans live in places where gay couples can marry.

34
mason240 1 day ago 0 replies      
I for one am glad that we got the opportunity to show, state by state, that we supported same-sex marriage.

An earlier SCOTUS decision would have taken away our ability to show consensus on the issue.

I'm glad that I was able to personally vote "No" to an amendment in MN that would have banned marriage rights.

35
Liquix 2 days ago 7 replies      
"A recent Gallup poll found that 60 percent of Americans an all-time high support extending the same rights and privileges to same-sex marriages as traditional ones."

only 60% of US citizens support same-sex marraige? in my area it is more like 90-95%, this was surprising and sad for me to hear

36
malkia 1 day ago 0 replies      
Came to work today, opened the internal web page and saw this - #ProudToLove - https://www.youtube.com/watch?v=WSiehK2asbI
37
sbt 1 day ago 0 replies      
America continues its slow march forward. Congratulations.
38
jakeogh 1 day ago 0 replies      
Why is the gov in the business of approving who is married anyway? There is no reason to need "licenses".
39
interlocutor 1 day ago 1 reply      
Given this ruling, what do you now teach children? Do you tell your 5-year-old that "when you grow up you marry either a man or a woman"? Are both choices to be equally preferred? Or do you say, "by default" you marry someone of the opposite sex, but if you choose someone of your own sex that's OK? Is it OK to teach that being straight is "the default"?
40
daily_dose_420 1 day ago 1 reply      
This has nothing to do with technology.
41
ForrestN 2 days ago 1 reply      
On my birthday in April, my partner Andrew proposed to me in a Minion mascot costume, handing me a solid gold Ring of Power from Lord of the Rings, which was set into a Chicken McNugget, while a live vocalist sang "I Say a Little Prayer for You." When we get married I will feel so grateful to have fallen in love in the present.
42
danceswild 2 days ago 0 replies      
this is great! congratulations US!
43
dataker 1 day ago 2 replies      
Beyond YAY!'s, the state is saying it will 'let' you love someone of the same sex.

Can someone sense the creepiness in this? It tells you how/who/why you should love.

It still doesn't include certain groups and it could revoke such 'rights' in other circumstances.

44
once-in-a-while 1 day ago 0 replies      
What's next? I would like to marry myself!! When will I finally be able to do that?

Do I have to wait another century?

45
acd 1 day ago 0 replies      
I'm happy for this decision, way to go!
46
3zzy 1 day ago 2 replies      
Honest question: Do gay couples have to commit adultery to produce children? Or rely on straight couples to produce children whom they can adopt?
47
DonHopkins 1 day ago 0 replies      
This is going to mean a lot of jobs for database programmers!

http://qntm.org/gay

15 years ago we solved the Y2K problem. Now we've got to solve the SQL2Gays problem!

48
Cyberis 1 day ago 1 reply      
So, shouldn't plural marriage, intra-family marriage and inter-species marriage be made legal as well?
49
innguest 1 day ago 0 replies      
Gay people finally earned the right to have their relationships meddled with by the government. Let's see if they remain "gay" (happy). Watch the parades dwindle!
50
brudgers 2 days ago 3 replies      
The title of the post is incorrect.
51
smackfu 2 days ago 9 replies      
>the rest of the developed world.

Some countries where gay marriage is illegal: Germany, Italy, Australia, Japan, Austria, Switzerland, Greece.

52
nlake44 2 days ago 0 replies      
I'm happy for the gay community. At least the Supreme Court is not fully corrupted.
53
urda 1 day ago 0 replies      
I'll tell you it upsets me to see so many homophobes here on Hacker News attempting to argue that those in same-sex situations should be denied the right to marriage.

Hacker News you disappoint me.

54
michaelsbradley 1 day ago 5 replies      
A sad day for our country; our highest court and much of our culture have succumbed to social insanity. For my part, I will never under any circumstances recognize by word or deed that any so-called marriage can be or has been established between two persons of the same sex. No government on earth has the power or authority to do that, any more than a legislature could repeal the law of gravity or a judge could declare a human person to be a toaster oven. With kindness and charity, but with firm resolve, persons of good will should give civil disobedience to this court ruling and all that flows from it.
55
anti-shill 2 days ago 7 replies      
good...now maybe the so-called "leftists" can put this issue aside and concentrate on issues that matter to more than about 1% of the population...I refer to these really "silly" issues like wages, single-payer healthcare (obamacare is a mess), trade, flooding of the labor supply, the drug war, worker rights and benefits, more time off and vacation, monopolies, regulatory capture, NIMBYized zoning/regs (aka affordable housing)...you know, the issues that matter to all americans, the issues that used to occupy leftists before they got sidetracked with The Most Important Issue Of All Time.
56
octatoan 1 day ago 1 reply      
My goodness, the barely-concealed hate in this is shocking. It's a very, very sad thing these people do. And I can't help but find "black robes" a bit telling, although maybe that's just some sort of bias or psychological effect.

http://www.afa.net/the-stand/government/rainbow-jihadists-of...

57
anon4327733 1 day ago 1 reply      
What a crappy ruling. Yes the answer they got was right, but they never bothered to rule that sexual orientation is a (partial) suspect class. Now we have to wait for another ruling to get that resolved. Stupid lazy evaluation of courts.
Atom 1.0 atom.io
1084 points by mrbogle  3 days ago   435 comments top 72
1
bdcravens 2 days ago 7 replies      
Congrats to the team for reaching this important milestone.

I'm a Sublime license holder, but I use Atom as much as I can, because the more open source can win, the better.

However, yesterday I was doing some complex regexes (porting a random SQL dump file into a seeds.rb), and Atom kept dying, whereas Sublime was pretty much instantaneous.

I'm not doing the usual "Atom is slow" drum beating, but some undertones of the announcement make me worry a bit. I hear discussion of things like Electron and "social coding" as the future, and I hope that doesn't mean anyone considers 1.0 to equate to the core editing experience being finished. It's not, and I hope the Atom team continues to iterate before moving on to new features.

Being able to open files larger than 2MB isn't sexy, but it's necessary. Having to hard-kill my editor because the save dialog is trapped on my other full screen session that it won't let me get to deserves more than a "but it's open source" response.

tl;dr congrats team and your core users want the best editor possible over bells and whistles

2
harperlee 2 days ago 8 replies      
Wow... just downloaded the Windows installer version and it autoinstalled itself wherever it saw fit, without asking; it installed shortcuts on the start menu, placed itself on an already bloated contextual menu for several file extensions as an "Open" option instead of under "Open with...", etc.

I usually install software in my user folder on the work laptop, as I don't have enough privileges. This time the installer worked, but why override the questions to the user, like install location, etc.? There's a standard for Windows installers; why did they ignore it? Not cool.

3
reledi 2 days ago 2 replies      
Congratulations GitHub and Atom team!

Atom is my favourite editor for coding in, and it just keeps getting better.

I introduced my team to it today (pre 1.0 release, this is a nice surprise) and they were surprised by how pleasant the experience was - just a few minor hiccups. We've tried a bunch of editors and usually stick with Sublime because it's easiest to use while pairing, but I think that will change now.

Sorry for the tough HN crowd, you can never please them.

Here's to Atom 2.0 <3

4
joefitzgerald 2 days ago 7 replies      
The killer feature of Atom to me is the ease with which it can be extended (via packages) and the openness to community contribution on core features. That's not a knock against any other editor (some of which share similar characteristics in this regard); it's just what draws me to Atom.

It's super easy to hack on and contribute to.

5
joshburgess 2 days ago 5 replies      
I've gotta say, honestly, I'm preferring Visual Studio Code over Atom simply due to the fact that it seems MUCH more stable and lightweight. Atom is very visually appealing, and I'm a fan of the project, in general, but it constantly freezes up and crashes on me. I think I'll be sticking with VS Code & Sublime.
6
discreditable 2 days ago 2 replies      
I find it to be a bit depressing that software bloat has advanced to the point that we have text editors plagued with performance issues.
7
cnp 2 days ago 1 reply      
I made the switch yesterday not knowing 1.0 would be released today, and I am seriously psyched. Their Vim bindings are now good enough, and there are tons of tweaks you can do to make them better (which will inspire more people to contribute). I found a few bugs here and there related to installing/removing packages (just checked, and 1.0 fixes them), but nothing major, and I was able to migrate my mammoth .vimrc configuration file over the course of an afternoon, with everything I need having already been developed by the community. Super fast, too.

Also! I was able to create the colorscheme of my dreams in about 15 minutes, thanks to the dev tools integration.
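
(For anyone wanting to try the same setup, the Vim bindings install is a one-liner. A minimal sketch, assuming apm, the package manager bundled with Atom, is on your PATH; vim-mode was the community Vim-bindings package at the time.)

    # install Atom's community Vim keybindings via apm
    apm install vim-mode
    # custom .vimrc mappings are then ported by hand into ~/.atom/keymap.cson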

8
ahoge 2 days ago 3 replies      
Eh. Why is 1.0 out already? Keyboard layouts which use AltGr are still broken.

E.g. I can't type '@', '\', and ''. Yes, I can't write metadata annotations or escape some characters.

9
Walkman 2 days ago 2 replies      
Atom is the ONLY editor which cannot handle my keyboard layout properly, so I can't write brackets :D ([]) I reported it on the first day when the alpha came out; still no fix for this. Let me put it this way: I CAN'T WRITE BRACKETS IN A TEXT EDITOR. LOL
10
sneak 3 days ago 0 replies      
I had initially dismissed the whole thing as folly as JavaScript is really stupid, but I had someone install Atom the other day and came to learn that it is really quite an impressively great editor these days.

The ease of finding and installing themes and plugins is unparalleled.

Considering trying it for a week or two as my daily driver (with vim mode, of course.)

11
drvortex 2 days ago 1 reply      
I am loath to install something like Atom, with a 4 MB limit on file size and an 80 MB install size, when Sublime Text does the same or better in an 8 MB install package with no limits.
12
Mahn 2 days ago 7 replies      
I really want to like Atom but it's such a pity that they went for CoffeeScript instead of plain ES5/ES6.
13
smaili 2 days ago 4 replies      
Just downloaded and opened Atom for the first time, and I have to admit the look and feel is amazing! Fantastic job to all those involved!

However, one thing that stands out to me, the file size of Atom.app is 203MB!! How in the world can a text editor be that large? Compare that with MacVim, which is about 27MB.

14
franciscop 2 days ago 2 replies      
I am a paying customer of Sublime Text, but I will give Atom a try. They both seem really similar feature-wise, but Atom is open source, something I care about. Also, it's based on web technologies, which is really cool (although I've heard it's not so fast).
15
mattdeboard 2 days ago 0 replies      
For anyone wondering, Python indentation of new lines inside lists, tuples, etc. is still broken (https://github.com/atom/language-python/issues/22). However, it looks like hitting tab at least allows you to manually indent the line, which is a passable bridge until it works properly. Auto-indent will completely remove any manual indentation, however, which seems like something that should be fixed.
16
engi_nerd 3 days ago 5 replies      
Is it still limited to only editing files < 2 Megabytes? For most people that is not a showstopper, but it may be for some.
17
hitlin37 2 days ago 3 replies      
I have been using it for 3-4 months now and it's getting there each month. Sometimes the plugins fail, but this could be because of the rapid development cycle of Atom. On the positive side, it's a good text editor: easy interaction, plugin install super easy. A downside is that it's not easy on your RAM though. It consumes more than what vim would do. But overall, if you need a modern editor, this is the way forward.
18
jokoon 2 days ago 0 replies      
Have they fixed the sluggishness?

The installer is almost 10x as large as the Sublime Text installer.

Please leave "web technologies" where they belong.

19
jmtucu 2 days ago 1 reply      
I don't know why, but it is very slow on my computer, and I have 8 GB of RAM and an i3. This message appears every time: http://i.imgur.com/hTZixD8.png
20
solveforall 2 days ago 0 replies      
I was lucky enough to get a key to use Atom when it first came out. I was not very impressed at the time, with its limited capabilities, so I ended up switching to Brackets for a while.

Later, in my search for an editor that handles EJS, I rediscovered Atom. It really has improved since it first started. AFAIK, Atom and Sublime are the only editors that handle EJS. I also use Atom to edit JS, JSX, Gradle, and FTL, which all work well too. Still, I stick to IntelliJ for most programming languages since I haven't found a way to get code completion, reference jumping, etc. to work on Atom.

Very impressive work from the Atom team and the contributors!

21
octref 2 days ago 2 replies      
Congrats!

I started using Atom a year ago, but at that time it was very unstable and the performance sucked, so I switched back to Vim.

This 1.0 still leaves something to pine for: some of the essential packages are still not updated for the 1.0 API (vim-mode, etc.), and it still slows down significantly when processing large files, but as they say, it's now a good foundation to build upon.

22
swasheck 2 days ago 0 replies      
I so desperately want to like Atom (and Code), but I have a window refresh issue with it (https://discuss.atom.io/t/display-does-not-refresh-when-focu...).

This issue is still present in the current release. It seems like a minor annoyance but when it happens it really kills my productivity.

23
jscheel 3 days ago 12 replies      
I still haven't given Atom a go. Is it worth switching from ST3?
24
BenjiSujang 2 days ago 1 reply      
It's only fashion. There're no hard facts why someone should prefer it to vim, notepad++ or sublime or even a proper IDE like a Jetbrains product or Visual Studio.
25
grandalf 2 days ago 2 replies      
Congrats! I still prefer emacs but perhaps not for long (I use both day to day)...
26
bobbles 2 days ago 1 reply      
Maybe something worth pointing out..

I wanted to download this but after clicking every link I still hadn't seen a way to do it anywhere...

Obviously if I go to the homepage now the first thing I see is a big download link, which is great.

I think a 'download' link on the site though would be good since if anyone links ANYWHERE else it's hard to find.

27
zippy786 2 days ago 0 replies      
Seriously, creators of Atom: what feature did you not find in any of the current text editors that you had to build one? Let's have an editor in every language!!!
28
usaphp 2 days ago 2 replies      
I tried Atom; the only things I liked about it were the design and color scheme. The rest is superior in Sublime Text, so I just went ahead and created a theme/color scheme for Sublime which matches Atom [1]. Atom is laggy; even basic file navigation using arrows can be slow sometimes (and I have the latest retina MacBook Pro).

But the biggest issue for me is battery usage: it reduces my battery life on the RMBP15 by 2 hours compared to Sublime Text. I am mostly working from remote places, and having good battery life is vital for me.

[1] - https://www.evernote.com/shard/s21/sh/cc73487c-08c9-4937-ac6...

29
typedweb 2 days ago 0 replies      
Both emacs keybinding emulation packages are sub-optimal. One misses C-p, the other misses C-e, both basic editor movement commands. Not impressed.
30
Yhippa 2 days ago 0 replies      
I have this way of picking technologies where I will try a bunch of them at the same time and naturally gravitate to the one that works best for me. After using Notepad++, ST2, and Atom I feel that Atom works best for me. I rarely have to use Google to find out how to use some features and it's reasonably snappy.

I do need to give Visual Studio Code a fair shot. Heard a lot of good things about it.

31
niuzeta 2 days ago 2 replies      
I remember when the Atom beta came out I was turned off because it (presumably) didn't run on Windows (my workstation at the time).

Then I tried to give it another go a few months ago but gave up because I'd heard so many horror stories about performance issues.

Now I'm willing to give it yet another try because of the vim-bindings and performance improvements. Is it at a workable state?

32
logn 2 days ago 0 replies      
With the following Atom community packages, I basically have a Rust IDE: linter, linter-rust, build, language-rust, racer
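
(A minimal sketch of that setup, assuming apm, Atom's bundled package manager, is on your PATH; these are the community package names as listed above.)

    # install the community packages that together approximate a Rust IDE
    apm install linter linter-rust build language-rust racer
    # the racer package typically also needs the racer binary installed
    # and the Rust source path configured for completions to work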
33
mparramon 2 days ago 0 replies      
I wrote a small guide to set up Atom for web development:

www.developingandstuff.com/2015/04/setting-up-atom-for-rails-development.html

34
sagarjauhari 2 days ago 0 replies      
Huge milestone! Congrats.

And in case you're wondering about the video, well, it's this 50-year-old documentary, "The Home Of The Future: Year 1999 A.D.": https://www.youtube.com/watch?v=0RRxqg4G-G4

35
LoSboccacc 3 days ago 7 replies      
What's up with name reuse these days? Atom (the syndication format) may be on the verge of becoming obsolete, but is it already forgotten and irrelevant enough to warrant a name reuse?
36
amoney 2 days ago 2 replies      
So where do I get that nifty terminal they were using in the video?
37
snarfy 2 days ago 2 replies      
The site is severely hammered right now. I'd like to try it again to see if the performance issues have improved.
38
shadowmint 2 days ago 0 replies      
woo! awesome work. I use atom for everything now.

 ...but realizing the full potential of Atom is about more than polish. We're considering questions such as: What does super deep git integration look like? What does "social coding" mean in a text editor? How do we enable package authors to build IDE-level features for their favorite language?
O_o you what?

Things I'm interested in ---> A hackable, fast, extensible editor.

Things I have no interest in at all ---> 'Super Deep' github integration, 'social' coding in my text editor.

Don't get me wrong, Atom's a great piece of work, and making it extensible for building custom tooling is really great, but what on earth are you talking about?

I hope this is just 'and now we're going to make some plugins' talking...

39
wodenokoto 2 days ago 1 reply      
A lot of people complain about the slowness of the DOM/JavaScript backend, but I see a lot of potential for some really cool things, like integrated Jupyter notebooks, semi-WYSIWYG rendering of LaTeX and Markdown, and maybe drawing rendering trees or other creative things.
40
dbbolton 2 days ago 0 replies      
It's nice that they provide a .deb, but a signed repository/PPA would be really great.
41
gaoshan 2 days ago 0 replies      
Downloaded it and added Facebook's Nuclide plugin suite, but many things seem to not work. The Mercurial plugin doesn't appear to function at all, and frequently the config section where you list installed plugins just seems to hang without loading anything. I'll stick with WebStorm (favorite), Brackets (favorite "Atom-like" editor), and vim (favorite command line). Atom seems too buggy to me.
42
Dingler 2 days ago 0 replies      
Pretty cool intro vid. A breath of fresh air from the typical startup/product videos I encounter everywhere. I'm excited to see how this progresses. The development appears to be extremely active, with frequent + quality updates. Spotted the new-ish Office Code Pro[1] font in use too...

[1]https://github.com/nathco/Office-Code-Pro

43
Betelgeuse90 2 days ago 0 replies      
Can anyone explain to me why startup speed is such an important factor for them?

I really can't get my head around it. It's such a non-issue for me.

I care so much more about general performance post-startup. I wouldn't even bring startup speed up as an issue as long as it's in the few-seconds range, which it always was for me using Atom.

44
sytse 2 days ago 0 replies      
I'm surprised that in their vision for Atom, GitHub doesn't mention integrating Atom as the web editor for GitHub.com. I always assumed it used web technology so it could form the foundation of an online IDE integrated closely with GitHub.com. It will be interesting to see if this happens, and what Koding, Nitrous.io, Cloud9, CodeAnywhere, Codio and CodeEnvy will do. At GitLab we currently have no plans in this direction.
45
d0m 2 days ago 1 reply      
Is it possible to use atom right in the browser, rather than in a separate app? I know facebook engineers have an in-browser editor.. is this atom or something else?
46
gabeio 2 days ago 1 reply      
I have to say, amazing job to the team. My reason: last time I looked at/was regularly using Atom, the generic memory footprint was around 100 MB if I remember correctly. I just checked with 1.0, and the memory footprint has now dropped below Sublime Text 3.0's (~70 MB) to ~60 MB. (OS X 10.10)
47
naryad 2 days ago 0 replies      
Go To Matching Bracket takes almost 2 seconds in Atom; it happens instantaneously in vim/Sublime.
48
tehbeard 2 days ago 0 replies      
Ok I gotta ask.

Is there no recent files menu? Or am I just missing where they placed it?

I do like the find/replace UI compared to ST3, but the lack of a recents menu and it choking if I accidentally click a large file just aren't making me feel the need to swap to this.

49
z3t4 2 days ago 0 replies      
Cool video and great copy-writing.

The editor still seems crude though. The new install used some old packages from a previous install that I thought was uninstalled!? It also called home to report a bug without asking for permission. Then it froze after I had uninstalled the old package.

50
roelvanhintum 2 days ago 0 replies      
Nice, this is the first thing I start in the morning, use all day, and close at the end of the day. I'm using Atom with Monokai and linter and love it.

That mid-century video is hilarious.

51
lpgauth 2 days ago 1 reply      
alias mate="atom" - old habits die hard.
52
Globz 2 days ago 0 replies      
I am really tempted to give it a try, but I am already all set up with Brackets, and I am having a hard time understanding what the benefits are if I make the switch over to Atom. I will surely give it a try at some point!
53
norman784 2 days ago 0 replies      
I like Atom; I've been using it for a few days and I'm quite happy. It also feels much like the Sublime Text experience at first launch, after TextMate "died".
54
dothething 2 days ago 2 replies      
Have they resolved the long-standing performance issues present for the last six months? I've tried using Atom numerous times, but the longer it runs, the less stable it gets.
55
ajryan 2 days ago 0 replies      
Is there anyone with a BGR subpixel-order monitor who has been able to configure Atom (or VS code for that matter) to do the right kind of antialiasing?
56
merrua 2 days ago 0 replies      
I think Atom could still have better performance. It wouldn't hurt to have performance tips for package developers too (maybe they exist and I missed them).
57
a1b2c3 2 days ago 1 reply      
What is this? I can't even tell. A text editor?
58
anantzoid 2 days ago 0 replies      
https://atom.io/ is down, btw.
59
yummybear 2 days ago 0 replies      
Just tried to install Nuclide and Atom complained about atom.io being unavailable. Guess I know why now.
60
therealmarv 2 days ago 0 replies      
I still cannot search for whole word expressions in directories without using RegEx, bummer.
61
misiti3780 2 days ago 0 replies      
Off topic: Does anyone here use Light Table? If so, what do you think of it compared to Atom?
62
isuraed 2 days ago 1 reply      
Broke the default font, ugh! Anyone know how to get the old font back?
63
JoshMnem 2 days ago 0 replies      
I'd rather write vimscript than coffeescript.
64
therealmarv 2 days ago 0 replies      
They should do a LOST remake of that Atom Youtube Video ;)
65
thomasrossi 2 days ago 0 replies      
Congrats to the Atom team:) what happened to rAtom?
66
bitmapbrother 2 days ago 0 replies      
The absolute disregard for any thanks to the Chromium team in the announcement is disgusting and a slap in the face to the foundation this editor is based on.
67
intrasight 2 days ago 1 reply      
I had assumed this was browser-based, but upon visiting the site I see there is an installer. Can this editor run in a browser? If not, what's the point?
68
sagarun 2 days ago 0 replies      
How does anyone run tox from the Atom editor?
69
lucaspottersky 2 days ago 0 replies      
can it handle files larger than 2MB yet? :P

Incredible promo video though!!!

70
notatoad 2 days ago 0 replies      
>Please forgive the approach

no. please stop spamming unrelated stuff in the hn comments

71
facepalm 3 days ago 2 replies      
What is it?
72
tomkin 2 days ago 0 replies      
I'm sure there's a few problems with this software, just like others. After reading some of these FWP, and the whining...I dunno, man. There are people throwing acid in the face of girls on the other side of the world, and you're all complaining about having to go into the registry and manually remove context menus.

I don't know anything about Atom, but I'm willing to bet there's really nothing this software is doing that prevents you from sleeping at night.

What happens when you stop relying on resumes alinelerner.com
439 points by appamatto  2 days ago   185 comments top 36
1
tikhonj 2 days ago 6 replies      
First of all, I definitely agree that resumes are woefully overused and overrated. They have structural problems and incentivize people to use "negative selection" criteria where people are eliminated based on not having a specific feature rather than selected for excelling on something.

This article neatly demonstrates that resumes are not necessary and that not using them can unlock new sorts of candidates.

However, I don't think there's a conclusion to be made about the actual method used here. I suspect that it worked because it was different, not because it carried a fundamentally strong signal. If everyone did this, project descriptions would be gamed even more than resumes; it would select for people who prepared for the selection process more than anything else.

This reminds me of various captcha strategies I've seen used by small forums to great effect: solving some math, typing a word into a text box, choosing a popular character's picture, etc. They all work, perfectly. But only because spammers don't care about the small fry: it's not worth their time to modify their bots for your little site. If any given captcha becomes used widely, or your forum grows big enough, they will bypass it trivially.

Now, an essay like this isn't quite as bad as a captcha, but the idea is the same: it works because it's new and different. If everybody used it, it would probably be a step back.

Ultimately, I think the real moral is that more companies should do their own thing, even if that thing is not great in the abstract. Being different carries a value of its own, and it breeds biodiversity that's healthy for the system as a whole. (Of course, many of the things companies try are really bad for various reasons, but that's a different story)

In particular, most people have a bunch of "red flags" they look for with, at best, cursory rationale: everything from passing on people who didn't go to the right school to those who have breaks in their work history, based on "common sense" or "experience" rather than anything meaningful. Most of these criteria seem counter-productive.

I also think this is really true for college admissions and especially the admissions essay. A project blurb for hiring is more or less the same idea in a new context.

2
soham 2 days ago 1 reply      
Thanks Aline, for yet another well researched article.

I have to say though, that in my experience, these experiments in sourcing work quite well when your hiring is small. The moment you hit some sort of scale, it becomes very very difficult, if not impossible to run and rely on such experiments.

E.g. in the first growth phase at Box, we were tasked with hiring 25 engineers a quarter. At that scale, the company deals with too many resumes and too many stakeholders in the hiring process. And at that point, you also have a group of people explicitly looking at resumes, less involvement from actual hiring managers, deadlines to meet, land to grab etc. Not saying one thing is better than the other, just that hiring at scale is an entirely different game.

The other thing, which is implied in the article but may get lost if the reader isn't careful: regardless of how a candidate is sourced, the interview bar still remains the same. I.e., AJ must have had to clear the same or similar technical interviews as the other engineers who got hired there.

3
roymurdock 2 days ago 1 reply      
So, essentially the company that was hiring (KeepSafe), allowed applicants to submit highly flexible cover letters, and then they actually read the cover letters.

It would be great if the majority of companies used both the resume and the cover letter effectively. It feels like most companies that require a cover letter only do so to screen out the laziest 10% who can't be bothered to write up a generic 1 page essay filled with ass-kissing and vague jargon.

The cover letter is just a relic from the olden days, when the application process was slower and more formal. There were fewer applicants for each position, so HR probably had more time to read/screen.

This study presents an interesting alternative: Let people submit some text along with their resume on any topic of any length, and see how their personality comes through in the writing. Probably wouldn't work extremely well at a large company, but it seems like it served KeepSafe quite well.

4
a-dub 2 days ago 2 replies      
Resumes are fine, it's really in how you treat them. When I read resumes I generally ignore where people have worked and gone to school and instead look for what they have done. If there's either a good match between the general type of stuff they've done in the past and what the role is, or if there's stuff on there that is interesting enough such that I'd enjoy hearing about it, I give a thumbs up.

When I interview, I tend to spend most of the time asking in-depth questions about the projects I find most interesting on the resume. What was easy? What was hard? X sounds like it would be a problem; how did you solve it? What was fun? What was head-bang-on-the-wall miserable? Generally this gives a sense as to whether or not there's any bullshitting going on, and gives a sense for whether or not the candidate has a good head for thinking about hard problems.

Finally, I'll ask a few questions to probe for "difficult-to-work-with" red flags and finish with a few fairly easy "technical challenges" that offer the opportunity for the candidate to either walk away having solved the problem, or walk away having solved the problem and demonstrated understanding of the solution from top to bottom.

5
codeonfire 2 days ago 6 replies      
When people abandon resumes like this, it's because there is some corruption in their hiring process and lesser-skilled management is attempting to hire down for political control. Their only goal is to go through the motions and stay employed. Some people don't want experts. They want to have the illusion of a functioning business unit. People with no formal training who have projects that sound impressive to the layperson are not going to make waves or quit when they find out that management really doesn't know what it's doing. Just as one can build a model airplane in their garage, those same people will never be able to build an airliner. It's not a good idea to hire hobbyists if you're in the airliner business. When you have layperson managers judging what is a strong candidate and what is important technology, you are going to get hobbyists skilled in popular tech, and your company is going to get worthless hobbyist tech for your organization. For example, the article refers to an 'open source Android animation library.' To many people that sounds like a massive, great achievement. On the tech scale of importance it is a 0 or 1 out of ten. No business can be formed around an animation library; it's not a difficult or uncommon thing, and there are thousands of alternatives.
6
wambotron 2 days ago 0 replies      
I worked at two places where people had resumegasms over "brand name" schools. Neither of them hired anyone who was ever any better than completely average (and yes, I include myself).

I don't really care where anyone went to school. It doesn't mean anything. Really, going to school at all doesn't mean much. I need to see what you've done outside of that to make any meaningful evaluation. It doesn't matter if it's a huge project. You can give me a couple 10-line things that do something useful and I'll still get to see how you name things, format code, use built-in libraries, etc. Then we can chit chat about project management and how much you love or hate it.

7
appamatto 2 days ago 1 reply      
I think this is very interesting counterpoint to TripleByte's data which implied that talking about passion projects was lower signal than their coding quizzes.

The benchmark was performance in a long form coding interview for TripleByte, whereas Aline's is the final offer, so not exactly apples to apples.

8
s_q_b 2 days ago 0 replies      
I agree we need an alternative to resume-based hiring, and the hiring process in general.

For example, I don't do well in whiteboard interviews, which is odd because I normally don't have a public speaking issue. It feels like there's some muscle memory attached to coding that isn't well replicated with poor handwriting in a room full of people.

Whiteboard lines of code are simply not the manner in which developers work once hired. That is the reason for the disconnect between speaking well about projects (easy to verbally explain and sketch) and the programming portion (bizarre).

Right now the industry is doing the equivalent of interviewing lawyers by asking them to write a legal brief on a whiteboard.

We're testing the wrong thing: a proxy for the work, when we could easily test the work itself.

I much prefer work sample tests rather than whiteboard Q/A as it better replicates the actual job. Give me a few hours with problems I would actually face on the job, my dev environment, internet access, and a set of problems that truly reflect the work, and I find it much more natural.

Is it too much to ask that an interview measure skills the job actually requires, in an environment that emulates the work?

9
fecak 2 days ago 2 replies      
Another great article from Aline. Past employers, schools, and GPAs can obviously generate false positives. I like this idea overall and would be interested in seeing the results of others. 400 applications to 1 hire isn't a great result, and I was somewhat surprised that only about 5% of applicants were even interviewed.

A few months ago I launched a side project, doing (of all things) resume review and revision services. When my clients want a review of a resume that I know won't get results, and I ask "Give me more to work with", the types of things I hear are eerily similar to the "awesome stuff" quotes in this post. I try to incorporate those things into the resume when possible.

Is it the resume itself that is the problem, or is it that candidates are just less inclined to include additional details (that may seem irrelevant) that could differentiate them from others? Some resumes will list accomplishments that make their qualifications rather clear, but not everyone has that luxury.

When a candidate doesn't have a long list of work accomplishments, do they think to include this type of content that might get our attention?

10
blfr 2 days ago 4 replies      
Would a Northrop Grumman engineer with a GitHub full of cool projects really be overlooked in a recruitment process? Not once, that can always happen, but regularly?
11
Harj 2 days ago 5 replies      
We have the same belief in the limited usefulness of resumes at Triplebyte. We found talking about projects with candidates both more enjoyable and interesting than looking at words on a resume. Especially when people are enthusiastic about what they built.

The difficulty we had was not seeing a strong correlation between talking about projects and doing well at programming during an interview.

12
vonmoltke 2 days ago 2 replies      
> While AJ's government work experience gave him a good amount of cred in the public sector, he found that making the move to industry, and startups especially, was near impossible. It wasn't that he was blowing interviews. He just couldn't get through the filter in the first place.

...

> It was AJ, a candidate that Zouhair Belkoura, KeepSafe's cofounder and CEO, readily admits he would have overlooked, had he come in through traditional channels.

This was the story of my job search three years ago. It still kinda is.

13
sjtgraham 2 days ago 4 replies      
I screened and interviewed a lot of developers in my last position, and it stood out to me that résumé quality seemed to be inversely correlated with the candidate's actual ability. I distinctly remember that the absolute best developers I hired also had the most atrociously bad résumés. The candidates with résumés that literally almost knocked me off my chair failed miserably at the most basic programming task.
14
tempestn 2 days ago 0 replies      
> Resumes don't have an explicit section for building rockets or Minecraft servers, and even if you stick it somewhere in personal projects, that's not where the reader's eye will go.

That's true, but a good cover letter can go a long way toward helping with this. Most cover letters are generic, bland, and obviously copy-pasted from a template. (Or more often from a previous application, sometimes with info about the previous company left in!) A cover letter that talks about something exciting you've done recently, and ideally how it might be related to the job, or even just how it demonstrates skills you'll use in the job (and describes exactly how), is awesome in comparison. A letter like that would absolutely get you an interview with me, almost regardless of experience. One of our current co-op students actually had almost no programming experience on paper; he had actually switched out of a theatre degree iirc. But his cover letter was awesome (the theatre degree probably not being coincidental). Got him the interview, which got him the job, and I haven't regretted it. Just a co-op of course, but the point stands. The cover letter is probably the most important part of your application. Take the time to write a good one.

15
andrewstuart 2 days ago 0 replies      
Everyone should just give up on trying every sort of new angle to "identify great developers". It's purely subjective; there is no meaningful universal definition.

It comes down to whether or not the people doing the recruiting all have the same subjective opinion.

16
sopooneo 2 days ago 2 replies      
All these discussions of how hiring is broken. Doesn't this imply an enormous market opportunity for the recruiting agencies or companies that can properly capitalize on either undervalued candidates or better knowledge?
17
caublestone 2 days ago 1 reply      
We use an alternative screening process where we prompt prospective interviewees with 2 extreme customer service complaints ("this product sucks", "it didn't arrive on time", "I hear it's poison", etc.), ask them to provide an answer directed to the customer, and have them create a plan to prevent the issue from happening in the future. It's fascinating to see how people approach the problem of making people happy now and preventing dissatisfaction down the road. If they pass (1/10 do), we use a 30-minute phone call to identify interests and motivations, which ends up being the strongest indicator of value add. This might work for B2C companies only, but I'd love to see any company identify people that are truly passionate about making people happy.

Edit: Clarifying that this is for non-engineering roles.

18
vultour 2 days ago 0 replies      
Something else piqued my interest in this article.

This lady claims that the company is fighting for candidates with Google, although the only thing they do (if I read it right) is provide an encrypted version of Dropbox. How does this require world-class engineers? I once coded a file-syncing app quite quickly as a personal project, and I don't think I could call myself even a regular developer. I do not believe such an application would be even remotely as complex as anything Google does.

19
dropit_sphere 2 days ago 0 replies      
Is it just me, or does this still seem like a crazy hard problem? There were still 415 not-resumes to wade through for one offer.
20
nickpsecurity 2 days ago 0 replies      
Great article. A good move. I think one of the reasons the method is successful is that it asks applicants to keep it real. The traditional channels want people to come off a certain way. People also know about their filtering rate. So, the incentive for them is to tell companies what they want to hear and in a way that conveys unreliable information.

Seems your example changed the incentives, got useful information in return, and that led to a positive result. Unsurprising in hindsight. I'm going to send your article to a few people to see if I can get any to try that approach.

21
7402 2 days ago 0 replies      
I've always found resumes to be quite useful in figuring out who to bring in for an interview - BUT, most of the people I've been involved in hiring have had between 5 and 15 years of experience, usually at 2 or more real companies. I can well believe that if you're only interested in people right out of school, it's harder to figure out candidates from their resumes.

I am a little puzzled, though, about why others seem to find resumes so opaque. It seems like resume-reading is a lost art. A resume is usually a document that someone has spent a lot of effort on to make themselves look good. If you learn to read them, that can tell you a lot about the author. (Note: searching for buzzwords is not "reading.") A resume should not be regarded as simply a collection of facts - of course you'll be misled if you do that; a resume should be regarded as a document of self-expression. After a while, you can see useful patterns in what people put in resumes - at least for more-experienced applicants. Almost every resume suggests a bunch of next questions, which can be asked in a phone screen or interview to get a pretty good idea of what a person is about.

It's worth recalling that absolutely all software engineers at all software companies in the world, from the first ones around 1955 up to 2002, were hired without the benefit of LinkedIn, StackOverflow, or GitHub. Almost all of these engineers submitted resumes, which were reviewed prior to offering interviews. Yes, there were hiring mistakes in the old days, but I don't see a huge number of people talking about how the hiring process now is so much easier, smoother, and more foolproof than it used to be.

22
bsder 2 days ago 0 replies      
> His GitHub, full of projects spanning everything from a Python SHA-1 implementation to a tongue-in-cheek "What should I call my bro?" bromanteau generator, hinted at a different story, but most people never got there. While AJ's government work experience gave him a good amount of cred in the public sector, he found that making the move to industry, and startups especially, was near impossible.

Huh? The companies this woman hires for don't look at GitHub? Not looking at public code that someone has published is more broken than relying on resumes. If someone has published code and it doesn't suck, I'll probably bring them in for an on-site, period. I may even tell them: "We're going to talk about file foo.c in your code, where you implemented feature Z, so be prepared."

And, I suspect, with startups it was more a case of "How many years were you in government? That would make us so unhappy that we would leave. Why didn't you?" That's a different way of asking "Is this really the place for you?"

As a hiring manager in a startup, when I knew I only had 9 months of runway without more funding, I'd feel REALLY bad about taking someone with a family away from their very stable job. As someone who has recruited employees in the single digits, I have often made a point of meeting the family when recruiting someone--even if I have to fly to them. I need both the prospective employee and their partner to understand that the big probability is that the company won't be around in 24 months, there won't be any payoff, and a new employment search is likely to be the result. Yeah, there is a small probability that we'll survive and an even smaller probability that we'll get some money. It's a really delicate balance for me, at least, to properly sell the company (Startup! Options! Novel!) and reality (Bankrupt! Flameout! Layoffs!).

I'd say I'm batting about 50%. For every employee I scare off, I absolutely convince one to join. Funnily enough, every single one who didn't run away said the same thing: "My wife told me I had to work with you." They were stunned that someone so important (Hah! Management in a startup is a good way to understand how unimportant you are really quickly ...) would take the time to make sure the family was informed properly about the risks and rewards.

23
chris_wot 2 days ago 0 replies      
My submission would be:

I was a help desk pleb at a well-known inkjet/scanner/camera company 15 years ago. This company "extended" their Clipper database to record third-party cartridges, but recorded them in .ini format. That's right: one file per record, in key=value pairs. I was bored and accidentally mentioned to the guy whose job it was to copy and paste the data from each of all 90,000 ini files into an Excel spreadsheet that Perl could do it, and that I'd even use references to hashes to do it. He had no idea about that last bit, but I did it for him on the proviso that he didn't tell anyone, and reduced 10 weeks of work to 30 seconds. They unfortunately made me employee of the quarter but neglected to tell me, so I missed my awards ceremony.

24
Mimu 2 days ago 0 replies      
I fail to see how someone who built games at 14 or owns one of the most successful Minecraft servers could not pass the resume filter.

I'm not saying the process described in the article is bad, even though I believe anything can be gamed, but I don't really see a big difference. The main change is the way recruiters looked at what they got; a resume or an essay wouldn't have changed a lot, I think.

Maybe off topic, but if companies want the best people, maybe THEY should write the essay explaining why people should join, instead of sitting in their high tower waiting for minions to come.

25
AndyNemmity 2 days ago 3 replies      
I just went through a job search, and never created a resume. I only submitted my linkedin. If anyone required a resume, I immediately responded that we weren't a good culture fit.

Worked out really well. Not sure it's to be duplicated, but for me it went fantastically.

26
lordnacho 2 days ago 1 reply      
The "new hiring process" experiment comes up here quite often, and I do appreciate it.

However, how can we conclude anything from a procedure that only examines the hired population and none of the unhired?

27
moises_silva 1 day ago 0 replies      
GapJumpers seems like a good alternative to resume screening: https://www.gapjumpers.me/

Even if not using GapJumpers itself, you can follow the concept by asking candidates to solve a problem or to submit a piece of original technical content along with the resume.

28
pbreit 2 days ago 1 reply      
What surprises me is how many people dislike cover letters. When I go through applicants, a decent cover letter demonstrating some enthusiasm about the company and a unique point or two is appealing.
29
dools 2 days ago 0 replies      
I do something very similar in my hiring processes on oDesk. I ask each candidate's opinion on something (articles, usually). This allows me to eliminate 90% of applicants. Then I review the 10 applicants whose answers weren't complete gibberish and give the best 2 or 3 a programming task. Works pretty well; I've only hired one guy who turned out to be inadequate for the role (out of about 20 hires over the past 4 years for PHP dev work).
30
sytelus 2 days ago 4 replies      
You can do much simpler, automated filtering: ask candidates to submit a link to any of the following:

1. GitHub account

2. StackOverflow account

3. Their blog

4. Anything they made online

If a candidate fails to submit a link to any of the above, just don't interview them. I would guesstimate this simple check filters out 70% of the junk resumes and probably 20% of the good resumes. It can scale like crazy and can be expanded even more (for example, use APIs to get their profile information and rank resumes).
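
A rough sketch of that kind of first-pass filter, in Python; the application format and the accepted hosts are made up, and a real screen would want smarter URL handling:

    import re

    # Hosts treated as evidence of public work; purely illustrative.
    ACCEPTED_HOSTS = ("github.com", "stackoverflow.com")
    LINK_RE = re.compile(r"https?://\S+")

    def has_public_work(application_text):
        """Return (any link at all?, links pointing at well-known hosts)."""
        links = LINK_RE.findall(application_text)
        known = [l for l in links if any(h in l for h in ACCEPTED_HOSTS)]
        return bool(links), known

    applications = [
        "I love coding! No links though.",
        "My projects: https://github.com/example/project and my blog.",
    ]
    for text in applications:
        print(has_public_work(text))   # filter on the first element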

31
ammon 2 days ago 0 replies      
Interesting post. Talking and writing about yourself well, even when not matched with programming ability, probably helps get job offers (and in many cases helps one be a good employee). It's difficult to tease these two things apart. I imagine looking at speaking and writing can still miss great people (as can every filter), but I can believe that it's much better than looking at resumes.
32
paulgayham 2 days ago 0 replies      
For what position? How can you judge candidates without a position to judge how well they fit?

I can just see this guy going out to Web devs, System devs, DBAs etc. and them all disagreeing because they're looking for different things (and value things differently).

33
dzhiurgis 2 days ago 1 reply      
So you've turned down 399 people to employ 1. Not sure if that is any different from employing via resume, but that sends shivers down my spine. A 399:1 ratio says to me that there is an oversupply of engineers.
34
ilaksh 1 day ago 0 replies      
I think allowing people to submit a link to a demo or portfolio makes a lot of sense, especially since everyone has computers and the internet now. Welcome to the amazing new world of hyperlinked multimedia, hiring people.
35
ageofwant 2 days ago 0 replies      
This is very bad news. I have always highly valued resumes as a very effective candidate filter. It works as follows: I take the pile of resumes, shuffle them thoroughly and divide them roughly in half. The pile to the left goes in the bin. I repeat this process until I have the luckiest candidate's resume in my hand. This is the type of guy I want to associate with: one on whom fortune smiles, repeatedly.
36
mpenn 2 days ago 1 reply      
The writer claims to "rely heavily on data," but the punchline of the article is purely anecdotal: 1 person at 1 company got hired, was good, and would have been overlooked. I am sure there were also many candidates with good resumes who were now overlooked.

This article starts out with an air of science and ends with a completely unproven conclusion.

While I do agree in my gut that resumes are not an amazing filter, she has completely failed to present evidence that her alternative interview process is better.

And in fact, while KeepSafe still has the no-resumes option open, they are now accepting resumes again -- I do not have great confidence that the alternative system was anything more than a PR move by the company.

Running Lisp in Production grammarly.com
387 points by f00biebletch  1 day ago   114 comments top 19
1
white-flame 1 day ago 2 replies      
We deploy distributed, multi-language systems centered on Lisp/SBCL servers as well. A few specifics that I'd point out:

Many of SBCL's optimizations are selectable at a fine grain, using internal SB-* declarations. I know I was at least able to turn off all optimizations for debug/disasm clarity, while specifically enabling tail recursion so that our main loop wouldn't blow up the stack in that build configuration. These aren't in the main documentation; I asked in the #sbcl IRC channel on FreeNode.

You can directly set the size of the nursery with sb-ext:bytes-consed-between-gcs, as opposed to overprovisioning the heap to influence the nursery size. While we've run in the 8-24GB heap ranges depending on deployment, a minimum nursery size of 1GB seems to give us the best performance as well. We're looking at much larger heap sizes now, so who knows what will work best.

While we haven't hit heap exhaustion conditions during compilation, we did hit multi-minute compilation lags for large macros (18,000 LoC from a first-level expansion). That was a reported performance bug in SBCL and was fixed a while back. Since the Debian package for SBCL lags the official releases quite a bit, it's always a manual job to fetch the latest version, but quite worth it.

Great read, and really familiar. :-)

2
Grue3 1 day ago 3 replies      
Common Lisp's macros and grammar go together like bread and butter. A grammar module in the app I built [1] uses macros to generate huge amounts of repetitive code.

[1] https://github.com/tshatrov/ichiran/blob/master/dict-grammar...

I wonder if they're still hiring Lispers. I once passed on the opportunity to work in their Kiev office, but I might give it a shot again.

3
lkrubner 1 day ago 7 replies      
Good lord, I would go insane if I ran into a bug like this:

"We've built an esoteric application (even by Lisp standards), and in the process have hit some limits of our platform. One unexpected thing was heap exhaustion during compilation. We rely heavily on macros, and some of the largest ones expand into thousands of lines of low-level code. It turned out that SBCL compiler implements a lot of optimizations that allow us to enjoy quite fast generated code, but some of which require exponential time and memory resources. "

4
orthecreedence 1 day ago 0 replies      
Great article, and good reminder on using trace. Every time I rediscover trace, I can't remember how I ever forgot to use it in the first place for most of my problems.

I used CL in a production environment a while back for a threaded queue worker, and nowadays as the app server for my turtl project, and I have yet to run into problems. It seems like you guys managed to push the boundaries and find workable solutions, which is really great.

Thanks for the writeup!

5
doomrobo 1 day ago 2 replies      
Slightly off-topic, but does anybody know of a kind of "Lisp Challenge" set? I recently started the Matasano challenges[0] and I found them really well-suited to my style of learning (learning by doing and expanding by reading relevant material, enabled by my own internal motivation). Is there anything like that with a relatively small set of condensed yet rich challenges that demonstrate key elements of Lispy functional programming? I read some of SICP, but reading long-form really puts a damper on my motivation/excitement. Also, there were a lot of exercises (with a lot of overlap in concepts), so I didn't know what to do and what not to do, since I wasn't about to do every single one. Any pointers would be much appreciated!

[0] http://cryptopals.com

6
davexunit 1 day ago 0 replies      
Awesome stuff. Articles like this are what we Lispers/Schemers need to show that our languages can be used for "real work"(tm).
7
PuercoPop 1 day ago 2 replies      
One of the things I would have liked to see in the article is how they handle the deployment itself. Do they build an executable with buildapp? Do they use sb-daemon? A home-grown solution using sb-posix:fork?
8
dfischer 1 day ago 6 replies      
Is it worthwhile to explore Clojure for web-dev seriously or more as a toy?
9
jon-wood 1 day ago 5 replies      
Apparently they use "JVM languages", JavaScript, Python, Go, Lisp and Erlang in production.

I may be in the minority, but that would drive me mad. I assume they're not routinely jumping between those stacks multiple times a day, but even so is there really that much benefit that it's worth keeping track of how to do things in that many different environments?

10
BlanketLogic 1 day ago 2 replies      
Very informative. Thank you.

Does anyone here have any experience with the GCs of Allegro or LispWorks or any other commercial Lisp implementations?

11
outworlder 1 day ago 0 replies      
Aw, now they have disclosed their secret weapon! [1]

[1] http://www.paulgraham.com/avg.html

12
akssri 1 day ago 0 replies      
> but we value choice and freedom over rules and processes.

Which is exactly why I feel Lisp doesn't see much use elsewhere :(

13
jjawssd 1 day ago 0 replies      
Once you go deep enough you are fucked no matter what language you choose. Might as well pick one that doesn't beat you up too much.
14
welshguy 1 day ago 0 replies      
I love it :0) I tried Grammarly and typed in a remembered poem. It informed me that it had detected significant plagiarism.

Edit: It's still not advice I would pay for, though.

15
avodonosov 1 day ago 0 replies      
Thanks, great post and a lot of useful references.
16
zenogais 1 day ago 0 replies      
This was fantastic. Thank you.
17
mud_dauber 1 day ago 2 replies      
Wow. I can't ever remember reading about a consumer-facing app using Common Lisp. Ever.
18
eruditely 1 day ago 3 replies      
Why not Racket?
19
vseloved 1 day ago 2 replies      
Well, the HDF5 problem was actually not on the Lisp side ;) But, in general, do you really believe that there are no issues with libraries in other languages? I've had my share in Python and on the JVM as well. The whole point of the article was to show that there are some challenges, but they didn't become critical to our operation.
Ask HN: How do you familiarize yourself with a new codebase?
374 points by roflc0ptic  2 days ago   230 comments top 115
1
tessierashpool 2 days ago 10 replies      
I wrote some simple bash scripts around git which allow me to very quickly identify the most frequently-edited files, the most recently-edited files, the largest files, etc.

https://github.com/gilesbowkett/rewind

It's for assessing a project on day one, when you join, especially for "rescue mission" consulting. It's most useful for large projects.

The idea is, you need to know as much as possible right away. So you run these scripts and you get a map which immediately identifies which files are most significant. If a file is edited frequently, was edited yesterday, was edited on the day the project began, and is much bigger than any other, that's obviously the file to look at first.
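
The edit-frequency half of that map can be approximated in a few lines. A minimal Python sketch (not the rewind scripts themselves), assuming git is on the PATH and the script runs inside the repository:

    import subprocess
    from collections import Counter

    # One file name per line, for every file touched by every commit.
    log = subprocess.run(
        ["git", "log", "--pretty=format:", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout

    counts = Counter(line for line in log.splitlines() if line)
    for path, edits in counts.most_common(10):
        print("%5d  %s" % (edits, path))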

We tend to view files as a list, but in reality some files are very central, and some files are out on the periphery and only interact with a few other files. You could actually draw that map by analyzing "require" and "import" statements, but I didn't go that far with this. Those vary tremendously on a language-by-language basis and would require much cleverer code. This is just a good way to hit the ground running with a basic understanding, which you will very probably revise, re-evaluate, or throw away completely once you have more context.

But to answer your actual question: you do some analysis like this every time you go into an unfamiliar code base. You also need to get an idea of the basic paradigms involved, the coding style, etc. -- stuff which would be much harder to capture in a format as simple as bash scripts.

One of the best places to start is, of course, writing tests. Michael Feathers wrote a great book about this called "Working Effectively with Legacy Code." brudgers's comment on this is good too, but I have some small disagreements with it.

2
scott_s 2 days ago 4 replies      
A post from last year, "Strategies to quickly become productive in an unfamiliar codebase": https://news.ycombinator.com/item?id=8263402

My comment from that thread:

I do the deep-dive.

I start with a relatively high level interface point, such as an important function in a public API. Such functions and methods tend to accomplish easily understandable things. And by "important" I mean something that is fundamental to what the system accomplishes.

Then you dive.

Your goal is to have a decent understanding of how this fundamental thing is accomplished. You start at the public facing function, then find the actual implementation of that function, and start reading code. If things make sense, you keep going. If you can't make sense of it, then you will probably need to start diving into related APIs and - most importantly - data structures.

This process will tend to have a point where you have dozens of files open, which have non-trivial relationships with each other, and they are a variety of interfaces and data structures. That's okay. You're just trying to get a feel for all of it; you're not necessarily going for total, complete understanding.

What you're going for is that Aha! moment where you can feel confident in saying, "Oh, that's how it's done." This will tend to happen once you find those fundamental data structures, and have finally pieced together some understanding of how they all fit together. Once you've had the Aha! moment, you can start to trace the results back out, to make sure that is how the thing is accomplished, or what is returned. I do this with all large codebases I encounter that I want to understand. It's quite fun to do this with the Linux source code.

My philosophy is that "It's all just code", which means that with enough patience, it's all understandable. Sometimes a good strategy is to just start diving into it.

3
JustSomeNobody 1 day ago 0 replies      
1. I make sure I can build and run it. I don't move past this step until I can. Period.

After that, if I don't have a particular bug I'm looking to fix or feature to add, I just go spelunking. I pick out some interesting feature and study it. I use pencil and paper to make copious notes. If there's a UI, I may start tracing through what happens when I click on things. I do this, again, with pencil and paper first; it helps me use my mind to reason about what the code is doing instead of relying on the computer to tell me. If I'm working on a bug, I'll first try to recreate it, again taking copious notes on paper documenting what I've tried. Once I've found how to recreate it, I clean up my notes into legible recreate steps and make sure I can recreate it using those steps. These steps are later included in the bug tracker. Next I start tracing through the code, taking copious notes, etc. You get the picture.

4
monk_e_boy 2 days ago 4 replies      
Debugger! Surprised no one has mentioned it yet. I work in JS and PHP, and I use the debugger a lot in both.

Set a breakpoint and burn through the code. Chrome has some really nice features: you can tell it to skip over files (like jQuery), you can open the console and poke around, and you can set variables to see what happens.

Stepping through the code line by line for a few hours will soon show you the basics.

5
kabdib 2 days ago 4 replies      
I just crack open the source base with Emacs, and start writing stuff down.

I use a large format (8x11 inch) notebook and start going through the abstractions file by file, filling up pages with summaries of things. I'll often copy out the major classes with a summary of their methods, and arrows to reflect class relationships. If there's a database involved, understanding what's being stored is usually pretty crucial, so I'll copy out the record definitions and make notes about fields. Call graphs and event diagrams go here, too.

After identifying the important stuff, I read code, and make notes about what the core functions and methods are doing. Here, a very fast global search is your friend, and "where is this declared?" and "who calls this?" are best answered in seconds. A source-base-wide grep works okay, but tools like Visual Assist's global search work better; I want answers fast.
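
When no indexed search is available, a throwaway recursive grep gets surprisingly close. A minimal Python sketch (the extension list is arbitrary):

    import os, re, sys

    def find_references(root, symbol, exts=(".c", ".h", ".py", ".js")):
        """Print every line under root mentioning symbol as a whole word."""
        pattern = re.compile(r"\b%s\b" % re.escape(symbol))
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if not name.endswith(exts):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if pattern.search(line):
                            print("%s:%d: %s" % (path, lineno, line.rstrip()))

    if __name__ == "__main__":
        find_references(sys.argv[1], sys.argv[2])   # e.g. src/ my_function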

Why use pen and paper? I find that this manual process helps my memory, and I can rapidly flip around in summaries that I've written in my own hand and fill in my understanding quite quickly. Usually, after a week or so I never refer to the notes again, but the initial phase of boosting my short term memory with paper, global searches and "getting my hands to know the code" works pretty well.

Also, I try to get the code running and fix a bug (or add a small feature) and check the change in, day one. I get anxious if I've been in a new code base for more than a few days without doing this.

6
agentgt 1 day ago 0 replies      
There are a significant number of answers on Stackoverflow that may interest you. Specifically: http://stackoverflow.com/questions/215076/whats-the-best-way...

I do two things to familiarize myself with a code base. The first is to look at how the data is stored. Particularly if it's using a database with well-named tables, I can get a rough idea of how the system works. From there I look at the other data objects. Data is easier to understand than behavior.

The other is watching the initialization process of the application with a debugger or logger. Along those lines, if you're lucky (in my opinion) and the application uses dependency injection of some sort, you can look at how the components are wired together. Generally there is an underlying framework to how the code pieces work together, and it usually reveals itself in the initialization process if it's not self-evident.

7
Mithaldu 2 days ago 5 replies      
This may or may not apply to you, since I work with Perl. Typically I'm in a situation where I'm supposed to improve on code written by developers with less time under their belt.

As such my first steps are:

1. tidy/beautify all the code in accordance with a common standard

2. read through all of it, while making the code more clear (split up if/elsif/else christmas trees, make functions smaller, replace for loops with list processing)

While doing that I add TODO comments, which usually come with questions like "what the fuck is this?", and make myself tickets with future tasks for cleaning up the codebase.

By the end of it I've looked at everything once, have a whole bunch of stuff to do, and have at least a rough understanding of what the code does.

8
bite_victim 1 day ago 4 replies      
Side rant:

I just cannot believe people praising "unit testing". Fellow programmers, how exactly do you unit test a method or function which draws something on the canvas, for example? You assert that it doesn't break the code?!

I see some really talented people out there who write unit tests as proof that their code works without issues, that it's awesome, that it cooks eggs and bacon, etc. They write such laughable tests that you cannot even tell if they are joking or not. They test whether the properties / attributes they are using in methods are set at various points in the setup routine, or whether some function is called after an event is triggered.

My point is this: unit testing can only cover such tiny, tiny scenarios, and mostly logic stuff, that it is almost useless for understanding what is going on in the big picture. Take for example a Backbone application like the Media Manager in WordPress. Please tell me how somebody can even begin to unit test something like that.

Unit testing is a joke. And sometimes a massively time-consuming joke, with a fraction of a benefit, considering the obvious limitation(s).
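
For what it's worth, the usual answer to the canvas question is to test one level up: assert on the sequence of draw calls rather than on pixels. A minimal sketch, in which both the canvas API and the function under test are made up:

    class StubCanvas:
        """Records drawing calls instead of rendering anything."""
        def __init__(self):
            self.calls = []

        def line(self, x1, y1, x2, y2):
            self.calls.append(("line", x1, y1, x2, y2))

    def draw_cross(canvas, size):
        # Hypothetical function under test.
        canvas.line(0, 0, size, size)
        canvas.line(0, size, size, 0)

    def test_draw_cross():
        canvas = StubCanvas()
        draw_cross(canvas, 10)
        assert canvas.calls == [("line", 0, 0, 10, 10),
                                ("line", 0, 10, 10, 0)]

    test_draw_cross()

Whether a test like that proves anything useful is, of course, exactly what's being debated here.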

9
jpgvm 1 day ago 0 replies      
I usually work on more traditional command-line applications and daemons, so my approach might be a little different from a web developer's.

I always start by gauging how much source code there is and how it's structured. The *nix utility "tree" and the source code line counter "cloc" are usually the first 2 things I run on a codebase. This tells me what languages the applications uses, how much of each, how well commented it is and where those files are.

The next thing I usually do is find the entry point of the program. In my case this is usually an executable that calls into the core of the library, sets up the initial application state, and starts the core loop and the routine that does the guts of the work.

Once I have found said core routine I try to get a grasp of what the state machine of the program looks like. If it's a complicated program this step takes quite a while, but it is very important for gaining an intuitive understanding of how to either add new features or fix bugs. I like to use pen and paper to help me explore this part, as I often have to backtrack over source files and re-evaluate what portions mean.

Once I have what I think is the state machine worked out, I like to understand how the program takes input or is configured. In the case of a daemon that often means understanding how configuration files are loaded and how the configuration is represented in memory. Important to cover here is how default values are handled, etc. I actually prioritise this over exploring the core loop's ancillary functions (the bits that do the "real" work), as I find it hard to progress to that stage without understanding how the initial state is set up.

Which brings us to said "real" work. Hanging off of the core loop will be all the functions/modules that are called to do the various parts of the program's function. By this time you should already know what these do, even if you don't know how they work. Because you already have a good high-level understanding at this point, you can pick and choose which modules you need to cover and when to cover them.

10
vineet 2 days ago 0 replies      
I studied a lot of people doing this as part of my PhD. The thing is that there are not many answers that work well in a lot of situations. Given that, though, my suggestion is to iterate on developing three views of the code:

1. The Mile High View: A layered architectural diagram can be really helpful for knowing how the main concepts in a project are related to one another.

2. The Core: Try to figure out how the code works with regard to these main concepts. Box-and-arrow diagrams on paper work really well.

3. Key Use Cases: I would suggest tracing at least one key use case for your app.

11
droppedasakid 2 days ago 1 reply      
Whatever your IDE/editor of choice is, I think having these three functions is critical to learning a new codebase, or even to developing in general:

1. Go to definition

2. Find all references

3. Navigate back

This allows you to go down any code rabbit hole, figure stuff out, then get back to where you were. If you can't do those things it will take much longer to understand how things are interconnected.

12
gshx 1 day ago 0 replies      
I start by running the tests, if there are any, typically peeling the layers of the onion starting with the boundary. If there are no tests, then I'll try to write them. Running tests in debug mode then helps step through the code. If I have the luxury of asking questions of an engineer experienced with the codebase, I request a high-level whiteboarding session, all the while being cognizant of their time.

Some others have mentioned recency/touchTime as another signal. For large complex codebases, that may or may not always work.

13
fourier 1 day ago 1 reply      
I work a lot with huge legacy codebases in C/C++. Here is some advice:

1. Be sure that you can compile and run the program

2. Have good tools to navigate around the code (I use git grep mostly)

3. Most apps contain some user or service interaction - try to pick some easy bit (like a request for capabilities or some simple operation) and follow it until the end. You don't need a debugger for this - grep/git grep is enough; these simple tools will force you to understand the codebase deeply.

4. Sometimes drawing UML diagrams works:

- Draw the diagrams (class diagrams, sequence diagrams) of the current state of things

- Draw the diagrams with a proposal of how you would like things to change

5. If possible, use a debugger, starting with the main() function.

14
nissimk 2 days ago 0 replies      
I agree with what many others on here have said. It's also a personal thing. In general I like to force myself to learn only the minimum required to do what I need to do. If that philosophy sounds good to you, I would recommend taking the buggy version of frozen columns and trying to fix the bugs. You may learn that the bugs are structural and you need to implement the feature differently, or you might be able to fix them with minimal changes. You will certainly get an understanding of the parts of SlickGrid that you need to interact with to add this feature.

For the ajax data source thing, I would try to modify or extend the existing data source code to add the behavior you are looking for. As you mess around with it trying to figure out what you need to change, you will encounter the areas of the code that you need to understand.

With this sort of strategy you can avoid having to fully understand all the code while still being able to modify it. You might end up implementing stuff in a way which is not the best, but you will probably be able to implement it faster. It's the classic technical debt dilemma: understanding the complete codebase will allow you to design features that fit in better and are easier to maintain and enhance, but it will take a lot longer than just hacking something together that works.

15
Sakes 2 days ago 1 reply      
I wish I had a better answer, but I honestly just stumble around it. I typically start by trying to understand how they structured their files, then I'll start diving into the code. I wouldn't try to "understand" it completely. Just look over it until you feel comfortable enough to try to make some modifications.

Michael's code looks clean and well organized. Shouldn't be terribly difficult for someone proficient at JS.

16
eterm 2 days ago 0 replies      
My approach is to break stuff. If I can break it (and I am good at finding bugs, so I usually can), then I have a narrow focus, which keeps me from getting "lost" in the code base.

Once I've found and fixed a few things, or if the code base is small or clean enough that I can't find bugs to fix, I'll set about hacking in the feature I'd like.

I usually start by doing it in the most hacky way possible. That sounds like a bad approach but it narrows the search of how to implement it and means I'm not constraining myself to fit the code base that I don't yet appreciate.

In hacking that feature I'll often break a few things through my carelessness. In then trying to alter my hacked approach so it no longer breaks stuff I'll become more aware of the wider code base from the point of view of my initial narrow focus. This lets me build up the mental model.

Eventually I'll be comfortable enough I can re-write the feature in a way more consistent with the wider code base.

I don't normally start by trying to "read all the code" because that guarantees I won't understand much of it (I'm not quick at picking up function from code). I might have a skim if it is well organised, but I find that the "better" written a lot of code is, the harder it is to grok what it is actually doing from reading it. To me, reading good code is often like trying to read the FizzBuzz Enterprise Edition[1].

I've worked on many legacy systems: last year I was implementing new features in a VB6 code base; this year (at a different job) I am helping migrate from ASP WebForms to a more modern system. I've found that starting with trying to fix an issue is the best way to dive into a code base.

Use good source control so you're never "worried" about changing anything or worrying that you might lose your current state. Commit early, commit often, even when "playing around".

[1] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

17
brudgers 2 days ago 2 replies      
When you think you understand something, write a test and test your belief. If the test passes, then both your knowledge and the code base are better for it. If the test fails, then rewrite the test to match the failure and write another test. Again, you will know more and the code base will be better.

Good luck.

18
jdefr89 1 day ago 0 replies      
I tend to use a hybrid approach, but in general I try to identify the entry point of the code, which will lead me to the core data structures and possibly the event loops that act as a central hub for any other code that is called. That is, I look for some kind of dispatch pattern that integrates the rest of the system, routing to and calling different code when needed. Once you identify this "hub" you will have a good mental model of the system and its high-level components. From there you can delve into different subsystems and slowly tweak and make changes to be sure a code path does what you conjecture it does. Using a debugger is helpful at certain points to explore the depths of the code. When you can get a small tweak working as expected, you probably have a decent starting model of the code base that you can easily add to.
19
git-pull 12 hours ago 0 replies      
I love wrapping my brain around large codebases in my spare time. I wrote an application to help me download source code repositories in git, svn, and mercurial and keep them in sync:

http://vcspull.readthedocs.org/en/latest/

I keep the applications I want to study in a YAML file (https://github.com/tony/.dot-config/blob/master/.vcspull.yam...) and type "vcspull" to get the latest changes.

You can read my ~/.vcspull.yaml to see some of the projects I look over, by programming language. You can set up your config any way you want (perhaps you want to study programming language implementations, so you'd have ~/work/langs with cpython, lua, ruby, etc. inside it).
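
The core of the sync idea is small enough to sketch. A toy, git-only version in Python (vcspull itself does much more; the ~/study layout is hypothetical):

    import os
    import subprocess

    STUDY_DIR = os.path.expanduser("~/study")   # one tree of checkouts

    for dirpath, dirnames, _ in os.walk(STUDY_DIR):
        if ".git" in dirnames:
            print("syncing", dirpath)
            subprocess.run(["git", "-C", dirpath, "pull", "--ff-only"])
            dirnames[:] = []   # don't descend into the repo itself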

20
spion 1 day ago 0 replies      
Another thing that is helpful, especially if you don't even have knowledge of the problem domain of the codebase: Write a glossary.

As you read the code and encounter terms/words you don't know, write them down. Try to explain what they mean and how they relate to other terms. Make it a hyperlinked document (markdown #links plus headings on GitHub work pretty well); that way you can constantly refresh your memory of previous items while writing.

Items in the glossary can range from class names / function names to datatype names to common prefixes to parts of the file names (what is `core`? what belongs there?)

Bonus: parts of the end result can be contributed back to the project as documentation.

21
aleem 1 day ago 0 replies      
Some good pointers and links here; surprisingly, they miss both of my favourite approaches.

1. If it's on Github, find an issue that seems up your alley and check the commits against it. Or the commit log in general for some interesting commits. I often use this approach to guide other devs to implement a new feature using nothing more than a previous commit or issue as a reference and starting point.

2. Unit tests are a great way to get jump-started. They function as a comprehensive examples reference, containing both simple and complex examples and workflows. Not only will they contain API examples, but they will also let you experiment with the library, using the unit test code as a sandbox.

22
etagwerker 2 days ago 0 replies      
This is usually how I do it for libraries:

* Read the README.

* Install it and start using it with a couple of sample cases. That will give you an idea of what it does.

* Read the test suite. This will give you a better idea of what the library does.

* Look at the directory structure. This should tell you where things are.

* Start reading the core files.

* Start looking at open issues. Try to solve one by adding a test and changing the code.

* Submit a pull request.

23
barnacs 1 day ago 0 replies      
I think a top-down approach is pretty much the only way to do it: start at a high level of abstraction (packages, modules, namespaces, etc.) and their relations. Pick one that seems related to some core functionality, or central to the change you intend to make, and dive deeper: interfaces and data structures within that unit, and possibly other related units they depend on. Ideally, up to this point you shouldn't even have to worry about function definitions and algorithms; just declarations, types, and relations.

While static typing helps a lot with this kind of exploration and navigation, I don't know of any IDEs or other tooling for any language that would really help you with it. Sure, you can probably generate UMLs or something, but that usually requires an additional tool and the output is pretty static. You can't just zoom in from a package-level view to an interface-level view and then keep zooming until you are eventually shown the line-by-line implementation of a specific function.

I've been thinking about this lately, and I've come to the conclusion that the way we think and reason about code is pretty far from the way our tools present it to us. I tend to think in terms of various levels of abstraction and relations between units, yet the tools just show me walls of text in some file system structure (that may or may not mirror the abstractions) and hardly any relationships.

24
mavidser 2 days ago 1 reply      
Well, I'm not very good at this either, but here's what I do. I usually work on modular projects where there are hundreds of files. I usually skip directly to locating the file where I have to make changes (using a lot of grep: grep for function and object definitions, grep for usage patterns, grep to check how to implement something). Thus I learn about the codebase as I go along.

Sure, this is not the best practice, and unsuitable for many, but it's what works for me.

25
geertj 1 day ago 0 replies      
My typical workflow for checking out new open source projects:

- find . -type f

- find . -name '*.ext' | wc -l (get an idea of complexity)

- git log (is this thing maintained?)

- find . -name '*.ext' | xargs ctags

- find main entry points, depending on platform and language

- vim with NerdTree enabled

- CTRL-] to jump to a tag and CTRL-T to jump back while browsing in vim

Generally a lot of find, grep and vim gets me started.

26
shawnps 2 days ago 0 replies      
Get it running locally and then see what happens when you delete some stuff, especially stuff that you don't understand when reading through the code.
27
dave_ops 1 day ago 0 replies      
I take it out for dinner and drinks. Spend some time getting to know it: where it comes from and what it does for a living. Then, after we're a few cocktails in, we get all philosophical. Really start asking the hard questions like, "Why do I even exist? Is any of this real, or is it all some weird virtual world?"

We become fast friends and feel like we really understand each other.

But days pass, and each encounter feels less magical. It's almost like we have nothing in common. Like we're from two completely different worlds: one where it's stuck in the past, and one where I'm ambitious and excited about the future.

After a while we don't really speak to each other anymore, and after some pretty ugly fights at work that get too personal... I rewrite it.

28
awinder 2 days ago 0 replies      
You got a little bit lucky with this project because there's a decently built-out test suite. I would start by digesting the tests because if they're good, you'll be able to see the mechanics about how the exposed interfaces in the code work, and this should also give you a good idea if changes you're making are breaking the expected workflow or not.

From my experience, there are really two ways that learning a new codebase can happen. One is that there's an existing test suite that's fairly comprehensive, and you can learn a lot by examining the tests, making changes to add features / make bug fixes, and then validate that work by rerunning the tests and adding new ones. That's really a great place to be as someone unfamiliar with a new codebase. The other is that there are no tests, and you inevitably need to rely on people familiar with the code, and make peace with the idea that you're going to write bad code that breaks things as you learn the depth of how the project works.

29
hal9000xp 1 day ago 0 replies      
I probably won't say anything new here. For the last five years, I have done the following to get my feet wet with a new project (some projects I have worked with contain more than two million lines of code):

1. Just make sure I can build the project;

2. Play around with services/application (just run, send some requests, get response);

3. Pick the simplest case (for example, some request/response);

4. Find breakpoints (for debugging) somewhere connected with this simplest case (for example, code that is hit when I send a request) and set them up in the debugger. Usually I find the place to put a breakpoint by just searching for a keyword associated with my request;

5. Play around with these breakpoints while performing the simplest case (for example, sending a request) and try to work out the call graph (see the sketch below);

6. Try to change the code and see what happens.

After I do this stuff for several days or weeks, I become more and more familiar with the project.
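
For Python code, step 5 can even be automated: a sys.settrace hook prints caller -> callee pairs while you perform the simplest case. A minimal sketch, with stand-ins for a real request path:

    import sys

    def tracer(frame, event, arg):
        # Print caller -> callee for every Python-level function call.
        if event == "call":
            callee = frame.f_code
            caller = frame.f_back.f_code.co_name if frame.f_back else "<top>"
            print("%s -> %s (%s:%d)" % (caller, callee.co_name,
                                        callee.co_filename,
                                        callee.co_firstlineno))
        return tracer

    def parse(request):          # stand-in for real request handling
        return request.upper()

    def handle(request):
        return parse(request)

    sys.settrace(tracer)
    handle("ping")               # perform the simplest case, watch the calls
    sys.settrace(None)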

30
physicsmichael 1 day ago 0 replies      
A very simple method that helps me is to tackle a new code base on a large monitor, vertically oriented, with a small font size. Add to that a pane that shows the file/class structure. Seeing more at once helps ground me in the types of interactions in the code and the code landscape.
31
ignoramous 1 day ago 0 replies      
I work on AOSP, which is a fairly large code base. During the early years, documentation on the internals of Android was close to non-existent. There were plenty of tutorials in Mandarin/Cantonese, but not many in English.

A good way to get the hang of the code base was to read it (usually using a tool like sourcegraph [0], pfff [1], OpenGrok [2], doxygen [3], or javadocs [4]). A lot of people have argued that code is not to be treated like literature [5], but in this case there was no choice.

The second step was to see whether my assumptions about what the code does were correct. This is usually achieved by adding log statements, writing sample apps, and debugging in general.

Repeat the steps above, over and over again.

Checklist:

1. No matter what you do, you absolutely need to document everything you understand / misunderstand about the code base.

2. Never underestimate the value of having a different pair of eyes look at code you have a hard time reasoning about.

3. Be in constant search of resources (like books and blogs) available on the code / topic of your interest. You'd learn an amazing amount by reading through other people's analyses. Stackoverflow is a great start. Heck, you can even ask well-thought-out questions on Quora/Stackoverflow.

4. Hang out on related IRC channels / community mailing lists. For things written in esoteric languages such as OCaml, I found these to be pretty helpful.

5. You could blog about it, share the information you know over email lists, or set up wikis; people who know better will correct you. It's a win-win.

Good luck.

[0] http://sourcegraph.com/

[1] https://github.com/facebook/pfff

[2] https://opengrok.github.io/OpenGrok/

[3] http://www.stack.nl/~dimitri/doxygen/

[4] http://www.oracle.com/technetwork/articles/java/index-jsp-13...

[5] http://www.gigamonkeys.com/code-reading/

32
datashovel 1 day ago 0 replies      
If it's not obvious just by looking at how the directories are structured and files are named, I generally find that everything is (or should be) relatively easy to understand if you start from the perspective of a user.

1) Read docs for how to USE the library if they exist

2) Review example code that describes how a person would use the library to accomplish tasks.

3) To start diving in, find a specific example that does something interesting, then hop in from there. Read the code within the methods / functions the user calls, then the functions / methods called inside those, etc.

4) As you dig deeper you may start finding that you understand, or you'll start building up your own hypotheses like "If I change X to Y in this function then something different should happen when I call it". Try it out, and see if your hypothesis is correct.

After a few iterations of doing something like this you'll probably start getting an idea of how the code is structured and where you'd need to go in order to make the changes you'd like to make, or add the features you want to add.

33
matthewrhoden1 2 days ago 0 replies      
I've worked in a lot of legacy code bases. Here's my approach:

* Skim around to get a general idea of what components are involved.

* Try to understand the one module/class that keeps getting used a lot or is really important.

* Mentally trace through that code, as if I'm a debugger.

* Most importantly, write down my discoveries/understanding as I go, to help me retain the ideas.

* Re-skim with my new understanding and/or reorganize the code to be more concise or simpler. Depending on how ambitious you are, you might try to keep these changes, but with legacy code this typically breaks something.

Every code base takes time to digest. Sure, the information passed your eyes, but is it committed to memory?

34
jameshart 1 day ago 0 replies      
For client-side JavaScript, one useful way in is to run the Chrome profiler on it. That will produce a tree view of the calling hierarchy and give you an idea of the code's 'hotspots': the functions that are called from everywhere, or the functionality which dispatches everything.

This can be especially useful for event driven code (looks like SlickGrid is jQuery-based, so that definitely applies here); you can start a recording profile, carry out the action you're interested in, then stop recording, and you can then find out exactly which anonymous function is handling that particular click or scroll or drag.

35
vpeters25 1 day ago 1 reply      
This answer is going to be rather unorthodox and might get downvoted, but this is how I do it:

I just skim through all the sources; then, somehow, I am able to point to the approximate file and line of code where a specific question might be answered.

This might sound "out there," but I realized during college that I had the ability to recall the approximate location of specific information I needed from a textbook if I just skimmed through the whole book at the start of the semester.

For years I did this out of intuition; then, about 10 years ago, I took a course named "photoreading" and, to my surprise, they were teaching my "ability," but with clear steps so anybody could use it effectively.

36
VeejayRampay 2 days ago 1 reply      
Drawings will help tremendously. Extract the big masses, their respective interfaces to each other, and the means through which they communicate. This will help build a mental map of the code and reduce the cognitive load needed to understand each separate part.
37
tjallingt 2 days ago 0 replies      
I personally like to reverse engineer functions within a certain codebase to better understand what is happening.

For example, I would start by looking up a basic example of using that codebase and, for each of the function calls, go through the files and see what is happening. This gives me an idea of how the code base is written and how it works. It also gives a clear understanding of the level of separation/specificity of the different functions.

Disclaimer: I'm not very experienced, so there might be better ways of familiarizing oneself with a new codebase; this is just one way of doing it, and it has worked for me in the past.

38
scrabble 2 days ago 0 replies      
I generally just start by fixing small bugs in different areas of the system. I find that debugging various areas of the system helps me understand them better and allows me to start forming a cohesive picture in my mind.
39
lukaslalinsky 2 days ago 0 replies      
I work with somebody else's code more often than I write something new from scratch. It takes some time to get used to, but it's very far from the hardest task developers face.

A couple of things that I typically do:

- Start with a fully working state, i.e. setup your environment, make sure tests (if there are any) are passing. If you can't get things to work properly, that's your first issue to investigate and fix.

- Don't try to understand all of the code at once; you don't need to yet. I'm assuming you want to take over the project for a particular issue, so just focus on that and ignore the rest of the code. If you ask any senior developer about something in their project, there is a great chance they will not remember the exact details, but they will know where in the code to look. Aim to get to that level, not to memorize how everything works at the lowest level.

- Don't make any changes to code that you don't understand. I have a recent example of this. Yesterday I was trying to find a bug in the Phoenix database, which was failing to start after an upgrade. I had never seen the code in my life. After some debugging I realized it was doing something with an empty string that shouldn't have been empty. The obvious "solution" is to add a check for the empty string and be done with it. Don't do that. Understand exactly why the problem is happening, and only make a change like that after you are sure of all the implications. This has two effects: you are not introducing new bugs, and you are learning about the codebase. In the end, the fix from my example was just a simple "if", but without understanding how it was ending up with an empty string, I might have caused more problems than I fixed.

- Use the VCS a lot when figuring out why something is done the way it's done. Use "blame" to see when things were changed, read through the logs, etc. This is one of the main reasons why I don't like people rebasing/squashing their commits before merging: there is so much information they are throwing away that way.

- Adopt the coding style of the existing code. Don't try to push your style, either by having inconsistent style in different parts of the code or re-formatting everything. It's just not worth it.

- Don't be afraid to change things that need changing. There is nothing worse than making a copy of some module, calling it v2, and then having to maintain two versions. If you are afraid to make a change in the existing code, make yourself familiar with that part of the code first.

40
wazari972 1 day ago 0 replies      
I like to use interactive debuggers like gdb (for C) or pdb (for Python) for that.

You first localize a region (function) you want to study, then reach one of its executions with a breakpoint, or a conditional breakpoint.

Then, you inspect:

- the call stack: under which conditions the function was called

- the parameters / local variables

- the subfunctions: in both tools, you can manually call any (reachable) function, try different parameter values, and check the result. Pay attention, though, to the side effects!
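
A minimal pdb sketch of that workflow; the function is a made-up example, and the breakpoint is conditional so it only fires when the suspicious case actually occurs:

    import pdb

    def apply_discount(price, rate):
        if price < 0:           # the suspicious condition we want to study
            pdb.set_trace()     # break only when it actually happens
        return price * (1 - rate)

    # At the (Pdb) prompt you can then:
    #   w                         -> show the call stack
    #   p price, rate             -> inspect parameters / local variables
    #   p apply_discount(10, .2)  -> manually call a (reachable) function
    apply_discount(-5, 0.2)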

41
aethertap 1 day ago 0 replies      
The first thing I do is try to get a handle on the libraries it pulls in (maybe spend a day just going through the high-level readme material for each one). That will usually tell me where to start looking for the entry points where I might want to start modifying things. After that, I give myself a series of small functionality changes to implement, kind of like capturing a bunch of little flags. After doing that for a bit I usually have a decent idea of how things work, and it's easier to go forward, at which point I can dig into the relevant parts of the codebase with more confidence.

The first few mods are inevitably disgusting hacks, so don't pick anything you want to keep for your first couple of goals. It is pretty easy to go back and do them right once you've got your head around the rest of the project if you do end up wanting to keep them though.

I've used this method on some decently large C++ and javascript projects (around 100k-200k lines) and it works pretty well for me. I don't learn very well by just reading the code, but doing the little mods seems to make it stick.

42
wiresurfer 1 day ago 0 replies      
The most critical step is to get the lib into your workflow, preferably with (build-introspect-debug) capabilities. This increases the upfront time to start, but leads to much quicker code understanding, in my opinion.

TL;DR: Start with the minimum exposed surface area of the project (the API) and dig through these functions first. Definitely know the initialization sequences the library needs.

This is my approach for JS projects, and for dealing with other people's code in general.

First, I make a mental model of what I want to do. !important. Then I write the smallest wrapper needed to start fleshing out points where "separation-of-concern" happens.

At this point I should have an idea of what the other person's libraries expose as an API. I also should have an idea of what can be done with an unmodified library, and what would need patching.

Then comes monkey-patching the lib at the individual function level, with a healthy dose of TODO markers and NotImplemented method signatures.

By this point I should have a good picture of what goes on in the library apart from what gets exposed and would probably have forked a branch by now.

This strategy has been useful not just for JS projects but also for bigger codebases of Java/Scala libraries like Lucene Core/Solr or the Play framework, Django in the Python realm, and, with limited success, research code releases like Stanford CoreNLP.

43
l-jenkins 1 day ago 0 replies      
I see a lot of comments talking about code that is in a repository. And that is great, if you have it available. There have been many, many times where our team is handed an application that is broken (or has a bug) and asked to fix it. In many, many of those cases we don't have access to the original repository, or there wasn't one.

We generally approach it with heavy customer/owner involvement at first. We need to know what the application's intended purpose is. It is sort of like a lightning BA session. We get what the application should do, and what it isn't doing properly, out of this session (and more importantly, what it should be doing instead).

Our first step: get it into a repo.

Now that we have an understanding of what the application's intended purpose is, we can dive into the code. We don't have any analysis tools (but if there are some that people could recommend, I'm all ears) outside of our IDE (Visual Studio). We generally look for the last-modified date as an indicator of what needed work most recently. Of course, we don't have file history so we don't know exactly what changed, but it gives us a rough idea of what was worked on and when.

Next we usually try and use the application in our development environment. We chase each action a user takes in the code to determine what is the core/central part of the application. After that, we try to determine the cause of the problem (and while we are at it, we generally do a security review of the code).

It takes time, and is painstakingly nuanced and very boring. But I'm not sure what other options we have in such cases. As I said, I'm all ears as to what others might do in these situations.

44
tracker1 1 day ago 0 replies      
Try to fix/change/adjust something in the front-end and work back from there... This can be frustrating depending on the codebase, but your best bet for learning something new is to try to do something... even something small. If you want to go the extra mile, add comments to stuff that doesn't make sense as you go, and tag things for refactoring with TODOs and corresponding tickets.

Going a step farther still would be to add to the user documentation as you go...

Do something small, and iterative, and go out from there... for that matter, just getting a proper build environment is hard enough for some projects... automate getting the environment setup if it's complex. I've seen applications with 60+ step processes for getting all the corresponding pieces setup.

45
georgerobinson 1 day ago 0 replies      
I'm currently involved in a project (with 3 others) for my MSc in Computer Science in which we aim to take Google Native Client (a browser extension for Chrome which sandboxes untrusted, native code, downloaded from the web and executed inside your browser) and use it on the server-side to sandbox an HTTP server. Since almost all documentation is catered towards developers who wish to write untrusted native code that runs inside the browser, or browser vendors (this part of the documentation is quite incomplete) who wish to include Native Client in their browser, we're pretty much stuck in the dark.

First, we read the Native Client papers (http://www.chromium.org/nativeclient/reference/research-pape...) to understand how Native Client sandboxes untrusted code. We then looked at the tests in the Native Client source repository to see how to run untrusted code within a Unix process. We're yet to be able to debug executables via GDB for reasons we don't quite understand - so at present we:

1. Set NaClVerbosity to 10 and trace the system calls and functions invoked in the tests

2. Run "grep -r" in the src folder to find the source files for each of the functions invoked, then read and understand the code for each

3. Insert our own calls to NaClLog in the source code to read the state of variables and to validate our hypotheses of paths of execution within Native Client

For example, just this afternoon we found out how to send data via inter-module communication instantiated from the trusted code to the untrusted code. We first thought this wasn't possible - and that communication had to be initiated from the untrusted code, handled in the form of a callback function in the trusted code. However it simply turned out we had set the headers incorrectly in that the first four bytes of the header should be 0xd3c0de01. What's crazy is that we haven't yet understood what these bytes mean - so we're back in the Native Client source code to try and see why it works.

This probably sounds like a rant about Native Client and the Native Client developers. However, the complete opposite is true. The folks on the Native Client Discuss forum have been very helpful and have been more than happy to answer our questions. Quick shoutout to mseaborn: thank you for your help!!!

46
jajaBinks 1 day ago 0 replies      
For a large C/C++ code base, I use an editor called SourceInsight. This is the most invaluable tool for navigating code I've come across in my 3-year career as a software developer. I work at a very large software company, and there are several code bases running into millions of lines of C/C++ code. My previous team had 60,000+ files, with the largest file being about 12k LOC.

If you have access to logs from a production service/component, I find TextAnalyzer.net quite invaluable. I take an example 500 MB log dump, open it in TextAnalyzer.net, and just scroll through the logs (often jumping, following code paths, etc.) while keeping the source code side by side. This allows me to understand the execution flow, and is typically faster than attaching a debugger. If it's a multi-threaded program, the debugger is hard to work with, and logs are your best friend. You are lucky if the log has thread information (like a threadId).

47
pmontra 2 days ago 0 replies      
Build it, if there is something to build. Scripting languages usually don't have builds, but JS minification and dependency installation could be a build. Find and read the code paths that perform some recognizable action. Run the tests, read them. Add a new feature with tests, or pick an open issue and fix it. You're going to have to debug something, and that will give you more insight into the inner workings of the code.
48
dnprock 1 day ago 0 replies      
I don't do code reading or comprehension study. Reading code is boring. I typically create a list of small tasks that I want to achieve with the project. If the task is big, I break it down into smaller tasks. Then I rank the tasks from easy to hard. This way, I can start learning about the codebase and achieve my tasks.

In your case, frozen columns seems to be a hard feature, so I would start with the ajax data source. I'd start with a simple SlickGrid example and get it to run, then go find how SlickGrid sets up its data source and expand that piece of code to add an ajax data source. Once I finished the ajax data source, I'd dig into frozen columns.

If you are working on a new codebase and worry about bugs, you just give yourself more stress. Bugs (that are not yours) are expected. If they aren't blocking your task, ignore them. Most likely, they aren't relevant to what you are trying to do.

49
corysama 1 day ago 0 replies      
I pick a function or an outcome and type out the pseudocode stack traces leading to that in notepad.

I include function names and the names of the variables passed as parameters, but no braces or other syntax. I almost always omit branches, variable decls, and error checking. I include all interesting function calls along the path, but omit any branches or function bodies that lead off the desired path. I inline callbacks as function calls with additional notation. If the process has separate steps that aren't a single call/callback tree, I start a new tree with the note "then later..."
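A made-up fragment in that style (all names hypothetical, JavaScript-flavored):

  onSaveClicked(form)
    validate(form.fields)
    buildRequest(form.data, endpoint)
    send(request, onResponse)      // callback inlined below
      onResponse(status, body)
        parseResult(body)
  then later...
    renderResult(parsedBody)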

To do this, I have to start from the line of code that enacts the outcome and determine the backtrace with a combo of debugger stack traces and examining the code for branches/callbacks of interest.

But, when it's complete, I'll have the start-to-finish process of some complicated task in the code -- usually on a single screen of text. It's a tremendously better use of my short-term memory to scan over that than to constantly bounce around the actual code base.

50
jdavis703 2 days ago 0 replies      
When I have a new code base that I'm unfamiliar with and need to understand quickly, I'll go line-by-line and add comments about what I believe to be the intended behavior. As I gain more knowledge I'll update the comments. For me, explaining something I've learned helps me commit it to memory better, and makes sure I really did comprehend what I just read.
51
goblin89 1 day ago 0 replies      
Document the codebase; in my experience it helps.

In case of JavaScript you'd probably use something like JSDoc. Describe your units and make the tool automatically create beautiful HTML out of that. You don't have to document everything at once, but be sure to lay the groundwork, automate the documentation build process, and in general try to make maintaining the docs effortless (for yourself and for others). Take some existing well-documented JavaScript codebase as an example.
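For instance, a unit documented in JSDoc style might look like this (the method is modeled on SlickGrid's API; treat the exact signature as illustrative):

  /**
   * Scrolls the viewport so the given row is visible.
   * @param {number} row - Zero-based row index.
   * @param {boolean} [doPaging=false] - Snap to page boundaries if true.
   * @returns {void}
   */
  function scrollRowIntoView(row, doPaging) {
    // ...
  }

Running the `jsdoc` tool over files annotated this way is what generates the HTML docs mentioned above.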

This'd make a great contribution already: SlickGrid's codebase is somewhat poorly documented, which is a barrier to the involvement of interested developers.

As you write the docs, weak spots in the existing implementation will come to your attention, helping you figure out what to fix first.

One downside is that writing down and structuring your knowledge in a way that is easy for others to grasp is a challenge in itself, though arguably a useful exercise.

52
chipsy 1 day ago 0 replies      
I try to seek out the data structures first. If I need help doing it, I either run a profiler or insert some debug prints to get an idea of what parts of the callstack are "hot" and then progress from that to discovering the data. (Languages that don't require type signatures everywhere often have this problem of hidden structures.)

Once I know what the data is I can look at the code with an eye towards maintenance of data integrity. I might still need some "playtime" to grok the system but the one truism of large software is that data is always getting shoved from one big complicated system to another, and I can usually identify boundaries on those systems to narrow the search space.

(the exception to this is if you have code that leaks global state across the boundaries. Then much swearing will occur.)

53
techbio 2 days ago 0 replies      
I've written scripts to read files, match function calls to their definition/body, and output text "trees"; but the process deserves better visualization, dependency-graph navigation, and comprehension-specific highlighting. I'd be interested in trying an IDE that can do this.
54
orthoganol 2 days ago 0 replies      
First, have in your mind what the function of the chunk of code is. If it's not important to the system, skip it; don't read it. If it is important to the system, take a guess at how you think it should work, and how you would probably implement it if you were the original developer. Then begin reading it.
55
LoneWolf 2 days ago 0 replies      
At least for me there is no specific method. I work mainly with Java; since your specific case is JavaScript, it may not even apply.

If the problem is some bug and there are stack traces, that is my starting point: a debugger and a few breakpoints chosen from the trace. Then I follow the stack, and from there I start learning how it is structured, and then the next bug, and so on (fixing them, of course). For code where I need to add features, things get a little more tricky, but there is always some entry point (a web-service invocation, some web page), and I try to understand what it is currently doing, again using the debugger to follow the calls and how the data is changed (sometimes even going into libraries).

Reading the docs if there are any is also a good place to start.

Once again, use the debugger a lot, makes it easier to understand than just reading the code.

(edit: formatting)

56
macNchz 1 day ago 0 replies      
A proper IDE can go a long way towards understanding a large codebase. It will be able to index everything so you can really quickly jump around the project; being able to jump directly from a method call to its declaration, without momentarily context switching to search for where it lives, is very valuable.

As you start to add to a project the IDE can also prove valuable in discovering how everything fits together, since it will provide smart and helpful completions with docstrings, method signatures, types etc. This can really help you start writing new code a lot faster.

An IDE will usually also have a decent UI for running the code with a debugger attached, which can be incredibly useful for understanding the changing state of a running program.

57
misterjinx 1 day ago 0 replies      
This is one of the reasons I've always thought that each project should have minimal developer documentation that includes the project's scope, how it's structured, what its main components are and how they are connected, etc. This would help a future developer start working on the actual project much faster and reduce the initial time spent figuring out what it is all about.
58
meadori 2 days ago 0 replies      
I enjoyed this presentation by Allison Kaptur on how to understand CPython better:

http://pyvideo.org/video/3465/exploring-is-never-boring-unde...

While it is focused on CPython, most of the techniques are applicable elsewhere. It also mentions a great article by Peter Seibel (http://www.gigamonkeys.com/code-reading/) that discusses why we don't often read code in the same way we would literature.

Essentially, as the complexity of software has grown, people have been forced to take a more experimental approach to understanding software, even though it was created by other people.

59
twunde 2 days ago 0 replies      
The first thing I try to do is to understand the directory structure, ie where should I be looking for files? Hopefully there should be a standard structure that's used. After that I'll typically try to dig in and fix a minor bug or two. This is especially helpful if you can narrow down the part of the codebase you're working on. I also recommend using an IDE like WebStorm which will give you the ability to jump to a function definition and will help you find the functions you're calling.

One thing I do NOT recommend is changing the code style, unless you're ready to take full ownership of the project. It can make it much harder for the project owner to merge in, and if there are any lingering PRs, those will typically need work to merge in properly.

60
buremba 2 days ago 0 replies      
I use debuggers a lot for that purpose. It really helps to find the code paths for specific operations. Instead of reading code file by file, just set up a debugger, set a few breakpoints in the code, perform an operation, and follow the application code through the paths.
61
shurcooL 1 day ago 0 replies      
This is not a comprehensive answer, but it's additive.

If you're looking at a large Go codebase with many packages, I find it helpful to visualize their import graph with a little command [0].

Here are the results of running it on the consul codebase:

 $ goimportgraph github.com/hashicorp/consul/...
http://virtivia.com:27080/cehy9dnqaq92.html

[0] https://github.com/shurcooL/cmd/tree/master/goimportgraph

62
gregulator 1 day ago 0 replies      
I've had to ramp up quickly on a number of projects so far during my career, and I can tell you there's no substitute for simply reading the heck out of the code. Yes, it takes discipline to go through code line-by-line, and at times it may seem pointless or like it's "not sticking". But persistence here pays dividends.

The first read-through is not about comprehending everything. It's about exposing your mind to the codebase and getting it to start sinking into your subconscious. It's kinda like learning a new piece on the piano.

63
bonestamp2 1 day ago 0 replies      
I like to try two impractical tasks (impractical in the sense that they might not be possible, which is fine).

1. Access some data in the highest level component from one of the lowest level components

2. Access some data in one of the lowest level components from one of the highest level components

In a lot of cases, good architecture will prevent one or both of these from being possible, but identifying how data flows through the app seems to be a good way to understand the general architecture, limitations and strengths of most apps. These two tasks give concrete starting points for tracing the data flow.

64
mtrn 1 day ago 0 replies      
Related question on programmers.se: http://programmers.stackexchange.com/q/6395/436

> What tools and techniques do you use for exploring and learning an unknown code base?

65
bliti 1 day ago 0 replies      
This codebase is documented and well structured. I would simply begin by tackling the issues on GitHub first and sending pull requests. No need to take over it right away. After you feel comfortable reading the code and knowing where is what, you can ask to become a maintainer.

I'd try to fix it using the same style used in the codebase. This way anybody else reading, maintaining, or using it won't have to make sense of the new style. Pay attention to how each method is defined. They are very readable. Very few traces of complex one-line statements.

Most importantly, be patient. You won't be any good with it in less than 2 weeks of constant tinkering. Good luck.

66
dmuth 1 day ago 0 replies      
As someone who hates debuggers and is a fan of "learning by doing", I make heavy use of console.log() or similar, and I start putting log statements all over the code that print out sentinels ("hey, I'm in this part of the code") and data ("the contents of this variable are: XXXX").
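In JavaScript that can be as lightweight as the following (the function is hypothetical):

  function applySort(column, ascending) {
    console.log('>> reached applySort');                         // sentinel
    console.log('   column:', column, 'ascending:', ascending);  // data
    // ... existing sorting logic ...
  }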

Then I run the app and put it through its paces, while watching the output in another console.

If there's some code that doesn't make sense, I use console.log() more heavily in that section, to help me fully understand what it does. Once I have that level of understanding, I then write some comments in the code and commit them so that other contributors may benefit in the future.

67
makmanalp 2 days ago 0 replies      
I do divide-and-conquer. Find some part or feature of the tool you know from an outsider perspective, and then try to find it within the code. Then work backwards from there. Maybe even try to fiddle with it to change how it works, and see what happens.

I think reading each file or reading the data structures is more difficult because you have no familiarity as to what is going on and you have no knowledge of why things are structured as they are, so it'd end up like reading a math paper straight down: memorize a ton of definitions without knowing why, until you finally get to the gist of it.

68
deepaksurti 2 days ago 0 replies      
I first try to familiarize myself with the high level design/org of the code base, going through the README, other docs, looking at the test code if any and just generally scanning the important files/modules etc.

Then I prefer to jump into fixing any existing issue. Working on fixing an issue teaches a lot, more fixes, then features, rinse, lather, repeat.

While this post talks about fixing compiler bugs, the overall steps are broadly replicable: http://random-state.net/log/3522555395.html

69
perlgeek 1 day ago 1 reply      
git grep.

I search for strings that appear in the frontend (or generated HTML source, or whatever), and then I use a search tool (git grep) to find where it comes from. And then I use the same search tool again to trace my way backwards from there to where it's called, until I find the code that interests me.

And then I form a hypothesis how it works, and test it by patching the code in a small way, and observe the result.

Oh, and don't forget 'git grep'. Or ack, or ag, or your IDE's search feature.

70
lucidguppy2000 1 day ago 0 replies      
Write characterization tests for modules; see what inputs produce which outputs. Then you have the start of unit tests.
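A minimal sketch using Node 18+'s built-in test runner (the module under study and its expected output are hypothetical; the point is to lock in whatever the code does today):

  const test = require('node:test');
  const assert = require('node:assert');
  const { formatCell } = require('./formatters'); // hypothetical module

  test('formatCell pads numbers to two digits (current behavior)', () => {
    // The expected value is captured from a real run, not from a spec.
    assert.strictEqual(formatCell(7), '07');
  });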

Programming with unit tests really helps. And it points out where certain parts are too entangled and bound to implementation.

71
niuzeta 1 day ago 0 replies      
First build and run. See what it does. Check what it does and what I think it does, see how they differ.

Start from main() and start from the one click event (or any end-game action). Try to connect the two.

72
crcsmnky 1 day ago 0 replies      
While this generally works best for larger code bases, I tend to start reading through open bugs/tickets and find things that appear easy. Then I will assign them to myself and do what I can to fix it or at least track it down.

Generally I find it hard to just start reading through packages, source, functions, etc. and find it much easier to try and solve some sort of problem. By tracking and debugging a particular issue through to the end, I find I learn a lot about the codebase.

73
kalari 2 days ago 0 replies      
I usually skim the code to get an idea of patterns and organization, get it working in a local environment and then run/step-through the code. This usually gives a good idea of what different pieces do.
74
fsloth 1 day ago 0 replies      
I try to compose a formal model and algebra of the codebase - quite informally, mind you. Takes a bit of pen and paper and a few caffeinated drinks usually.

People really do learn quite differently and everyone needs to find their mode of learning - there is no one single true way. This is one of the most important skills in software development, IMO. Once you learn how you learn you can apply it to most new contexts.

I write stuff down because, for me, the process of writing seems to be the most effective way to learn.

75
sown 1 day ago 0 replies      
I only recently developed this skill a little.

The Ruby application server I looked at was for doing social network feeds. Posts/Likes/Comments go in, feeds come out.

I followed some common code paths for things such as posting a comment and getting a feed. I would write the stack trace down on paper as I went.

It also helped that I happened to know that this Ruby server used wisper and sidekiq. This way I didn't overlook single lines of code such as 'publish: yada yada'.

76
xarien 1 day ago 0 replies      
I'd speak to the last person who worked on it face to face with a whiteboard and a marker handy. Get a brain dump ASAP. Even if the person no longer works there, you can take some time to contact them for a lunch. Most people would not say no to this type of request (especially if you're buying). Just make sure you have questions ready so you don't waste their time.
77
zaphar 1 day ago 0 replies      
Other than the many great answers here I will frequently start by doing cleanups of the codebase.

I'll start reading the files using any of the strategies mentioned here and looking for things I can clean up: formatting, simple refactors, normalizing names.

These are all things that are comparatively easy and safe to do, but force you to reason about the code you are reading. Asking yourself what you can refactor, or fix the naming for, is a decent forcing function for actually understanding the code.

78
amenghra 1 day ago 0 replies      
When you find interesting pieces of code, look at the commit that brought it to life. Commits contain precious gems of information: you'll understand what files are related, who worked on which parts of the codebase, how the commit was tested, related discussions, etc.

Some people use graphical tools to visualize a codebase (e.g. codegraph). It can help you understand what pieces of code are related to each other.

79
bozoUser 1 day ago 0 replies      
I have recently jumped into working on a very large codebase at work. In general, here are a few tricks that helped me:

1) Look at the unit tests and see the flow of the code.

2) Try to make a mental picture of how the code is organized (doing it on paper is more helpful).

3) Every codebase has a few core classes that do a lot of the heavy lifting; talk to other contributors and ask them to point you to these. 2) also helps you achieve this.

Good luck.
80
zzzcpan 1 day ago 0 replies      
I found call tracers to be the most efficient way to do this kind of thing. It could be as simple as a perl script inserting printfs on every call and every return, since not every compiler supports instrumentation.

Simply digging through code, tests or reading commit messages in an unfamiliar code base takes at least an order of magnitude more time.

EDIT: tried call graphs too, better than reading through code, but still require you to understand and filter out a lot of unnecessary information.

81
thoman23 1 day ago 0 replies      
If it's code that I need to understand in intimate detail, I actually trace through the code keeping notes with pen and paper. I complement a simple reading of the code with actually exercising the code with test data and a debugger. I go through a few iterations, each time learning a little more about what is important and what can be safely ignored, until I eventually build up a Gliffy diagram of the important parts.
82
ivan_ah 1 day ago 0 replies      
On that note, could someone recommend a tool for automatically generating the graph that shows the class dependencies/hierarchies in a Java code base? I'm sure there are good tools out there, but all the ones I tried so far (JArchitect, CodePro Analytix, SonarQube) don't seem to have a good graph layout engine.

I'd like to print out a big graph and stick it to the office walls so I'll have a good view of the logical structure.

83
kh_hk 1 day ago 0 replies      
When adding support for small new features or fixing bugs on large codebases the answer is: you don't [1].

You do not need to familiarize yourself with the full codebase at the start. It's too time-consuming and mostly not worth the effort. Set up an objective and go for it slashing your coding axe around until it works.

[1]: Unless you have a special interest or you are expected to familiarize yourself with the codebase.

84
exacube 1 day ago 0 replies      
One idea is to use Linux's `perf` to sample stack traces as the program is running, over a minute or so, and see where the code flows.
85
richardlblair 1 day ago 0 replies      
IMO, the one tool you can't do without is grep.

My typical strategy is to get the project running, then just get to work. Start fixing bugs, and adding requested features. Use the code around you as a guide on what is right and wrong within that company, and forge forward. When you are unsure of something turn to grep, find some examples, and keep going.

86
kungfooman 1 day ago 0 replies      
Overwrite functions in dynamic languages (like JavaScript) with some "dump all arguments" code and call/return the original function, to get a quick glimpse into the code. Though this doesn't work with closures without some extra eval tricks.
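A minimal version of that wrapper (plain JavaScript; the `grid.render` target is hypothetical):

  // Replace a method with a wrapper that dumps the arguments and the
  // return value, then delegates to the original.
  function spy(obj, name) {
    const original = obj[name];
    obj[name] = function (...args) {
      console.log(name, 'called with:', args);
      const result = original.apply(this, args);
      console.log(name, 'returned:', result);
      return result;
    };
  }

  // spy(grid, 'render');  // every call is now logged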
87
estsauver 2 days ago 1 reply      
I try to work backwards from the public api to get a sense of the operations that are supported by the system. A trick I picked up from a thoughtbot training video a couple years ago for Rails applications is to look at the routes file. If you work with webapps, the routes generally define the things that people can do.
88
makuchaku 1 day ago 0 replies      
Start with smaller bugs & try to fix them. Bugs help you to focus your understanding on very small parts of code/paths. This helps in time spent vs output vs confidence.
89
pbreit 1 day ago 0 replies      
Best thing by far is to find someone familiar with the code and spend 15-30 minutes with them in person or by phone. That should be possible in the vast majority of situations.
90
puissance 1 day ago 0 replies      
I don't.

Take the extreme programming approach. Don't try to familiarize yourself with a new codebase all at once. Start small. Work on a small ticket. It will, organically, help you assimilate what's happening.

91
ausjke 1 day ago 0 replies      
I use Source Navigator to understand the code base. I wish someone would keep improving it; the fonts etc. under Linux are not looking impressive, but under Windows it's all I need. I'm unsure if other tools can provide as many functions for code base analysis.
92
ak39 1 day ago 0 replies      
"You cannot understand a system until you try to change it." ~ Kurt Lewin
93
benjamg 2 days ago 0 replies      
Assuming there is some form of bug list associated with it that is often my preferred way to learn a new code base.

Try to fix a bug and you'll soon find yourself having to learn how the code involved works, and with a goal your focus will be better than just reading through the code flow.

94
MarkMc 2 days ago 0 replies      
One thing helps me enormously: I sketch a class diagram as I explore the code. Here's an example:

https://s1.whiteboardfox.com/s/494b923d01d7ad05.png

95
fasteo 1 day ago 0 replies      
Brute force: Choose a new feature to implement and start looking for the place to write your first line of code.

This is probably not the best way to approach this, but I am somewhat ADHDish and I need a clear task to avoid perpetually diving around the codebase.

96
stuaxo 1 day ago 0 replies      
Back when I did Java, using static analysis tools like findbugs, then going and fixing all the issues found was a good way to get coverage of the codebase... I'm sure for JS there must be similar analysis tools.
97
antoinevg 1 day ago 1 reply      
Read it until I can identify which fad of the moment the author was following.
98
Lord_Cheese 2 days ago 0 replies      
If there is a bug list handy, I find tackling a few small ones is often an excellent way to get to know a codebase. It also gives some good insight into the codebase's quirks and oddities.
99
IanCal 1 day ago 0 replies      
Try doing some profiling. It'll take you through some of the more heavily used parts of the code, is useful in and of itself, and provides a target / some focus.
100
netoarmando 1 day ago 0 replies      
Good resource for Code Spelunking: http://www.codespelunking.com/
101
blago 1 day ago 0 replies      
The first thing I do is turn on db and http request logging. Sometimes this alone can be quite a challenge.
102
OpenDrapery 1 day ago 0 replies      
Pick a class and new it up from a unit test. You will quickly find out where the dependencies are, and how tightly coupled things are.
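A sketch of that probe (the class and its dependencies are hypothetical): every collaborator you're forced to build or fake just to construct the object is a coupling point.

  const assert = require('node:assert');
  const { InvoiceService } = require('./invoice-service'); // hypothetical

  // If this line demands a database, a mailer, and three config objects
  // before it even runs, you've learned something about the design.
  const service = new InvoiceService({ db: {}, mailer: {} });
  assert.ok(service);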
103
elkhourygeorges 1 day ago 0 replies      
Pick couple of bugs and fix them. Best way to familiarize yourself with a new codebase.
104
nickbauman 1 day ago 0 replies      
I always read the tests first. (If there are no tests, I don't take the job. Life is too short.)
105
chris_wot 2 days ago 0 replies      
Answer: with great difficulty.
106
lloyd-christmas 2 days ago 0 replies      
Break it one line at a time.
107
AdrianRossouw 2 days ago 0 replies      
read tests, and then start writing tests for things.

something usually comes up.

108
dm03514 1 day ago 0 replies      
Build and run the project locally

Then I write unittests

109
aikah 1 day ago 1 reply      
sourcegraph.com can help.
110
gdubs 1 day ago 0 replies      
Fix a bug. Repeat.
111
dm03514 1 day ago 0 replies      
I write unittests
112
latenightcoding 1 day ago 0 replies      
grep -r "function()" .
113
coolsunglasses 1 day ago 0 replies      
If it's in Haskell, I start cleaning up and refactoring datatypes.

Like changing some function like:

 Text -> Text -> IO ()
into:

 ServerHost -> Path -> IO ()
Changing the types will naturally lead you through the codebase and help you learn how everything fits together via the type errors.

In any language I'll try to read the project like the Tractatus.

In stuff that isn't Haskell? Break stuff and run the tests.

114
mVChr 1 day ago 0 replies      
I've spent the last year rebuilding a huge business-critical system from scratch (along with one other engineer). Yes, usually complete rewrites are a Bad Idea, but in this case product and business decided it was the only way to move forward because the system was in maintenance hell and it was way too difficult and risky to add new features. I discovered why as I learned the architecture, business logic and features of this behemoth pile of spaghetti. Here's what I recommend to do if you're in a similar situation, whether it be a large and great project or a large and horrible project...

- Get a functional dev environment set up where you can mess around with things in a risk-free manner. This includes setting up any dev databases and other external dependencies so that you can add, update and delete data at will. There's nothing that gives more insight than changing a piece of code and seeing what it breaks or alters. Change a lot of things, one at a time.

- Dive deep. This is time consuming, but don't be satisfied with understanding a surface feature only. You must recursively learn the functions, modules and architecture those surface features are using as well until you get to the bottom of the stack. Once you know where the bottom is you know what everything else is based on. This knowledge will help you uncover tricky bugs later if you truly grok what's going on. It will also give you insight as to the complexity of the project (and whether it's inherent to the problem or unnecessary). This can take a lot of time, but it pays off the most.

- Read and run the tests (if any). The tests are (usually) a very clear and simple insight into otherwise complex functionality. This method should do this, this class should do that, we need to mock this other external dependency, etc.

- Read the documentation and comments (if any). This can really help you understand the how's and why's depending on the conscientiousness of the prior engineers.

- If there's something that you really can't untangle, contact the source. Tell him what you're attempting, what you tried, exactly why and how it's not working as you expect, and ask if there's a simple resolution (I don't want to waste your time if there's not). You may not get an answer, but if you've done a lot of digging already and communicate the issue clearly you might get a "Oh yeah, there's a bug with XYZ due to the interaction with the ABC library. I haven't had time to fix it but the problem is in the foo/bar file." You may be able to find a workaround or fix the bug yourself.

- When you do become comfortable enough to add features or fix issues, put forward the effort to find the right place in the code to do this. If you think it requires refactoring other things first, do this in as atomic a manner as possible and consult first with other contributors.

- Pick a simple task to attack first, even if it's imaginary. Get to more complicated stuff after you've done some legwork already.

There are other minor things but this is generally my approach.

115
yellowapple 1 day ago 0 replies      
It depends on the language, the libraries, the tooling, etc.

My dayjob is with a Ruby on Rails consultancy. Said dayjob involves familiarizing myself with a lot of different codebases. My strategy here is rarely to try and digest the whole codebase all at once, but rather to focus on the portions of code specific to my task, mapping out which models, controllers, views, helpers, config files, etc. I need to manipulate in order to achieve my goal.

The above strategy tends to be my preference for most complex projects. The less I have to juggle in my brain to do something, the better. I tend towards compartmentalizing my more complex programs as a result. For simpler programs (and portions of compartmentalized complex programs), I just start at the entry point and go from there.

Languages with a REPL or some equivalent are really nice for me, especially if they support hot-reloading of code without throwing out too much state. Firing up a Rails console, for example, is generally my first step when it comes to really understanding the functionality of some Rails app. For non-interactive languages, this typically means having to resort to a debugger or writing some toy miniprogram that pulls in the code I'm trying to grok and pokes it with function calls.

For some non-interactive languages, like C or Ada, I'll start by looking at declaration files (.h for C and friends; .ads for Ada) to get a sense of what sorts of things are being publicly exposed, then find their definitions in body files (.c/.cpp/etc. for C and friends; .adb for Ada) and map things out from there. Proper separation of specification from implementation is a godsend for understanding a large codebase quickly.

For a rigorously-tested codebase, I'll often look at the test suite, too, for good measure. When done right, a test suite can provide benefits similar to specification files as described above; giving me some idea of what the code is supposed to do and where the entry points are.

How mosquitos deal with getting hit by raindrops nationalgeographic.com
403 points by davi  3 days ago   79 comments top 20
1
developer1 3 days ago 2 replies      
Of course the video doesn't show anything interesting, the mosquito's leg is hardly even grazed. I was definitely hoping for the version where a drop smacked the insect dead on target. Fairly strange for a lab result - if that's the only video that was captured, it really doesn't seem to divulge much at all. Where's the cool video? :D
2
upofadown 3 days ago 4 replies      
> A study says a mosquito being hit by a raindrop is roughly the equivalent of a human being whacked by a school bus, the typical bus being about 50 times the mass of a person.

That is not a sensible comparison. When you scale something, mass changes as the cube of dimension, while strength changes as the square of dimension. So small things are inherently stronger with respect to their mass.

3
dgemm 3 days ago 3 replies      
> But because our mosquito is oh-so-light, the raindrop moves on, unimpeded, and hardly any force is transferred. All that happens is that our mosquito is suddenly scooped up by the raindrop and finds itself hurtling toward the ground at a velocity of roughly nine meters per second, an acceleration which can't be very comfortable, because it puts enormous pressure on the insect's body, up to 300 gravities' worth, says professor Hu.

Interesting article, but in the span of one paragraph here we have confused velocity, acceleration, and pressure - and there are similar errors in the following one. For an article about physics, I would expect this to at least be proofread.

The Gell-Mann Amnesia effect: http://harmful.cat-v.org/journalism/

4
daniel-levin 3 days ago 2 replies      
From an io9 article on the same research:

>> [Hu] and Dickerson constructed a flight arena consisting of a small acrylic cage covered with mesh to contain the mosquitoes but permit entry of water drops. The researchers used a water jet to simulate rain stream velocity while observing six mosquitoes flying into the stream. Amazingly, all the mosquitoes lived.

The researchers used simulated rain drops on six mosquitoes. There are more than six species of mosquitoes. They controlled for wind effects (which are part and parcel of rain), so they excluded horizontally travelling raindrops. My immediate reaction to the conclusion that mosquitoes can fly in rain was "Really? Not always." Here is a methodologically lacking and wholly unscientific anecdote: I have lived in Johannesburg my entire life, where mosquitoes are quite prevalent during the summer months. When it is raining heavily (it is usually quite windy as well), the local species of mosquito that feeds off humans does not present a problem, as the number of airborne mosquitoes tends to zero.

5
nippoo 3 days ago 0 replies      
"Had the raindrop slammed into a bigger, slightly heavier animal, like a dragonfly, the raindrop would feel the collision and lose momentum. The raindrop might even break apart because of the impact, and force would transfer from the raindrop to the insects exoskeleton, rattling the animal to death."

Has anyone actually done any research on dragonflies being hit by raindrops, or is this just speculation?

6
chrismorgan 3 days ago 1 reply      
The drawings in this article tend to be absurdly large, with the outcome that the document, transferred, is around 23 MB, for no good reason. Sigh.
7
Kiro 3 days ago 2 replies      
> In most direct hits, Hu and colleagues write, the insect is carried five to 20 body lengths downward

> If you want to see this for yourself, take a look at Hu's video

What? Nothing like that happens in it.

8
jbert 3 days ago 1 reply      
Does this place a reasonable selection pressure on the kinds of flying insects we can have?

Big enough to shrug off a raindrop hit, or small enough to surf along the surface tension until it can slide off?

9
theVirginian 3 days ago 1 reply      
It would appear they haven't yet evolved to deal with being hit by cars quite as gracefully.
10
jokr004 2 days ago 0 replies      
Not really important but.. "nine gravities (88/m/squared)"

I don't get it, the scientificamerican blog that they are quoting has the right units, where did they come up with this?

11
blumkvist 3 days ago 1 reply      
A commenter on the site says that some type of mosquitoes (Texas) are used in oil drilling. I tried googling "texas mosquitoes oil drilling" and variants, but didn't find anything.

>"Why, one species even secretes an enzyme to dissolve the organic matter in blood leaving only the iron in haemoglobin. Then another enzyme causes the iron atoms to join to form biological drill pipe! These structures are known to be as much as 6 inches in diameter and to extend a mile deep."

Is there something to it, or did he just go on the internet to tell lies?

12
mordrax 3 days ago 1 reply      
> But because our mosquito is oh-so-light, the raindrop moves on, unimpeded, and hardly any force is transferred.

So if the mosquito's weight is insignificant compared to that of the heavier and denser water drop and that's what keeps it from having the force transferred, would this equally apply to hailstorms? (Where our mosquitoes are pelted by small hail balls the size of raindrops)

13
mleonhard 2 days ago 0 replies      
The article embedded a short video. Here's a longer video with explanations: https://www.youtube.com/watch?v=LQ88ny09ruM
14
state 3 days ago 1 reply      
Can't help but immediately notice: "Drawing by Robert Krulwich"
15
ebbv 3 days ago 0 replies      
If it wasn't for the cute, childlike drawings, this would be a truly terrible piece of link bait. As it is, it's still pretty bad, and I expect better from NatGeo.

Anyone who lives in a mosquito heavy area knows that mosquitos (like almost all airborne insects) go into hiding during heavy rain and/or wind.

16
dharma1 3 days ago 0 replies      
if you like watching slo mo videos, recommend this channel: https://www.youtube.com/user/theslowmoguys/videos
17
bnolsen 3 days ago 2 replies      
So if mosquitos are oblivious to rain, is there some way to make artificial rain with different properties that could destroy mosquitos en masse?
18
rokhayakebe 3 days ago 1 reply      
I just realized how making things fun and funny can help to teach anything. The drawings and the comical tone made this seem so approachable. I wish they had a series of 1000 of such lessons I could read.
19
stillsut 3 days ago 1 reply      
Send this to Bill Gates, that guy HATES mosquitoes.
20
cJ0th 2 days ago 0 replies      
Very interesting article. It is a pity that his column has no RSS feed.
Fighting spam with Haskell facebook.com
347 points by vamega  2 days ago   91 comments top 17
1
LukeHoersten 1 day ago 4 replies      
For those not involved in the Haskell community: Simon Marlow worked full time on the GHC compiler, and specifically its run-time system, for many years. Along with Simon Peyton-Jones, he's huge in the Haskell world. Marlow also wrote the very excellent "Parallel and Concurrent Programming in Haskell" book.

Facebook also employs Bryan O'Sullivan, an epic Haskell library writer (Aeson, Attoparsec, Text, Vector, and on and on http://hackage.haskell.org/user/BryanOSullivan). Bryan also co-authored the "Real World Haskell" book.

So Facebook has hired two prolific Haskellers and probably others I don't know about.

2
dasmoth 1 day ago 3 replies      
Damn you FaceBook.

I dislike the underlying premise, the adverts, and (especially) the "real names" policy.

But... between great bits of Open Source like React, cool infrastructure projects like this, and a technical culture which seems a whole lot more open than many other big companies, it's getting kind-of hard to go on hating. Walk back a bit from the obsession with open plan offices, and I might just cave...

3
HugoDaniel 1 day ago 1 reply      
This is an amazing effort: implementing ApplicativeDo and using Haxl for automatic batching and concurrency, doing code hot-swap in a compiled language, developing per-request automatic memoization, finding an aeson performance bug, translating C++ from/to Haskell to do partial marshalling of data, implementing allocation limits for GHC threads, creating a shared C++ library to bundle the C++ dependencies in ghci for interactive coding, killing two GHC bugs, and more... and in the end producing a reliable, scalable solution.

ouch!

4
seddona 2 days ago 1 reply      
Thanks for the overview Simon, great to hear about the use of Haskell at scale. At CircuitHub we use Haskell to build our entire web app; Haskell is great for most tasks these days.
5
j_m_b 1 day ago 1 reply      
> We implemented automatic memoization of top-level computations using a source-to-source translator. This is particularly beneficial in our use-case where multiple policies can refer to the same shared value, and we want to compute it only once. Note, this is per-request memoization rather than global memoization, which lazy evaluation already provides.

I would like to know more about this. What is a request exactly? An API call? If so, when an existing policy is changed, do the memoization tables have to change as well? How are the memoization tables shared? If this is running on a cluster, I would imagine that lookups in a memoization table could be a bottleneck to performance.

6
fokz 2 days ago 5 replies      
I am under the impression that a large part of engineering effort at established companies go into porting existing components to a deemed to be more appropriate language for that task.

Is it plain impossible to pick the best-fit language without implementing a solution in the first place and fleshing out the requirements and challenges that are specific to the problem space? Or do the problems evolve fast enough that no matter how well you design the system, it will need to be deprecated every few years?

7
wz1000 1 day ago 2 replies      
How does the hot-swapping work? The only way I had seen of making this happen is what xmonad does. I'm assuming this is radically different from that.
8
VeejayRampay 2 days ago 0 replies      
And two GHC bugs fixed along the way. Well done everyone.
9
vezzy-fnord 1 day ago 0 replies      
Loading and unloading code currently uses GHC's built-in runtime linker, although in principle, we could use the system dynamic linker.

That they could use the system dynamic linker makes me think they're using some form of relatively basic dlopen/dlsym/call-method procedure, or something along those lines. That's fine, though the use of "hotswapping" evokes the image of some more elaborate DSU mechanism.

10
themeekforgotpw 1 day ago 1 reply      
Does this system also detect propaganda operations?
11
edwinnathaniel 1 day ago 4 replies      
I learned a few things from this post outside the usual "technical" explanations:

1. They have CORE Haskell contributors on their payroll to deliver this type of project (what this means is that, no, Haskell isn't any better than other languages; it's just that they have people who know Haskell very, very deeply, down to the compiler level...)

2. An in-house custom language eventually does not scale (its EOL is much, much shorter than that of other programming languages); plan for that :)

12
saosebastiao 1 day ago 1 reply      
Anybody know what sort of policy resolution algorithms are used? Is this based on Rete, or home grown?
13
covi 1 day ago 2 replies      
In the throughput graph, why does Haxl perform worse in the 3 most common request types?
14
reagency 1 day ago 3 replies      
This is impressive, and in line with Haskell's philosophy to "avoid success at all costs".

As a mere mortal programmer who knows a little Haskell, my takeaway is that if you want to run Haskell at web scale for a large userbase, you need the language's primary compiler author to help build the application and to modify the Haskell compiler to make it performant. And you also need your team led by a 20-year-veteran Haskell expert who is one of the language's handful of luminaries and wrote a plurality of its main libraries. What are the rest of us to do, who aren't at Facebook?

15
nlake44 2 days ago 2 replies      
Facebook censors so much and swaps out links. I can't even post a link to https://scientificamerica.com. Their SSL cert is weak and should not be trusted. It is swapped out for phishing scams.
16
yellowapple 1 day ago 0 replies      
> Fighting spam

> (facebook.com)

Ha.

17
giancarlostoro 1 day ago 1 reply      
It seems like Facebook is the one company developing a lot of the very interesting and useful tools for developers.
4chan discusses HN 4chan.org
328 points by cantbecool  1 day ago   89 comments top 29
1
vezzy-fnord 1 day ago 2 replies      
We've had these types of threads posted several times before and they've always been fun, though it seems like /g/ are really struggling for material this time around.

That said:

> [600 points] Why only web development matters (http :// nautil.us medium wordpress theverge gawker .com)

2
CPLX 1 day ago 2 replies      
There is some inspired content in that link. My personal fave:

We're disrupting the 1gorillion dollar [insert industry] sign up for our beta to check it out[0].

[0]We just need your name, address, credit card, and birth date. To verify your a human.[1]

[1] and we store all of this in clear text files on our server.[2]

[2] which was written using [insert new hipster language] by some guy who's been programming for 3 weeks.[3]

[3] but we promise not use your data to mine the shit out of you and sell it to advertisers.[4]

[4] jk

3
threatofrain 1 day ago 2 replies      
"[145 points] Node.js + ASM.js + Angular.js + Coffee Script: how I built my static website"
4
calbear81 1 day ago 2 replies      
[213 points] Sleeping considered harmful - Why I stopped sleeping

I liked this one.

5
sergiotapia 1 day ago 0 replies      
It's refreshing to see what people post without fear of retaliation or identification on the web.

Gold!

"Ask HN: Why won't VCs invest in our dating app, and why is it because we're women founders?"

"[1583] We taught 13 women from Sierra Leone node.js"

6
twerkmonsta 1 day ago 3 replies      
Most of these are incredibly accurate parodies of HN. As someone who reads HN every day, these are making me cry laughing.
7
fao_ 1 day ago 1 reply      
These are great; my favourites are:

> How I rewrote Bash in javascript.

> I decided to re-implement Javascript in Javascript. It failed. Here is my story.

> [450 points] Why I have private Github repos at my startup but everyone else should give away their software for free.

It's so true ;~;

8
Cyph0n 1 day ago 0 replies      
Uninformed opinion I'll try to pass off as insightful and fact-centric by using footnotes [1][2][3].

[1] theatlantic.com

[2] theverge.com

[3] blog.tumblr.com

9
Koahku 1 day ago 0 replies      
10
bichiliad 1 day ago 1 reply      
Someone mentioned that they hated the lack of humor in HN's comments section, which I tend to agree with.
11
jmottz 1 day ago 0 replies      
My fav: Why I rewrote Go in node.js in Java to play tic tac toe
12
anon_adderlan 1 day ago 0 replies      
I started frequenting HN because I kept encountering these kinds of problems everywhere else. Certain subjects just have more gravity as it were, and when present will inevitably take up all the oxygen in a room. The only way I know to manage it is to isolate it in sub-fora, and be very strict about containment.

An important difference between HN and other forums I frequent(ed), however, is that instead of taking offense and going on the defensive when 'attacked' by 4chan, they recognize the joke and find it funny. That alone puts HN lightyears ahead of those other organizations.

And regardless of why, it's exactly this kind of self-awareness and identity that enables people to discuss ideas without feeling threatened by them, something which has held back both social and scientific progress in the past.

13
c2the3rd 1 day ago 1 reply      
> [dead] I'm Terry Davis and I created TempleOS

How many people know who Terry Davis is?

14
djent 1 day ago 0 replies      
Even after many HNers read this thread and appreciate the points being made, I'm sure we'll still see the same "Site I made in unique2me.js" garbage headlines in the top feed. Hopefully what we all take away from this is that we need to better spot patterns of articles/blogspam and self-moderate those submissions.
15
ryan-c 1 day ago 0 replies      
> [999 points] Why we raised $6 billion in a series J and deferred IPO

I wouldn't be surprised to see this in a few years.

16
jeffbush 1 day ago 0 replies      
"Edit: why all these downvotes?"
17
rakoo 1 day ago 0 replies      
> Shitty layout from 2003

I actually like this layout. It's fast and easy to read, renders OK on mobile, and is lightweight. I'm grateful that the maintainers didn't switch to an over-the-top look-at-my-framework.js thing just to make it look modern to the detriment of usability.

18
eranation 1 day ago 2 replies      
I liked this one

> Anonymous 06/26/15(Fri)17:51:52 No.48697104 [600 points] [meta] 4chan technology board satires hacker news, hilarious.

19
marcus_holmes 1 day ago 1 reply      
SJW represent :)

I know people that don't read HN because it's too virulently sexist, so having 4chan see it as too SJW is interesting.

Will we end up with two "social justice" realities, like we have with vaccination, creationism/evolution and climate change, where it's entirely possible to spend your entire browsing time on sites that agree with your opinions on everything?

20
underwater 1 day ago 3 replies      
So I'm out of the loop. Are complaints about social justice warriors just a modern twist on "I'm not racist, but..."?
21
task_queue 1 day ago 1 reply      
This community is a joke on every site that isn't HN. And yet, I post.
22
yellowapple 1 day ago 0 replies      
The best humor is that which is based upon reality.

And goddamn is this hilarious.

23
logicrime 1 day ago 5 replies      
I've been reading HN for 4+ years now, and I think that practically every post in that thread is spot-on. HN has turned into the very thing that I thought the guidelines were designed to prevent. The internet doesn't need another reddit; it's bad enough as it is.

HN has become host to feminist shilling and corporate endorsements, on top of the already flawed content model that encourages disengagement to the point where people are just reposting headlines and treating HN as a comments section for the article itself.

Either way, there's something to be said for constructive criticism like this, and HN can potentially learn from this.

It won't. But it could.

24
estrabd 1 day ago 0 replies      
Nailed it.
25
atorralb 1 day ago 1 reply      
my fav: J(ew) Combinator
26
ryandvm 1 day ago 1 reply      
I'll be honest - the "Ask PG" comment was kinda spot-on.
27
confiscate 1 day ago 0 replies      
haha 4chan has real hackers man
28
boomskats 1 day ago 0 replies      
Well this is a bit meta...
29
johnsberd 1 day ago 0 replies      
There's a comment on there parodying this exact statement. Not that you're wrong, just that this place is pretty predictable in what we say and post.
Killing Off Wasabi fogcreek.com
308 points by GarethX  3 days ago   297 comments top 30
1
tptacek 2 days ago 8 replies      
The author has graciously posted _Wasabi, The ??? Parts_, the internal book about Wasabi mentioned in the blog, on their website:

http://jacob.jkrall.net/wasabi-the-parts/index.html

Having now read it, I've come to the conclusion that the blog gives the wrong impression about the implications of having a custom language. Readers of the blog post come away with the idea that Wasabi was so full of "o.O" that someone was moved to write a book about that. In reality, the book is simply documentation of the language features, with callouts for weird interactions between VB-isms, ASP.NET-isms, and their language.

You should definitely read the "Brief And Highly Inaccurate History Of Wasabi" that leads the document off. It's actually very easy now to see how they ended up with Wasabi:

1. The ASP->PHP conversion was extremely low-hanging fruit (the conversion involved almost no logic).

2. Postprocessing ASP meant PHP always lagged, so they started generating ASP from the same processor.

3. Now that all their FogBugz code hits the preprocessor, it makes sense to add convenience functions to it.

4. Microsoft deprecates ASP. FogBugz needs to target ASP.NET. They can manually port, or upgrade the preprocessor to do that for them. They choose the latter option: now they have their own language.

It's step (3) where they irrevocably commit themselves to a new language. They want things like type inference and nicer loops and some of the kinds of things every Lisp programmer automatically reaches for macros to get. They have this preprocessor. So it's easy to add those things. Now they're not an ASP application anymore.

Quick rant: if this had been a Lisp project, and they'd accomplished this stuff by writing macros, we'd be talking this up as a case study for why Lisp is awesome. But instead because they started from unequivocally terrible languages and added the features with parsers and syntax trees and codegen, the whole project is heresy. Respectfully, I call bullshit.
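
To make step (2) concrete, here's a toy sketch (my own, in Python; the 'greet' convenience function and the rewrite rules are invented and have nothing to do with the real Wasabi) of emitting two targets from one source:

  import re

  # Hypothetical single-source, two-target preprocessor, in miniature.
  SOURCE = 'greet("world")'

  def to_php(src):
      # Rewrite greet(x) into a PHP echo statement.
      return re.sub(r'greet\((.+)\)', r'echo "Hello, " . \1;', src)

  def to_asp(src):
      # Rewrite greet(x) into a classic-ASP Response.Write call.
      return re.sub(r'greet\((.+)\)', r'Response.Write "Hello, " & \1', src)

  print(to_php(SOURCE))   # echo "Hello, " . "world";
  print(to_asp(SOURCE))   # Response.Write "Hello, " & "world"

Once every source file already flows through a transform like this, adding one more rewrite rule is nearly free, which is exactly how a preprocessor quietly becomes a language.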

2
Udo 2 days ago 4 replies      
> Building an in-house compiler rarely makes sense.

I disagree. First of all, Wasabi solved a real problem which doesn't exist anymore: customers had limited platform support available and Fog Creek needed to increase the surface area of their product to cover as many of the disparate platforms as possible. Today, if all else fails, people can just fire up an arbitrarily configured VM or container. There is much less pressure to, for example, make something that runs on both ASP.NET and PHP. We are now in the fortunate position to pick just one and go for it.

Second, experimenting with language design should not be reserved for theoreticians and gurus. It should be a viable option for normal CS people in normal companies. And for what it's worth, Wasabi might have become a noteworthy language outside Fog Creek. There was no way to know at the time. In hindsight, it didn't, but very few people have the luxury of designing a language which they know upfront will be huge. For example, Erlang started out being an internal tool at just one company, designed to solve a specific set of problems. Had they decided that doing their own platform was doomed to fail, the world would be poorer for it today.

3
zak_mc_kracken 3 days ago 10 replies      
It's always struck me as extremely bizarre that a company that regularly advertises that it's at the bleeding edge of software engineering practices (see Spolsky's numerous blog posts on the topic) made such a colossal error as writing their own language, and that it took them a decade to realize this mistake.

I also find this kind of phrasing weird:

> The people who wrote the original Wasabi compiler moved on for one reason or another. Some married partners who lived elsewhere; others went over to work on other products from Fog Creek.

It's like the author of this article goes out of their way to avoid saying that some people left the company, period. It also wouldn't surprise me if some of these defections were caused by Wasabi itself. As a software engineer, you quickly start wondering how wise it is to spend years learning a language that will be of no use once you leave your current company (yet another reason why rolling your own language as a critical part of your product is a terrible idea).

4
adnam 3 days ago 3 replies      
Worth recalling what DHH had to say about Wasabi in 2006:

http://david.heinemeierhansson.com/arc/2006_09.html

Scroll down to the part "Fear, Uncertain, and Doubt by Joel Spolsky" from September 01 (permalink 404s)

5
Marazan 3 days ago 2 replies      
The killer lines from the blog post are that they are killing off Wasabi for exactly the reasons everyone said would make it a bad idea:

1) Maintenance nightmare
2) No one likes programming in a proprietary language, as it dead-ends your career
3) Company leavers take vast amounts of knowledge away with them, and it's impossible to hire in to replace that knowledge

6
scotch_drinker 2 days ago 1 reply      
Wasabi was technical debt. Like all forms of debt, there is productive and unproductive technical debt. Productive technical debt actually gives some return on that debt; unproductive technical debt doesn't. Given that Wasabi lasted 10 years (and the company seems to have made a bunch of money because of it), and that Wasabi gave them the ability to adjust to market factors, I'd say the payoff from this technical debt was highly justified.

All technical debt decisions should be made based on what the business hopes to get from the debt. Considerations of the alternatives, of when the debt is retired, etc. should all play a part. To globally say "this is a terrible idea", like so many in this thread are doing, totally ignores these factors in favor of an "it's bad computer science" argument, and thus misses the point of technical debt in the first place.

7
_pmf_ 2 days ago 1 reply      
It's strange that the HN crowd, consisting mostly of developers who work for companies that exist on average for 16 months, has the audacity of calling Wasabi, a tool that has been in use for close to 15 years, a failure.
8
brlewis 2 days ago 1 reply      
> We hadn't open-sourced it, so this meant any investment had to be done by us at the expense of our main revenue-generating products. While we were busy working on exciting new things, Wasabi stagnated. It was a huge dependency that required a full-time developer, not cheap for a company of our size.

The way I read this article, creating Wasabi a decade ago was not a mistake, given what they were doing and what was available at the time. Not open-sourcing Wasabi was a mistake, though.

9
stevoski 3 days ago 2 replies      
Why I'd never develop a Wasabi in my company:

When I learn a new, perhaps hyped-up computer language, I soon run into difficulties, no matter what the merits of the language. The difficulty is a lack of tooling: e.g., no debugger (or only a rudimentary one), no static analysis, no refactoring, no advanced code browser.

If the language is successful, these things come with time.

When you develop an in-house language, you'll never get the advanced tooling that makes for a great software development experience. This, for me, was why I was surprised by Joel Spolsky's announcement of an in-house language.

(Although, to be fair, these things didn't really exist for VBScript or for PHP at the time Wasabi came to be.)

10
protomyth 3 days ago 1 reply      
I do wonder what the difference would have been had it been open-sourced early on. I would think a tool with a VBScript-like syntax that could deploy to PHP would have been a popular item with enterprise developers, for the same reasons it appealed to Fog Creek.
11
michaelbuckbee 3 days ago 0 replies      
It's not mentioned in the article, but I suspect another factor was the growth of their SaaS (aka "FogBugz on Demand") offering, which obviously severely undercuts the value of Wasabi.

It's listed in Google search results, but just clicking around the FogBugz site I can't even find the page/pricing for an on-premise installation.

12
mwcampbell 3 days ago 2 replies      
A lot of people criticize Fog Creek for writing their own compiler, but I think it's a good example of working smarter rather than harder to solve a problem they had at the time. I think that companies which apply sheer brute force to rewrite the same app in Objective-C and Java to target the popular mobile platforms could learn from this.

I wonder to what extent the generated C# code depends on C#'s dynamic typing option. I ask because the original VBScript was certainly dynamically typed. So by the end, to what extent was Wasabi statically typed, through explicit type declarations or type inference? And how much did the compiler have to rely on naming conventions such as Hungarian notation?

13
chkuendig 2 days ago 0 replies      
Related from Jeff Atwood: http://blog.codinghorror.com/has-joel-spolsky-jumped-the-sha...

It's funny how they teamed up 2 years later to build stack overflow (and again used an MS stack)

14
tptacek 2 days ago 2 replies      
I'd really, really like to read _Wasabi, The ??? Parts_.
15
overgard 2 days ago 1 reply      
One thing I've always wondered (and I don't intend this as a criticism per se), but why didn't they open source it? If it had a following outside of Fog Creek it might not have been an inevitable dead end.

While I hesitate to endorse a language based on VBScript, it seems like the extensions they added to it were pretty nice. I mean, if you're inclined to use a VBScript-style language, Wasabi wasn't horrible, and given Spolsky's following and Fog Creek's mindshare, it seems at least possible it could have become a useful thing rather than a legacy thing to be replaced. At the very least, it's probably not worse than PHP. (Granted: my opinion of PHP is very low.) Maybe it's for the best, though; the world is probably better off without new Wasabi projects.

16
Locke1689 2 days ago 1 reply      
Super excited to see what you did with Roslyn! We'd love feedback on things we could do to improve the experience!
17
cjensen 2 days ago 1 reply      
Replacing Wasabi code with the prettified output of Wasabi seems like a terrible idea to me. Is the result similar enough to the original source that it will still make sense? Do comments get preserved?

Programmers just love to change good working code into the new style or new language. One has to always view the impulse to change skeptically.

18
noir_lord 3 days ago 1 reply      
They tried something interesting, it didn't work and they replaced it later.

When this has been mentioned previously there is a strong "they did a crazy thing with Wasabi" sentiment, but progress depends on doing crazy things that just might work sometimes.

19
pablosanta 2 days ago 1 reply      
Has Joel jumped the shark?

http://blog.codinghorror.com/has-joel-spolsky-jumped-the-sha...

Related blog post from a couple of years ago.

20
userbinator 3 days ago 0 replies      
This seems like the logical conclusion of the inner-platform effect:

https://en.wikipedia.org/wiki/Inner-platform_effect

21
bootload 2 days ago 0 replies      
"the fact remains that we had a Classic ASP application running on Linux and we didnt have to pay anybody for the privilege." [0]

Commercial reasons dictated why Wasabi was created and retired. Bravo Joel.

[0] http://jacob.jkrall.net/wasabi-the-parts/introduction.html

22
narrator 2 days ago 0 replies      
You could have just started on Java. I've worked on very large, 15-year-old Java enterprise code bases that are doing just fine. That language is so pathetically maintainable, and the backward compatibility between JVM releases is very good. Microsoft seems like a moving target with how often they deprecate things.
23
jim_greco 3 days ago 6 replies      
A company should never build a compiler, database, or operating system if it is not their primary business.
24
10098 2 days ago 1 reply      
I wonder how much time and money they would save if they had just bitten the bullet and rewritten the entire thing in PHP in the first place instead of doing what they did.
25
serve_yay 2 days ago 0 replies      
I guess this worked for them, for some value of the word "worked", but it always seemed like a bad idea to me.
26
tempodox 3 days ago 0 replies      
> Building an in-house compiler rarely makes sense.

Dang, it's significantly less entertaining but I'm afraid he's right.

27
davelnewton 2 days ago 0 replies      
Isn't this exactly what everybody said when they announced Wasabi in the first place?
28
robotnoises 3 days ago 4 replies      
This is off-topic, but how is it that this is the top story on HN right now with zero discussion (assuming that by the time I click "add comment" I am the first to comment).

Is it dumb to assume that it's normal for upvotes and comments to increase at a similar rate?

29
Lazare 2 days ago 0 replies      
It's fascinating, because writing your own custom language is one of those things that everyone knows is a bad idea.

It's one of those things that people do because it seems like the path of least resistance (and in the short run, is), but it inevitably snowballs into a pit of technical debt. Spolsky knew this quite well (he'd written eloquently on the subject).

...and yet he still did it. His defence was that it was the easiest option in the short term, and he was probably right, but it doesn't matter. People only do stupid stuff that seems smart; saying "this stupid thing seems smart!" is only a defence if you have no idea that it's actually fundamentally stupid. Of all the people in the world, Spolsky is one of the least able to mount this defence.

Contemporaneously with his decision to go all in on Wasabi, he wrote a scathing condemnation of Ruby for being slow, unserious, obscure; he suggested that a serious company shouldn't opt for Ruby because it was risky, and that choosing it would put you at risk of getting fired.

Was he right? In 2006, maybe? I mean, he turned out to be wrong, but I don't think it was entirely obvious that Ruby was a serious choice 15 years ago. Of course, he wasn't writing 15 years ago, but even nine years ago a very conservative, safe approach to choosing a technical stack very possibly did militate against selecting Ruby, for all the reasons he outlined. But those arguments applied twice as hard to Wasabi. You don't get to argue that there "just isn't a lot of experience in the world building big mission critical web systems in Ruby" (and hence you shouldn't use Ruby), and then turn around and use Wasabi for your big mission critical web system.

Of all the people in the world, Spolsky probably had the best understanding of why Wasabi was a stupid, short sighted decision. He did it anyway. And it was stupid and short sighted. Rarely is someone so right and so wrong about the same thing at once.

(And yes, Fogcreek is still around, and so is FogBugz. But I don't buy for a moment that Wasabi was actually a good choice. They survived it, but they didn't benefit from it.)

Edit: Spolsky has written too much about why writing something like Wasabi is a terrible idea to link it all. Besides, a lot of it has been linked in other comments. But I can't express strongly enough that my anti-Wasabi position simply repeats what the guy who signed off on developing it and using it in production wrote himself. ...then he decided to write a new language because apparently Ruby was too slow to possibly use to generate a graph, and there was literally no alternative to using Ruby for graph generation other than writing your own compile-to-VBScript/PHP language. Words fail.

30
mixmastamyk 2 days ago 3 replies      
Wow, rabbit hole is right... The first mistake was writing a web app in VBScript (not VB)? I'm not even sure that's possible; it's such an awful, limited language. It probably required components written in C++? The developer who started that should have been fired as incompetent. These are the "B-players in a hurry" you should get rid of, who can cost your company millions.

Then instead of cutting their losses, they doubled and tripled down on it until they had their own language and sophisticated tools around it. Around this time, Django and Rails had already been started, and several decent cross-platform web frameworks, such as CherryPy, were years old. Even PHP would have been a better choice. One of these could have been phased in, parts at a time, to minimize disruption.

Did I get this right? Because there are so many WTFs that I must have missed something.

The Unix Philosophy catb.org
347 points by dorsatum  3 days ago   249 comments top 13
1
fauigerzigerk 3 days ago 10 replies      
I keep wondering about what seems to be the most important component of Unix philosophy: write many small programs that do one thing well and interface using text streams!

Yes, modularity is important. However, in some cases, this philosophy has resulted in the "tangled mess held together by duct tape" kind of systems architecture that no one dares to touch for fear of breaking things.

I think Unix philosophy is struggling with a fundamental dilemma:

On one hand, creating systems from programs written by different people requires stronger formal guarantees in order to make interfaces more reliable, stronger guarantees than interfaces within one large program written by one person or a small team would require.

On the other hand, creating systems from programs written by different people requires more flexible interfaces that can deal with versioning, backward and forward compatibility, etc., something that is extremely difficult to do across programming languages without heaping on massive complexity (CORBA, WS-deathstar, ...)

I think HTTP has shown that it can be done. But HTTP is also quite heavyweight. It doesn't exactly favor very small programs. Handling HTTP error codes is not something you'd want to do on every other function call.

In any event, I think Unix philosophy is a good place to start but needs a refresh in light of a couple of decades worth of experience.
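
To make the text-stream interface concrete, here is a minimal sketch (mine, in Python) of a line-oriented filter; because its interface is just lines of text, it composes with sort, uniq, grep, or anything else in a pipeline:

  import sys

  # A minimal Unix-style filter: read lines on stdin, write lines on stdout.
  # It does one small thing (uppercasing); sorting, counting and paging are
  # left to other programs, e.g.: cat access.log | python3 upper.py | sort
  for line in sys.stdin:
      sys.stdout.write(line.upper())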

2
alfiedotwtf 3 days ago 6 replies      
People will hang shit on ESR, but The Art of Unix Programming is one of my all time favourite books. If you haven't read it, even if you're not a *nix developer, do yourself a favour and just skim the table of contents... something may pique your interest and you may learn a thing or two.

It's also free online:

 http://www.catb.org/esr/writings/taoup/html/

3
zvrba 3 days ago 2 replies      
> The Unix philosophy originated with Ken Thompson's early meditations on how to design a small but capable operating system with a clean service interface.

Except that a program-to-program interface based on formatting and parsing text is anything but clean.

4
McElroy 2 days ago 1 reply      
At my place of work, we have a client relationship with a company like ours in a neighbouring country. They develop software that is of much use to us, so we pay them a license for it. They are nice people but, god damnit, there is one thing they just never got right. They only support one platform, one that is certified UNIX, yet their CLI tools and scripts exit with 0 no matter how severe the error. I'm the guy who writes some of the smaller tools and scripts on our side integrating with their software (a sketch of the discipline I wish they followed appears after this comment), so you probably understand why I get a bit upset about this at times. Still, I enjoy my work, and as I said they are nice people; they are also a quite small team, so I don't want to burden them with these concerns when there are other things our company needs from them more.

Anyway, I've been with my company for a few years and soon my contract expires and I'm going to study the field our company is in and get a degree in that, then I'm going to apply for a position doing our core business. I would still like to be involved with the software my current position is touching on, though, if possible. (Our company has 1000+ employees and several different sub-sections, so even though I might get back into the company, it's not a given that I'll be working with the group of people I am now even though I'd like to.)

I also sometimes think that if possible, perhaps I'd like to work for that other company in our neighbouring country for a few years and be on the dev team of the software. After all, I have experience from the user side which the dev team has not and the dev team has seen some of the tools I've made and a couple of the guys seemed to think that some of that stuff was pretty decent.
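
As promised above, a sketch of the exit-status discipline in question (mine, in Python; the file-reading tool body and the deliberately failing child command are invented for illustration):

  import subprocess
  import sys

  def tool_main(path):
      """Tool side: report failure through the exit status, not just a log line."""
      try:
          with open(path, "rb") as f:
              data = f.read()
      except OSError as err:
          print(f"error: {err}", file=sys.stderr)
          return 1                      # non-zero exit = something went wrong
      print(len(data))
      return 0

  if __name__ == "__main__":
      # Caller side: spawn a child that exits with status 3, then actually
      # check the status instead of assuming success.
      proc = subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"])
      if proc.returncode != 0:
          print(f"child failed with status {proc.returncode}", file=sys.stderr)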

5
jokoon 3 days ago 10 replies      
This should be taught in any programming class.

> Rule of Diversity: Distrust all claims for one true way.

Although, does the Python rule "There should be one-- and preferably only one --obvious way to do it." contradict this one?

> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

I wonder what Rob Pike has to say about OOP or Java; I wish I could listen to it.

Also, it says that text is a good representation of data, but I think he meant as an intermediary. I don't think XML or HTML are really good choices when you see all the CPU cycles spent parsing them.

> Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.

I prefer this rule to the "no premature optimization" rule.

6
sytse 3 days ago 2 replies      
When making a microservice application you have to choose between a text (JSON) or binary (Thrift) interface. You could argue that the Unix way is to make it JSON until it becomes a performance problem.
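
To see the trade-off in miniature (my sketch; plain struct packing stands in for Thrift here), compare the same message in both forms:

  import json
  import struct

  # The same tiny message both ways: self-describing text vs packed binary.
  msg = {"user_id": 42, "score": 3.14}

  as_text = json.dumps(msg)                  # field names travel with the data
  as_binary = struct.pack("<Id", 42, 3.14)   # 12 bytes; the schema lives in code

  print(len(as_text.encode()), as_text)
  print(len(as_binary), as_binary.hex())

  # Decoding the binary form requires agreeing on the layout out of band:
  user_id, score = struct.unpack("<Id", as_binary)
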
7
oldpond 2 days ago 0 replies      
Can't upvote this enough. My takeaway from this gem is that we need to keep thinking about our craft even as we evolve our technologies, and we need to have the courage to stand up for it. The hardest thing in the world to see is your own point of view because you have to step outside of it to see it.
8
coldtea 3 days ago 3 replies      
>Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

Which means that no program is more un-UNIX-y than Emacs...

9
thinkmoore 2 days ago 1 reply      
The Unix Hater's Handbook (https://en.wikipedia.org/wiki/The_Unix-Haters_Handbook) has a few sections on some of the ills of the Unix philosophy.
10
jkot 3 days ago 2 replies      
How do debacles such as microkernels and the X Server fit into the Unix philosophy?
11
hitlin37 3 days ago 0 replies      
In case of mankind's extinction, we should preserve the above text and pass it on to the next race.
12
el33th4xx0r 3 days ago 4 replies      
I wonder how relevant it is nowadays. There are many good things we can't have, like systemd, if we have to follow the Unix philosophy strictly.
13
digi_owl 3 days ago 1 reply      
The rule of separation makes me wonder if the long-term legacy of Wayland will be as a hardware-accelerated backend for X.
How side projects saved our startup crew.co
317 points by rmason  1 day ago   41 comments top 23
1
awwstn 1 day ago 3 replies      
Interesting to note that while Unsplash's launch on HN led to it becoming a staple and a huge success, the comments from the HN community were overwhelmingly negative: https://news.ycombinator.com/item?id=5794083

As a happy subscriber to Unsplash since it first launched here, I'm glad that the team ignored the comments and kept making this.

2
candu 1 day ago 5 replies      
One issue I have with posts like this, however awesome/uplifting/well-intentioned they may be, is that they extrapolate mostly anecdotal evidence into general advice.

Is this an engaging and inspiring story? Yes, and it's great that people feel free to share their stories (successes and otherwise) here.

Having read it, do I have a better idea how likely this strategy is to work for any given person/company? No, and I don't know that anything short of an exhaustive longitudinal study would help there. (There's some mention of studies on creative hobbies, but it's a bit of a leap from there to repeatable ROI.)

3
zem 1 day ago 1 reply      
"side projects as marketing" has a rather unfortunate acronym (:
4
swalsh 1 day ago 0 replies      
I think the "side project as marketing" might only work if your side project is a good vertical for your product. People coming to a site that helps give referrals to designers (i think that's what Crew is) is definitely the same people who would be interested in UnSplashed.

If i'm working on a business for pharmacists, i'm not sure my side project playing around with neural networks is going to get me the right eyeballs.

5
amelius 1 day ago 1 reply      
This is too simple to be of interest. Only a few companies can have side projects like that, and with that amount of success.

Honestly, HN sometimes (but not always) feels like it is made up of a bunch of gold-diggers, clinging to the hope of one day making a big breakthrough, without proportional effort. It has a very shallow feel to it.

6
imh 1 day ago 0 replies      
It's very meta to realize that this blog post is marketing too. Great execution of the "useful marketing" idea.
7
simonswords82 1 day ago 0 replies      
We do similar things for http://www.staffsquared.com.

For example:

- "What type of manager are you" quiz: http://www.staffsquared.com/what-type-of-manager-are-you-qui...

- Timesheet calculator: http://www.staffsquared.com/timesheet-calculator/

- Maternity calculator: http://www.staffsquared.com/calculator/maternitycalulator/

...and much much more.

Generally these "side projects" take a few days to put together from concept through to launch. They're very minimal overhead, and they drive good numbers that convert to trials to the site.

The best side projects don't just link back to the website you're actually selling; they somehow draw users in. A good example of this is http://invoiceomatic.io/ by FreeAgent. They grab you by giving you the opportunity to create an invoice; next thing you know, you're knee-deep in creating a FreeAgent account... it works.

8
personjerry 1 day ago 0 replies      
For Unsplash, could we get soft links (i.e. https://unsplash.com/photo/1 would always link to photo 1 of the set of 10 for that week)? Then I could set up a script to update my wallpaper weekly, because these are gorgeous! :)
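
A rough sketch of such a script (mine; the /photo/1 soft link is the parent's proposal, not a real Unsplash endpoint, and the destination path is just an example), meant to be run weekly from cron:

  import pathlib
  import urllib.request

  URL = "https://unsplash.com/photo/1"          # hypothetical stable soft link
  DEST = pathlib.Path.home() / "wallpaper.jpg"  # example destination

  with urllib.request.urlopen(URL) as resp:
      DEST.write_bytes(resp.read())
  # A desktop-specific command would then point the wallpaper at DEST.
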
9
hrayr 1 day ago 0 replies      
This is a very interesting marketing (unmarketing?) strategy. I'm in a similar spot to where Crew were when they posted that project; I even remember seeing it on HN at the time.

Andrew recently posted an interview [1] with the founder of betalist.com, which was also born out of desperation and as a side project. Marc talks about his betalist experiment and the impact it had [2].

I would love to see other examples and write-ups about this. Was it accidental or strategic? I'm the sole developer of our product right now, but we're also struggling with marketing at the moment. How much does it make sense for me to put the effort into such side projects?

[1] http://mixergy.com/interviews/marc-kohlbrugge-betalist/
[2] https://medium.com/beta-list/how-i-tricked-techcrunch-into-w...

10
mozumder 1 day ago 1 reply      
> The best marketing is when you don't know it's marketing

There's an entire industry that revolves around this idea.

We call the people that work in that industry "publicists".

11
ljoshua 1 day ago 0 replies      
Thank you for this post: I've been feeling especially down on myself today since I've a lot of learning to do when it comes to marketing and managing side projects, but the ideas in the post were gold. Kudos!
12
dools 1 day ago 0 replies      
This is a very common SEO tactic, most often referred to as "tools". See quicksprout.com for a high-profile example.

cueyoutube.com has been a good source of SEO juice for workingsoftware.com.au, but beware tools that have a maintenance overhead: YouTube updated their API weeks ago and I haven't had time to fix it.

13
ivan_ah 1 day ago 0 replies      
Very good insights about marketing material that is "useful".

The idea reminds me of Vaynerchuk's "Jab, jab, right hook" strategy: http://www.forbes.com/sites/danschawbel/2013/10/11/13-memora... (sorry for the popups)

14
Jugurtha 1 day ago 0 replies      
This is nice. I experienced something similar. I live in Algeria and the banking system is deplorable. I wanted to be able to buy things online and needed a debit card. It took forever to find a bank that offered one (sometimes the bank employees themselves didn't know their own bank offered such a service).

Anyway, after all the fuss and after gathering all the necessary documents to open an account and get a card, I thought that a lot of people were in a similar situation.

I created a wordpress.com blog and listed all the necessary documents. I created the PDF the bank required (but didn't even bother to mention until you went there, so an extra trip), with fillable fields and all, and uploaded it there. The whole thing. It was so frustrating to me that I went overboard and listed other options, comparing other card providers in the specific context of the country and how each one could be used differently.

After that, I got proper hosting and redirected the .wordpress there. There were about 300 people daily on the site. Not much, but that's 100k people who read a very long post. The post alone had more than 700 comments (I changed to Disqus), and I replied to 99.99% of them; the rest were spam. Soon, other readers were answering the questions of "new" readers. They also sent me different documents to attach to the article. The site was linked to from a whole bunch of geek sites in the country. Sites to buy cars linked to it, too (their users were interested in buying car accessories).

Often, people I knew would read the article, then read the author's name and laugh because they knew me personally. One reader contacted me, said good things, and asked if I was related to an author/gynecology professor (my uncle); he said my uncle had saved his wife from a couple of years of prison time (she was to be jailed for a medical mistake, my uncle apparently wrote a report showing it wasn't one, and the investigation was reopened). Others said it would be cool to meet IRL for coffee, etc. Others said I should monetize it.

The site ranked #1 on Google for "MasterCard Algérie" (it's not anymore, as I was too busy to renew hosting, etc., but the wordpress.com blog ranks 7th).

It all came about because I was too frustrated by the paperwork and the 18th-century style in which banks here do business.

The point is: it might not seem like a big thing (I mean, it's only a darn card, right?), but you never know how bad the itch is for someone else. A good indication is how bad it is for you, though. It doesn't matter if it's not revolutionary, only that it needs violent scratching.

Good luck with your projects.

15
hardwaresofton 1 day ago 1 reply      
I've been using Unsplash for a while; I was amazed the first time I used it, and am still amazed today. I enjoyed learning a little more about how they started.

The content is sometimes a little repetitive (just how many shots of amazing, beautiful scenery could one need?), but it has become an absolute go-to for me.

The article is old, but I still like hearing about Unsplash.

16
melle 1 day ago 0 replies      
This is some great advice, nice post!

Too bad the article only focuses on the success stories. I for one would be really interested to know how many failed/abandoned side projects they created and how they relate to the successful ones.

17
ddrum001 23 hours ago 0 replies      
Very interesting post and seems to be the best way forward for "marketing" and creating the right culture at a start-up.
18
fruitfulfrank 1 day ago 0 replies      
Candu: excuses, excuses. Time to just get out there and do it. Try something; you might strike gold the first time like Crew did, but probably not. Invest little and expect little back; that's the lesson here. If it works, then shine a light on it.
19
joslin01 1 day ago 0 replies      
Oh wow! This was you who created it. I had been looking around for nice stock images to use for a product and stumbled upon "Unsplash". I was like sweet! Awesome work and thank you again!
20
avinassh 1 day ago 0 replies      
btw, GitHub was also a side project once. [0]

[0] - https://news.ycombinator.com/item?id=1772357

21
philip1209 1 day ago 0 replies      
This isn't clear on the page, but if you look at the metadata, this article is from October 2014.
22
Disruptive_Dave 1 day ago 0 replies      
Also love what HubSpot did with Sidekick.
23
hackuser 1 day ago 0 replies      
I'm sure this breaks some HN rule, but consider visiting this discussion:

The Nigerian Teenagers Who Built Crocodile Browser
https://news.ycombinator.com/item?id=9787010

One of the teens found his/her way to HN, but all that's in the thread is nitpicking about their website. It would be great if he/she got some engagement and encouragement.

Microsoft quietly pushes 18 new trusted root certificates hexatomium.github.io
280 points by svenfaw  1 day ago   130 comments top 14
1
Mojah 1 day ago 3 replies      
I'm glad someone noticed, and at the same time it's a shame it took a month before the news actually came out.

I think this demonstrates two major problems with SSL certificates today:

1. Nobody checks which root certificates are currently trusted on your machine(s).

2. Our software vendors can push new Root Certificates in automated updates without anyone knowing about it.

More content on this rant: https://ma.ttias.be/the-broken-state-of-trust-in-root-certif...

2
TazeTSchnitzel 1 day ago 0 replies      
Hmm. Are these actually new? Did the OP look at Microsoft's full certificate store and notice some additions, or did they look at their local machine? Because in the latter case, Windows does not include a full certificate store. Rather, it fetches them on demand.

EDIT: I googled the first two. "GDCA TrustAUTH R5 ROOT" and "S-Trust Universal Root CA" are both new certificates (~November 2014). The latter is in Firefox already, and is a new SHA-256 root certificate to eventually replace a SHA-1 certificate for an existing CA.

3
0x0 1 day ago 4 replies      
The CA system is so broken. Letting a vendor decide whom you should trust to issue intermediate certificates, which your software will then automatically trust not to issue illegitimate certificates, doesn't make the system particularly trustworthy, especially seeing as they are still adding new root CAs for entities with unknown agendas (governmental agencies, etc.).

Maybe the system could be changed to one where domains can only be signed by the DNS registrar, or something.

It's pretty crazy that any CA can issue a certificate for any domain. And paying for more validation doesn't help you at all, since it won't keep the "bad guys" from getting an illegitimate one from a cheaper/lazier CA.

4
userbinator 1 day ago 2 replies      
It's good that someone noticed... if you have automatic updates enabled, you have implicitly consented to giving Microsoft what is essentially a root account on your system. They can modify it to both fix - and break - things just as easily.
5
pdkl95 1 day ago 3 replies      
We need dynamically selectable trust.

I don't mean the simple ability exposed in most browsers to add/remove certs. That still assumes one set of trust that is used globally, which is completely incorrect.

Maybe I don't trust $COUNTRY to handle their root certificate for most uses. Currently we handle that case by removing the cert completely. Trust, however, is not a simple boolean value, and maybe I do trust that certificate for $COUNTRY's official government pages. I should be able to specify that I trust some certificate for some domain (or other, non-domain based use!), but not for others.

As another example, consider a local Web of Trust. Whenever the Web of Trust is brought up, people complain about the difficulty of key exchange. Well yes, that's a difficult problem, but there is no reason that it has to be solved for all use cases before anybody starts using it. Maybe a circle of (usually physically local) friends want to have secure communications. They can share a key in person easily, and so it should be easy to give access to a private forum by simply sharing a key/cert on a USB disk.

We can currently approximate those cases, but it is not well supported, and is certainly not something that most users would be expected to be able to do. We can fix some of that with a better UI, but I'm suggesting a far more fundamental change, because actually solving problems like key sharing will not be easy, and I suspect they will only be solved once we have infrastructure in place. HTTP was successful because it did not require that everybody implement the full, fairly complex specification. Instead, we had a fluid, extensible protocol that allowed anybody to extend it, and that allowed for the development of a wide variety of software.

The problem with traditional PKI (at least as implemented) is that it assumes we can assign an absolute trust value to anything. In reality, trust is relative, and may in fact have multiple values at the same time. Until software is designed around those realities, it will always be inflexible and insecure for any use case where the needed trust assumptions do not reflect the assumptions made by the authors of the software.

Unfortunately, I'm an old-style UNIX nerd who is fine with using GPG, and I'm not sure what the UI for a dynamic-trust system would even look like. Sigh.
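
One crude sketch of what per-use trust could look like in code (entirely hypothetical; the fingerprints and name patterns are placeholders, and real certificate validation is elided):

  from fnmatch import fnmatch

  # Hypothetical policy: a root may vouch only for the names listed next to it.
  TRUST_POLICY = {
      "sha256:aaaa...": ["*.example.gov"],       # country root: official sites only
      "sha256:bbbb...": ["forum.friends.net"],   # locally exchanged key
  }

  def is_trusted(root_fingerprint, hostname):
      """Return True only if this root may vouch for this hostname."""
      patterns = TRUST_POLICY.get(root_fingerprint, [])
      return any(fnmatch(hostname, pattern) for pattern in patterns)

  print(is_trusted("sha256:aaaa...", "portal.example.gov"))  # True
  print(is_trusted("sha256:aaaa...", "bank.example.com"))    # False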

6
mythz 1 day ago 2 replies      
Douglas Crockford has an interesting video on why the CA system is broken and how it can be fixed, in his recent "upgrading the web" talk: https://angularu.com/VideoSession/2015sf/upgrading-the-web

Essentially he's proposing side-loading a new application under a custom URL scheme, so that browsers will launch a helper app that handles web applications with the following URL format:

 web: publickey @ ipaddress / capability
The URL contains the server's ECC521 public key, so that it gets around the CA system, and clients can just encrypt requests with the server's published public key directly.

He's planning to develop the helper app based on a sandboxed Node.js and Qt application, which just uses a TCP session to communicate with the server.
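
A rough parse of that format might look like this (my sketch; the exact grammar is a guess, not Crockford's specification):

  # Parse "web: publickey @ ipaddress / capability" into its parts.
  def parse_web_url(url):
      scheme, rest = url.split(":", 1)
      assert scheme == "web"
      pubkey, rest = rest.strip().split("@", 1)
      address, _, capability = rest.partition("/")
      return {
          "publickey": pubkey.strip(),
          "ipaddress": address.strip(),
          "capability": capability.strip(),
      }

  print(parse_web_url("web: MIIBIjANBg... @ 203.0.113.7 / inbox"))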

7
caf 1 day ago 2 replies      
"RXC-R2" is certainly insufficiently verbose.

I think the author might be a bit behind the news on Tunisia, though.

8
Hello71 21 hours ago 0 replies      
Everything you Need to Know about HTTP Public Key Pinning (HPKP): http://blog.rlove.org/2015/01/public-key-pinning-hpkp.html

> A flaw in this system is that any compromised root certificate can in turn subvert the entire identity model. If I steal the Crap Authority's private key and your browser trusts their certificate, I can forge valid certificates for any website. In fact, I could execute this on a large scale, performing a man-in-the-middle (MITM) attack against every website that every user on my network visits. Indeed, this happens.

> HPKP is a draft IETF standard that implements a public key pinning mechanism via HTTP header, instructing browsers to require a whitelisted certificate for all subsequent connections to that website. This can greatly reduce the surface area for an MITM attack: Down from any root certificate to requiring a specific root, intermediate certificate, or even your exact public key.
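
For the curious, a pin is derived as base64(SHA-256(SubjectPublicKeyInfo in DER form)). A minimal sketch (mine; 'spki.der' is a placeholder file you would extract beforehand with a tool such as openssl):

  import base64
  import hashlib

  # base64(sha256(DER-encoded SubjectPublicKeyInfo)) is the pin value.
  with open("spki.der", "rb") as f:
      spki = f.read()

  pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
  header = f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000; includeSubDomains'
  print(header)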

related previous articles:

Firefox 32 Supports Public Key Pinning (188 points by jonchang 304 days ago | 100 comments): https://news.ycombinator.com/item?id=8230690

About Public Key Pinning (72 points by tptacek 43 days ago | 5 comments): https://news.ycombinator.com/item?id=9548602

Public Key Pinning Extension for HTTP (70 points by hepha1979 242 days ago | 28 comments): https://news.ycombinator.com/item?id=8520812

9
chinathrow 1 day ago 0 replies      
It's easy to call the CA system broken (and I think it really is), but other, better solutions are not that easy currently.

We want an open, yet secure web, anonymous at best. With the current setup, that is not so easily possible. Let's Encrypt might help, but even with that, there is still someone you need to beg for a signed cert.

Maybe we need to think ahead.

10
facepalm 1 day ago 0 replies      
They silently installed all the previously existing ones on my system, too. So did Apple on my Mac, and Ubuntu on my old notebook.

Maybe I should, but I am not going to individually double check every root certificate. I don't think I have the means to do so either.

11
cabirum 1 day ago 1 reply      
@svenfaw, your RCC scanner says "Exiting... [Reason: signature database appears to be out of date.]", why is that?
12
nicolas314 1 day ago 0 replies      
For information: OpenTrust (3 roots) is just the new name of the entity that was once Certplus (1 renewed cert).
13
mkramlich 13 hours ago 0 replies      
One word, or 4 letters. Understand them and you understand everything else about this issue:

MITM

14
sneak 22 hours ago 0 replies      
It's okay, most apps don't bother checking certificate validity anyway.
I Don't Believe in God, but I Believe in Lithium nytimes.com
274 points by pepys  1 day ago   105 comments top 19
1
jrapdx3 1 day ago 3 replies      
Lithium, as medication, has been a benchmark of my life, though in a different way than portrayed in the nicely written article.

When I was in school back in the '60s, I had the chance to see the healing effects of lithium before it was approved here in the US in 1970. I saw a man in a florid manic state dramatically improve in two weeks' time. It was kind of magical, and it left a lasting impression on me.

A few years later I happened to be walking in town, and a man stopped me. "I know you. You were one of those students there when I was in the hospital." Only then did I know who he was. I asked how he was doing. He said "I'm doing quite well. Lithium saved my life and I'm still taking it."

Since then I've had the responsibility of treating many people with mood disorders, and I didn't forget what I'd learned. Anyway, lithium is still a godsend for many people, but of course it really isn't a magic bullet; nothing is.

Like all medications it can produce bad effects. I've seen that happen too. Renal failure is a risk, as the article points out. Careful monitoring can prevent some bad outcomes, though not all. Doing what's best requires utmost dedication by patient and doctor to the cause of stability and quality of life.

In the words of Spinoza, "all things excellent are as difficult as they are rare." Success is possible, we just have to find the courage and strive to get there.

2
nkurz 1 day ago 2 replies      
A study in Japan has shown a sample population to be less likely to commit suicide after drinking tap water containing lithium.

Notably, there is enough lithium in the groundwater in certain areas of the US that this "study" has been happening for a long time. El Paso, Texas has high naturally occurring lithium in its groundwater, and is widely reputed to have less violence than comparable cities with less lithium in their water. I haven't read the whole thing, but remarkably, a recent paper seems to have shown this to be true, at least for suicide mortality.

Lithium in the public water supply and suicide mortality in Texas (Blüml et al., 2013)

 There is increasing evidence from ecological studies that lithium levels in drinking water are inversely associated with suicide mortality. Previous studies of this association were criticized for using inadequate statistical methods and neglecting socioeconomic confounders. This study evaluated the association between lithium levels in the public water supply and county-based suicide rates in Texas. A state-wide sample of 3123 lithium measurements in the public water supply was examined relative to suicide rates in 226 Texas counties. Linear and Poisson regression models were adjusted for socioeconomic factors in estimating the association. Lithium levels in the public water supply were negatively associated with suicide rates in most statistical analyses. The findings provide confirmatory evidence that higher lithium levels in the public drinking water are associated with lower suicide rates. 
https://www.gwern.net/docs/lithium/2013-bluml.pdf

Edit: I just realized that the Op Ed linked from the main article mentions the same evidence, although without reference to that particular paper: http://www.nytimes.com/2014/09/14/opinion/sunday/should-we-a...
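
For readers unfamiliar with the method named in the abstract, here is a minimal sketch of a Poisson regression with a confounder (mine, on entirely synthetic data; it mirrors only the shape of the analysis, not the paper's data or coefficients):

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  n = 226                                  # the paper covered 226 counties
  lithium = rng.uniform(0, 160, n)         # made-up ug/L in public water
  income = rng.normal(50, 10, n)           # one made-up socioeconomic confounder
  suicides = rng.poisson(np.exp(2.0 - 0.004 * lithium + 0.005 * income))

  X = sm.add_constant(np.column_stack([lithium, income]))
  fit = sm.GLM(suicides, X, family=sm.families.Poisson()).fit()
  print(fit.params)  # a negative lithium coefficient would match the paper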

3
phren0logy 1 day ago 6 replies      
I'm a psychiatrist, and I prescribe lithium quite a bit. It really is the gold standard for treating Bipolar I Disorder.

Aside: If you are a person who uses the word "bipolar" as a synonym for moody or indecisive, I hope reading this will help you understand what actual bipolar mania looks like.

4
the_rosentotter 1 day ago 0 replies      
Ever since reading the referenced op-ed from The New York Times, I have been curious about the effects of low dosages of lithium. If a large dose can counteract bipolar disorder, it seems reasonable that a low dosage would have at least some calming effect (which is what the NYT article claimed, with reference to some studies done on populations with naturally high occurrences of lithium in their water supply).

So about eight months ago I started taking a low dosage of lithium, in the form of drops added to my drinking water, amounting to about 2-3 mg per day, similar to the amounts in naturally occurring high-lithium drinking water. (In comparison, therapeutic doses are several hundred mg per day.)

It might just be the placebo effect, but anecdotally I do feel it has had some effect. I have always been a bit anxious, particularly socially, and I feel that has diminished over this period. However, this experiment coincides with a better exercise regimen and also with simply growing older, so it's difficult to attribute the effect (if any) 100% to the added lithium. It would be very interesting to see more studies on this.

If anyone's interested, you can buy these drops as 'trace mineral drops' from the Great Salt Lake.

5
yeahdude 1 day ago 0 replies      
There was a study in Italy of a patient who was cured of a severe form of bipolar disorder, called rapid-cycling bipolar, with "darkness therapy". They locked him in a dark room for 14 hours a night, and after a couple of months his sleep and his mood stabilized. No medication.

http://psycheducation.org/treatment/bipolar-disorder-light-a...

6
ars 1 day ago 2 replies      
Just be careful on long-term lithium. Even when taken at the proper dosage it will destroy the kidneys, eventually leading to death. This is not as well disclosed/known as it should be.
8
smsm42 15 hours ago 0 replies      
It is great that there's a way to improve the lives of these suffering people. Still, it scares me how little we know about how these drugs work (and by "we" I don't mean myself, but the summary state of human knowledge, as it appears to me), and that, at least as it looks to me, admittedly knowing very little about the subject beyond the popular press, we still rely mostly on luck and trial and error in figuring out how to mitigate mental illness. That's as if we wrote code by mostly randomly putting words together, ran it through a battery of unit tests, and saw whether something works; if some unit test passed, we'd declare that code a function implementing that test's functionality. I imagine you can get somewhere this way, but it's kind of scary that we don't have something better.
9
shirro 1 day ago 2 replies      
I wonder if things like bipolar disorder fall on a broad spectrum, and whether a lot of people might have very mild, perhaps undiagnosable mood disorders that would benefit from very low-level lithium supplements. Is lithium supplementation a thing (can you buy it in "health food" stores), and is there any evidence of efficacy for non-psychiatric cases?
10
shiggerino 1 day ago 0 replies      
Apparently 7 Up contained lithium citrate until 1950, for its mood-stabilizing effects:

https://en.wikipedia.org/wiki/7_Up

Coke, on the other hand, lost its cocaine content much earlier, in 1903.

11
MichaelCrawford 17 hours ago 0 replies      
Lithium works well for my symptoms but I do not tolerate it. When I learned that it only reduces hospitalizations by half, I stopped taking it. I did just fine for six years but became psychotic in graduate school.

Since then I've taken valproate, which works well and which so far I tolerate well. However, there is significant risk to my liver. I take regular blood tests to watch for that.

Lately I've been feeling physically ill, as if I have been poisoned. I don't know the cause but will request a liver function test this week.

12
Altay- 1 day ago 1 reply      
I thought this was going to be an article about batteries and the quote would be attributed to Elon Musk or some other tech billionaire.

I was pleasantly surprised.

13
learc83 1 day ago 1 reply      
I live a few miles from the lithia springs that Lithia Springs, GA is named after. I still haven't been able to find out how much lithium there is in our tap water.
14
bite_victim 1 day ago 1 reply      
Side rant:

Claiming that you don't believe in God is as annoying as those sect guys knocking at your door. I do believe in God, and I find the use of lithium in treating these illnesses a source of hope (with potentially dead-serious side effects: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4456600/), but not the potential meaning of life (?! how can you compare lithium with the notion of God?!).

15
atmosx 1 day ago 0 replies      
IIRC the main problem with lithium is the narrow therapeutic window, hence the potential toxicity. But I had no idea it was that good. Actually, reading studies, I wasn't quite sure about many of the drugs psychiatrists prescribe, especially serotonin reuptake inhibitors... but in my experience they seem to work in many cases.
16
jldugger 1 day ago 2 replies      
> After I was admitted to the institute's adolescent ward, I thought the nurses and doctors and therapists were trying to poison me.

Well, they kinda are. Therapeutic doses of lithium are disturbingly near toxicity levels.

17
adamclayman 17 hours ago 0 replies      
| i do believe in Gd, but i do not believe in lithium.

Please, everyone, let's stop talking such nonsense and missense. There's a framing error at play here, at a very fundamental level, and a whole field has gone down this rabbit hole for far too long. There is no such illness called "manic depression"; there is a symptome called "hope-despair spectra dysregulation disorder". The phrase "manic-depression", like the word "harassment", is a confusing misnomer almost deliberately invoked by a language switcheroo, mostly by professionals who are never trained in the original humanisms from which the word originated and is imparted and imported. As with harassment, which is more clearly expressed as "exhaustion", the term "mania" is more clearly expressed as an assessed "unreasonable and/or extreme hope, leading to reckless energy or cognitive chain investments or behavioural drivers". The term "depression" is simply a prolonged despair, wherein a person is seen to be desperate for air. Psychiatrists and psychologists who speak of manic depression as something more than a persistent "hope-despair dysregulation" are usually, in my experience, blowing smoke, and owe a duty to assess whether the hope-despair complex is the result of illogic, illmotion, or both, and whether that illogic, illmotion, or both is exogenous or endogenous. The postulations in the DSM are not credible, as the Director of NIMH, the National Institute of Mental Health, asserts in pointing out that the field of psychiatry is terrible at identifying causes, and dresses up symptom complexes and symptomologies to look like mechanical medical dis-eases. There are very few diagnoses that psychiatrists can do, and calling "hope-despair spectra dysreg disorder" (or, manic depression, as the DSM calls it) a diagnosis is, in my humble opinion, a fraudulent claim. It's not a diagnosis... it's a symptosis, or [symp]tomosis.

It's also completely imprecise and inaccurate, rather like saying "@phren0logy has a cough" rather than saying "@phren0logy has a rhinovirus".

Hope-Despair Dysregulation Disorder (HD3), from...
Manic = A state of prolonged hope
Depression = A state of prolonged despair

It's only natural that We should have evolved, have had revealed, been given, and overwritten and at times, overridden environmental and social expectancies, and that those should alter the pattern of our hope and despair. The persistence of these patterns can, in the eyes of another, be seen as "abnormal" and an "unwanted deviance from socially integrated expectancy patterns". The response pattern from terrapists is to feed a salt pill to the patient as a placebo, in the place of a more obvious sugar pill, so that the patient returns regularly for talk therapy sessions or has a few weeks to stabilize their native sense of the statistics of life, wearing out their own misweighting of cued and observed probabilities. But... this same effect would happen if they were to be fed NaCO3, or NaCl. Lithium, i posit, has no effect other than as an off-grid placebo pill to give terrapists time to try to figure out the root cause and failure modes in cognition. i do not believe the statistical effects of natural experiments yet; i have not come across a convincing study yet, and it's my belief that study non-publication bias for disconfirmations on lithium's environmental effects will explain the rest.

As for what to do with people who are thinking about survival rather than thriving, and considering survival failure, tell them they are on the hope-despair dysregulation spectra, and ask them to consider how many years left they have until they reach 100 years old, and set that as their new age. 22? Your real age is not Your chronological age (cage) of 22; it is Your survivor age (sage) of 78. Reinforce it by teaching them the Periodic Element that their Steam Age corresponds to, in this case, Platinum, or Pt, and ask them to go for physical therapy by going out for a long run with a friend, or, if they have legal woes instead of psychiatric woes, arrange for them to speak with whoever it is that is the cause of their woes in a safe space, rather than aggravating or papering over the lack of ethical calmunity care.

Lastly, read Seligman's Flourish with them, and other works of positive, social, and cognitive bias psychology. The attempt to use diagnostic language in a root-cause-agnostic way is fraudulent; please stop doing it. It causes far more damage than psychiatrists and other psycholory specialists take responsibility for, particularly as families, calmunities, and institutions abuse the indeterminacy, soft, nearly unfalsifiable nature of psychiatric labels as a means of social control for those they consider inconvenient gadflies suffering from too much institutionally-wrought despair.

Also, the DSM Criteria are foolish to apply against certain classes of the population. For instance, with hope-despair dysreg, one of the symptoms is written up as "Flights of Ideas", with some modifiers. Intellectuals and designers cultivate the capacity to undergo "flights of ideas". That's what these people do. Why would You count that as a bullet point toward psycholore.ical sympagnostics, when it is part of their professional duties? That just weakens the whole meaning of the sympagnostic for that whole sector of the population.

Those are my 2 calming sense on the problem. PERMA, Exercise, Resiliency Training, Socialization, Uninterrupted Purpose, Daily Progress all add up to an end to depression; talking to a blank face of a false friend with no power to convene the social world to determine and test the reality described may help tune, slow, or stop survival fail, but only for a time. Inverted ages (Pb-Ar) and Fundamental, sustained purpose mixed with calming human life stage activities will stabilize most, on a complete review of their ethics, i.e. their character. Lastly, if there's loneliness or a reflective solitude involved, You'll want to review and perhaps fix that as well, as the case requires.

Best wishes, everyone. Let me know if You're ever in need of a call to point out how many Years You'd be sacrificing should You go "Canary" prematurely. Reach out to me; i can help you Flag Sentinal instead of losing Your life to self-organized survival fails.

18
IshKebab 1 day ago 3 replies      
I thought this was going to be about batteries.
19
adamclayman 17 hours ago 0 replies      
| i do believe in Gd, but i do not believe in lithium.

Please, everyone, let's stop talking such nonsense and missense. There's a framing error at play here, at a very fundamental level, and a whole field has gone down this rabbit hole for far too long. There is no such illness called "manic depression"; there is a symptome called "hope-despair spectra dysregulation disorder". The phrase "manic-depression", like the word "harassment", is a confusing misnomer almost deliberately invoked by a language switcheroo, mostly by professionals who are never trained in the original humanisms from which the word originated and is imparted and imported. As with harassment, which is more clearly expressed as "exhaustion", the term "mania" is more clearly expressed as an assessed "unreasonable and/or extreme hope, leading to reckless energy or cognitive chain investments or behavioural drivers". The term "depression" is simply a prolonged despair, wherein a person is seen to be desperate for air. Psychiatrists and psychologists who speak of manic depression as something more than a persistent "hope-despair dysregulation" are usually, in my experience, blowing smoke, and owe a duty to assess whether the hope-despair complex is the result of illogic, illmotion, or both, and whether that illogic, illmotion, or both is exogenous or endogenous. The postulations in the DSM are not credible, as the Director of NIMH, the National Institute of Mental Health, asserts in pointing out that the field of psychiatry is terrible at identifying causes, and dresses up symptom complexes and symptomologies to look like mechanical medical dis-eases. There are very few diagnoses that psychiatrists can actually make, and calling "hope-despair spectra dysreg disorder" (or, manic depression, as the DSM calls it) a diagnosis is, in my humble opinion, a fraudulent claim. It's not a diagnosis... it's a symptosis, or [symp]tomosis.

It's also completely imprecise and inaccurate, rather like saying, "@phren0logy has a cough", rather than saying "@phren0logy has a rhinovirus".

Hope-Despair Dysregulation Disorder (HD3), from:
Manic = A state of prolonged hope
Depression = A state of prolonged despair

It's only natural that We should have evolved, have had revealed, been given, and overwritten and at times, overridden environmental and social expectancies, and that those should alter the pattern of our hope and despair. The persistence of these patterns can, in the eyes of another, be seen as "abnormal" and an "unwanted deviance from socially integrated expectancy patterns". The response pattern from terrapists is to feed a salt pill to the patient as a placebo, in the place of a more obvious sugar pill, so that the patient returns regularly for talk therapy sessions or has a few weeks to stabilize their native sense of the statistics of life, wearing out their own misweighting of cued and observed probabilities. But... this same effect would happen if they were to be fed Na2CO3, or NaCl. Lithium, i posit, has no effect other than as an off-grid placebo pill to give terrapists time to try to figure out the root cause and failure modes in cognition. i do not believe the statistical effects of natural experiments yet; i have not come across a convincing study, and it's my belief that study non-publication bias for disconfirmations on lithium's environmental effects will explain the rest.

As for what to do with people who are thinking about survival rather than thriving, and considering survival failure, tell them they are on the hope-despair dysregulation spectra, and ask them to consider how many years they have left until they reach 100 years old, and set that as their new age. 22? Your real age is not Your chronological age (cage) of 22; it is Your survivor age (sage) of 78. Reinforce it by teaching them the Periodic Element that their sage corresponds to, in this case, Platinum, or Pt, and ask them to go for physical therapy by going out for a long run with a friend, or, if they have legal woes instead of psychiatric woes, arrange for them to speak with whoever it is that is the cause of their woes in a safe space, rather than aggravating or papering over the lack of ethical calmunity care.

Lastly, read Seligman's Flourish with them, and other works of positive, social, and cognitive bias psychology. The attempt to use diagnostic language in a root-cause-agnostic manner is fraudulent; please stop doing it. It causes far more damage than psychiatrists and other psycholory specialists take responsibility for, particularly as families, calmunities, and institutions abuse the indeterminate, soft, nearly unfalsifiable nature of psychiatric labels as a means of social control for those they consider inconvenient gadflies suffering from too much institutionally-wrought despair.

Also, the DSM Criteria are foolish to apply against certain classes of the population. For instance, with hope-despair dysreg, one of the symptoms is written up as "Flights of Ideas", with some modifiers. Intellectuals and designers cultivate the capacity to undergo "flights of ideas". That's what these people do. Why would You count that as a bullet point toward psycholore-ical sympagnostics, when it is part of their professional duties? That just weakens the whole meaning of the sympagnostic for that whole sector of the population.

Those are my 2 calming sense on the problem. PERMA, Exercise, Resiliency Training, Socialization, Uninterrupted Purpose, Daily Progress all add up to an end to depression; talking to a blank face of a false friend with no power to convene the social world to determine and test the reality described may help tune, slow, or stop survival fail, but only for a time. Inverted ages (Pb-Ar) and Fundamental, sustained purpose mixed with calming human life stage activities will stabilize most, on a complete review of their ethics, i.e. their character. Lastly, if there's loneliness or a reflective solitude involved, You'll want to review and perhaps fix that as well, as the case requires.

Best wishes, everyone. Let me know if You're ever in need of a call to point out how many Years You'd be sacrificing should You go "Canary" prematurely. Reach out to me; i can help You Flag Sentinel instead of losing Your life to self-organized survival fails.

Sent without much editing.

Tell HN: Entrepreneurs, make sure you are getting guaranteed wins in life
262 points by hoodoof  2 days ago   77 comments top 30
1
bmh_ca 2 days ago 6 replies      
Family.

A life partner, one not afraid to get their hands dirty doing what needs to be done for a vision outside the mainstream paths; someone to grow with through the fails and the folly, and with whom to experience two lives in one lifetime. Find someone to remind us of why to be humble through the successes that can blind one to appreciation and the efforts of others. Someone who does not see us for success gotten, but for the inevitable happiness and richer gain from a partnership with an equal of good character.

Children, to see the world through innocent and new eyes. To see value in things we take for granted, and to give us a reason to think of the future as a prospect even though much of our prospecting years may now be behind us.

There has never been a better time to be poor. What we risk with all the other wins besides family is a life without riches. With others we can know ourselves better, be more whole, and understand what makes us human and what drives us to betterment of ourselves and humanity.

The ups and downs of life are the most enriching experience: to know genuine empathy and share sincere hopefulness. One can find such experience in family.

2
rl3 2 days ago 0 replies      
As a sort of dark opposite to this, it's possible to have virtually zero significant life wins across the board, and yet draw strength from that fact.

While your life might totally suck beyond what you could ever have imagined, after a point there is a sort of humor to be found in the absurdity of it all.

Failure becomes an afterthought, because it simply means ending up right where you currently are - the bottom. This is the classic "nothing to lose" dynamic working in your favor.

Counter-intuitively, the opportunity costs involved seem to matter less and less the more your career and family prospects dwindle. In a weird way, that can be liberating.

However, such thinking can be extremely destructive. It's essentially tantamount to starving yourself as a means of motivation, while at the same time applying a Martingale betting system[1] to life planning. Certainly not for everyone, and seeking out such a state is ill advised. I view it more as a way to cope with a situation one finds themself already in the midst of.

Despite this, a person still needs to have hopes and desires, just like everyone else - even if such things are in direct conflict with any hardcore apathy they may harbor.

Truly believing in what you're trying to achieve, as well as the world of possibilities that it will open to you, is essential - even if it does represent a sort of cognitive dissonance. At the same time, you have to not care about the outcome. The ability to have simultaneous belief in conflicting points of view is an incredible tool to have.

To quote an old trading maxim: "You can't win if you have to."

[1] https://en.wikipedia.org/wiki/Martingale_%28betting_system%2...

3
sova 2 days ago 1 reply      
Totally different approach but I really loved your post and would simply like to add: learn a musical instrument.

It is something that you can noodle around on (like a guitar, or an electric bass, or even on a midi keyboard software) .. go to a pawn shop and grab an old instrument (put on new strings if necessary) and just play a little every day.

Music helps relieve stress, lets your mind relax, brings you to a creative space, and in a few months of noodling every day or every other day you'll find that your experience of music will have changed and that you'll be making beautiful sounds. Very gratifying, very healthy, and very easy. The juggler must learn to use both hands, the artist both eyes, and the entrepreneur both brains.

4
technotony 2 days ago 2 replies      
Thank you for writing this - this lesson can be life-saving. I learned it the hard way, recently losing a friend and mentor to suicide. He was a brilliant entrepreneur but hit a roadblock post-series A where he couldn't raise follow-on funding and the press started turning against him. The business was his life, and he wasn't able to separate himself from the business, nor did he have a life outside work with the kind of wins that can sustain you through these challenges.

Exercise helps, but I'd reinforce the need for a balanced, supportive social network outside that. Choose sports with other participants (e.g. racquet sports, or join a team). Not only will the discipline of not letting others down keep you motivated to keep going, but you'll get friends who don't give a damn about your business (in a good way - no need to pretend to be killing it all the time!) and can keep you grounded in normal reality.

5
mistermann 2 days ago 2 replies      
This is good advice. I'd add:

If you find success early, whether you believe it or not, put some money away (if you can). And if you find yourself in your middling years and great success eludes you despite your prior success, at least think seriously about moving the needle from the risk side to the responsibility side.

Source: I didn't heed this advice.

6
fineline 2 days ago 0 replies      
I understand why you'd address this to entrepreneurs given the audience around here, but really, is there anyone to whom this advice _doesn't_ apply? Sure, if you're a textbook entrepreneur you're working hard and might overlook other things. Same if you're working three jobs to make ends meet, or if you're studying all hours, or looking after your family. The advice has common validity.

Your strategy won't work for everyone, or even most people, because "just do it", whilst very compelling short term, tends to lose effectiveness over time. Instead seek out something that you really enjoy, that coincidentally provides great exercise, removing any problem with motivation and long-term commitment. If hanging at the gym really is your thing, that's great, but if not, don't beat yourself up; get into something else instead - cycling, karate, trapeze, dance, roller-derby, yoga - there are lots of alternatives. The main thing is to make it fun; then you won't need the "I must do this" self-discipline every day, you'll go out of your way to do it.

And as others say, explore creative and cultural activities too. Your mind does not thrive on pure coding and hustling alone, it needs its own free-form exercise too.

7
MichaelCrawford 2 days ago 5 replies      
When I resigned in protest from AMCC I wrote, "When I lie on my deathbed looking back on a life well-lived, I am not going to wish I had shipped more product."

While I'm not dead certain, I may have just found a woman who wanted to marry me in 1985, but her grandmother did not approve of me.

I miss her so. I'm going to write her soon; if it's her I will go visit, but now she is far too old to bear my child.

Several times I have been kissed by beautiful women who made it plain they were mine for the asking but each time I pursued the impossible dream.

My ex did not want to have children. I know why but cannot tell you. Deciding to be with her was one of the most difficult decisions of my life. Now I feel I chose wrong.

Hey Anne, whatever happened to Cheryl? She was one of our vendors back in the day.

She died of some very rare cancer. I am completely convinced that's because I did not kiss her back. Not that I was not interested, but that I was painfully shy then.

He who hesitates does not get to swim in the gene pool.

I've written lots of code; I've made lots of money. Some of my products were huge hits.

Consider Homer's Iliad and Odyssey. What code that any of us write will last that long?

8
nugget 2 days ago 2 replies      
One diet trick I learned, sort of related to fitness, is to cut out almost all carbs. In crunch mode I don't really have time to work out and tend to snack 24/7. Carbs pack on the pounds. Now I just eat nuts (almonds), bacon, cheese, hot dogs (no bun), whatever, and it hardly makes a difference.
9
prank7 2 days ago 0 replies      
This is such a great and underrated topic. Thanks for bringing it up.

There are 3 things that I practice and that I feel are guaranteed wins.

1. Fitness - Damn right. Several people have given the reasons, and it's as simple as putting half an hour, 5 days a week, into moving your body. It keeps you physically fit and mentally robust. At times I have realized that my mental activeness is directly proportional to my exercise and fitness level.

2. Family and Friends - Yes, they are the support mechanism. Taking some time out to talk to one of the people closest to you (friend/mom/wife) will help you shift your focus from your business to the people who matter to you. Believe it or not, at some point you will realize that relationships and people matter the most, and investing in them is a worthwhile guaranteed win.

3. Fun - This part is just having fun, doing something that you love. The best part is you can easily fit it into your daily schedule. Just after office hours, spend 30 minutes unwinding with a hobby that you love (e.g. reading a book, writing a journal, playing the guitar, singing songs, playing basketball). Life is short, and ultimately everyone makes money so that they can have fun or do what they love - why not do it every day? It helps you completely detach yourself from your day at the office. Fun every day keeps stress away.

10
CookWithMe 2 days ago 1 reply      
I'm 100% for doing some sports, and sticking with it while doing a startup.

I'm 100% AGAINST doing it for a "guaranteed win" :-)

First of all, there may be a direct relationship between effort and win when you're an unfit twenty-something, but if you get an injury - or just as your body gets older - you may struggle to get back to the level you were at before.

But, even more importantly, if you do sports with an "I have to win" attitude, you'll start comparing yourself to others, and you'll always find someone who is better than you. Just don't start to be competitive. You're doing the sports for fun. Learn to enjoy sport for its own sake, and you may be able to take that attitude towards other things in life.

(I've got a failed startup behind me, and one of the things that kept me sane was regularly going bouldering. Pro tip: get a yearly membership as a birthday or Xmas present from your parents or so. Even if you're in serious financial trouble, your membership will be paid for. Huge relief!)

11
philtar 2 days ago 0 replies      
I think this is a severely underrated area of entrepreneurship and would love it if some people share their stories about things like this.
12
testingonprod 2 days ago 0 replies      
I stand 100% behind this, including the "getting fit" part. It is incredibly important and will build your mental resilience.

This is the magic pill we've all been looking for, and it's been in front of our eyes this whole time. Exercise and a good diet were the answer all along.

13
nostrademons 2 days ago 0 replies      
There was an entrepreneur of some Web 1.0 company - I think it may've been Excite or Paypal - who said that it was very important that you "celebrate your victories" as you found a startup. That's really important. Launching a product is a victory. So is fixing that really tough bug. So is implementing a feature, no matter how small. So is hiring a key engineer or closing a round of funding. So is coming up with an idea or hypothesis that seems plausible. So is getting a user and keeping him engaged and happy.

You need a lot of these little victories to have a successful business. But that doesn't stop them from being victories. And when you set your scale small enough, some of them are basically guaranteed.

14
empressplay 2 days ago 0 replies      
Totally agree 100%. Also, have a creative hobby (if you're in any way so inclined). Learn to play a musical instrument, or paint, write, or what-have-you. Whether you create new things or learn to perform existing works, you will have something to show for your efforts.
15
chandika 2 days ago 0 replies      
This is such sage advice and almost never mentioned. As an entrepreneur who's obviously over-optimistic about the startup's probability of success and the speed at which it will occur, you usually skip and sacrifice a lot in life.

One thing I learnt the hard way is that the best way to ensure personal success is to plan your life's goals as if you have a 9-5 job with secure income. Major investments (housing, loans etc) and relationships should not be on hold till the next company milestone. Having a spouse kind of forces this upon you, but getting a supportive parent/mentor to help you plan your life apart from your startup will really help.

16
jasoncrawford 2 days ago 0 replies      
This is smart.

A friend once told me that a guaranteed win for them was cleaning their apartment.

For me, exercise is a good one. Also reading. Reading great books on a regular basis is a life goal for me, and if I just put in the time on that one I make progress.

17
abandonliberty 2 days ago 0 replies      
Brush & floss your teeth daily. Good for your overall health, will save you money and suffering.
18
joeyspn 2 days ago 0 replies      
I can't express how good a 1.5h, 4 days/week gym workout routine has been for me. I don't know if this makes things easier because it counts as a "win", or helps because there's something deeper happening at the molecular level.

Sport releases a lot of endorphins that stimulate a state of happiness and reduce stress. This is a proven scientific fact. So there's something magic in it for your self-esteem and positivity. Everybody starting up should keep an exercise routine, and startup accelerators should include it in their activities...

It really helps. If, like me, you don't have kids, I think it's one of the best decisions you can make for keeping your sanity during difficult times. Zen meditation and yoga (which mixes both meditation and sport) are other great ways to keep stress at bay.

19
thowar2 2 days ago 0 replies      
Meditation.

Just like exercise, the positive effects of meditation compound by putting in the time!

20
arikrak 2 days ago 1 reply      
It might also be good to volunteer for a cause that you find meaningful.
21
freshfey 2 days ago 0 replies      
I like this a lot. I think, though, that the word fitness has so many connotations, many of them negative, that some people might be put off by the idea of it.

So as an alternative, I'd suggest: Move every day. This can be:

- putting up a basketball hoop and shooting a few baskets

- walking for 45-60min.

- go climb with your significant other

- play with your children (no consoles, actual physical moving)

- do a yoga class

All of these things might not sound like fitness, but they give you a great balance when done every day.

22
amelius 2 days ago 0 replies      
According to Buddhist views, your wins do not matter; what matters is whether you are enjoying the journey.
23
q-base 2 days ago 0 replies      
24
tuyguntn 2 days ago 0 replies      
Follow-up to this question: please give advice about fitness for beginners.

https://news.ycombinator.com/item?id=9783196

25
sanketsaurav 2 days ago 0 replies      
Wow, thanks. This is probably the best advice a first-time founder like me, going through a pretty rough patch, can get. Puts things in perspective.
26
jorgecastillo 2 days ago 0 replies      
I think learning would be one guaranteed win. If you put in the time to learn X, sooner or later you're going to learn X.
27
roghummal 2 days ago 1 reply      
DFectuoso [dead]:

>Sleep well, eat well and get fit. 100% guaranteed wins.

>Also, stop smoking/drinking. Hard battles, guaranteed wins.

>-----

28
wayclever 2 days ago 0 replies      
You are trying to fit a deterministic philosophy into our probabilistic world. If you exercise, any number of outcomes may transpire, including a heart attack. Minimum risk with maximum reward may be a more realistic approach.
29
lifeisstillgood 2 days ago 0 replies      
But surely one of the important choices is what kind of business to start. A VC-funded, all-or-nothing shot has even less chance of being a guaranteed win than a bootstrapped, early-profit business. (I am thinking of the company's ability to survive hiccups that, if they occur just before series A, doom you, but if you are profitable and growing are just a hiccup.)
30
DFectuoso 2 days ago 0 replies      
Sleep well, eat well and get fit. 100% guaranteed wins.

Also, stop smoking/drinking. Hard battles, guaranteed wins.

Ask HN: Open source OCR library?
261 points by whbv  3 days ago   97 comments top 30
1
hbornfree 3 days ago 1 reply      
As others pointed out, Tesseract with OpenCV (for identifying and cropping the text region) is quite effective. On top of that, Tesseract is fully trainable with custom fonts.

In our use case, we've mostly had to deal with handwritten text and that's where none of them really did well. Your next best bet would be to use HoG(Histogram of oriented gradients) along with SVMs. OpenCV has really good implementations of both.

Even then, we've had to write extra heuristics to disambiguate between 2 and z and s and 5 etc. That was too much work and a lot of if-else. We're currently putting in our efforts on CNNs(Convolutional Neural Networks). As a start, you can look at Torch or Caffe.
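
For anyone who wants to try the HoG + SVM route described above, a minimal sketch using OpenCV's Python bindings might look like the following. The sample paths, labels, and the default HOG window are illustrative assumptions; a real character classifier needs many samples per class and tuned parameters.

    import cv2
    import numpy as np

    hog = cv2.HOGDescriptor()  # default 64x128 detection window

    def features(path):
        # Load, resize to the HOG window, and compute the descriptor
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (64, 128))
        return hog.compute(img).flatten()

    # Hypothetical training data: one image per character sample,
    # labelled with a class index (e.g. 0-9 for digits)
    train_paths = ["samples/digit0.png", "samples/digit1.png"]
    labels = np.array([0, 1], dtype=np.int32)
    samples = np.array([features(p) for p in train_paths], dtype=np.float32)

    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(samples, cv2.ml.ROW_SAMPLE, labels)

    _, pred = svm.predict(np.array([features("samples/unknown.png")],
                                   dtype=np.float32))
    print(pred)  # predicted class index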

2
kargo 3 days ago 3 replies      
Does it have to be open source? If free (but not trainable, and restricted to Windows apps/phones) is good enough, then I recommend the Microsoft OCR library. It gives you very, VERY good results out of the box. An excellent piece of work from Microsoft Research. To test it, see for example https://ocr.a9t9.com/ which uses Microsoft OCR inside.

And for comparison, an OCR application with Tesseract inside: it has a dramatically lower text recognition rate: http://blog.a9t9.com/p/free-ocr-windows.html

(Disclaimer: both links are my little open-source side projects)

3
physcab 3 days ago 3 replies      
I tried using Tesseract and could not get it to work reliably. I tried a bunch of different pre-processing techniques and it turned into a very frustrating experience.

When I compared Tesseract to Abbyy, the difference was night and day. Abbyy straight out of the box got me 80%-90% accuracy on my text. Tesseract got around 75% at best, with several layers of image pre-processing.

I know you said open source, and just wanted to say, I went down that path too and discovered in my case, proprietary software really was worth the price.
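
For readers who still want to try the open-source route, the pre-processing layers mentioned above usually amount to denoise, binarize, and upscale before handing the image to Tesseract. A rough sketch, assuming the pytesseract wrapper and OpenCV are installed (the input path is a placeholder):

    import cv2
    import pytesseract

    img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 3)  # knock out salt-and-pepper noise
    # Otsu's method picks the binarization threshold automatically
    _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Tesseract tends to do better at roughly 300 DPI, so upscale small scans
    img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
    print(pytesseract.image_to_string(img))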

4
bittersweet 3 days ago 5 replies      
I was actually looking into Tesseract yesterday, so this post coincides nicely.

I have a hobby project where I scrape Instagram photos, and I actually only want to end up with photos with actual people in them. There are a lot of images being posted with motivational texts etc. that I want to automatically filter out.

So far I've already built a binary that scans images and spits out the dominant color percentage, if 1 color is over a certain percentage (so black background white text for example), I can be pretty sure it's not something I want to keep.

I've also tried OpenCV with facial recognition but I had a lot of false positives with faces being recognized in text and random looking objects, and I've tried out 4 of the haarcascades, all with different, but not 'perfect' end results.

OCR was my next step to check out, maybe I can combine all the steps to get something nice. I was getting weird texts back from images with no text, so the pre-processing hints in this thread are gold and I can't wait to check those out.

This thread is giving me so many ideas and actual names of algorithms to check out, I love it. But I would really appreciate it if anyone else has more thoughts about how to filter out images that do not contain people :-)
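
One rough way those steps could be combined - the dominant-colour check plus a Haar-cascade face filter - is sketched below with OpenCV's Python bindings (assuming the opencv-python package, which bundles the cascade files). The thresholds and cascade choice are guesses to tune, and as noted above the cascades do produce false positives:

    import cv2
    import numpy as np

    def looks_like_text_graphic(img, threshold=0.5):
        # Coarsely quantize colours; if one bucket dominates, it is
        # probably a flat background with text on it
        small = cv2.resize(img, (64, 64)) // 32
        _, counts = np.unique(small.reshape(-1, 3), axis=0,
                              return_counts=True)
        return counts.max() / counts.sum() > threshold

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def has_face(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # raising minNeighbors trades recall for fewer false positives
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=6, minSize=(40, 40))
        return len(faces) > 0

    img = cv2.imread("photo.jpg")
    keep = (not looks_like_text_graphic(img)) and has_face(img)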

5
jlangenauer 3 days ago 4 replies      
Tesseract is ok, but I gather that a lot of the good work in the last few years on it has remained closed source within Google.

If you want to do text extraction, look at things like Stroke Width Transform to extract regions of text before passing them to Tesseract.

6
j-pb 3 days ago 1 reply      
Not really. Everything out there is ancient and uses techniques from the 80s/90s.

It's pretty sad considering that OCR is basically a solved problem. We have neural nets that are capable of extracting entities from images and big-data systems that can play Jeopardy. But no one has used that tech for OCR and put it out there.

8
rashfael 3 days ago 1 reply      
(Disclaimer: I work for creatale GmbH)

As you mentioned ocrad.js, I assume you're searching for something in js/nodejs. Many others have already recommended Tesseract and OpenCV. We have built a library around Tesseract for character recognition and OpenCV (and other libraries) for preprocessing, all for node.js/io.js: https://github.com/creatale/node-dv

If you have to recognize forms or other structured images, we also created a higher-level library: https://github.com/creatale/node-fv

9
t0 3 days ago 1 reply      
The accuracy of GOCR is very high and it doesn't require any learning.

http://manpages.ubuntu.com/manpages/dapper/man1/gocr.1.html

10
Omnipresent 3 days ago 2 replies      
I've used tesseract to great effect. I don't know what your images are like, but if only part of the image has text in it, you should only send that part to the OCR engine. If you send the entire image and only a portion of it has text, the chances of the OCR extracting text are slim. There are pre-processing techniques [1] you can use to crop out the part of the image that has text.

[1]: https://en.wikipedia.org/?title=Hough_transform
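
A small sketch of that crop-then-OCR idea, assuming pytesseract and OpenCV (a contour-based bounding box stands in here for the Hough-based localization linked above; paths are placeholders):

    import cv2
    import pytesseract

    img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Dilate with a wide kernel so characters merge into text-line blobs
    bw = cv2.dilate(bw, cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3)))
    # OpenCV 4 signature; OpenCV 3 returns an extra leading value
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        # Send only the text region to the OCR engine
        print(pytesseract.image_to_string(img[y:y+h, x:x+w]))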

11
byoung2 3 days ago 2 replies      
12
awjr 3 days ago 0 replies      
Been looking at Number Plate Recognition, and https://github.com/openalpr/openalpr has been my go-to solution at the moment. It uses Tesseract and OpenCV.
13
stevewepay 3 days ago 5 replies      
I used Tesseract and OpenCV to process Words With Friends tiles.
14
awfullyjohn 2 days ago 0 replies      
I've tried using Tesseract before, the biggest of the open source libraries. It goes from "okay" to "terrible" depending on the application.

Our particular application was OCRing brick-and-mortar store receipts directly from emulated printer feeds (imagine printing straight to PDF). We found that Tesseract had too many built-in goodies aimed at image scanning - compensating for warped characters, lighting and shadow defects, and photographic artifacts. When applied directly to what should be 1-to-1 character recognition, it failed miserably.

We found that building our own software to recognize the characters on a 1 to 1 basis produced much better results. See: http://stackoverflow.com/questions/9413216/simple-digit-reco...

15
dr_hercules 3 days ago 0 replies      
16
zaphar 2 days ago 0 replies      
I used the Go wrappers for tesseract[0] in my full-text search indexer for disks[1] and it worked great. No issues; it handled pretty much anything I threw at it.

With the caveat that none of the stuff was handwritten.

[0] http://godoc.org/gopkg.in/GeertJohan/go.tesseract.v1
[1] https://bitbucket.org/zaphar/goin/

17
kxyvr 3 days ago 0 replies      
It depends on what you're trying to do. For myself, I wanted to OCR scanned documents and I've been moderately successful using ScanTailor to process the images and then Tesseract to OCR the result. Certainly, it's far from perfect and the documentation on Tesseract and its options is spotty, but I've been moderately happy. As a note, I've had better luck with the current trunk in the git repo of Tesseract than the point releases. Some of the older versions of Tesseract modified the images when processing to a PDF and this was unacceptable to me.
18
JeremyWei 3 days ago 1 reply      
You may try the Baidu OCR API: https://github.com/JeremyWei/baidu-ocr
19
zandorg 2 days ago 0 replies      
An important aspect is stripping text from the image before OCR'ing: http://sourceforge.net/projects/tirg/

Try TIRG.

Examples here: http://funkybee.narod.ru/

20
baken 3 days ago 0 replies      
I'm working on these problems for applications in the financial industry right now. We are building some interesting technologies and have quite a bit of funding from top VCs. We are looking to hire people who know this problem cold. Let me know if you're curious. nate@qualia-app com
21
steeve 3 days ago 0 replies      
We've had great success with CCV's SWT to identify text zones and Tesseract to extract the text.

[1] http://libccv.org/doc/doc-swt/

22
rajadigopula 3 days ago 1 reply      
http://projectnaptha.com/

https://github.com/naptha

Apologies, I am not sure if it's open source.

24
ihodes 3 days ago 0 replies      
A coworker had great success using Ocropus (which uses neural nets under the covers), extracting text from the backs of many photos for Old New York.
25
dharma1 3 days ago 1 reply      
Tesseract with ccv for SWT works but is a bit of a slow dog. Anyone used deep learning (caffe maybe?) for OCR?
26
mattdennewitz 3 days ago 0 replies      
Apache Tika + OCR is an excellent combination - https://wiki.apache.org/tika/TikaOCR
27
automentum 3 days ago 2 replies      
Any references on OCR to extract MICR (font E13B) for desktop as well as Android?
28
ivanca 3 days ago 0 replies      
Judging by the comments, there is much room for improvement in open source OCR libraries; maybe someone with experience in the field should start a Kickstarter for it - there must be some here on HN itself. Maybe something with a CLI, a GUI, and layout recognition.
29
SXX 3 days ago 0 replies      
A few years ago I used Cuneiform, but it looks like it's dead now.
30
tiatia 3 days ago 0 replies      
Tried tesseract, it worked very poorly. I don't think there is anything that compares well to Abbyy.

Unfortunately the native Linux version is a bit pricey: http://www.ocr4linux.com/en:pricing

Otherwise I would use the command line version to help me index all my data.

Rust 1.1 Stable, the Community Subteam, and RustCamp rust-lang.org
287 points by steveklabnik  2 days ago   140 comments top 11
1
kibwen 2 days ago 2 replies      
The 1.1 release came out a day earlier than I expected, I suppose we're releasing on Thursdays now. :P Note that 1.1 is a relatively small release, since, due to launching the train release model, it was in development for just three weeks prior to 1.0 (rather than the usual six weeks that all subsequent releases will have spent in development), and during that time we were all understandably in a tizzy preparing for a post-stability world. :)

However, despite that short and distracted development window, we still managed to squeeze out compiler performance improvements that should result in 30% faster compiles for crates in the wild. You can expect even more progress on the compilation speed front for 1.2, due to be released August 6, along with a slew of stabilized and much-demanded stdlib APIs.

Let me also mention that tickets have just gone on sale for the first official Rust conference, a small one-day event in Berkeley: http://rustcamp.com/ We'll be using the experience from this proto-conference as a guide for larger events in the future, so even if you can't go we'd love to get your feedback as to what you'd like to see from a Rust conference.

2
exacube 2 days ago 1 reply      
I'm happy things are moving along, but I would still like to see the documentation be more polished. I still don't know how to write a task queue (i.e. a queue of closures) that executors can concurrently read and run from.

I don't feel like the docs empower me enough to write multi threaded code comfortably without the borrow checker spitting at me. Is it just me, or does anyone else feel this way?
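
For what it's worth, the shape being asked about - a thread-safe queue of closures drained by several executors - looks like this conceptually. The sketch is in Python for brevity; the Rust equivalent would typically reach for channels (std::sync::mpsc) or a worker-pool crate rather than hand-rolled locking:

    import queue
    import threading

    tasks = queue.Queue()  # thread-safe FIFO of closures

    def worker():
        while True:
            job = tasks.get()   # blocks until a task arrives
            if job is None:     # sentinel value shuts this executor down
                break
            job()
            tasks.task_done()

    executors = [threading.Thread(target=worker) for _ in range(4)]
    for t in executors:
        t.start()

    for i in range(10):
        tasks.put(lambda i=i: print("task", i))

    tasks.join()                # wait for all queued closures to finish
    for _ in executors:
        tasks.put(None)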

3
gamegoblin 2 days ago 5 replies      
I hope the Rust team also starts prioritizing compiler crashes.

I started using Rust exclusively for hobby projects after the 1.0 release, to force myself to learn the language, but I found myself running into compiler panics on nearly a daily basis.

Admittedly, the code the compiler tended to crash on was very macro-heavy, but a goal of rust is safe macros. (And even if the macro-expanded code was invalid, the compiler should just report an error, not crash).

There are currently 207 open issues regarding ICE (Internal Compiler Error): https://github.com/rust-lang/rust/labels/I-ICE

4
alricb 2 days ago 3 replies      
Have there been documentation improvements? For instance http://doc.rust-lang.org/nightly/std/str/ is still quite obscure: it talks about the traits returned by certain methods, but there isn't an obvious way to get to the documentation for those methods, if you'd like to know how .lines() is different from .lines_any(), for example. Compare with https://docs.python.org/3/library/stdtypes.html#str where all the methods are listed, along with what they do.
5
jrapdx3 2 days ago 2 replies      
I'm glad to see Rust's progress. I'm still very interested in Rust, and incrementally learning the language. I do appreciate the documentation, IMO the developers have done a tremendous job in that respect.

One question though. When is Cargo going to be available on FreeBSD (and other BSD's)? I think these platforms are significant in the server space where Rust would be highly relevant. Having Rust usable there would likely augment uptake of the language.

6
mwcampbell 2 days ago 3 replies      
What exactly does MSVC support entail? For example, will one be able to debug Rust programs with WinDbg or the Visual Studio debugger? IIUC, that would require LLVM to output PDB files.
7
bfrog 2 days ago 0 replies      
Impressed daily by the amount of stuff being done to advance Rust as an incredibly useful language.
8
alexnewman 2 days ago 2 replies      
Ooh, and there's a Rust podcast out as well. You can find it in the new section of Hacker News.
9
worklogin 2 days ago 2 replies      
So I assume 1.x.x won't break features from 1.0? Is Rust using proper semantic versioning?
10
throw982734 2 days ago 3 replies      
Given Steve Klabnik's recent actions to suppress people who hold political opinions he does not like, I have a considerable amount of trepidation knowing he has a leadership position in the community sub-team.
11
JeffKilborn 2 days ago 8 replies      
While I find Rust interesting, how exactly will Rust deal with a 900lb gorilla that's about to get released from a cage?

Open-source Swift is only a few months away, and it already has a much bigger user base and is backed by the biggest company in tech. Rust & Swift share many common traits and kinda look alike as well.

Why should anyone pick Rust over Swift when Swift will be able to do everything Rust can do and is also positioned as a systems programming language? And Swift will also be a full-stack programming language and you will be able to program apps and backends with it.

I feel like Rust is about to get killed. And killed quickly.

LLVM merges SafeStack github.com
260 points by theraven  2 days ago   79 comments top 10
1
ekr 2 days ago 5 replies      
Does this mean that it will no longer be possible to do things like return-oriented programming?

LE: indeed, it's quite clear from the mentioned article (http://dslab.epfl.ch/pubs/cpi.pdf). So this provides great exploit protection.

2
VeejayRampay 2 days ago 1 reply      
From the article: "The overhead of our implementation of the safe stack is very close to zero (0.01% on the Phoronix benchmarks)". That's quite awesome, congratulations.
3
thisismyhaendel 1 day ago 1 reply      
To be clear: SafeStack does NOT prevent return-oriented programming. It makes the bar much higher, and it should be lauded for that. But please don't for a second think that this is a solved problem: ROP can occur on the heap, for instance. CPI as a system also does not completely solve the problem: it is possible to break, for example (http://web.mit.edu/ha22286/www/papers/conference/Oakland15.p...) and, despite the CPI authors' conclusions, it produces high overheads for programs with large amounts of code pointers (C++ programs with vtables are good examples). Also not prevented are attacks that use data pointers (non-control-flow data attacks), an area that has seen little study.
4
joosters 2 days ago 4 replies      
The performance is impressive, considering that maintaining a second stack presumably requires another register exclusively dedicated to it. I'm surprised it makes such little difference. Or are there some cunning optimisations going on?
5
extropy 2 days ago 2 replies      
Why don't we have a CPU architecture with two stacks - one for stack data and another for return addresses?
6
hughw 1 day ago 0 replies      
It never occurred to me before to ask, but aren't Emscripten asm.js programs vulnerable to the same exploits C programs are? e.g. I could exploit a buffer overflow in some trusted js code to get some sensitive information from the site. If that's the case, would emcc with SafeStack mitigate that?
7
willvarfar 2 days ago 5 replies      
Apple, which has started distributing LLVM bitcode, will be able to apply it to all the new apps in the App Store transparently.

Google will be able to do likewise for NaCl apps.

8
comex 1 day ago 1 reply      
Interesting. I haven't fully digested the paper, but a few notes for context:

- Most real-world exploits these days are based on use-after-frees, heap buffer overflows, and other heap-related weirdness, rather than stack buffer overflows. It's nice that SafeStack mitigates that attack vector though (but if you disable stack canaries in favor of it, you actually reopen the door to exploit certain types of vulnerabilities...)

- A (the most?) common method to proceed from memory corruption to return-oriented programming is to redirect a virtual method call or other indirect jump to a stack pivot instruction. SafeStack alone does nothing to prevent this, so it doesn't prevent ROP.

- However, the general code-pointer indirection mechanisms described in the paper, of which SafeStack is an important component, could make ROP significantly harder, because you would only be able to jump to the starts of functions. This guarantee is similar to Windows's CFG (although the implementation is different), but SafeStack makes it harder to bypass by finding a pointer into the stack (either on the heap or via gadget).

- In practice, interoperation with unprotected OS libraries is likely to seriously compromise the security benefits of the combined scheme, because they will store pointers into the real stack, jump directly to code pointers on the heap, etc. JIT compilers are also likely to be problematic.

- In addition, there are more direct ways for an attacker to work around the protection, such as using as gadgets the starts of functions that do some small operation and then proceed to a virtual call on an argument. The larger the application, the more possibilities for bypass there are.

- Still, "harder" is pretty good.

Edit: By the way, the point about function start gadgets makes questionable the paper's claim that "CPI guarantees the impossibility of any control-flow hijack attack based on memory corruptions." Also, if you want to guarantee rsp isn't leaked, it isn't enough to keep all pointers near it out of regular memory: they also have to be kept out of the stack itself, because functions with many (or variable) arguments will read them from the stack - at least, I don't see a claim in the paper about moving them - so subverting an indirect call to go to a function that takes more arguments than actually provided (or just changing a printf format string to have a lot of arguments) will cause whatever data's on the stack to be treated as arguments. Ditto registers that either can be used for arguments or are callee-saved. That means frame pointers have to be disabled or munged, and any cases where LLVM automatically generates temporary pointers for stack stores - which I've seen it do before - have to be addressed.

If you do move non-register arguments to the safe stack then the situation is improved, but you still have to watch out for temporaries left in argument registers.

9
wang_li 2 days ago 0 replies      
Now if we can get the stack to grow upwards instead of downwards my life will be complete and I can die.
10
arielby 2 days ago 0 replies      
IA-64 has had this since it was created - nice to see it coming to x86.
Uber's Original Blog and First Posts uberexpansion.com
216 points by UberEstimate  2 days ago   74 comments top 10
1
frazras 2 days ago 10 replies      
Check out the comments on their TechCrunch article - it would discourage any young founder. http://techcrunch.com/2010/10/15/ubercab-closes-uber-angel-r...

As they say, never read the comments.

2
andrewbarba 2 days ago 5 replies      
I was lucky enough to meet Curtis last year at Uber HQ. Awesome dude, incredibly smart. His Node talk from 2011 was a big contributing factor to me learning node.js. https://www.youtube.com/watch?v=Jups7FveC1E

Edit: And yes, God Mode was for developers. The media should chill and learn to nerd out every once in a while...

3
bakztfuture 2 days ago 1 reply      
Bit of a plug, I recently put together a collection of changes to homepages from startups in the ride sharing space (including Uber) over time. You can check it out here: http://www.startuptimelines.org/collections/uber_lyft_taxi_s...
4
beedogs 2 days ago 1 reply      
Who knew back then that they'd be burning cars in the streets of Paris over this company.
6
devgutt 2 days ago 2 replies      
If the dates of the posts are correct, they were moving really really fast.
7
benblodgett 2 days ago 0 replies      
The tone immediately changes post-funding: brash excitement in the early customer phase -> PR.
8
syllogism 1 day ago 4 replies      
Wait, I'm confused...

So taxi services had no phone dispatch over there? Like, you couldn't ring the taxi booking line and have them route a taxi to you?

This has been a thing in Sydney for as long as I've been alive --- it probably started up in the 70s. It wasn't great, and sometimes the taxi wouldn't show. But it existed.

Was this not available in SF? Elsewhere in the US...?

9
arank 2 days ago 0 replies      
Here is a post of the first tweets from popular app companies - https://tapfame.com/launching-an-app/
10
nlake44 2 days ago 1 reply      
I didn't know how corrupt YC was until now. How much is your soul worth?
Ask HN: How big does an open-source project need to be for a lifestyle business?
192 points by jnbiche  1 day ago   107 comments top 31
1
ledlauzis 1 day ago 3 replies      
You can turn everything into a business as long as you have a reasonable business plan in place and know that users are willing to pay for some extra features or service.

Story time: 2 years ago I started to learn frontend web development from various online courses. I had zero technical background and no intention to make money. Within a few months I learned enough to create my first WordPress theme. It was terrible, but it worked. I made a few more themes and never did anything with them - I made like 6 themes and just deleted them a few months down the road. Then I decided to submit one theme on WordPress.org just to see if it would get approved. Themes go through review, and someone would evaluate my code - that's what I was after. Long story short, my themes have now been downloaded over 1,000,000 times and I have turned this into a six-figure-a-year business.

While I haven't sold a single theme yet, apparently you can recommend hosting and premium plugins that go along with your themes and make a decent income.

For those interested, you can visit the site at: https://colorlib.com/

2
gavinballard 1 day ago 3 replies      
Mike Perham has done a great job of turning two popular open source projects (Sidekiq and Inspeqtor) into successful businesses.

If you're interested, I recommend listening to his interviews on the ChangeLog Podcast (https://changelog.com/159/, https://changelog.com/130/, https://changelog.com/92/) where he talks about how he monetised those products.

I'm pretty sure that succeeding in this is not going to be a matter of how many "stars" your project has, but rather a function of the dollar value of the problem your software solves and how well you market the paid product. You should start that marketing now by providing a link to your project :).

3
josscrowcroft 1 day ago 6 replies      
I created Open Exchange Rates[0] as an open source project four years ago, publishing free currency data into a GitHub repository.

It was launched alongside money.js[1] (a minimal JavaScript currency conversion library), designed to work seamlessly together and both found a brilliant response and grew an organic community.

Hundreds of tutorials and thousands of posts and mentions later, GitHub eventually contacted me and politely asked me to take down the exchange rates repository, because they were being hammered by people requesting the data - only at this point did it occur to me that I'd created something of genuine value, and (6 months of fretting and tail-chasing later) I opened up a paid option.

For me the key thing was: I never intended to create a business. It was (and is) a labour of love. We've since grown to be the industry-leader for our area - "good enough data" for the startup and SME market - and count Etsy, KickStarter, WordPress and Lonely Planet among our clients.

Although it's no longer truly open source, 98% of our users are still on the Free plan, which will very soon be expanding to include all features (so, no more limiting by price tiers) - this is how I still feel so passionate about it.

I can't wait to publish the next steps in our journey - where we're opening everything up to the community and marketplace. I don't like where the industry is heading (competitive, closed, secretive) and we've chosen to move towards transparency and sharing.

I like businesses built on a core of open source community, because they're in service to the people who are actually building the products, rather than those in the traditional 'upper levels'. This means there's really no "sales process" (which I'm massively allergic to) - apart from the occasional grilling from the accounting department, who may find it hard to trust a business based on open source principles.

Good luck!

[0] https://openexchangerates.org

[1] https://github.com/openexchangerates/money.js
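
For the curious, consuming the free tier looks roughly like the sketch below; the latest.json endpoint and app_id parameter follow the public documentation as I understand it, so verify before relying on this:

    import json
    import urllib.request

    APP_ID = "YOUR_APP_ID"  # placeholder: issued when you sign up
    url = "https://openexchangerates.org/api/latest.json?app_id=" + APP_ID

    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    rates = data["rates"]  # USD-based rates, e.g. rates["GBP"]
    print(100.0 * rates["EUR"], "EUR")  # naive USD -> EUR conversion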

4
fragmede 1 day ago 1 reply      
Yeah, so looking at the GitHub Cabal's secret price list, 25,000 GH stars will put you at $102k / yr, so shoot for that.

Seriously though, the fact that you're asking about GitHub stars is... telling. There are plenty of popular repos on GitHub that I'd refuse to pay any money for, but more to the point, GitHub is a terrible customer experience because it's not for selling software to people; thus the only thing 'GH stars' tells us is... the number of people that have starred a particular repo.

For a purely open-source project, look at OpenSSL. It's probably in production on a significant portion of the entire internet. But until Heartbleed came along and it came to light that the OpenSSL project was severely underfunded, it was limping along with little sponsorship.

Red Hat is probably the most famous open source business, but unfortunately, if you look at their practices, they're abiding by the letter of the GPL, but not entirely the spirit, which they've decided is necessary from a business perspective, so if you're looking to make a lifestyle business based on your open-source project, the question to you is: how comfortable would you be with a lifestyle business based on an entirely proprietary project?

Is this a pipe dream? Chances are, yeah. But do dreams come true? Sometimes :)

5
joepie91_ 20 hours ago 1 reply      
Here comes the unpopular opinion: I don't like open-source projects being "turned into a business" at all. Something's always gotta give.

* Paid support: you now have an incentive against improving the documentation. Conflict of interest.

* Selling binary builds: your software can no longer be easily recommended and shared by others.

* "Premium features": I'd rather call this 'crippleware'. You're intentionally crippling the 'community version' to give people an incentive to pay you money. That's certainly not in the spirit of open-source.

Frankly, I don't feel software is a thing that should be sold at all. You're always going to be intentionally (and artificially) restricting something to make that business model work - after all, software is infinitely reproducible at effectively zero cost.

Instead, if you absolutely must make a business out of it, offer something like a hosted service (that people can easily replicate themselves, in the spirit of open-source). That way you add something to be sold, rather than taking something from the existing project and asking money for it.

The better option is to accept donations, and put some serious effort into getting those going. I don't really understand why people will spend weeks drawing up a business plan, but for donations they just slap a 'donate' link in the footer of the page without thinking, and then complain after a few months that they're barely getting any donations. Accepting donations requires the same amount of thought to make it work.

EDIT: No bulletpoint lists on HN? :|

6
andrea_s 1 day ago 0 replies      
Making money with software is not very correlated with said software's quality - or popularity. I think yours is an unanswerable question; you should probably start from the other side of the equation (i.e. what's the target market? Who would be willing to pay for the software? And, given that we are talking about open source, what would you sell - and to whom - to beat the possibility of just using it? What would the licensing options for commercial usage look like?).
7
karterk 1 day ago 1 reply      
You would never know unless you started exploring. Stars have nothing to do with revenue - it's purely a function of how much value you can add to the enterprise. Basically, you have to demonstrate quantitatively that you can charge $X to help them save $Y where Y >> X.

Here's what I would suggest. Reach out to a few people from these organizations and ask them for their general views on your project and what their major pain points are. I'm sure they will be more than happy to talk to you about it if they're indeed already deriving great value from it. Once you have established a good rapport (i.e. warming up the lead), set up a call with them to pitch your vision for the paid product. A call is crucial because email and text can only communicate so much - you can get a far better idea of their domain and problems through a quick 45-min call. Scheduling this should not be a major problem once you have established a good email channel previously.

If you can get about 8-10 people interested in exploring your paid offering you have something that's promising. After that you can think about scaling the business with self-service etc.

8
chrismartin 1 day ago 0 replies      
Sell your expertise as an integration/support engineer to companies who want to use your project but don't want to dedicate the internal staff time to become experts on same.

Every company that emails you asking you for help is a sales lead. "I'm happy to implement/configure this for you. I charge $X per hour, and what you're asking for is about Y hours of labor." To the client, X*Y is often cheaper than the opportunity cost of pulling an engineer away from other work. Also, don't be afraid to make X >> $100 if nobody else offers the same expertise.

You can do as much of this as you want, without changing anything about how the software is licensed or packaged.

9
gizmo 1 day ago 3 replies      
It's completely feasible, but you're playing on hard mode when you keep the software open source. If you've got good business sense and are willing to hustle you'll find a way to succeed. If your only strong suit is software, however, you're setting yourself up for failure.

Would you rather make $300k a year writing and selling proprietary software or fail/make $20k a year providing value-added services for an open-source project? This isn't a tough choice for me, but for some people the ideal of open source trumps all.

It's much easier to write software that fits a simple business model, than to figure out how to shoehorn a business model onto an open source project. You can tell which route is the pragmatic one: start from scratch, optimize for easy monetization.

That said, don't let any of this discourage you. It's absolutely possible to create a cool lifestyle business based on an open source project (or anything really). The only way to know if you can do this is to seriously commit to it. If you don't have to support a family it's probably a risk you can take.

10
jmadsen 1 day ago 1 reply      
There are a few podcasts that have discussed this. Typically it is done via support for existing OSS, offered to companies who need to be able to depend on full knowledge & stability of that code. A couple of examples:

1) Redhat

2) "some guy" from one of the podcasts I can't think of at the moment who forks Ruby and keeps a stable, supported version for his corporate clients, while dealing with patches & upgrades on his own to his fork

In both of these cases, you can see that it isn't actually "how many users", but "how many corporate users who need a service that keeps software X completely stable and managed"

So I think from what you've said that it would be a good thing to look into. You may want to contact the guys from http://www.tropicalmba.com/ for a few pointers - this is right up their alley and they are very willing to discuss it

11
rushabh 22 hours ago 0 replies      
I manage a small open source project (https://github.com/frappe/erpnext) that brings in $100k+ per year and supports a small team. It took me 4-5 years to get here. We make money by providing hosting and also by helping other developers working on our platform.

I would start with a services plan and a beautiful website. If you are not a designer, do hire one. Good design and quality can go a very long way towards achieving your dream.

If your project is an app, you can also go the SaaS (software-as-a-service) way. But beware: building a multi-tenant platform, working on user onboarding, and marketing your site can be 2x to 3x the work of your original project.

The good thing is that once you are there (1-2 years is a reasonable time), this will only grow.

12
bestan 1 day ago 0 replies      
Yes, it is definitely realistic. We've seen several projects like Sidekiq (http://sidekiq.org/) that did it.

Another quantitative indicator you can use is the number of downloads on the relevant package manager (npm, PyPI, etc.). These indicators only tell you how big your audience is, not whether your project could provide income. However, having a big audience increases the chances of success - you have a good position here.
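
For example, npm exposes a public point-downloads endpoint you can query directly; a rough sketch in Python (the package name is just a placeholder - substitute your own):

  # Rough audience check via npm's public downloads API.
  import json
  import urllib.request

  def npm_monthly_downloads(package):
      url = "https://api.npmjs.org/downloads/point/last-month/" + package
      with urllib.request.urlopen(url) as resp:
          return json.load(resp)["downloads"]

  print(npm_monthly_downloads("express"))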

Check out MinoHubs (https://www.minohubs.com). We provide tools that will help you get started with monetizing your project in several ways very quickly. Let us know if you have any questions or if you're missing anything.

13
sytse 16 hours ago 0 replies      
It is less about how many stars you have and more about how many enterprises use your software. GitLab has 15k stars on GitHub (we moved the canonical source to GitLab itself a while ago) and more than 100k organizations are using it. But most of our income is coming from larger enterprises in the United States. If you can offer them extra features and market them properly, you can expect something like 1% of your users to sign up for a paid plan. Please let me know if you have any questions.
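
A back-of-envelope sketch of that rule of thumb (all figures below are illustrative assumptions, not actual GitLab numbers):

  # Back-of-envelope: users x conversion x price.
  orgs = 100000        # organizations on the free version
  conversion = 0.01    # ~1% take a paid plan (rule of thumb above)
  acv = 500            # hypothetical average contract value, USD/year

  print(orgs * conversion * acv)  # 500000.0 -> ~$500k/year
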
14
jakejake 22 hours ago 0 replies      
I have had a few open source projects with around 250k installs (web apps and Wordpress plugins). One of them I was able to monetize for about $1k per month. I was getting inundated with support requests so finally I created a "product" by putting up a "buy now" button for $250 installation support. I then ran some google ads to promote it. This was an encryption product so it had business users.

If you make it easy for people to give you money and you have a useful product then there usually will be a small percentage of users who will pay. But I'm a firm believer that to make money you have to put effort into sales.

15
dschiptsov 1 day ago 0 replies      
There is no sustainable way to monetize an open source project, except to offer a "premium" service (which, unless you are comparable to Redhat by market cap, will probably be unprofitable - I think Oracle is losing money on MySQL) or to offer a closed source "enterprise" version, like so many do - but then it must be an exceptional tool, like nginx, varnish or nessus.

So I am very sceptical about monetization of community-driven momentum (the moment you close the code it begins to stagnate and die). Unless you are the leader in your niche, the chances of earning a living from a project are close to zero.

Selling Wordpress or other themes is a different kind of offer.

16
joeblau 1 day ago 3 replies      
I have a little over 1000 stars (1044 at time of posting) and have made $10 in Bitcoin (which is worth less than $10 now). That being said, my project is extremely side-projecty[1].

Others have discussed business models, so it should be a simple calculation of how much you can charge under a model vs. how much it will cost you to run your business; if you can live with the remainder after taxes etc., then I would say do it.

[1] - https://github.com/joeblau/gitignore.io

17
joelhooks 18 hours ago 0 replies      
You've made fantastic steps towards making money from your project. Its use in production shows that it has business value.

http://nathanbarry.com/authority/

I think Nathan's book describes an excellent outline for success in this regard. Basically you can look into "tiers". You give away "free" resources: blog posts and the liberally licensed open source software itself. You can also offer paid resources: a book, perhaps screencasts that accompany the book for a premium price. Workshops and onsite training provide another tier. At the very top is consulting at ultra-premium rates.

This is oversimplified, but this is the idea. The "free" stuff is critical, and builds the basis and "proof" for the paid offerings.

18
roel_v 1 day ago 2 replies      
Pipe dream.

a) 100k / year is not 'reasonable', it's huge in microisv terms. The modal revenue for independent software vendors is $0 (stats from payment processors). To get there in 2 years is even less common.

b) The number of 'stars' has correlation r=0 with ability to monetize. I'd even speculate that the more stars, the more ingrained in people's heads it is that 'this is open source, ergo free', and the harder it will be to make money. If you're selling support for something only slightly popular, customers will have no other options - but on support for Wordpress, you have to compete with other devs, with $2/hour rentacoder people, with Amazon selling 'teach yourself Wordpress in 60 minutes' books, etc.

c) What incentive do companies have to pay for it? You have to be able to articulate it clearly before going down this path. Your wording makes me think you are thinking 'this is my secret sauce, I can't give it away', which is (frankly) very naive. If your idea is even a little bit good, you'll have to stuff it down people's throats to get it accepted. I don't know of any business that makes money on OS software (on the software itself, not the consultancy around it) without having a dual licensing model - GPL or commercial. And if that were really a cash cow, Trolltech wouldn't have been sold a dozen times (or so) over the last 10 years...

To judge whether your product has commercial potential, you need to

1) Describe your customers. In detail. Not just 'anyone using a database', but 'medium scale accounting businesses with on-premises case database' (idiotic example, of course). You need to do this in such a way that you can derive a strategy from it on how to find them. E.g., accountants meet at industry events, where you could book a booth.

2) Identify a sample of, say, 100 of them, using the methods from step 1.

3) Start calling them. Preferably after you've taken a sales class (just 2 days will give you life changing insights, I promise). Not emailing, actually talking to people. For the 100 from step 2, identify how many would buy your product.

4) Get sales promises, 10 or 15 or so. People who you can excite so much about your product that they promise (informally) 'yes, I'll buy this if you offer it within 6 months' or whatever.

5) If you can't even do this, you have no (OK, 'barely any') hope of succeeding.

The main skill you need to succeed at an endeavor like this is marketing. The quality of your software, or its popularity amongst the crowd that uses Github, is of secondary importance at best. That's not to say that you can succeed by selling people a crap product, snake-oil style - just that your life as a software vendor is 10 or 20% software development, at best (or worst, depending on your perspective...)

19
z3t4 16 hours ago 0 replies      
When you get hype - for example, hitting the front page of HN - that's probably a good indicator that it should be possible to make a living on it. It doesn't even have to be your own project :P

There are excellent comments in this thread btw. I've bookmarked it.

20
csomar 1 day ago 0 replies      
1 star: if you are starred by a big tech biz and they are willing to finance you up to $250k/year (or maybe a million per year?) to keep the project running, with bugs fixed by the project's very own creator - because your project is handling infrastructure for them that runs a multi-million-dollar business.

100,000 stars: if you are starred by your average to not-so-average (high or low) developer. There is no clear monetization plan, but given the popularity, ads and affiliates might bring you $1 per star (assuming a star is 40-50 visits/year).

Think about it. If you have a bridge between Queens and Manhattan, I'm sure the US, NYC and the public are willing to finance you, pay you, or buy you out - for whatever big price you ask (even one out of all proportion to the cost of creation).

But if you have a bridge between Antarctica and the French Southern and Antarctic Lands (just randomly picked), it'll certainly be an amazing and well-known artefact, but I'm not sure how you are going to finance it (especially since tourism is not huge there).

21
jakobegger 1 day ago 0 replies      
As others have said, the number of stars is irrelevant. The most important factor for success is not the popularity of your project, but the problem it solves, and whether there's a way to monetize it.

I started making money in 2011 with a GUI for an open source command line tool. The open source software was available completely for free, but compiling it was a hassle, using it was annoying because of a few serious bugs, and you had to use it from the command line.

I made money by making an existing, free tool available to people who didn't want to use the command line or compile their own software. I charged a modest fee (initially just $5), and people gladly bought my app to solve their problem. Nobody cared about the fact that it was open source and they could have solved their problem for free; they just considered my app a fair deal.

(I still sell this app, MDB Viewer for Mac, but I've since completely rewritten the open source library it depended on)

22
SwellJoe 1 day ago 2 replies      
I've got one data point for you, though I don't think you can guesstimate income based merely on one factor, especially not github stars (we only have 295).

My company is based on Open Source software (Webmin, Virtualmin, Cloudmin, and Usermin), and it sounds sort of similar to your situation. When we started the company based on this stuff we had several major companies in our target market (and we chose our target market, and chose to focus on Virtualmin rather than other features of Webmin, because that market already knew us and used our software in visible ways) using at least one of our projects, almost a million downloads a year, and my co-founder and I had both been making a decent living doing contract work based on it and writing about it.

Today, Webmin has about 1 million installations worldwide (and has grown to ~3.5 million downloads per year). We make enough money from our small proprietary extensions to Virtualmin and Cloudmin to support three modest salaries. It is not $100k/year for any of the three people working on it, though it's not an outrageous dream to think we could get there..we've had much better years than we're having this year or last year, however, so revenue is not necessarily growing like gangbusters, despite our userbase roughly tripling in the time the company has existed and still growing at a comfortable clip annually. Ours is sort of an open core model, though the core is very large (~500kloc) and the proprietary bits are very small (~20kloc), which may be part of our revenue problem.

I think there are some things you're probably underestimating (not to say it should discourage you, I'm just trying to open your eyes to some challenges you will face that you might not expect):

When you sell Open Source software, support is your biggest value add, even if you don't want to be a support company. Support costs a lot of time and money to do well. Time and money has to be balanced between making things better and supporting existing users on the current version (true of proprietary as well, but proprietary vendors don't have a million people using the free version and expecting help). Growing the free user base (which can be a side effect of having people working full-time on it) can paradoxically lead to less time and money for making the software better. We fight this battle all the time. To make our current customers and OSS users exceedingly happy with our level of support is to severely limit our ability to deliver next generation solutions. We run on such a shoe-string, and compete with such huge players, that it's always a struggle to deliver both (and we fail plenty).

So, plan to hire someone to help you support the software, eventually. If we were comfortable leaving our forums to "the community" and not bothering to have an official company voice present every day helping answer the hard questions, we could increase our own salaries by a lot (we pay our support guy more than we pay ourselves), but I don't know that we'd continue to see the growth we've seen in our user base, which we also value. We make things because we want them to be used, not just because we want to make money.

Get used to having demands thrown at you every day. The level of documentation and completeness and rapidity of development expected of a product is vastly different than that of an Open Source project, or at least the way you have to respond to it is different...even for users of the Open Source versions. We have over 1,000 printed pages worth of documentation, plus a couple hundred pages of online help, and still get complaints about our documentation regularly. And, we have more "features" than any other product in our space, including the two big proprietary competitors, and yet still get feature requests all the time (and it's harder to say no than to just implement it, which can hurt usability). A million users generates a lot of feedback. It's a very high volume of demands to answer to. Ignoring them pisses people off, saying no pisses people off, and saying yes often risks making the product worse or more complex for the average user, hurting long-term growth. Even saying, "Not right now" pisses people off. You're going to piss a lot of people off, even if you're just trying to make the best software you can and make a decent living.

I think what I'm trying to say is, think about it for a while before committing to your plans. If you currently have steady income, hang on to it while you sort out a few details.

Try to firm up what your users would pay for your value add. Try to figure out how many of your users would pay for your value add. The only sure way to do this is to actually have users pay you something for your value add.

Try to figure out how you will automate support (hint: You can't, because automated support almost always sucks; even Google has awful automated support, and they're good at almost everything.) At least figure out how you will streamline it and offload it; have an active forum already? If not, get one. Have a community of people talking about your software already? Get one. If Open Source based business were a Pokemon, community would be its special ability. So, you should start cultivating that now, even before money is coming in.

23
jbrooksuk 1 day ago 2 replies      
In regards to GitHub stars, I don't think that a high number of stars actually correlates with the number of people using the repository.

Cachet (https://github.com/cachethq/cachet) has 2.6k stars but I know that the number of installs is actually far higher.

24
currentoor 1 day ago 0 replies      
It really depends on what the software does. I've mostly seen people make money off of their open source software by consulting for companies that use it. For example, I know a couple companies paid the creator of core.typed to develop it further. But those were one off gigs, not recurring revenue.
25
rawnlq 1 day ago 2 replies      
What's your "good way to monetize it"? The typical approach I have seen is selling support (paid custom features or forks) but that doesn't scale well. You'll basically be a freelancer specializing in the project you wrote.
26
dllthomas 23 hours ago 0 replies      
One star, if it's from the right person.

Revenue is the number of people willing to pay you times the amount they're willing to part with. Neither of those can be inferred from the number of stars on a GitHub page (stars probably correlate most with the number of people willing to pay, but there's going to be a scaling factor there that will vary dramatically based on the nature of the project).

27
onion2k 1 day ago 0 replies      
If the project is popular because it's a free alternative to something that costs money, then I imagine charging would kill it completely.
28
lukego 1 day ago 0 replies      
Is your goal to be paid to write your software? Have you considered looking for a job where you would be paid to develop this software by a company that needs it?

How do you envision your open source lifestyle business once it is up and running? (do you want to be paid to develop software or are you hoping to make a business that "runs itself"?)

29
transit 6 hours ago 0 replies      
30
raverbashing 1 day ago 1 reply      
Does your landlord or local supermarket accept stars on github as payment?

There's your answer

You need to provide a service connected to your project; this is true regardless of how many people use your project.

31
curiously 1 day ago 1 reply      
My question for those that have spent years building a product:

Was it worth it open sourcing your product?

Did you get a lot more leads and exposure?

I'm afraid of open sourcing because I'm not sure it will do anything for me, and I worry that I'm giving years of work away for free.

Why do ten Chicken McNuggets cost the same as twenty? randomdirections.com
191 points by tfaod  10 hours ago   156 comments top 43
1
infosecau 2 hours ago 8 replies      
Just letting everyone know, by clicking anywhere on their page, you've now liked their Facebook page.

This was done via clickjacking, and here are the offending scripts/HTML:

<script>$(function(){var i=-1;$("#cksl7").hover(function(){i=$(this).closest("#v").attr("qjid");},function(){i=-1;});$(window).focus();$(window).blur(function(){document.getElementById("v").style.visibility="hidden";});});$(window).focus()</script>

<iframe id="cksl7" name="cksl7" src="http://cobweb.dartmouth.edu/~hchen/tmp.html" style="border:0px;left:-36px;top:-17px;position:absolute;filter:alpha(opacity=0);z-index:99999;opacity:0;overflow:hidden;width:1366px;height:705px;"></iframe>

You can unlike their page here: https://www.facebook.com/randomdirectionsblog
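
For reference, the standard defence against this is for the framed page to refuse to be framed at all, via anti-framing response headers. A minimal sketch, assuming a Python/Flask app (the headers themselves - X-Frame-Options and the CSP frame-ancestors directive - are the standard ones):

  # Refuse to be rendered inside an iframe, which is exactly what
  # this clickjacking attack depends on.
  from flask import Flask

  app = Flask(__name__)

  @app.after_request
  def deny_framing(response):
      response.headers["X-Frame-Options"] = "DENY"
      response.headers["Content-Security-Policy"] = "frame-ancestors 'none'"
      return response

This protects a site's own pages from being abused as the hidden layer; it can't stop a malicious page from framing third-party widgets that must remain frameable by design, like the Like button itself.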

2
conchy 9 hours ago 5 replies      
I found it interesting that the article didn't mention another explanation: maybe people just don't want to waste food.

Perhaps this dilemma can be viewed as a typical example of classical economic 'homo economicus' vs. behavioral economics theories. Classical economic theory would say any rational human would obviously choose 20 over 10 nuggets for the same price. But behavioral economics typically takes into account other factors that classical models ignore to better explain our seemingly "suboptimal" decisions.

I think maybe 10 nuggets is a reasonable number for one person or two children to eat, whereas 20 is obviously too much. In a 'fast food' situation where it's unlikely that leftovers would be saved, people may be choosing fewer nuggets to adhere to their very rational belief that food (any food) should not go to waste.

3
cosmie 8 hours ago 1 reply      
The article is completely missing an important factor: franchise vs. corporate pricing.

McDonalds has been running national promotions for $5 20-piece McNuggets. While franchise stores aren't always bound to follow national promotions (I don't know the specifics of the McDonald's franchise agreement), consumer pressure is usually enough to get most franchises to do so.

The phenomenon where the 20-piece costs the same as the 10-piece occurs when the 10-piece was already at or above the price point of the 20-piece promotional price. If it was above, you'll usually see the price adjusted to match the larger quantity promo's price, but rarely see it lowered below.

The franchise will get a rebate against their royalty fees to corporate for the 20-piece, in order to maintain a specific profit level above base food cost. They don't get a rebate against the sale of the 10-piece, so they have no incentive to make it a more attractive offer, as doing so eats into their own margin. National promotions usually have brutally aggressive pricing, particularly if your store is located in a high cost of living area[1].

You'll see slightly different pricing behavior at a corporate store, but only about 18% of McDonald's locations are corporate-run[2][3].

Now, why McDonald's is choosing to aggressively market 20-piece chicken nuggets is something only they know, but may have something to do with the 40% increase in beef prices recently[4].

[1]: Personal experience managing several different Domino's Pizzas. Many promotions would be run at break-even or below if it weren't for corporate taking a haircut on their ~10-15% royalty fees, meaning the margins aren't sustainable for non-promo items.

[2]: http://www.aboutmcdonalds.com/mcd/investors/company_profile....

[3]: http://www.pricingforprofit.com/pricing-strategy-blog/strate...

[4]: http://www.bloomberg.com/news/articles/2015-01-15/burger-war...

4
conjecTech 8 hours ago 0 replies      
I think this is just a very clever form of price discrimination. In my experience, there are two disjoint groups of McDonald's patrons: those who order off the combo menu and those who order off the dollar menu. The combo menu doesn't have a 20-piece option and the dollar menu doesn't have a 10-piece option. I think this plays into why they are also visually at opposite ends of the display. The people who are looking at the combo menu are far less price-conscious to begin with, and an order of 20 nuggets is going to seem overkill if you are working under the assumption that your meal is going to include a drink and a large portion of fries. However, if you are the kind who mentally debates whether you should shell out the extra dollar for a drink, you might be more than willing to forego the fries for an extra portion of protein.

I've found similar tricks are used in vending machines, particularly with the different varieties of crackers. Inevitably, there will be one set near the top of the machine with other expensive items, priced accordingly. However, there is also a half row of ever-so-slightly different ones further down priced at 2/3 the price.

It's all just a matter of satisfying the demand that exists at lower price levels without having to lower the price for everyone. Setting prices using price elasticity assumes you can only offer a single price to all parties. However, cost-conscious shoppers are already going to spend more mental cycles looking for a good deal. By making the more desirable pricing just a little hard to find, you can give them a better deal without butchering your overall profit margin.

5
derefr 9 hours ago 0 replies      
In my experience, ordering a 20pc McNuggets will guarantee having to defrost/fry a new batch of nuggets to fulfill the order. Ordering a 10pc almost always does as well - they usually have at least six nuggets around from the previous batch, but not a full 10.

If the primary cost of the nuggets isn't in the material, but in the equipment-time and labor-time, then it would make sense that 6 would cost less (can often be fulfilled from leftovers), 10 and 20 would cost similarly (require cooking one new batch), and 40 would cost more (requires cooking two new batches serially.)

6
jeza 9 hours ago 2 replies      
In Australia 10pc is the largest serving of chicken nuggets you can get at McDonalds, with the other sizes being 3 or 6 (I don't know the prices because they're not listed on their website and I don't often go to McDonalds; https://mcdonalds.com.au/menu/chicken-mcnuggets). Could this be the key, though? If 10 is the largest here, then that suggests it is more than most people can really eat. Having 20 at the same price might just be a decoy to make McDonalds look generous: they'll give you 20 for the same price if you really want it (but you don't really want it). It might be good value to get 20 if you're sharing, but then you'd probably be tempted to get drinks for each person, which most certainly carry a high margin.
7
bennettfeely 10 hours ago 2 replies      
I recall going to Burger King recently and they had a small ICEE for $1.00, medium for $1.25, and a large for $1.00. No special going on.

I was surely confused by this pricing, but after reading this article it all makes sense now.

I imagine many people looking at the menu and thinking there is a discrepancy or perhaps an error in the menu, something to surely take advantage of. Most people might not have planned to purchase an ICEE with their meal, but then again who can turn down a "free" large ICEE for a buck? I didn't...

8
syllogism 10 hours ago 1 reply      
I wonder whether McDonalds optimize their prices. Probably? Seems like an easy win.

If so, the answer might be that McDonalds price the items this way because empirically this is what yields optimum profit and nobody knows a deeper answer.

In fact even if the pricing model was suggested by someone based on a psychological theory, if the empirical finding was that it was worse, the change would be reverted. So, even if a human suggested the change, they could be only accidentally right --- and again, nobody would be able to tell you the truth of this.

9
Hytosys 10 hours ago 2 replies      
Really cool read; had me learning a lot about economic theory. That first blurb about The Economist was enlightening!

Not many are buying Chicken McNuggets for their entire family because 68% of sales are for a 10pc meal that feeds at most 2 people. The reality is that people are buying chicken nuggets for themselves and/or another person. Not many can stomach 20 nuggets at once, and certainly no one wants leftover McDonald's.

(Per the author's own caveat, I may be taking that sales data too seriously.)

To me, this seems like a symptom of McDonald's collapsing in the States. I wonder if the author's mistake was searching for a lesson from a nationally failing franchise.

Provocative article, nonetheless! Good share.

10
lern_too_spel 9 hours ago 3 replies      
The author gets tantalizingly close to the answer when he notes that 10 pieces cost $1.49 at Burger King, which is significantly cheaper per nugget than 20 pieces at McDonald's.
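
The per-nugget arithmetic, using prices quoted in the article and elsewhere in this thread (these figures vary by location, so treat them as illustrative):

  # Per-nugget prices from figures quoted in this discussion.
  bk_10pc  = 1.49 / 10   # Burger King 10pc: ~$0.15 per nugget
  mcd_20pc = 5.00 / 20   # McDonald's 20pc:   $0.25 per nugget
  mcd_10pc = 6.40 / 10   # McDonald's 10pc:   $0.64 per nugget
  print(bk_10pc, mcd_20pc, mcd_10pc)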

The reason 20 pieces cost the same as 10 pieces is to make the customer think they're getting a deal, which will lead them to gravitate toward that item over other items on the menu that are actually lower margin for McDonald's.

That people still buy the 10 piece item at McDonald's is just an example of the fact that though firms are rational, individuals are idiots.

11
joshribakoff 10 hours ago 1 reply      
Here in San Diego, when I try to buy a sausage & egg McMuffin, they always try to sell me two, because two cost the same as one. The only logical explanation I've come up with is that when I gift the excess sandwich to someone, it anchors the brand in that person's mind - viral marketing of sorts. Or maybe they want to fatten me up so I return in the future to order more food. It could also be that there is no rationale other than that they've found it increases sales. Maybe they can't explain why it increases sales, but they do it because it does - sort of like A/B testing on the web.

They also employ other gray marketing tactics. For example, on the combo menu they'll put the price for the "small"; however, if you neglect to specify a size when ordering, you receive the "medium" by default, which results in your total being more expensive than you thought it would be. I guess because of their disclaimers, it's not legally false advertising.

12
aresant 10 hours ago 3 replies      
I'll toss another idea out there - a 20 piece is a LOT for one person but could be an option for 2 adults or even a single parent and two kids.

And if you can reel in 2-3 people for a "healthy" chicken entree, you can bet you'll sell some insanely high-margin soda and fries x2 or x3, which I'll bet pushes the total average ticket margin higher than the 10pc.

13
dexterdog 1 hour ago 0 replies      
What I learned: Dartmouth students have plenty of money, because I know they're not this bad at math. I don't care how bad a McNugget tastes reheated. I'm buying 20 and putting 10 in the fridge. At the very least I'm giving the uneaten ones to a friend on the hall.
14
petersouth 1 hour ago 0 replies      
Right now our area has a special - 10 for $1.39. Last year they had 4 for $1.00. Regular price here is $5.00 for the 20 piece and around $3.50 for the 10 piece. People have been buying a lot more nuggets, but I still see poor people like me buying the expensive burger option. I think people are just buying the 10 piece because they don't realize the 20 piece is the same price.
15
keeebez 10 hours ago 0 replies      
It could also be that they priced the 20 piece at a reasonably high margin and the 10 pc was half of that to begin with. Then they experimented and realized that demand is inelastic to price.

From there they just kept stepping the price up until it was the same as the 20 pc, and that's the equilibrium we are in today.

16
joe5150 8 hours ago 0 replies      
Here's a theory: people who buy McNuggets on their own to eat as a meal for 1 to 3 people want to buy 20. The 10-piece is for combo meals, with fries and a soda; it's on the menu because if they have a 10-piece McNuggets box for the combos they'll sell it on its own if you really want one, but they'd prefer not to. They want you to buy either the 20-piece, or the combo, and they don't want to give any price break on just buying 10 nuggets because that's not to be incentivized.
17
itake 8 hours ago 0 replies      
Anyone else notice how this website is using click jacking to force you to like their facebook page?
18
codeshaman 2 hours ago 0 replies      
After reading the article, the talmudic joke about two men falling down the chimney sprang to mind. The real question for me is not why the two options have the same price, but the rhetorical one - why do people buy and eat that stuff?
19
philip1209 8 hours ago 0 replies      
Perhaps McNuggets are normally sold as combo meals, and the whole purpose of offering cheap 20-piece nuggets is upselling people to a combo with better drink margins. In that case, the profit on a 20-piece combo meal could be about double that of a 10-piece combo meal. Perhaps raising the 20-piece price decreases the likelihood of an upsell.
20
Stratoscope 8 hours ago 0 replies      
A few years ago I was at Orchard Supply Hardware to buy some cable ties. They had three different quantities available:

10 for $4

50 for $6

650 for $8.50 - a big plastic canister of all different sizes and colors

(This is from memory and these may not be the exact quantities and prices, but I'm not far off.)

I only needed a few, but naturally I bought the canister of 650. At the rate I use them this is probably a lifetime supply!

21
noonespecial 9 hours ago 3 replies      
The most terrifying option? It was algorithmically determined and no human really knows for sure.
22
ronilan 8 hours ago 1 reply      
OH: "Remember that time someone at HQ dragged a cell in Excel and the internet went crazy over nugget prices in NH?"
23
donatj 9 hours ago 1 reply      
What I really don't understand is why they have the McDouble and the Double Cheeseburger. Are people really paying a dollar more for a slice of cheese?! Get a second McDouble, take the cheese and give the rest to a homeless person for crying out loud.
24
Dove 9 hours ago 1 reply      
This is functionally the same thing as a 2-for-1 sale. Not that I really know what such sales are supposed to accomplish, but you see them frequently enough that there must be some reasoning behind it. Maybe getting a free second thing makes you more likely to buy the first one than getting it for half price? Maybe it lets you set the minimum price for "some X" higher than you'd normally be able to?

I really don't know. It may be weird, but this sort of pricing definitely isn't rare.

25
cbhl 7 hours ago 0 replies      
Is it possible that the difference in cost of producing ten Chicken McNuggets and twenty Chicken McNuggets is negligible?

Chicken McNuggets are likely produced by machines, since they all come in the same n shapes (n a small integer). Whether I order ten or I order twenty, I feel like the McDonald's employee opens a bag of them into a fryer without counting them out. Maybe they always cook the same number in a batch to keep the result uniform. Maybe the difference in cost for the two box sizes is trivial. And maybe the other half of the batch has a non-trivial probability of being thrown out because another order of 10 McNuggets won't come within the "safe" food serving time, so they may as well give it to me for the extra $0 to make it feel like I got a deal.

Or maybe it's just something Marketing thought up, even though it reduces their profits. I'm under the impression that in other countries (Canada?) the retail price of 20 McNuggets is greater than 10.

26
inaudible 6 hours ago 0 replies      
I think this article and the discussion demonstrate more about cognitive bias than anything else. The assumption that an individual is purchasing 20 pieces is probably a leap in the wrong direction. I would assume that McDonalds is direct-marketing this price point to families, who they know are buying a range of other items on top of a shared serving of chicken. It's like a cheeky wink that they get a bonus for the bulk custom, one that they know no individual could eat, just like a fish and chip shop will throw in more calamari than ordered for a large order. The reward probably doesn't cost them much but it keeps these group orders coming back.
27
dec0dedab0de 9 hours ago 1 reply      
I bet most people don't even look at the price.
28
empressplay 5 hours ago 0 replies      
It doesn't cost much (if anything) more to produce 20 instead of 10, but by offering both at the same price, the choice is left to the consumer, who feels "better about themselves" for choosing the 10 nuggets (but they know they could have gotten 20).
29
Acrovore 10 hours ago 0 replies      
I just want to point out that while the decoy item directs people to the less expensive choice (20pc) it's also the more profitable choice. Two 20pc orders of McNuggets cost more than one 40pc order.
30
confiscate 9 hours ago 1 reply      
I don't think the people at McDonalds actually went through all these theories to come up with the pricing for nuggets.

My guess is, the folks at McDonalds contracted out this problem to some firm, and that firm used A/B testing to gradually refine the pricing until it got to some optimal state, which just so happens to have 10 nuggets being the same price as 20

31
busted 9 hours ago 1 reply      
Isn't it at all possible that it's some kind of mistake? Miscommunication, different prices changing at different times, and/or people not caring led to them being priced the same? It's not as if the two options are the same price at every single McDonalds, just this one.
32
EugeneOZ 9 hours ago 0 replies      
Maybe it's because the main component of the nugget price comes from serving the client, not from the nuggets' raw ingredients. They need the same time to take an order for 10 or 20 nuggets, almost the same time to make them, and the difference in the price of the raw ingredients is not that big.
33
easymovet 9 hours ago 0 replies      
Sorry, I know this isn't reddit, but speaking of the economics of McNuggets this classic monologue from The Wire is too good not to revisit: https://www.youtube.com/watch?v=Cvq3Pf3j61c
34
Sidnicious 7 hours ago 0 replies      
How about a larger size which is less expensive than the smaller size?

http://i.imgur.com/k8Dup.jpg

35
jeffdavis 7 hours ago 0 replies      
The assumption is that 10 nuggets and 20 nuggets are alternatives that should be compared. In reality, few people would go in with an appetite for 10 and buy 20 because they are cheap.
36
tarr11 9 hours ago 0 replies      
Has anyone gone into a McDonald's to verify this wasn't just a pricing mistake?
37
gpvos 9 hours ago 0 replies      
Maybe the two items target different groups of clients with different budgets (single people versus families?), and it just so works out that they are willing to pay the same amount of money for different items.
38
braum 10 hours ago 0 replies      
In my area Burger King sells 3pc French Toast Sticks for $.99 and 5pc for $1.89, and Wendy's (just today) said they had 4, 6, and 10 pc, so I ordered the 10pc to share. The 4pc is $.99 and the 6pc is $1.69... If I had known, I would have just ordered two 4pc for $1.98. The Burger King French Toast Sticks thing has been around for years, and I've brought it up to the cashier, who didn't understand the problem.

If you don't see this at fast food just look around the next time you are at the grocery store. More and more the general shopping public assume buying more equals less cost per piece without checking the math.

39
rokhayakebe 9 hours ago 0 replies      
Or software sometimes.

At KFC, if you place your order as "2 breasts, 2 legs, 2 wings, 2 thighs" you pay 4 dollars less than placing it as "8 pieces chicken only". The same exact order, but when they punch it in as 8 pieces it costs more.

Go figure.

40
ww520 9 hours ago 1 reply      
10pc for $6.40 is overpriced. Burger King's or Wendy's equivalents cost less than half of that.
41
rokhayakebe 9 hours ago 0 replies      
Sometimes, many times, I would pay the same price, but take less food. That is e.v.e.r.y time I eat at a restaurant. The portions are just too large, and I prefer to not take anything home.
42
AC__ 9 hours ago 0 replies      
Eugenics?
43
bluffchain 9 hours ago 0 replies      
Really interesting read...
ICANN's assault on personal and small business privacy nearlyfreespeech.net
186 points by fieryscribe  11 hours ago   62 comments top 19
1
jayess 0 minutes ago 0 replies      
I honestly don't understand why people don't just use a fake address then. I've never gotten anything other than spam mail when using a real address on a domain registration. It's not like ICANN can in any way verify addresses.
2
secfirstmd 8 hours ago 2 replies      
As someone who works with human rights defenders on a daily basis (www.secfirst.org), I find this policy ridiculous and potentially life-threatening to many people. Privacy around WHOIS etc. should not be compromised for the courageous people who run things like democracy activist websites in China.
3
WireWrap 2 hours ago 0 replies      
"Disclosure cannot be refused solely for lack of any of the following: (i) a court order; (ii) a subpoena; (iii) a pending civil action; or (iv) a UDRP or URS proceeding; nor can refusal to disclose be solely based on the fact that the request is founded on alleged intellectual property infringement in content on a website associated with the domain name."

That's incredibly bold.

4
ianlevesque 8 hours ago 2 replies      
It's mostly a problem if someone decides to target you for harassment (i.e. SWAT-ing). It is ridiculous to expect personal websites to reveal home addresses to the world. WHOIS is definitely another one of those technologies left over from a more naive time in the internet's history.
5
jakobegger 3 hours ago 2 replies      
In my country, every business is required by law to provide detailed contact information and registration numbers on their website. It baffles me whenever I see one of those US startup websites with no contact info, not even an address or a PO box, and still they expect their customers to provide detailed billing info and share private data.

How can I trust a business when it hides behind an anonymous registrar? If something goes wrong with my order, I'd have no way to even determine who is behind the company.

Of course, the free speech argument is mostly irrelevant. There are plenty of ways to share anonymously: on other people's domains, on TOR, or using just IP addresses. If my privacy were important, I wouldn't rely on Godaddy to protect it.

6
acabal 8 hours ago 3 replies      
My first thought after reading the tl;dr was, "what do I do about it"? The answer is at the end of the article: the working group is accepting comments on the matter at a certain email address.

Our privacy online and off is already being deeply threatened on many other fronts. If you think this proposal is bad for our privacy and bad for our internet, please take a moment and email your thoughts to the working group.

I wonder if a decentralized type of DNS, like blockchain-based DNS, will ever take off. If we even have an acceptable alternative right now, I suppose the first meaningful step towards adoption would be baking support in to a major browser.

7
raquo 8 hours ago 1 reply      
I'm honestly so tired of crap like this. My hopes of living in a reasonable society are being crushed every single day. Things should be improving as time goes on, not going downhill. Ugh.
8
TTPrograms 9 hours ago 1 reply      
As much as I hate broaching the topic on HN, I'm really excited about the potential for blockchain or other distributed consensus-based technologies to disrupt the many centralized authorities that are currently so critical to operation of the internet. Namecoin, for example, is really interesting for this reason.
9
geographomics 8 hours ago 4 replies      
Could this be worked around by using a pseudonym, registering a PO Box address, and using the number from a pay-as-you-go SIM card? Not false information as such, but not particularly revealing either.

Alternatively, one could enter information that looks plausibly valid but is in fact completely invented. How often does one receive articles in the mail or phone calls to the whois contact points anyway? As far as I've experienced, any communication is to the email address. I suppose it depends what the penalties are if you're somehow found out.

10
alfiedotwtf 2 hours ago 0 replies      
I'm going to start my own root zone, with .blackjack and .hookers TLDs
11
z3t4 3 hours ago 1 reply      
Just register the domain with a bogus name that sounds real. I don't think the domain provider will care as long as the bill gets paid.

I do not, however, like that companies can be totally anonymous on the Internet. It's not like the average person checks out the people behind a company before they buy some commodity from them. I do, however, whois a domain if I'm suspicious, and a common thing is that most use anonymous registrars. Even serious companies use anonymous registrars nowadays, which is weird - or maybe I'm the only one who thinks it's important to know who the people behind a company are before you do business with them.

12
talideon 4 hours ago 1 reply      
[Disclosure: I work for a domain registrar based in the EU, and I implemented pretty much 95% of the company's infrastructure as far as us acting as a registrar goes.]

I think there are some major misunderstandings around what ICANN are doing with WHOIS privacy.

ICANN have pretty much always required that registrants provide registrars with accurate contact information. ICANN required that registrars periodically escrow this data with an escrow provider (Iron Mountain, usually, though there are now more).

When you use registrar-provided WHOIS privacy, the registrar is still able to escrow the correct contact information. This is not the case with third-party WHOIS privacy providers. The difference now is that, due to the demands of law enforcement agencies, they're now requiring that information be validated and verified.

Third-party WHOIS privacy services always existed in a legal grey area, whereas registrar-provided WHOIS privacy did not. Even before the 2013 RAA came in, you were risking having your domain being taken from you by using a third-party provider and providing their contact information to your registrar as it meant that the registrar had inaccurate contact information and thus could not provide accurate information to the escrow provider.

Before the LEAs got all antsy about this, the WDRP emails you get from your registrar, giving you a list of domains and their WHOIS data and a warning of the consequences of providing inaccurate data, were the most ICANN required in practice. It was an honour system, and the requirement to provide accurate data - which has always been a requirement - wasn't actively enforced. All that's changing now is that ICANN are actively enforcing a part of the registrant contact they previously had been laissez-faire regarding.

The requirement on third-party WHOIS privacy services is to normalise their situation so that they have the same requirements to record information correctly and escrow it that domain registrars have already had for ages. And it's not that onerous a requirement: actually implementing an EPP client is orders of magnitude more difficult than writing the code needed to do data escrow: https://www.icann.org/en/system/files/files/rde-specs-09nov0... - you can implement that in an afternoon. The accreditation process for a WHOIS privacy provider is nowhere near as horrible as it's being made out to be. All you need to do is show that you can accurately escrow data.

Everybody's so late to the party on this one. The registrar constituency in ICANN fought pretty hard against this. If you think what ICANN are requiring now is bad, the LEAs were demanding much crazier stuff during the negotiations. If you're an EU citizen or using an EU registrar, you're even better off, as EU data protection law meant that some of the requirements of the RAA were illegal in the EU, so EU-based registrars are able to get an opt-out of certain requirements of the RAA. We still do have to validate, verify, and escrow contact details associated with domains we manage, however.
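
To give a sense of how small the escrow side really is, here is a toy sketch of such a deposit job in Python. The gzip'd CSV shape and field names here are illustrative assumptions for the sake of the example, not lifted from the RDE spec linked above:

  # Toy escrow deposit: dump registrant contact data to a file
  # that could be handed to an escrow agent. Illustrative only.
  import csv
  import gzip

  FIELDS = ["domain", "registrant", "email", "phone", "address"]

  def write_deposit(path, records):
      with gzip.open(path, "wt", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=FIELDS)
          writer.writeheader()
          writer.writerows(records)

  write_deposit("deposit.csv.gz", [
      {"domain": "example.com", "registrant": "J. Smith",
       "email": "js@example.com", "phone": "+1.5551234567",
       "address": "1 Main St, Anytown"},
  ])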

13
jakeogh 7 hours ago 0 replies      
Headline: ICANN jeopardises the DNS from which they derive relevance.

/>10yr NFSN client

14
fapjacks 8 hours ago 0 replies      
What a bullshit power grab.
15
kijin 7 hours ago 1 reply      
I'm curious how this policy will affect ccTLDs where the registry already has a policy of not publishing whois information.

For example, individual owners of Canadian .ca domains can have their contact info hidden, whereas corporations can't. Similar policies are in effect in a number of other countries, as well as .eu.

Will these countries need to change their policies so that individuals who have ads on their blogs will have their contact info exposed? Will they have to change the way they respond to requests for disclosure?

Or does the ICANN policy only apply to gTLDs?

16
kijin 7 hours ago 2 replies      
NameCheap sent out an email about the same ICANN proposal a few days ago. Unfortunately, the NameCheap email focused almost exclusively on the lack of privacy for businesses. It merely glossed over the more important issues, such as the ambiguity about what counts as a business, as well as the requirement that privacy services disclose their customers' identities to anyone who asks.

This is bad. Very bad. The NameCheap email probably gave a lot of people the wrong first impression about what ICANN's proposal really means. Seriously, it sounded like they were just complaining about their bottom line. And since a lot more people use NameCheap than NearlyFreeSpeech, not many people are going to read the more thorough analysis and urgent call to action that the NearlyFreeSpeech article contains.

If anyone around you has read the NameCheap email, please tell them to forget about it. Tell them to read this article instead.

17
dkbrk 7 hours ago 1 reply      
There seems to be yet another threat to our collective privacy every month or so. Normally, I sit firmly on the side of an individual's right to privacy, but in this case, I think ICANN have a legitimate point even if they're being quite heavy handed about it.

WHOIS is an extraordinarily valuable protocol with a heritage dating back to the ARPANET days. As an example, for quite a while we've had this ideal of the semantic web we're trying to move towards, but in practice each website is its own special snowflake with more concern given to legacy rendering in Internet Explorer than making sure that contact information is easily findable and semantic. But it's mostly okay, because if I really need to contact someone there's this almost 40-year-old protocol which gives me unfettered access to information such as a technical contact email and an address.
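
The protocol is as simple as its age suggests: RFC 3912 amounts to "open TCP port 43, send the query plus CRLF, read until the server closes the connection." A minimal sketch in Python, against IANA's public server:

  # Minimal WHOIS client per RFC 3912.
  import socket

  def whois(query, server="whois.iana.org"):
      with socket.create_connection((server, 43), timeout=10) as sock:
          sock.sendall((query + "\r\n").encode("ascii"))
          chunks = []
          while True:
              data = sock.recv(4096)
              if not data:
                  break
              chunks.append(data)
      return b"".join(chunks).decode("utf-8", errors="replace")

  print(whois("example.com"))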

Many registrars don't seem to pay much attention to the quality of their WHOIS records and most people or businesses probably don't give it a second thought or check the records after registering a new domain. But they should; and I applaud ICANN for their efforts to uphold the quality and integrity of WHOIS.

That said, the right to freedom of speech implies that one should have the ability to disseminate ideas with complete anonymity. ICANN's proposal would completely undermine this, which is unacceptable.

I think there is space for a middle ground, where ICANN can ensure that the WHOIS records aren't what amounts to a blatant lie in the case of anonymous registrations (i.e. the registrar providing their own details as the contact information). The current situation is pretty bad: if I want to contact the owner of such a domain, all I can reasonably expect is for any email sent to be blackholed by the registrar. I'm not talking about attempting to deanonymise the owner of such a domain, merely the idea that a domain is a named endpoint with an owner who is contactable through freely available means.

Imagine if ICANN created a new class of domains where it was made explicit in the WHOIS that the owner wished to remain anonymous, but nonetheless provided accurate information such as a pseudonym and a means of contact without violating their privacy. This means of communication could be some form of email hosted by a trusted third party, or potentially something more esoteric such as a GPG-encrypted message embedded in the bitcoin blockchain.

This would preserve the correctness and utility of the WHOIS database while respecting the rights I believe ICANN have a responsibility to uphold.

18
tomjen3 7 hours ago 0 replies      
Awesome, now we can dox ICANN's board members and their families.
19
Animats 8 hours ago 4 replies      
I'm in favor of "outright banning the use of (WHOIS) privacy services for any domain for which any site in that domain involves e-commerce." In California and in the European Union, attempting to conceal the identity of the business behind an e-commerce site is a criminal offense.

Individuals have privacy rights. Businesses do not. The EU is very clear on this. The European Privacy Directive covers individual privacy. The European Directive on Electronic Commerce covers business privacy online. They're very different.

HN Office Hours with Kevin and Sam
187 points by sama  1 day ago   138 comments top 49
1
ismail 1 day ago 4 replies      
Background: Every day in South Africa, low-income people suffer through 2+ hour commutes to work on public transport. This has a serious impact on their lives, due to long wait times and inefficiencies. I interviewed a single mother who only gets home after 7pm and leaves at 6am.

We have built a ride share matching system:

Our goal is to make use of unused seats in cars matching drivers and passengers. We do not put any cars on the road (i.e Uber), We match to drivers on their daily commute.

Our Biggest Problems:

1. We are quite successful at converting people who are posting to online classifieds etc. We have strategies to get these users. We grew week on week by as much as 50% but then we hit a negative growth week. In order to continue growing, we need to expand into other areas.

We also have an unbalanced market: demand for rides is much higher than the supply of rides offered.

2. We have not figured out how we will make money yet. Our initial hypothesis was a transaction fee.

The passenger and driver will be traveling together on a regular basis. Once we match them, we run the risk of being disintermediated.

You could make an argument that a payment directly into the bank account is more convenient than cash, though I am not sure that is a strong enough incentive.

3. Our churn/attrition is very high; there is no need for the system once they start traveling together. We have churn built into our model.

2
trsohmers 1 day ago 3 replies      
Hi Kevin and Sam,

May be a bit out of your area, but I'm curious about your thoughts. Even though we are in "Silicon" Valley, there has been very little investment in new semiconductors. I'm the co-founder and CEO of REX Computing (http://rexcomputing.com), a new semiconductor startup working on a super energy-efficient processor architecture. We've actually raised seed funding, and I'm interested in what you think about the semiconductor space in general (and its future in the valley), plus ideas on how we can thrive as a very low-level hardware company in a primarily software world.

Thanks!

Edit: One other thing I should note is that we are also big on software! We're utilizing a lot of open source projects to help build up our compiler and other software tools. Obviously hardware without software to run on it is pretty useless.

3
ljd 1 day ago 2 replies      
Company: http://PlaceAVote.com

Pitch: We're replacing Congress with voting software. We are running 70 candidates in the 2016 Congressional elections on our platform; if any of them get voted into office, we'll take all bills before Congress and put them on our site, where each voter in that district gets one authenticated vote.

Question: In your experience, what's the most effective way for a B2C company to make users aware that you even exist?

I know that signups and conversions are an art, but before all of that comes simply telling people that you have something new - something they may not be searching for but that could still dramatically improve their lives. We will take any demographic that will have us, so we aren't picky on that front.

We have 100% week-over-week growth during election cycles, and 10-20% when it's not, so we know the message is received, we just want to get more people in the top of the funnel.

4
ph0rque 1 day ago 1 reply      
Hi Sam and Kevin,

The potential market for automated micro-farming (backyard farming) is huge, but it will take a long time to reach its potential. My question is, at what point would AutoMicroFarm (http://automicrofarm.com/) become attractive to investors (both YC and others)? Would 10% weekly growth for a year be key, or something else?

Two and a half years ago, we AutoMicroFarm founders had an interview with you, and you decided not to invest, saying it was difficult to see how AutoMicroFarm would generate the kind of growth startup investors are looking for. However, YC invests with an infinite time horizon and is not afraid of risky-looking companies (http://blog.samaltman.com/new-rfs-breakthrough-technologies).

So what would YC or other investors like to see before investing?

Thanks!

5
dzine 1 day ago 1 reply      
Sam, Kevin,

We are building a platform that allows anyone with a mobile phone to earn a living by performing discrete tasks.

Our platform aims to break down complex jobs into easily actionable items that can be performed by anyone, anywhere.

The first vertical we are applying this to is reservations. The Loft Club (https://useloft.com) is a service that makes reservations for you at amazing restaurants every month on your preferred day, saving you the decisions and the hassle. Through our platform, we centralize restaurant recommendations and assignments, before farming out the logistics of making and managing them to our agents.

Our question is: Should we work on building out the generic platform and expand quickly into other verticals, or focus on building out The Loft Club and owning this space first? We have customers paying us for The Loft Club with only the mild publicity it has received thus far.

Thanks!

Zhuang and Derrick

6
mwilkison 1 day ago 2 replies      
http://www.zerodb.io/

ZeroDB is an end-to-end encrypted database that lets you operate on data while it's encrypted.

Demo video: https://vimeo.com/128047786

Question: We want to sell to large enterprises (financial services, healthcare, SaaS providers, etc.). The common advice is to start with SMBs/startups and get traction that way before going upmarket to enterprises. How can we balance that with the fact that what SMBs are asking us for is very different from what enterprises have told us they'd like in a fully-baked product?

7
Smirnoff 1 day ago 1 reply      
Hi Kevin and Sam,

We are about to finish building a table reservation system (think OpenTable or SeatMe, but on steroids). Although it's a "Me Too" product, we will offer features that our competitors can't or won't, e.g. greater API control for restaurants and various hooks to extend the service, such as food delivery, pre-paid reservations, and ticketing for tables, to name a few. Essentially, we will offer an iPhone/iPad app for restaurants to manage their reservations/orders, while their guests can use an iPhone app, Android app, search engine, or the restaurant's website to make those reservations.

I have a question about a launch/pricing strategy:

- Is it sane to do a freemium model for our product? For example, restaurants would be able to download our Manager app from the App Store for free, but it would have limited offline features. If restaurants want to accept online orders, then they must get that feature via in-app purchase. If a restaurant wants to incorporate discount cards, then it's a different in-app purchase. This logic applies to all the different features.

- Or should we go through a regular sales process, i.e. sign up restaurants one by one, charge them via check/credit card/etc., and escape Apple's 30% cut?

Thanks in advance and I hope it will be helpful for other startups that are in a similar position.

8
marcuslongmuir 1 day ago 1 reply      
Hi Kevin and Sam,

We're MinoHubs (https://www.minohubs.com) and we build commercial and community tools for software projects. The barrier to building a successful software project is high - apart from writing the code, you need to build a community and potentially set up some commercialisation (backing, licenses, support, etc.), which isn't an easy task.

We provide customizable hubs that give projects:

Commercial tools

- Paid support - ability to offer on-demand consultation to businesses and developers.

- Licensing - ability to sell one time and recurring licenses to businesses and developers (coming soon).

- Backing - monthly contributions. In return, backers get more visibility in Discussions.

Community tools

- Powerful discussions with voting.

- Announcements - emails and notifications to project followers.

From Kevin's initial feedback (https://news.ycombinator.com/item?id=9746206) we understand that we need to be better at:

1. Leading the user through things to do after creating a hub.

2. Showcasing the benefit of using MinoHubs.

We're working on those right now.

Our challenge is that, as Kevin also pointed out, we have a lot of features, but we're also trying to appeal to an audience that would use different combinations of them: open source software, commercial software, or projects that just want to use the community features.

How do we reconcile users wanting a wide variety of functionality with the risk that this presents too many features for us to convey concisely?

9
declan 1 day ago 1 reply      
I'm a co-founder of Recent (https://recent.io), and we've created an iOS/Android app and recommendation engine that we describe as news powered by artificial intelligence. We have technical backgrounds; one of us also worked as a journalist for Wired, Time, and CNET. We're seed-funded, on the SF peninsula, and incorporated in Delaware. You were kind enough to invite us to attend YC's Startup School last October.

Coincidentally we started sending out beta invitations last night to the first group of people on our list before a planned public launch next month. Our recommendation engine is built on Google App Engine, which should (we hope) allow us to scale. My office-hour question for Sam and Kevin would be: What advice do you have for us at this stage?

10
Lukeas14 1 day ago 1 reply      
Hey HN,

What we do:

We're building a community of vehicle data (http://shadenut.com). Mechanics and DIYers will be able to look up any piece of information they need to work on their car directly from their phone while still under the hood (e.g. torque specs, TSBs, fluid types/capacities, etc.). As a developer, I've seen the positive effect that StackOverflow has had on our industry as a knowledge base, and I am trying to do the same for the automotive industry. The data will be crowdsourced and 100% free to use.

Our Problem:

Our biggest problem is that the product is not really usable unless it contains EVERY piece of technical data about a model, after which it becomes tremendously useful. The most common feedback we get from technicians is that they'd love to use it and contribute once it's a complete database (as long as it's accurate), but they wouldn't switch from the paid competitors until then. The data is all available, but there's simply too much of it for a small team to manually import. Our current strategy is to start with a select few models and incentivize technicians to make their own entries.

However, I'd love to hear how Kevin and Sam would solve this or from others in the HN community who have faced similar problems.

11
highCs 1 day ago 1 reply      
I'm making a competitive real-time strategy game like StarCraft (which is declining), but without the punitive aspect of the game, which I think is why a massive number of players stop playing it. StarCraft is also the only decent competitive RTS on the market right now; I believe a successor can grow at startup levels in the long term.

I have a solid engine, but nothing launched yet.

My question is: why would you not fund this project right now? What could I do to improve my odds of getting funding?

12
32faction 1 day ago 1 reply      
Hey Kevin and Sam,

Pitch: We're ATLAS, and we plan to launch extremely small cubesat payloads (<100 kg) into low Earth orbit on demand.

1) How much calculation/number crunching would we need to convince angels to invest? Rockets of this size aren't something we can bootstrap without a little financial support.

More Background:

Right now, the only way to get a cubesat into orbit is by ridesharing on bigger rockets as a secondary payload. The problem with this is that they're not assured to reach a preferred orbit and are at the mercy of the scheduling of the primary payloads. NASA currently has a backlog of ~50 cubesats that need to get into orbit, as well as the many upcoming launches (including SpX this Sunday). We are currently working on the RFP for the Venture Class Launch Service; however, we may not have the resources to fully complete it by the deadline (13 July). We plan to market this service to universities as well as hobbyists and government space agencies.

13
deepGem 1 day ago 0 replies      
We want to bring real-time and accurate air pollution data to people on their mobile phones, using a network of low-cost, solar-powered sensors deployed at key road locations. Our sensor network will be more accurate and cost less than the EPA monitoring stations.

Help needed - We think that either the government or certain enterprises will pay for this data (since they already are spending money on such technology). What is the best way to validate these channels?

14
acallwood 1 day ago 1 reply      
Hey fellas,

ACe here from Painless1099 (www.painless1099.com). We automate tax withholding/filing for anyone earning 1099 income (think: freelancers and Uber drivers).

We're thinking through growth specifically right now and are chewing on whether to go the B2B route or the B2C route. Different implications for both regarding scale and revenue obviously. We'd be stoked on a bit of help figuring out which to tackle first and how to make headway!

15
ereyes01 1 day ago 0 replies      
Tasqr (http://www.tasqr.io) makes it very easy for developers to ship their software to the cloud automatically, and very frequently. Developers and operations engineers typically have an understanding of how to manually configure and deploy their software via the command line. Tasqr leverages this existing knowledge in teams by helping them record their manual steps and replay them onto their live deployments. This allows engineers to operate in a language/environment they are already comfortable with - the command line.

My strategy thus far has been to find startups that need help doing their devops and help them automate their deployment using Tasqr. Finding customers this way has been slow, but I've gotten to learn quite a bit about how the product fits within a continuous integration workflow. I am starting to feel a little financial pressure to change my approach and scale my outreach, though my existing users really like my "do things that don't scale" approach, unsurprisingly :-)

What are some signs that it's the right time to scale and chase bigger chunks of the market?

16
matrix 1 day ago 1 reply      
I'm in the early stages of bootstrapping a marketplace similar to those that exist for hotels (e.g. Booking.com, Priceline, etc.). The supply side is primarily small businesses ("merchants"). The demand side is consumers. The initial target market niche is small (~$600 million/yr in the US), but the product can serve a much larger market, and I will expand into it when resources permit.

Today, this market niche is fragmented, with high search costs for consumers. My marketplace will make it much easier for consumers to find and buy the products in this niche. Once off the ground, the marketplace will be a key source of customers for the merchants.

Consumers like the product, but the merchants are difficult to get on board. I find that the prevailing view is that the status quo is 'good enough'; merchants are conservative, and very few are early adopters.

Things I'm doing to address this:

1) Price for growth -- pricing based on a small flat fee that the merchant pays per transaction, to align with value delivered, with the first X transactions free (obviously I would prefer to charge a % of revenue, but that's a very, very hard sell with these particular merchants).

2) Provide the merchants with tools that help them run their business (i.e. give them reasons independent of the marketplace to use the app)

3) In-person visits to merchants. These are valuable for many reasons, and are only partially a sales call. These visits will always be something I do, but they don't scale enough to create a marketplace.

What strategies and tactics do you suggest to get merchants into the marketplace, to build up the supply-side?

17
goldMIT 1 day ago 3 replies      
Golden Speak (http://www.goldenspeak.com) helps you speak better by analyzing your voice for pitch, rhythm, vocab, fillers, and clarity. We'd like to talk about launching a minimal feature set quickly vs. the high user expectations that people have from apps like Facebook, etc. Does it make sense to focus more on polish instead of launching with an almost embarrassing feature set?
18
graceofs 1 day ago 1 reply      
Hi Sam and Kevin,

We built ObjectiveFS; it's like Dropbox, but for servers. We have users running our shared file system in production, and are getting great feedback.

Our current challenges are user growth and upcoming competition from Amazon EFS.

We would like your feedback on what we can do on our website (http://objectivefs.com) or additional things we can do to get more people to start our free trial and to address the Amazon EFS competition.

Thanks!

19
mburst 1 day ago 0 replies      
Live Dota - https://play.google.com/store/apps/details?id=com.teamtol.li...

About a year ago I created an eSports app that lets fans of the game DotA follow and watch their favorite teams live and on the go. It's been super fun seeing my side project grow and have users in the community volunteer to help with designs and language translations.

The biggest tournament of the year (http://www.dota2.com/international/announcement/) is coming in a few months and I would love to talk about different ways to capitalize on this.

20
dnautics 1 day ago 0 replies      
Hi Kevin and Sam,

I'm thinking about starting a company that extends open-source concepts to the biology/pharmaceutical sphere. The concept is to make a modular, drug-producing microbial strain, and release that to researchers/industry under a permissive licence (e.g. a bio equivalent of the GPL). Monetization comes about by offering manufacturing services which cut the pain of 1) scaling and 2) getting "good manufacturing practice" regulatory clearance for clinical testing, and ultimately a full consumer product.

Do you think there's VC interest in funding these sorts of ideas, which have a somewhat riskier business model and will ultimately extract a lower margin, but have a chance of changing the "way things are done"?

21
phesse14 1 day ago 0 replies      
Hi there,

we are Tiedots (http://tiedots.co), a networking platform that provides you with tailored information about other attendees every time you go to an event. This way we reveal the most valuable leads and also find you the best way to approach them, saving time and increasing your business opportunities.

The biggest challenge so far is building a solution that can provide relevant connections. Accuracy is the key, and we're working on a semantic web solution, since we've been testing the solution manually with around 100 event attendees.

How would you determine relevance? Any other ideas?

PS: No matching solutions. Networking is about leads, not matching.

Thanks!!

22
someear 1 day ago 0 replies      
Subcurrent sends out a single question/poll to enterprise teams every day or every week via Slack. Users respond with a single click, then optionally leave a comment.

The problem: our focus is divided between two types of customers (we've even created a separate landing page for each type)

1. Product/Engineering teams - they get asked a new question every day that attempts to keep tabs on the health of the project. Questions are a combo of post-mortem-style questions (but asked as you build the product) and prediction-market-style questions (larger n makes for better predictions) (https://www.getsubcurrent.com/product)

2. HR/Employee Engagement - users get asked a well researched question every 2 weeks. Instead of long, annual surveys, you now get to keep a pulse on morale and culture. Participation is higher since it only takes a single click in your already existing tools to respond. (https://www.getsubcurrent.com)

We have a number of customers using our free beta - most are using it for option #2. A very small number have connected with #1, and while we think it has a lot of promise, we haven't talked to enough people to know how it might need to change to achieve product/market fit. We are at a crossroads of needing to pick one to focus on, because the distraction is making it difficult for both to progress.

23
DFinancial 1 day ago 1 reply      
I wanted to know how you would structure a profit-sharing plan as an alternative to equity options for a small, bootstrapped company with virtually no IPO option. What would be fair? How would it work? How did Wufoo's program work? Consider this a blog post option too!
24
rohanmmit 1 day ago 0 replies      
Hi, We are developing a self-checkout mobile application that allows consumers to scan items in a store and then pay with just their phone. Our app will use geolocation to identify which store the consumer is in and then automatically connect with the store's point-of-sale system to ensure payment.

We built a prototype last week and went to stores and a retail conference this week. We are having problems convincing stores to adopt our product, as they are very concerned about shoplifting.

Thanks, Rohan

25
jbandela1 1 day ago 0 replies      
Hi Kevin and Sam,

I am the creator of https://www.spqrs.com

The goal of Spqrs is to have a platform for the debate of ideas. As Sam noted in https://twitter.com/sama/status/610494268151431168 most smart people are wary about commenting on sensitive issues.

This is unfortunate as the Internet is a great place to discuss issues with people who may have a vastly different view, and in the process really examine why you think the way you do.

Spqrs provides a service similar to Twitter, but allows you to follow hashtags as well as people and has a 1000-character limit instead of 140, better allowing you to make a point. The defining feature is that all usernames are pseudonyms, so you can avoid threats to yourself or your livelihood based on what you say.

Right now this is a paid service. I am planning on charging $9/year. My biggest problem right now is finding subscribers. Any feedback and suggestions would be appreciated.

26
yishanl 1 day ago 0 replies      
Hi Kevin and Sam,

We're from Mise. We are an online marketplace and meal delivery service for the signature/best dishes from professional chefs in the Bay Area. There's a face and a story behind each dish. We do free weekly delivery to the SF Bay Area, including San Jose & the Peninsula.

We operate on a revenue share model. Chefs source their own ingredients, pay for kitchen rental, cook dishes, and earn 70% of everything they sell. We apply our 30% towards delivery, personalized packaging, copy, and kitchen administrative fees.

We'd love to talk about obtaining that 10% week-over-week growth. We're launching in 2 weeks and have a lot of orders (but a good amount are on us, going out to influential members of the community). How do we grow that into paying customers? And is it too risky to keep giving out free product when you also have to balance the returns for the suppliers/chefs themselves?

http://www.eatmise.com

Thanks! :)

27
pmoorcraft 1 day ago 0 replies      
I launched Tech Talks and Books (www.ttbooks.io) about three weeks ago and just got my first 600 subscribers. It's similar to Product Hunt, but for books and videos about tech, and the community of submitters is more restricted than PH's.

I'd like to discuss two things:

1- Signal vs. noise in online communities: Since you've been heavily involved in Reddit (Sam), I'd like to know your opinion on curating content vs. having an algorithm sift through the noise. We see companies like Netflix and PH (allegedly?) combining big data with human curation and having a lot of success, so my question is: Should online communities invest in curation or big data? Is there a trend towards one or the other?

2- Monetisation: I'm having a hard time monetising this community. Since I'm looking to bootstrap it, it's crucial that I get monetisation right, so I was wondering if you had any tips on monetising communities?

28
hamhamed 1 day ago 1 reply      
I'm a co-founder at Stay22 (https://www.stay22.com), a Montreal-based startup that has developed a platform for users to search for places to stay (hotels & Airbnbs) around events

We provide a free solution for event organizers who are tired of doing customer service for their attendees; embedding our widget is as easy as embedding a YouTube video. Conferences like Traction Conf have already hopped on board; check it out in action: http://www.tractionconf.io/accommodations

Let's just say that getting more events one by one isn't hard and our next step is to partner directly with ticket providers (SeatGeek, Ticketmaster) and do some sort of revenue share deal on accommodation sales.

29
3Dpuzzlepiece 1 day ago 1 reply      
Howdy! How would you suggest marketing a new service to a hard-to-reach audience?

Brief overview: I would like to market local, weekend yoga retreats to professionals in the oil and gas, energy, and finance industries. No long absences from work or family, and no long-distance travel. The startup is set up in Houston, but can operate in any city in the USA. Lots of different marketing and advertising methods are being tried (online groups, directories, online forums, LinkedIn, etc.), but I am pretty much throwing things at the wall and seeing what sticks. Any suggestions are appreciated. Thanks!

startup: bodyhugs: hug your body with movement and care

website: http://www.bodyhugs.org/

local health and wellness retreats in the USA

30
chejazi 1 day ago 1 reply      
Our product is a URL shortening service for individuals to monetize the content they share. Monetization occurs via an interstitial ad between the source of the short url (e.g. Facebook) and the destination of the short url (the content being shared).

The team right now is investing itself in areas such as product, marketing, and architecture. We want to launch a product so we can begin testing our hypotheses, but we also want to go to battle sufficiently prepared. The latter requires significant effort in team building, which would detract from a product launch. What should we do?

31
ohashi 1 day ago 0 replies      
Background: Review Signal aggregates and analyzes what people are saying on Twitter about web hosting companies to build a transparent review site.

URL: http://reviewsignal.com

Problem: People generally buy web hosting once every few years, and there are very few channels beyond Google and word-of-mouth to capture people at the moment they are considering purchasing. The competition for Google is astronomical (one of the highest PPC areas at ~$20/click). Organic rankings are filled with spam sites touting the highest-paying companies with very good SEO (hello CNET). I've been trying for years to get my SEO up to that level without success. I'm stuck on the 2nd/3rd page and have been for ages. It's like purgatory. I've tried to build other sources of traffic through PPC and CPM, and none have really panned out that well. I've tried creating great content, and it has been OK in some niches. For example, for high-performance WordPress hosting information my blog has become the go-to source. I'd like any ideas on what I should be trying to do next or what I can do to improve what I'm currently doing.

32
ksks 1 day ago 1 reply      
Hi, Kevin and Sam,

I am the founder of https://ManualTest.io. My app automates manual testing. It generates and runs integration tests by recording and replaying users' actions on their websites. Manual testing is still used every day (by developers or not), so this could be a huge time saver for them.

My app recently became available on the Chrome Web Store, but it has only gotten a few users, despite quite positive feedback. My question is, I am not sure whether the lack of users is because (1) I haven't done enough marketing/SEO/etc. to get it in front of people, (2) I need to build more features before users would find it useful enough to try, or (3) it is just not something users want.

If it's (1), I should stop coding and start focusing on letting people know about my product. If it is (2), I have a few very useful features that are still waiting to be done, but they could take at least weeks to complete. Without enough initial users, I am not sure which features users want most, or if the current set of features is enough for now, in which case I should focus on letting users know about it (so it is (1) again).

Thanks!

33
cpg 1 day ago 2 replies      
I built a site to run tennis leagues. We have started in the Bay Area, with 210+ matches having taken place since February, when it launched. Most other sites that try to do this look like they were built in the 90s and have little support for modern features like communicating via WhatsApp or decent mobile support. It could also be extended to other (racket) sports later. People pay a $20 fee for each league, which is about 8 matches in 8 to 10 weeks. http://www.racketlogger.com

The goal is to scale it across the US (and beyond, to any English-speaking area). Doing cost-effective marketing is key. How do we get the word out? How do we improve the site and experience so much that players tell their friends?

We put effort into SEO, building a large database (the largest?) of string and racket specs with the goal of attracting some of the core fans of the sport (we have started to get some clicks a day). There are a ton of other "social" and related ideas; the question is how to select which one(s) will work best.

34
zackabaker 1 day ago 0 replies      
Hi Sam and Kevin,

Company: http://PassWhiz.com

Question: We have an awesome product that schools love. It's time to sign up as many middle schools and high schools as possible before the new year rolls around. As a one- or two-man show, what advice could you give about approaching these schools and how to expose our product to this market? Thanks!

35
ericabiz 1 day ago 0 replies      
Hi Kevin and Sam, I'd like your advice on startups that connect real people/experts with those who need them.

Background: I'm building freedom.biz, which is currently a course for retail business owners who'd like to take their business to the next level. I sent out a survey to those on my interest list, and it became clear that I couldn't personally fulfill all their needs. However, I know people who can.

I'd love to build a company that connects vetted experts with the business owners who need them. I've seen startups in this realm, but they all feel generic and unfocused. What do you think would be a competitive advantage in this space? What would you like to see that's not out there right now?

36
m_mozafarian 1 day ago 0 replies      
Relevant.ai

Hey guys, Our platform unbundles apps' and websites' most essential features and transforms them into interactive cards, similar to Google Now cards. However, the entire architecture is designed to be an open platform, meaning anybody could come and build these interactive cards using our language, REL. We believe that by unbundling applications/web services we can seamlessly start connecting different pieces of the web together and create a more fluid and unified internet experience - an experience with an intelligent fabric that grows with our needs, preferences and expectations to help us make the right decision at the right time. I'd love to hear your thoughts on Relevant. Thanks!

37
MangezBien 1 day ago 0 replies      
Hello!

I think that the next big step in medical technology will be the rise of software making medical decisions. This doesn't pose a big technological problem, but it does pose a large regulatory problem.

I have experience getting cloud-based software through the FDA as a "medical device" and have overcome some of the most common hurdles.

I have an idea for (and the ability to implement) a product to reduce the regulatory and monetary barriers to entry for this type of software.

My question is, in the current market, does this have any shot of getting funding? The product could never be used without the approval of the FDA, an expensive process. Is this a non-starter for most funds?

38
Inkdryer 1 day ago 0 replies      
We are Blue Seat Media, a startup product studio in Cincinnati making apps for baseball fans. We're more inspired by Pixar than Fox Sports and our goal is to capture the magic of baseball in an industry that is inundated with bad design and middle-aged men arguing on the radio.

Question: how do we sell this to investors? I'm a designer and the CEO. I'm good at making a product that people love, but I'm bad at fundraising. We are out to make only high-quality products, another hard sell to investors because high quality takes more time (money).

We have a great product that fans will absolutely love; we just need help getting it out there.

39
jameswilsterman 1 day ago 0 replies      
Streak - (https://goo.gl/VvgCnv)

We made a trivia gameshow for mobile devices. Everyone plays simultaneously once a day at 11 AM PT / 2 PM ET. Players see the exact same questions, so it's like a live, multiplayer, interactive version of traditional gameshows on TV.

We have ~300 MAUs and ~50 DAUs and solid retention, but to get that up to 1,000 DAUs, should we be more focused on trying to trigger organic sharing within the app, or on top-line growth from press, blogs, Reddit, Facebook ads, etc.? Is our user base too small to even know whether our organic sharing is really working?

40
philip1209 1 day ago 0 replies      
Company: https://www.staffjoy.com

Pitch: Our application decreases labor costs by precisely scheduling hourly employees to fulfill business demand. By preventing over- and under-scheduling, we've been able to show a 10% decrease in labor costs with early customers.

Question: What wisdom do you have about "go-to-market" strategy in the retail space? We have startups as clients that are eager early adopters, but to cross the chasm to sustainable growth it seems that we will have to focus on retail companies and go to many trade shows. What can we do now to prepare?

41
wuliwong 1 day ago 0 replies      
Hi Kevin and Sam,

We are working on a new version of http://www.muusical.com. The new version is a significant change. It is going to be a free Spotify, powered by a crowdsourcing platform where users add the music and metadata.

I would love feedback on a strategy for approaching investors who are wary about music startups.

If you're interested, read more here too: https://angel.co/muusical

Thanks!

42
karle 1 day ago 0 replies      
Product: Marine vector maps (GPU accelerated)

iOS Download link: https://itunes.apple.com/us/app/i-boating-gps-nautical-marin...

Website: http://i-boating.com

Our biggest pain point: Distribution

43
darienbc 1 day ago 0 replies      
memaroo.com

Memaroo is a web research dashboard, designed to make iterative web searching organized and more efficient. Memaroo records your search history into different projects, which can be accessed from anywhere -- so you can search for things on your phone and then continue your research later on your desktop. Projects can also be shared with other users, allowing collaborative searching and result sharing.

Memaroo is an improvement on an established search paradigm. But people are comfortable in that paradigm, despite its flaws.

How can I get potential users to break their existing search habits and try something new and possibly better?

44
gmarx 1 day ago 0 replies      
Stealth-mode side project: software for creating data entry forms for clinical medicine and research. I have, uniquely, solved the discrete-data-with-ontology problem in healthcare and healthcare analytics. The question I would like to discuss is my point of entry into the market: what kind of customers would be the best targets as early adopters?
45
yaraher 1 day ago 0 replies      
Hey guys,

We are CodePicnic (codepicnic.com), a platform for sharing, running and showcasing code through a browser. We help people and businesses improve their demos, API documentation, or anything that needs users to run and try some code online.

We'd love to improve our "getting there" process. We've been interacting with users here on Hacker News, Reddit, Product Hunt and other sites, getting better and increasing our usage, but we still feel it's not enough right now. Our first users love us, the service and the potential, but perhaps there's something glaring we aren't doing well in order to become better known. It's a long process, we get it, but the more we learn, the better.

I also believe this is an important matter that many other startups would love to learn about.

46
dang 1 day ago 0 replies      
Detached from https://news.ycombinator.com/item?id=9785941, per "please don't comment otherwise on this thread until we're done at 1 pm PDT". You're welcome to have this discussion after that, of course.
47
techaddict009 1 day ago 0 replies      
I am building a YouTube for content where users get paid for sharing their content.

One of the problems I am facing is how to pay them. And how do I check whether a view was genuine or was done by some bot?

48
hackuser 1 day ago 0 replies      
Kevin/Sam: This is way off-topic, but it's an unusual and I think worthwhile situation: A couple of Nigerian teenagers have their project on the front page and one found his/her way to the discussion. Unfortunately, most of the thread is nitpicking criticism. A quick comment might do a world of good.

[1] https://news.ycombinator.com/item?id=9787010

Sorry to all if this comment is inappropriate.

49
undeterred 1 day ago 0 replies      
Hi,

Office Our provides a portal for investors to interact with their top 5 potential investments. "Separate the wheat from the chaff."

At Office Our, an investor creates a post (or "bulletin") inviting the community to pitch their startups. Users vote on the "wheat", and the top 5 earn the right to receive a response. The investors can then manage their bulletin and follow up outside of our platform.

For this we charge a simple, flat $5 fee to each investor per potential investment per month on an annualized basis in the form of credits which are distributed by each of the user's votes in batches of baker's dozens. "Investing - simplified"

I look forward to your feedback.

Google has quietly launched a GitHub competitor venturebeat.com
214 points by pbreit  3 days ago   147 comments top 40
1
sytse 3 days ago 2 replies      
Every IaaS provider (Google, Amazon, Azure) has added a code hosting service or will do so in the future. Having your code hosted there will increase lock-in, which is the best way for the IaaS provider to increase margins in the future. However, code hosting is not the essence of what GitHub, Bitbucket and we at GitLab offer. The essence is code collaboration: mentioning people, doing a code review, activity streams, etc. Getting this right is hard, and I wonder if many IaaS providers will get this right.

The code delivery pipeline consists of issues, an IDE, code hosting, CI, code review, configuration management, Continuous Delivery (CD) and a PaaS service. Code hosting is a first step, and getting all the rest right is a lot of work. Services working on getting the IDE right are Koding, Nitrous.io, Cloud9, CodeAnywhere, Codio and CodeEnvy. And I suspect that GitHub Atom is running in a web browser so they can effortlessly offer it online in the future. For configuration management you want to integrate with Chef, Puppet, Ansible, Salt and Docker.

At GitLab we offer CI and CD via GitLab CI. We hope for a multi-cloud future where organizations will deploy to different cloud providers. They will use PaaS software that spans the different IaaS providers. Cloud independent PaaS software offerings are CloudFoundry, OpenStack, OpenShift, Kubernetes, Mesos DCOS, Docker Swarm and Flynn. We want to ensure that GitLab is the best option to do code collaboration upstream from these offerings.

2
exacube 3 days ago 4 replies      
This article is just click bait.

Pretty sure this product is just so you can store your code/repo for your project using Google's cloud services. It's part of the whole of their cloud offering.

3
geofft 3 days ago 3 replies      
The reason people use GitHub is everything around the git hosting: the web interface, the account system, pull requests and issues, forking, comments, wikis, Pages, even the desktop and mobile apps. Hosting git repositories is straightforward, by design.

This article is only slightly more sensible than claiming that S3 is a GitHub competitor because you can git clone over HTTP.
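To make that concrete: a bare repository plus any static file host is enough for read-only hosting over git's "dumb" HTTP protocol. A minimal sketch (the repo name and URL are hypothetical):

    git init --bare myproject.git
    cd myproject.git && git update-server-info   # regenerates info/refs so dumb-HTTP clients can fetch
    # copy myproject.git to any static file host (even an S3 bucket), then:
    git clone https://static.example.com/myproject.git

(update-server-info has to be re-run after each push, which is exactly what the stock post-update hook does.)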

4
stock_toaster 3 days ago 3 replies      
So they killed Google Code to... launch another code hosting thing?

Does Google have too many siloed product managers? Maybe you can only advance up the corporate ladder by releasing new products, and fuck all if they get killed later, because you got your promotion?

No clue what the cause is. Just seems weird looking on from the sidelines.

5
camhenlin 3 days ago 3 replies      
Can't wait to see how long it takes them to get rid of this one like they did with Google Code
6
guelo 3 days ago 9 replies      
My team has just started looking for a github replacement because the code review workflow is just not working for us, we need something with more structure. I think there's plenty of space for feature and price competition, especially for private repos where github's social network effects don't matter as much.
7
thekevan 3 days ago 1 reply      
This just seems to be a place to have the code that you run on Google Cloud Platform. Not exactly a competitor.
8
pbreit 3 days ago 2 replies      
Well, not quite a competitor. I presume it's expected to work better with Google Cloud than elsewhere?
9
sergiosgc 3 days ago 2 replies      
This is clickbait. Nevertheless, my first thought was "GitHub is much better and much, much safer than anything Google can offer". Back in the day, were Microsoft to offer a competitor to a small company's product, my reaction would have been: "They're dead in the water".

Food for thought...

10
joshuak 3 days ago 1 reply      
Calling a source code repo service a competitor to GitHub is like calling an online book store a competitor to Amazon.

Hint: source code is not GitHub's value, just like books were not Amazon's. GitHub's true value is something Google is profoundly bad at.

11
wasd 3 days ago 1 reply      
Sure, it'll compete with GitHub, but I think a more exciting possibility is that it'll compete with Heroku. CodeCommit (a similar service offered by AWS) integrates with other AWS services to help manage deployment and releases, but I still find the entire AWS platform to be very difficult to use. I would love to see Google add pressure on all three companies to make their respective products better.
12
skj 3 days ago 0 replies      
Calling the GCP source hosting a competitor to GitHub is nonsense. No public access to these repos, for one, so they're completely unsuitable for general project hosting.
13
bradhe 3 days ago 2 replies      
So interesting. This race for features between the cloud vendors feels crazy to me, but totally makes sense from a business perspective. Classic attempt at locking you in to a single platform.
14
10098 3 days ago 1 reply      
It looks more like the beginning of an online IDE than a direct GitHub competitor. It's just convenient if you're already using Google's cloud services.
15
fixxer 3 days ago 1 reply      
GitHub really owns the "organization aspect".

For my personal projects, I'm fine with my own git server on a cheap VM, but for work I've been really happy with GitHub's issue tracker and org membership administration (with which we use OAuth heavily for internal tools, several of which queue background computation). GitHub issues are much easier for tracking job submissions from analysts than integrating with the company email, and developers prefer them.

I looked at GitLab a year ago. I liked it, but it was a little funky (avatars weren't working; obviously not mission critical, but little shit like that erodes confidence -- either support a feature or don't, but never half-ass it, because I'm not going to admin a server when I've got a chorus of analysts bitching about it being hacky). GitLab people: this was a DigitalOcean prepared-image version of GitLab, in case you're listening.

Version control is really the most minor and most easily replicated part of GitHub's value proposition.

16
davidbanham 3 days ago 2 replies      
> It can serve as a remote for Git repositories sitting elsewhere on the Internet or locally.

That's not a feature of this product, it's just how git works.
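Indeed - any clone can push to as many remotes as you like. For example (remote names and URLs are illustrative):

    git remote add github https://github.com/example/project.git
    git remote add google https://source.developers.google.com/p/example-project/r/default
    git push github master
    git push google master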

17
sova 1 day ago 0 replies      
18
mirceal 3 days ago 0 replies      
meh. this has zero chance of competing with github. I think it's meant more as a place for Google Cloud Platform users to store their code within the same "cloud"
19
personjerry 3 days ago 1 reply      
Hasn't Google Code existed for ages?

No one seemed to like it. Heck, Google didn't seem to like it enough to give it any love, just check out the UI on the site: https://code.google.com/hosting/search?q=label%3aPython

20
jabo 3 days ago 1 reply      
Off topic, but the image in the article is of a building in Santa Monica that Google moved out of at least 2 years ago.
21
arbuge 2 days ago 0 replies      
Sites like GitHub thrive on the community they engender. Google no doubt has the tech chops to compete with anybody on a purely technical basis, but community is not their forte. Witness Google Plus.
22
k_bx 3 days ago 0 replies      
To me, this looks more like a Google cloud IDE (targeting Java), for which it would of course make sense to have code hosting there as well. It makes sense for Google to have one of those, unlike just a GitHub competitor, imho.
23
xsace 3 days ago 1 reply      
When I go to pricing and quota I get:

> This Beta release of Cloud Source Repositories is free and can be used to store up to 500 MB of source files.

No thanks! I still remember what happened with AppEngine.

24
err4nt 3 days ago 1 reply      
That's cool... until the day Google decides to close it and shut down the service, with all the projects trapped on it. But until then, it's pretty cool!
25
sidcool 3 days ago 1 reply      
Hasn't Google been moving code to GitHub itself?
26
peter303 2 days ago 0 replies      
Google abandoned its previous GitHub competitor. Unlikely many outside of the Go-phers will use the new one.
27
beefsack 3 days ago 0 replies      
Competition in the Git hosting arena seems to be largely nullified by the fact that it's trivial to use multiple remotes.
28
sologoub 3 days ago 0 replies      
Anyone else notice that the photo they are using is of the old Santa Monica office that Google moved out of a while back?
29
im1983 3 days ago 1 reply      
Sorry, but I just can't trust them. I am absolutely sure that Google will shut it down in X years.
30
BenjiSujang 3 days ago 0 replies      
Competition is always good. Looks good for Google cloud users. However, it's certainly no GitHub competitor.
31
samfisher83 2 days ago 0 replies      
Google had Google Code way before GitHub. They just didn't do much with it.
32
jokoon 3 days ago 1 reply      
Anecdote: I remember using Mercurial on Google Code; at some point it did not work - a push was just timing out for some reason. I switched to Bitbucket and then used GitHub. Google answered the issue, but I had already made the switch, and I don't even know if they fixed it.
33
hliyan 3 days ago 1 reply      
1. Requires credit card number for free trial

2. No built in issue tracker or wiki (?)

34
bsimpson 3 days ago 1 reply      
500 MB lifetime storage doesn't seem like a whole lot.
35
shrineOfLies 3 days ago 2 replies      
I would've liked it better if it also had auto-deploy to GCE, plus auto load balancing, scaling, health checks and service discovery.
36
hoodoof 3 days ago 1 reply      
Does this have an API?
37
alaskamiller 3 days ago 0 replies      
While I was at Google, one insight I had was that there's a copy of most internet products. Given the cadre of college grads they hire to work there, cloning something is almost like a fun little coding challenge.

Dropbox? There's a clone. Pinterest? Clone. Everything. Then they dogfood it, and if there's more interest they gather up more resources to inevitably pitch the idea to Marissa Mayer, who then plays with it, designs the business case for it, and approves a proper budget for it.

If the product is good then the news leaks or they launch it. After a while, if the Google audience doesn't like it, they cut it loose.

Which goes to say... any time some investor asks you what happens if Google comes into your space, you should say: good.

38
mdekkers 3 days ago 1 reply      
...because google code worked out so well....
39
finnjohnsen2 3 days ago 1 reply      
Post-Snowden, I'm looking to move away from cloud services and not entangle myself further.
40
hoodoof 3 days ago 1 reply      
WTH does this mean?

Are they saying it is not secure?

https://cloud.google.com/tools/cloud-repositories/docs/

Note: Cloud Source Repositories are intended to store only the source code for your application and not user or personal data. Do not store any Core App Engine End User Data (as defined in your License Agreement) in a Cloud Source Repository. To use a hosted Git repository with a Cloud Source Repository, you must first open an account with GitHub or Bitbucket (independent companies separate from Google). If you push source code to a Cloud Source Repository, Google will make a copy of this data which will be hosted in the United States.

Apple Removes American Civil War Games from the App Store toucharcade.com
172 points by sehugg  3 days ago   217 comments top 44
1
Mithaldu 3 days ago 3 replies      
Germany has a law about this, with some exceptions that would specifically apply to cases like this one:

https://en.wikipedia.org/wiki/Strafgesetzbuch__86a

When a movie like Iron Sky has no problem being shown in German cinemas with the swastika left untouched, because it's clearly art, it should be fairly obvious to Apple that banning historically accurate representations in interactive art is far overreaching - not legally, but ethically.

2
mladenkovacevic 3 days ago 3 replies      
The scariest thing is how fast, sweeping and unanimous these acts of compliance are.

It's like these companies just woke up suddenly, had a conference call and without a hint of discussion, analysis or feedback started enforcing moral revisionism. It reeks of dishonest, cheap PR.

What's next on the agenda?

The saddest part is that this has totally taken over the discussion of the shootings in South Carolina. The US is unique in that 9 people gunned down in cold blood somehow turns into a discussion about a flag?

3
protomyth 3 days ago 0 replies      
This is absolute BS. Using a flag in a historical context should not be censored. Did they censor history apps? Once again, app developers take a hit while "real authors/artists" don't have to deal with this crap.
4
yequalsx 3 days ago 11 replies      
Here is part of Mississippi's Declaration of Secession:

"Our position is thoroughly identified with the institution of slavery - the greatest material interest of the world. Its labor supplies the product, which constitutes by far the largest and most important portions of commerce of the earth. These products are peculiar to the climate verging on the tropical regions, and by an imperious law of nature, none but the black race can bear exposure to the tropical sun. "

It astounds me that there are large numbers of whites in this country who think the Civil War was about anything other than slavery. The Confederate flag represents an evil institution and the evil intent of white Southern power brokers 155 years ago.

I don't have an opinion as such on Apple's decision but let's not pretend that the Confederate flag is anything other than a symbol of overt racism.

5
JohnGB 3 days ago 2 replies      
So what about all WWII games that have a swastika, or any number of historical flags that are simply referencing history rather than supporting an ideal?

This is plain ridiculous.

6
gadders 3 days ago 0 replies      
This is getting silly now. I'm all for removing the flag from public buildings, but this is going too far.

Can someone get Taylor Swift on the case please?

7
ap3 3 days ago 1 reply      
Are Siri and the Wikipedia app next on the list?

I don't understand the ban on the historical apps - the Civil War did happen, and the Confederate flag was used.

Are Nazi flags and symbols banned from WWII games?

8
MBCook 2 days ago 2 replies      
This is a very Apple move.

When I saw it, I wasn't surprised. There are Civil War games where it makes total sense for the flag to show, and there are probably a few tasteless "The south will rise again" things that never should have been allowed on in the first place.

But it takes a lot of people and time to figure that out for each app on the store, and when it comes to this kind of stuff, Apple doesn't like spending lots of people and time.

Blanket bans are so much easier to implement.

Quite disappointing, but not surprising. And they'll probably reverse parts of it within days. Or new games will slip through and people will forget about it.

9
tempodox 3 days ago 0 replies      
What? Blast the American Nation, this is pure revisionism. Lying history out of existence because it's not hip at the moment. Apple's obsession with political-correctness-gone-wrong is unbelievable.
10
mariodiana 3 days ago 0 replies      
Political correctness has gone mainstream, turning American culture into a Neo-Puritan age where everyone is elbowing past others to demonstrate how righteous and socially conscious he or she is.
11
ccvannorman 2 days ago 1 reply      
Apple has proven, time and again, that it heavily favors censorship in the name of profits. The real issue we should be talking about is the de facto monopoly Apple has because of their hardware success (e.g. the iPhone), and how and when this monopoly will be prevented from censoring the speech and content of millions of people.

Or do we expect monopolistic censorship to be the new norm of the future? Disgusting.

12
fixxer 3 days ago 0 replies      
So, they're in effect banning historical fiction? None of these games condone slavery, do they?

I totally get and agree with removing the Confederate flag from state flags in the United States -- the Confederacy failed. But must we deny that an important part of US history happened?

13
nathan_long 2 days ago 0 replies      
A lot of the debate here pits "users can decide what's acceptable" against "Apple can decide what to sell".

The only reason these seem incompatible is that iPhone owners can only get apps from Apple.

If you want both Apple and users to have freedom of choice, lock-in is the real enemy.

(Side note: lock-in also goes hand-in-hand with DRM, which goes hand-in-hand with surveillance: if the user isn't allowed to see what code they're running, and the software company isn't allowed to disclose what the government made them do, then the user can't know how their device is bugged. Cory Doctorow explains nicely how fighting lock-in and DRM is good for political freedom, too: https://vimeo.com/123473929)

14
rrss1122 3 days ago 2 replies      
Why is everyone trying to erase history all of a sudden? Because somebody felt bad?
15
hackuser 3 days ago 0 replies      
I think it's great that Confederate symbols are being removed from government and commercial situations, but somehow it's different for Apple; they have too much power over what their users see and read (though I suppose the users can access whatever websites they want). Mobile apps and games are a significant social medium, and should include political expression. As a weak analogy, though I'm glad various governments may get rid of the flag, I'd very strongly oppose the government banning private citizens from owning them.

While what Apple says about privacy is admirable, end-user control is still a serious problem.

16
dmschulman 3 days ago 0 replies      
It'll be great to read the headline "Apple Brings Back American Civil War Games to the App Store" once they realize what a boneheaded move this was.
17
michaelrhansen 3 days ago 1 reply      
It is a historical reference. This clearly was not well thought out.
18
moron4hire 3 days ago 2 replies      
This is clearly going too far.
19
a3voices 3 days ago 1 reply      
People are way too oversensitive. Reminds me of Muslims getting upset over cartoon drawings of their prophet.
20
scelerat 2 days ago 0 replies      
I'm all for honest public reevaluation of the symbols of the CSA, but this strikes me as a cheap (for Apple) PR move and not conducive to that discussion.

If anything it will feed into the paranoid narratives advanced by those who truly believe in the symbolism of the Confederate battle flag, triggering the Streisand effect, or a close relative of it.

21
kranner 2 days ago 0 replies      
123 points in 2 hours and this is already on page 2 because the number of comments exceeds the number of upvotes? I would have missed this discussion if I hadn't checked HN in the last 2 hours.

Is it time for HN to review whether the upvotes-vs-comments penalization-heuristic still makes sense? It's feeling a little ad hoc and brittle to me.

22
crxgames 3 days ago 1 reply      
This is ridiculous. I bet they leave Nazi WWII games.
23
jfoutz 3 days ago 0 replies      
I thought the Union flag had somewhere around 35 stars.

It's Apple's playground; they can do whatever they want. This seems ham-fisted to me, but I understand the desire to simply eradicate all evidence. They're not in the business of historical accuracy. They're in the business of selling stuff. Bad feelings about imagery interfere with selling stuff.

24
NoGravitas 3 days ago 0 replies      
Interesting question for me -- can you/are you expected to play as the Confederates in these games? Because a game where you play as the Union and the Confederates are the "bad guys" is certainly not glorifying the "Lost Cause", no matter how many Confederate flags there are in it.
25
zelos 3 days ago 0 replies      
Possibly this is just Apple pushing the burden of proof onto the developers? They could have had their app reviewers go through every app one by one and try to understand the context, or they could do this and force the developers who care to justify their use of the flag.

Pretty stupid, anyway.

26
kirkbackus 3 days ago 2 replies      
It's quite ironic that Apple is setting a precedent of intolerance for any symbol of slavery while knowingly supporting manufacturing that utilizes child labor, which is often forced labor. The epitome of hypocrisy - which hopefully leads Apple to change.
27
kctess5 21 hours ago 0 replies      
Interesting. It seems like in a historical context such as this, accuracy is more important than some random current events association thing. Weird move, Apple...

Removing "unnecessary" references (ones without any historical context or other legit justification) might be more reasonable.

28
transfire 3 days ago 0 replies      
The North has finally won the war!
29
snarfy 3 days ago 1 reply      
Will they also remove WWII games with Nazi flags?
30
hughw 2 days ago 0 replies      
So many comments here, on this topic, are in the sense of "Oh great, we're talking about a symbol while not fixing root causes like guns and racism". I have a sense that's the POV of West Coasters, simply unable to fathom the depth of this symbol - its importance to racists and anti-racists, and its presence permeating everyday life in the South. Removing this symbol is a traumatic step even for many people of good will. The symbol is everywhere, and usually is employed innocuously. So getting rid of the symbol, even where used innocuously, means that even the people who intend no harm by it have come to be conscious of the pain it causes to black descendants of slaves every day. It's a big step for the southern "middle".
31
vezzy-fnord 3 days ago 1 reply      
It isn't even historically accurate anyway because the CSA never actually used that flag. It was briefly used as the battle flag of the Army of Northern Virginia.
32
Shivetya 2 days ago 0 replies      
So we are to assume they will remove The Dukes of Hazzard from iTunes to complete their kneejerk reaction to this issue? There are probably quite a few album covers with that flag or approximations of it, let alone TV shows and movies.

American Civil War games are not racist, nor is representing the Civil War or any other conflict in a game. What is next? Scrubbing history books of any offending flags or words?

33
solveforall 2 days ago 1 reply      
I understand that the Confederate flag evokes strong, negative reactions from black people, as it probably should. So it may be disturbing to see the flag while browsing the App Store, and this is something Apple justifiably would want to prevent, and they have the right to do so. But to ban the display of a Confederate flag within a Civil War game that you have intentionally downloaded seems ludicrous. Games like this seem quite educational and to strip them of historical accuracy for the sake of political correctness is a real shame.

I feel this is another sign that we are headed towards a culture that does not tolerate anything that might offend anyone, an intolerance of intolerance.

34
jordanpg 3 days ago 0 replies      
All of the objections along legal and moral lines are missing the point. While those objections are valid in many cases, the point is that this flag is the symbol of the moment. A rough analogy is the way in which marriage equality has become a proxy for gay rights (whether this is accurate or not).

If the flag is shamed out of the public zeitgeist, in the same way that the n-word or the c-word have been, then that is a symbolic victory for those on the side of civil rights.

Whatever you think about the actual meaning or symbolism or historical context of the flag is beside the point.

Arguments about the importance of "heritage" fall flat because symbols do not teach history; they simply stand in for particular narratives.

35
deedubaya 3 days ago 0 replies      
Banning the symbol will only add credit to and strengthen the idea.
36
DanBC 2 days ago 0 replies      
I'd be interested to know if they allow some of these (unmodified) games back on.

I suspect this is a poorly handled auto ban of anything with the flag and not the intended result.

37
tosseraccount 3 days ago 0 replies      
Apple can do what they want. Free speech and all.

That said, this is silly.

Apple should respect their developers and let them use historical symbols used by the opposing sides.

38
GBond 2 days ago 0 replies      
The Apple App Store is a private market, and thus Apple can legally take down any content it deems fit. The risk is that Apple comes off as tone-deaf with such an over-reaching move. But that may be a calculated cost, as Apple has always been image conscious. They don't want to be the next target for the social media #takeitdown mob.
39
arca_vorago 3 days ago 1 reply      
A slippery slope down the road to censorship. When you place yourself at the head of the censorship table, you open up a can of worms that is difficult to close. (and no, I am not referencing "censorship" of illegal material)

For me though, this is nothing new for Apple, and it's why I don't like their software in general. As RMS said, roughly, "Apple puts the user in a prison. It is a beautiful prison though."

Some people embrace the beautiful prison for its simplicity and ease of use. I suppose that's their choice, but what will they do when they wake up and realize they hate the new warden, after they are so tied into the ecosystem?

I still think those who embrace FOSS now will be at a huge advantage as time progresses and the nanny mentality of companies such as Apple and Microsoft becomes more prevalent, and users of those will be at a large disadvantage. RMS is a man ahead of his time and only time will prove him right. As a matter of fact, I think that's part of the reason why MS is trying to get more ground in the open source community, because they understand that FOSS is actually becoming a threat these days, and they are trying wildly to stop the haemorrhaging.

In Apple's defense though, I do view them as the lesser of two evils, and would gladly push OSX/iOS on users rather than Windows/Windows Mobile. At least it's unix under the hood, and we can see most of the source code.

40
rak_112 2 days ago 0 replies      
SJWs ruin everything they touch.
41
pdabbadabba 3 days ago 2 replies      
I think the comparison is fairly specious, frankly. There is a good argument to be made that the raison d'etre of the Confederacy was the subjugation of African Americans. Whether today's Confederacy enthusiasts agree or not, the Confederate flag surely sends that message to many of those who see it being flown -- particularly, of course, African Americans.

Islam, on the other hand, while associated to some degree with terrorism (largely by the definition of "terrorism", some would say), surely does not exist for the express purpose of cultivating terrorism. Moreover, even to the extent Islam is associated with terrorism in the public consciousness, it does not also convey a message of hate to a particular minority group, as do symbols of the confederacy.

There will be people who disagree with both of these points, but I submit that the former point is much more compelling than the latter. And, more to the point, the public reaction simply reflects, I think, widespread agreement that this is the case.

42
skc 3 days ago 0 replies      
Their platform, their rules. Simple as that.
43
salgernon 3 days ago 0 replies      
Are they pulling all references to actual Confederate States flags, or the army flag of Robert E. Lee, which more closely resembles the "Confederate flag" used in racist contexts today?

https://en.m.wikipedia.org/wiki/Flags_of_the_Confederate_Sta...

It seems like the solid flag is now pretty much only used for shock value and racist advertisement, without the historical context.

I liked what Dave Winer had to say about his experience with it:

http://scripting.com/2015/06/22/theConfederateFlagIsAHateCri...

44
k-mcgrady 3 days ago 1 reply      
I get the feeling a lot of people here don't understand how important and destructive symbols like this can be. As stupid as it sounds, symbols like that can have a huge impact on society, and free speech isn't always the solution. I don't know how corrosive this particular symbol is, as I'm not from the States, but coming from a place where stuff like this is a huge deal with big impacts on society, the reaction is completely understandable to me.
Mario running in Unreal Engine 4 [video] youtube.com
176 points by latenightcoding  14 hours ago   24 comments top 13
1
jebblue 22 minutes ago 0 replies      
It looks promising. With Steam's success and Unreal now back in play, the future of Linux gaming, and of gaming on open platforms in general, gets brighter every day.
2
frik 8 hours ago 1 reply      
Nice.

Though Nintendo won't be happy and will shut down your video, since you use their IP, as has happened many times before. Some months ago, someone did the exact same thing with the Unity engine, and his YouTube video and website vanished within two days. [1]

Nintendo's upcoming NX console (successor of the Wii U) will hopefully be more powerful than the PS4/X1 at the end of 2016. And hopefully we get nice reboots of Super Mario 64, Mario Galaxy, Mario Kart and Zelda.

[1] Edit: I found the HN news from 3 months ago: https://news.ycombinator.com/item?id=9276605 -> https://roystanross.wordpress.com/super-mario-64-hd/

-- the website now reads as follows: "The project is no longer playable, or downloadable in any form. I received a copyright infringement notice on both the webplayer as well as the standalone builds. Which is fair enough, really. In light of Nintendo recently making a deal to release some of their IPs on mobile platforms, it's probably not in their best interests to have a mobile-portable version of Mario 64 sitting around. In any case, I didn't really expect for this project to get so popular, and was hoping it would function primarily as an educational tool and a novelty. (...)"

3
pillowpants2 11 hours ago 1 reply      
The lack of NPCs in this version of the castle reminded me of an eeriness that existed in Super Mario 64, something I haven't felt in Sunshine or Galaxy. The feeling of being thrown into this new world without any allies or clear direction on completing the objective. At the time it felt exciting and scary, maybe I was just young...
4
nickysielicki 12 hours ago 1 reply      
From the title and the context of this post being on HN, I expected to see someone reverse engineering Super Mario 64 or Super Mario Bros and using that in the context of UE4.

But this is really just a typical game mod. Someone made some models for Mario and coins and put them in various UE4 tech demo scenes.

That's not to say it isn't cool. It just isn't extremely interesting from a technical perspective, besides the amazingness of UE4 in general.

5
markus2012 13 hours ago 1 reply      
Incredible:

- all the environment assets were taken from the Unreal marketplace

- all the character actions were scripted using blueprints only

6
coldcode 13 hours ago 1 reply      
That was so refreshing to watch. Maybe someone needs to make a movie about Mario leaving his world and visiting others, like Wreck It Ralph. You could do the whole movie inside UE4.
7
legohead 8 hours ago 0 replies      
A great demonstration of how graphics do very little for the fun factor of a game.
8
adamnemecek 12 hours ago 0 replies      
Out of curiosity, how much work is it to do something like this in UE? I'm not quite sure exactly what UE gives you out of the box.
9
phaser 11 hours ago 0 replies      
It reminds me that a good Mario game is about the art style, not the graphics.
10
eddieroger 12 hours ago 1 reply      
The comparison at the end had a weird effect on me. I remember playing Mario 64 for the first time and thinking the graphics were the tops. But this new one looks so much better. How will I feel in another 15 years when I see Mario in Unreal 20 or Source Film Maker v15?
11
tbrock 11 hours ago 1 reply      
Why won't Nintendo just remake Mario 64 with better graphics? New Super Mario Bros. for Wii was decent, but all of us who were kids in the 90s would be happy with a decent remake.
12
joshuapants 13 hours ago 1 reply      
This is magnificent. If only I could use .NET with Unreal Engine. Guess I'll have to look into blueprints.
13
anon3_ 11 hours ago 0 replies      
HOLY SHIT!

What about trademark? Could the author sell it?

See what happened to Super Mario 64 HD (an attempt at a remake with unity): https://roystanross.wordpress.com/super-mario-64-hd/

The Lonely End: in aging Japan, thousands die alone and unnoticed roadsandkingdoms.com
160 points by Thevet  1 day ago   153 comments top 17
1
civilian 1 day ago 4 replies      
> His epiphany came when his current partner told him how she had lost her grandmother. Unlike Koremura's loss, her grandmother had died alone -- a kodokushi. It was seeing the deep regret in her, and accepting his own ennui, that made Koremura finally take action. He left his job as a stockbroker and set up his own removal company dedicated to the cleanup of kodokushi victims. He wanted to give something back to the generation of his grandmother, and he also wanted to change who he was. "I was ready for the prospect of change, but looking back, perhaps I wasn't quite so ready for how different my life was to become," he remembers.

It's odd that Toru Koremura's reaction to lonely elderly deaths was to go into the corpse-cleaning-up business, rather than starting a social outreach program for the lonely elderly. I understand there's dignity in how we treat the dead, but it doesn't solve the problem at all.

2
rjuyal 1 day ago 3 replies      
This is scary. This is the first time I've read such an article. I live alone, never married (no plans to marry either), have no friends. I have a decent job though.

This article scares me.

3
themodelplumber 1 day ago 0 replies      
I used to help out in roujin hoomu (homes for the elderly) when I was a missionary in Japan. Meeting Japanese people who worked there and really gave a care was a huge experience for me. They genuinely tried their hardest to make the experience pleasant in these under-budgeted, dreary places. While we Americans were just a temporary sideshow, they were there to help for as long as they could live off the salary. Makes me sad but also a bit hopeful that those waves could build up over time. If I ever go back I will happily exchange my fascination with Manga or cool stationery or whatever for some more time trying to help out.
4
ChuckMcM 1 day ago 2 replies      
Sad but not uncommon I think. There was a guy who had a radio show that interviewed people with odd jobs. One such job was sort of like Toru's, but was "special cleaning". Basically the guy started a business to clean up crime scenes. But he found that most of his business came from landlords and others discovering deceased elders in their rooms. His suggestion was to call your grandparents from time to time to say hi. Several times he had cleaned up places where the person had simply fallen and then died where they were because they could not get up to get any help.

This is probably especially challenging with people who don't have relatives or the relatives never check up on them.

5
cousin_it 1 day ago 1 reply      
That's a big problem with technological progress. It fulfills more and more of our needs, making it harder and harder to fulfill our need to be needed. Sure, you can give everyone basic income and robot caretakers, but people who aren't needed by other people will still suffer. I would much prefer it if society somehow changed itself to want and need all of its members, not just the young productive desirable ones. That's a kind of utopia I've never read about, it just kind of crystallized in my mind after years of thinking about this.
6
FrankenPC 1 day ago 4 replies      
In my mind I see this as a social health crisis. The elderly provide an unbelievable amount of wisdom to the next generation. By abandoning them, we abandon our children's future. I think of the elderly as an essential vitamin for children. They impart valuable social memes and at the same time tell the children, by their mere existence in their lives, that we value our family, and that when the children's time comes to age and pass away, they can expect to be valued as well.

In other words, by abandoning our elderly, we're sending a clear message to our children that human life has no inherent value if you can't produce. No wonder children in industrialized nations can't seem to feel a sense of belonging.

7
mattm 1 day ago 1 reply      
My wife is a hospice nurse here in Japan visiting terminally ill patients at their homes.

She comes home with many different stories. Some people are incredibly wealthy but their family doesn't want anything to do with them so they basically wait to die alone. Some people are poor and mentally ill and are basically living in dumps with rodents running around. Some people do have good situations in which they are cared for and can die surrounded by loved ones.

My wife is the only person I've met who wants to die young (around 40 or 50). Death is usually hidden from our lives - especially when we're young - and only comes up infrequently.

While some people who are poor do use their service due to government subsidies, a lot of people don't, and there is no public service for the elderly. Stories like this will become much more common over the next 20 years as a huge proportion of the Japanese population passes on.

8
fuzzythinker 1 day ago 0 replies      
A very good related movie: Departures http://www.imdb.com/title/tt1069238
9
Futurebot 13 hours ago 0 replies      
Loneliness is deadly: http://www.slate.com/articles/health_and_science/medical_exa...

The "epidemic of loneliness" has spread to many places. If you haven't read "The Lonely American" I implore you to do so if you care about this topic at all. An incredibly sobering read.

Book blurb:

"In today's world, it is more acceptable to be depressed than to be lonely-yet loneliness appears to be the inevitable byproduct of our frenetic contemporary lifestyle. According to the 2004 General Social Survey, one out of four Americans talked to no one about something of importance to them during the last six months. Another remarkable fact emerged from the 2000 U.S. Census: more people are living alone today than at any point in the country's historyfully 25 percent of households consist of one person only.

In The Lonely American, cutting-edge research on the physiological and cognitive effects of social exclusion and emerging work in the neurobiology of attachment uncover startling, sobering ripple effects of loneliness in areas as varied as physical health, children's emotional problems, substance abuse, and even global warming. Surprising new studies tell a grim truth about social isolation: being disconnected diminishes happiness, health, and longevity; increases aggression; and correlates with increasing rates of violent crime. Loneliness doesn't apply simply to single people, either -- today's busy parents 'cocoon' themselves by devoting most of their non-work hours to children, leaving little time for friends, and other forms of social contact, and unhealthily relying on the marriage to fulfill all social needs."

Here's an article for Britain:

http://www.independent.co.uk/life-style/health-and-families/...

It's everywhere.

10
chvid 1 day ago 0 replies      
The article is written as if this is unique to Japan. But lonely deaths, unnoticed for months, happen every day in every (modern) society. And it is mostly men.
11
littletimmy 1 day ago 2 replies      
This is so sad. Dying alone is more frightening than death itself. It is articles like this that force me to stop working so hard and spend more time with family.
12
anovikov 1 day ago 1 reply      
It doesn't seem to have much to do with old age, or even (physical) sickness. None of the people described were very old, or terminally ill. This is just an illustration of how limited a society based on collectivism, obedience and hard work is. It helped them out a great deal in the industrial age, even then leaving many of them mentally broken or outright sick, and it completely stopped working in the post-industrial era.
13
justifier 1 day ago 1 reply      
is this a causation correlation issue of generations?

could i die alone without any great effort to do so having grown up on the internet?

i have seen promotional material for teaching seniors computer skills.. and there seem to be so many similar services and efforts that a search was unable to lead me to the specific video..

the internet is filled with a mess of stuff but within there are many communities covering very many interests

why imagine yourself alone?

14
mattm 1 day ago 0 replies      
This is definitely not the case. I question how many elderly people (80+) you actually know in your life and see on a regular basis.
15
pcrh 1 day ago 1 reply      
>We enter the world alone, we leave the world alone.

...or...

>For none of us lives to himself alone and none of us dies to himself alone.

16
facepalm 1 day ago 1 reply      
If only there was something like World Of Warcraft that is suitable for the elderly.

I'm not joking. I keep wondering if I should introduce my grandma to such a game, but I live far away and they all seem too complicated. Would be happy about suggestions.

17
JesperRavn 1 day ago 5 replies      
Why are there so many stories on HN about social problems in Asia? I would have expected that smart people on HN would make some effort to understand Orientalism[0]. In the case of this story, this is a phenomenon that is occurring all around the developed world. Japan is in many ways just another developed country. But through the eyes of Orientalism, Japan's problems are unique and quaint, and the objective Westerner can dispense some sage wisdom[1].

[0] https://en.wikipedia.org/wiki/Orientalism

[1] I don't want to pick on anyone, but there are already some great examples in the thread.

Stop Firefox leaking data about you github.com
177 points by amq  2 days ago   89 comments top 18
1
dao- 2 days ago 4 replies      
Seems like lots of FUD; how do Firefox Hello, Pocket and Geolocation "leak data about you" if you don't explicitly use them? How do DRM and Reader mode leak data at all?

Also, Safe Browsing, DRM, Search suggestions, Telemetry and Health report can be disabled in the preferences UI. Don't need sensationalist about:config protips for that.

2
avian 2 days ago 1 reply      
Another thing worth noting is that if you are using Debian-rebranded Firefox (Iceweasel), you have a very unique user agent that is easy to track.

There is a bug opened (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=748897), but as far as I know, no simple solution exists yet. You can change the user agent with an extension to keep it identical with the most popular Firefox version, but then you have to manually keep it up-to-date.
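
If you do go the manual route, a user.js override is probably the simplest place to keep it (a sketch, not an endorsed fix: the UA string below matches stock Firefox 38 on Linux and would need updating by hand with every release):

    // user.js -- pin the Iceweasel UA to the matching stock Firefox release
    // (the version string here is an example and must be kept current by hand)
    user_pref("general.useragent.override",
              "Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0");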

3
salibhai 2 days ago 2 replies      
Seems like a bad idea turning off phishing notifications and browser warnings

http://kb.mozillazine.org/Browser.safebrowsing.enabled

"Firefox 2.0 incorporates the Google Safe Browsing extension in its own Phishing Protection feature to detect and warn users of phishy web sites."

4
Animats 2 days ago 2 replies      
Mozilla has an annoying pattern of removing items from the user preferences to "avoid user confusion", an excuse companies often use when deceiving customers. (Example: Microsoft dropping the "RT" designation. [1]) "Accept/reject third-party cookies", for example, doesn't always appear in the preferences any more.

Mozilla's new "social" features don't have a turn-off option in the Preferences. You can disable them by going to "about:config", creating the tag "social.enabled" (it doesn't even exist by default) and it to False. Mozilla provides no easy way to do that. This add-on takes care of those convenient little omissions.

Obviously, Mozilla is doing all this to tie users to their mothership and make it harder for them to leave. It's not like users were crying out for "Pocket" integration in the browser.

[1] http://www.winbeta.org/news/surface-2-no-longer-has-rt-brand...

5
tux 2 days ago 5 replies      
Don't forget about this;

"media.peerconnection.enabled = false" WebRTC leaks IP when you use TOR/VPN, test it with ipleak.net

"beacon.enabled = false" Blocks https://w3c.github.io/beacon/ analytics.

Also recommend using plugins; uBlock, NoScript if you use VPN.

6
garrettr_ 2 days ago 3 replies      
Please for the love of god do not disable the Google SafeBrowsing preferences. SafeBrowsing protects you from a lot of malicious websites, and does not leak much information to Google. For most people the security benefits of SafeBrowsing far outweigh the privacy concerns.

It is important to remember that malicious websites and malware in general may negatively impact your security and privacy in extremely harmful ways (malware compromises PII, website credentials, financial information, uses webcam and microphone to photograph/film/record you for blackmail/revenge porn purposes, ...)

For context, please see these relevant Mozilla bugs about SafeBrowsing privacy concerns: [0], [1]. tl;dr Firefox must set a cookie for SafeBrowsing, but it uses a separate cookie jar for SafeBrowsing so Google cannot tie the Safebrowsing activity to anything else you do related to Google or their services (which is the biggest concern here). They can learn a limited profile of your browsing activity, along the lines of "Random user x often uses their browser between 9am and 5pm on M-F".

The Safebrowsing implementation is specifically designed to be privacy-preserving. [2] It uses a Bloom filter to implement fast lookups in a minimally sized hash table of known malicious URLs. The only time a full URL (actually various hashes of multiple prefixes of the full URL, including the full URL) that you browse is sent to Google is when a prefix of it collides with a known malicious URL, in which case the URL must be sent to Google to resolve the question of whether the URL you are trying to visit is actually malicious or just a false positive from the Bloom filter. Yes, the hashes are unsalted, so it would be possible for Google to check if you were trying to visit some pre-determined URL ("were they trying to visit www.thoughtcrime.org?"), but only if it collided with a known malicious URL.
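
The lookup logic, reduced to a toy Python sketch (the idea only, not Google's actual wire format; LOCAL_PREFIXES and fetch_full_hashes are made-up names, and a plain set stands in for the Bloom filter):

    import hashlib

    # 4-byte SHA-256 prefixes of known-bad URLs, shipped to the browser in bulk
    LOCAL_PREFIXES = {bytes.fromhex("deadbeef")}  # illustrative value

    def check_url(url, fetch_full_hashes):
        h = hashlib.sha256(url.encode("utf-8")).digest()
        if h[:4] not in LOCAL_PREFIXES:
            return "safe"  # the common case: nothing is sent to Google
        # Prefix collision (real hit or false positive): only now does the
        # browser ask the server for the full hashes matching this prefix.
        return "malicious" if h in fetch_full_hashes(h[:4]) else "safe"

Note that in the real protocol several URL prefix expressions are hashed and checked, not just the full URL, but the privacy property is the same: a request leaves the browser only on a collision.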

It would be helpful to know what the average rates of collisions and false positives are to get a sense of how much of an average user's browsing history is leaked to Google through Safe Browsing - can anybody from Google comment?

[0]: https://bugzilla.mozilla.org/show_bug.cgi?id=368255

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=897516

[2]: https://code.google.com/p/google-safe-browsing/wiki/SafeBrow...

7
gruez 2 days ago 1 reply      
How exactly does reader.parse-on-load.enabled leak privacy? Isn't everything parsed locally?
8
wodenokoto 2 days ago 2 replies      
While visiting Google every 30 minutes or so is a way of leaking, you aren't leaking much more than your IP and the fact that this IP is running Firefox.

Isn't reader an offline functionality?

9
aorth 2 days ago 0 replies      
Recommends turning on Firefox's built-in tracking protection[0] (which matured in Firefox 37 or so), but has anyone compared this to uBlock? I guess the first thing to measure would be number of trackers blocked, but then of course memory and CPU usage would be interesting as well. uBlock has done this comparison[1] against AdBlock Plus, Disconnect, etc, so it would be very interesting...

[0] https://support.mozilla.org/en-US/kb/tracking-protection-fir...

[1] https://github.com/gorhill/uBlock/#performance

10
chimeracoder 2 days ago 1 reply      
Don't forget about WebRTC: https://github.com/diafygi/webrtc-ips

If you have WebRTC enabled, any website can determine both your local IP address (e.g. 192.168.1.1) and your globally-addressable IP address. The combination of these is essentially unique, and can even be better than cookie tracking or browser fingerprinting.

It's possible to disable WebRTC in Firefox, but AFAIK not in Chrome/Chromium[0].

As for Firefox Hello and Pocket integration, you can turn these off if you want, but I'm 99% certain that they don't actually send any data about you unless you actually use them.

[0] https://productforums.google.com/forum/#!topic/chrome/gJ8HF-...

11
amq 2 days ago 0 replies      
Important changes:

- Reader mode is confirmed not leaking data. No need to disable it.

- There is a way to stop leaking the browser history to Google while keeping Safe Browsing.

* both tested using Fiddler

12
pseud 2 days ago 0 replies      
13
jaxb 2 days ago 0 replies      
I guess there is a similar howto on various opt-out settings in Google account itself?

https://history.google.com/history/ and https://plus.google.com/settings/endorsements etc.?

14
erikb 2 days ago 0 replies      
I don't know, but the DRM stuff is actually cool with me. I guess you can't convince the lawyers of nearly all media to turn off DRM for a few decades to come. But I still want to use things like Netflix. With the new DRM stuff you can at least have it running on a Linux instead of a Windows system. Step by step in the right direction, I'd say.
15
cedricbonhomme 2 days ago 0 replies      
Maybe I'll update my Firefox configuration: https://bitbucket.org/snippets/cedricbonhomme/cbj6/firefox-c...
16
zbraniecki 2 days ago 2 replies      
It would be awesome to turn it into an extension that makes it a single toggle.
17
fapjacks 2 days ago 1 reply      
Wow. What the fuck, Mozilla? Here I was, really hopeful that you were actually serious about honoring user desire for privacy.
18
MichaelCrawford 2 days ago 0 replies      
127.0.0.1 www.google-analytics.com

127.0.0.1 www.hosted-pixel.com

The political candidates are the worst.

Life paint volvolifepaint.com
159 points by erbdex  2 days ago   154 comments top 25
1
drfritznunkie 2 days ago 2 replies      
Retro reflectives and their impact on accident rates is still pretty contentious within the bicycle community, and personally, I think it gives people a false and very dangerous sense of security and visibility.

As a daily bicycle commuter and motorcyclist, the only rule I follow is that I am invisible when on two wheels. So I ride in a way that makes me safe, and that usually means doing things that most people would probably find dangerous.

In over 15 years of daily commuting (yes, all through the winter, too) I've been hit a half dozen times. The majority of those accidents were intentionally caused by the car driver, only a couple were truly faultless. None of them were the result of the driver not seeing me, they were all the result of the driver behaving badly.

A reflective jacket or spray isn't going to do ANYTHING if the driver decides that they own the lane and they're okay mowing you down to get it. That to me is the big flaw with any conspicuity safety measure, it relies on drivers actually being aware of the road around them and honoring your use of it. At least around here in DC, those two things are seldom present.

Most riders are foolishly naive about their safety. Traffic laws aren't going to keep your head from bouncing off a hood, and a reflective vest isn't going to make the driver put down their cell phone and pay attention to the road.

2
SideburnsOfDoom 2 days ago 2 replies      
I occasionally cycle in London.

There are two schools of thought to making cycling safer:

1) Make cyclists brighter and more armoured.

2) dedicated infrastructure.

Option 2 is much more costly and harder politically, but is the only school of thought worth taking seriously. Look at places such as as Amsterdam and Copenhagen where cycling is common and safe (1). Do they rely on helmets and glowing things? No they don't. Lots of ordinary people cycle in regular clothes on dedicated separated cycle lanes.

Yes, you'll be safer if you stand out by being brighter than everyone else. But new and interesting ways to ramp up the brightness wars are a frivolous distraction from what cyclists in London need. You should not need to "look like cross between Darth Vader and a Christmas Tree" (2) in order to ride a bike.

A lot of the current infrastructure is terrible:

Advanced stop line? You mean that white mark on the road with a minicab over it.

Cycle "superhighway?" You mean that blue stripe underneath the buses and trucks.

1) http://www.theguardian.com/cities/2014/oct/16/copenhagen-cyc...

2) http://lcc.org.uk/articles/cycling-what-not-to-wear-1

3
revelation 2 days ago 6 replies      
A gun manufacturer handing out protective vests to school kids.

Instead of this stunt, maybe they should focus on building cars and particularly trucks that are not unsafe by default. All this talk of blind spots obscures the basic fact that this is first and foremost an engineering problem, and most importantly, you cannot turn the defects of your vehicle into the responsibility of other road users.

If your vehicle isn't safe, it can not be driven. The solution is certainly not to tell everyone else to just watch out because you can't see shit left and right and man is this thing large and heavy.

4
buro9 2 days ago 7 replies      
> "Cycle safety is the cyclist's responsibility"

Woah there. Hold up right there.

The safety of ALL road users is on the backs of ALL road users.

It's not uncommon in London to see reporting of one of the one-a-month on average deaths of a cyclist to see such comments as "the cyclist was wearing a helmet".

Yet the helmet didn't save the cyclist, because the cyclist was crushed by a fully loaded construction HGV tipper truck.

This idea that cyclist safety is 100% their responsibility is part of the root cause of the problem.

Cyclists are one of the most (if not the most) vulnerable demographics of road users there is, and it should be the responsibility of other road users to help protect them.

Failing that, it should be the responsibility of those who provide roads to ensure that the infrastructure itself protects them (segregated cycleways).

But creating an idea in which "Cycle safety is the cyclist's responsibility" is plain disgusting when every damn month another cyclist is in a morgue, regardless of whether or not the cyclist wore high visibility clothing, had lights, wore a helmet, etc, etc.

And there is my issue with Volvo's "Life paint"... it shifts the blame for the continued stream of fatalities onto the cyclist.

Do you want to know where the real problem is? Try this: of the 8 fatalities on London roads this year, 7 were caused by HGV construction vehicles, even though such vehicles make up less than 5% of all London vehicular traffic.

Here's one from Monday... this week! http://www.standard.co.uk/news/london/cyclist-26-killed-in-b...

Being covered in reflective spray paint will do nothing against a system that pays HGV drivers by the job count and doesn't enforce the many existing rules about vehicle safety, driver training... and in the recent case where a driver was convicted, the company that hired him didn't even check that he had a valid licence.

Perhaps if Volvo really wanted to make a big difference to the safety of cyclists, they'd get heavily behind the proposed designs for safer HGVs for cities: http://lcc.org.uk/articles/lcc-challenges-construction-indus...

5
dreen 2 days ago 3 replies      
I stopped cycling in London because I value my life. The city is not built for the amount of cyclists who are already on the roads trying to swerve between the traffic. I have seen people slamming into buses way too many times.

A can of fluorescent paint is not going to help much. Most of these accidents happen during the day anyway.

6
usrusr 1 day ago 0 replies      
Any evaluation of retroreflective safety features should start with a short overview of what retroreflectivity cannot do: improve visibility when the object is not within the light cone of the observer's headlights. With that in mind, those impressive side shots are nothing more than show, because any bike sideways in the lights will either be long gone when the car reaches the point where the paths cross, or be already way too close to avoid an accident. And head/tail visibility must be provided by active light anyway, because visibility only inside that headlight beam is never enough. Once you have active light, any retroreflectors are merely adding minor (but important) attitude/dimension/range cues, and improvements by "lifepaint" over conventional reflectors will be marginal at most.
7
dchest 2 days ago 1 reply      
Here it is without Volvo marketing http://www.albedo100.co.uk/
8
mschuster91 2 days ago 3 replies      
The problem is that this will only be used by those bikers who already care about their safety and behave according to traffic rules.

The fucktards riding at night in full-black clothing, without lights or reflectors, music blasting in their ears and wearing no helmet, on the road instead of the bike lanes, will not take notice of the spray (or of the fact that their behavior is endangering themselves).

Now guess which group of bikers gets hit by cars more often?

(Disclaimer: I had multiple last-second saves with said fucktards while peacefully riding around)

9
madaxe_again 2 days ago 2 replies      
It's water soluble and lasts a week, from that page. Gimmick. Can't see anyone buying a can and spraying their bike every seven days.
10
yitchelle 2 days ago 0 replies      
Although I love this idea and the execution by Albedo100, I can't help thinking some reflective tape would last longer and be more cost-effective in the long term. It also comes in several colours.

An example is http://www.amazon.de/Reflective-Stickers-Tapes-Motorcycle-Co...

11
aubergene 2 days ago 0 replies      
Side note, but this would be great for some creative graffiti uses
12
markvdb 2 days ago 1 reply      
This would be great for painting bicycle lanes onto cars parked in the cycle lane!
13
nichodges 2 days ago 6 replies      
Such a shame that a car company's response to the danger cars present to cyclists is to modify the cyclist. Victim blaming at its best.
14
polskibus 2 days ago 0 replies      
I wish they'd make a permanent variant in the future. Another question is how rain affects the paint.

Speaking as a cyclist, I would like to paint my bike with the permanent variant and perhaps my clothes with the temporary one.

15
DanBC 2 days ago 2 replies      
This looks like a great product.

What's the research say?

Cyclists need front and rear lights, and front and rear reflectors. On top of that the most useful reflectors a cyclist can have are on the pedals and on the wrists. These help when a cyclist is turning; and the pedal reflectors clearly show drivers that they're approaching a cyclist.

More than that and you risk the "Christmas Tree Effect" - it's tempting to think that more is better, but you risk just confusing the driver who then doesn't take appropriate safety measures.

16
kraftman 2 days ago 0 replies      
Cyclists in London are crazy: they swerve in and out of cars, onto the pavements and then back onto the roads. They cross using pedestrian crossings and they cycle through red lights like they don't apply to them.

Being able to see them better is great but even if you know exactly where they are you still don't know what they're going to do because they don't follow the same rules of the road.

17
dgreensp 2 days ago 0 replies      
"Putting something on that will make you scream out to drivers like me is a fantastic thing."

If Volvo understood cyclists better, they'd choose a quote like, "I'm a driver and I hate life paint. Who do you think you are looking all flashy and important?" You gotta work with the tribal dynamics, not against them.

18
jsingleton 2 days ago 0 replies      
Looks like a neat idea and could help a bit but not as cool as http://revolights.com which has to be my favourite bike visibility system. It's mounted on the wheels, persistence of vision based and knows when to illuminate the LEDs.

However, none of these make cycling (especially in London) safe. I wouldn't cycle in London any more as it's just too dangerous, but I did for years. I always wore high vis and a helmet and obeyed the rules of the road, and I still had far too many close calls and incidents with other vehicles.

If you want to see how tragic just one of very regular London cyclist deaths is then this is on iPlayer for the next week: http://www.bbc.co.uk/iplayer/episode/b05y18wv/an-hour-to-sav...

19
chris_wot 1 day ago 0 replies      
You know, Volvo Cars and Albedo accept no liability or responsibility for any individual's accident or injury involving any road user or other object whilst wearing Lifepaint. Nor do they accept liability for any damage to property caused directly or indirectly by the paint, which, what's more, is transferable.

Furthermore, Volvo say that cycle safety is the cyclist's responsibility. There's more: Lifepaint is one of the many products that can aid visibility but cannot prevent accidents caused by the individual or other road users.

20
deutronium 2 days ago 2 replies      
I love the idea, can anyone explain how it 'glows'?
21
TootsMagoon 2 days ago 0 replies      
This is going to be used for very creative, disruptive and disturbing vandalism. I guarantee it.
22
njharman 2 days ago 0 replies      
Lasts only 1 week. Neat niche product.

Also, wtf did they do to the site to make text not selectable?

23
andreamazz 2 days ago 1 reply      
This is clever. I can also see it being used by street artists.
24
cmdrfred 1 day ago 0 replies      
Accidents often aren't. When you see the guy weaving through traffic down the highway, changing lanes every few seconds and accelerating and braking seemingly at random, he will tell you of all the 'accidents' he has had. He's lying. If it's preventable and you choose not to prevent it, it's intentional.
25
cafeoh 2 days ago 0 replies      
Pschhhhhfrrrtshhhh WITNESS ME
Quake in your browser quaddicted.com
168 points by highCs  2 days ago   65 comments top 27
1
ninjaroar 2 days ago 2 replies      
Quake in the browser via javascript was done 5 years ago using GWT. I recall seeing that port at Google I/O years ago.

https://code.google.com/p/quake2-gwt-port/

2
klaussilveira 2 days ago 1 reply      
ioquake 3 via emscripten: https://developer.mozilla.org/en/demos/detail/ioquake3js

I need some help getting MP to work with WebRTC. If you're interested: https://github.com/klaussilveira/ioquake3.js

3
chucky_z 2 days ago 0 replies      
Oh, this thing is ancient!

Still cool.

https://github.com/SiPlus/WebQuake

There is a node.js multiplayer server included in the repo that works half-decent. :)

4
deepakjc 2 days ago 4 replies      
I'm on Chrome (on a macbook) and the mouse isn't working... Click works to shoot, but I can't look around using the mouse. Any ideas?

(I'm playing with just the keyboard so far... but I can see the end coming soon.)

5
serkanyersen 2 days ago 2 replies      
For some reason it triggered vulnerability blocked warning on my device. Looks like one of the WAV files tried to execute code.

Screenshot: http://d.pr/i/w2A1/2BrF07Eo

6
0x0 2 days ago 2 replies      
It actually runs on the iPhone's MobileSafari! (although with quite a bit of stutter and the occasional browser crash)
7
Rifu 2 days ago 1 reply      
I'd just like to take a moment to appreciate the domain name.
8
Shad0w59 2 days ago 1 reply      
+mlook doesn't work... any idea why?
9
spdustin 2 days ago 1 reply      
If I used my old PC with the 3DFX card, could I see through the walls to snipe other players?

Ahh, the good ol' days.

10
j0e1 2 days ago 1 reply      
The game sounds still give me goosebumps.
11
mhomde 2 days ago 1 reply      
~MAP DM4

*shoots a couple of shotgun blasts*

*jumps into the lava*

*dies*

yup, good 'ol quake

12
zobzu 2 days ago 0 replies      
worked smoothly on my linux laptop/firefox :) I wonder if quake 3 would work...
13
rectangletangle 2 days ago 0 replies      
Cool as hell, kinda surprised it ran as smooth as it did.
14
empressplay 2 days ago 2 replies      
Awesome! But I was really hoping it was going to be multiplayer. It would be much cooler if you could just hop straight in to shooting at some other people...
15
mrbig4545 2 days ago 0 replies      
technology is so advanced that we can now run 19-year-old games in our web browsers and be impressed by it
16
mhomde 2 days ago 0 replies      
Next year is the 20-year anniversary of quake btw! God I'm old
17
shocks 2 days ago 0 replies      
TIL it's really hard to play Quake on a Kinesis...!

Great stuff here. Thanks for sharing.

18
ralphael 2 days ago 0 replies      
Love this blast from the past.

I just spent 20 minutes watching the demo run through :-)

19
thgil 2 days ago 0 replies      
Ctrl + W closes the tab while playing. Happened to me a lot :/
20
silveira 2 days ago 0 replies      
Wow! It worked super well on Firefox 38.0.5/OS X 10.7.5.
21
hitlin37 2 days ago 0 replies      
Quake was the first game I finished to the end.
22
gmriggs 2 days ago 1 reply      
Awesome

i can hear the zenimax lawyers stampeding now though...

23
jwinterm 2 days ago 2 replies      
IE9 here. I has no quakes in my browser :(
24
ShiftLEr 2 days ago 0 replies      
Works on Nexus 7 Android 4.4
25
alttab 2 days ago 3 replies      
I liked QuakeLive when that was still running. That shit was the bomb.
26
pgrote 2 days ago 0 replies      
wow!

Works great on an Acer C720 chromebook. Fantastic!

27
anti-shill 2 days ago 0 replies      
hardly worked at all on my PC with 3 gig of RAM.
Cancer reproducibility effort faces backlash sciencemag.org
151 points by tlb  23 hours ago   110 comments top 13
1
TisButMe 21 hours ago 6 replies      
This is the same behaviour I've seen time and time again in biology labs.

People there are re-doing the same experiment over and over until it gives them the result they want, and then they publish that. It's the only field where I've heard people saying "Oh, yeah, my experiment failed, I have to do it again". What does it even mean that an experiment failed? It did exactly what it was supposed to: it gave you data. It didn't fit your expectations? Good, now you have a tool to refine your expectations. But instead, we see PhD students and post-docs working 70-hour weeks on experiments with seemingly random results until the randomness goes their way.

A lot of them have no clue about the statistical treatment of data, or about making a proper model to test assumptions against reality. Since they deal with insanely complicated systems, with hidden variables all over the place, a proper statistical analysis would be the minimum expected to be able to extract any information from the data, but no matter: once you have a good-looking figure, you're done. In cellular/molecular biology, nobody cares about what a p-value is, so as long as Excel tells you it's <0.05, you're golden.
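
To put a number on how much "redoing the experiment until it works" distorts things, here is a toy simulation (all numbers hypothetical: no real effect exists, a simple two-sided z-test, alpha = 0.05):

    import math, random

    def p_value(sample):
        # two-sided z-test of the sample mean against 0, known sd = 1
        n = len(sample)
        z = (sum(sample) / n) * math.sqrt(n)
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    def experiment(n=20):
        return [random.gauss(0, 1) for _ in range(n)]  # the null is true

    trials = 10000
    once = sum(p_value(experiment()) < 0.05 for _ in range(trials))
    redone = sum(any(p_value(experiment()) < 0.05 for _ in range(5))
                 for _ in range(trials))
    print("false positives, one try: %.3f" % (once / trials))    # ~0.05
    print("false positives, 5 redos: %.3f" % (redone / trials))  # ~0.23

Five tries at a "failed" experiment turn a nominal 5% false positive rate into roughly 1 - 0.95^5, about 23%.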

The scientific process has been forgotten in biology. Right now it's basically what alchemy was to chemistry.

I'm very happy to see efforts like this one. Sure, they might show that a lot of "key" papers are very wrong, but that's not the crux of it. If there is a reason for biologists to make sure that their results are real, they might try to put a little more effort into checking their work. And when they figure out how much of it is bullshit, they might even try to slow down a little on the publications and go back to the basics for a little while.

I'm sorry about this rant, but I've been driven away from a career in virology by those same issues, despite my love for the discipline, so I'm a bit bitter.

2
astazangasta 22 hours ago 2 replies      
Looking at this through the lens of drug discovery is the wrong way to do this. The problem is with our drug discovery strategy, generally, not with the reproducibility of our research.

STK33, for example, is definitely implicated in cancer through a wide variety of mechanisms. It is often mutated in tumors, and multiple studies have picked it up as having a role in driving migration, metastasis, etc.

This doesn't mean we can make good drugs to it.

Making drugs is hard - they need to be available in the tissue in the right concentrations, often difficult to achieve with a weird-shaped, sticky molecule. They need to have specificity for the tumor, they need to have specificity for the gene target(s) of interest. They need to be effective at modulating the target.

More importantly, though, the drug is modulating a target (gene) that is involved in a biological system that involves complex systems of feedback control, produces adaptive responses, and otherwise behaves in unexpected ways in response to modulation.

In my experience this is usually underappreciated by most drug discovery strategies, which merely seek to "inhibit the target" as if its involvement in the tumor process means we can simply treat it as an "on-off" switch for cancer. This assumption is asinine, and of course will (and does) lead to frequent failure. STK33 is not an on-off switch, and attempting to treat it that way will likely result in a drug that does nothing.

3
gwern 21 hours ago 3 replies      
> This past January, the cancer reproducibility project published its protocol for replicating the experiments, and the waiting began for Young to see whether his work will hold up in their hands. He says that if the project does match his results, it will be unsurprisingthe paper's findings have already been reproduced. If it doesn't, a lack of expertise in the replicating lab may be responsible. Either way, the project seems a waste of time, Young says. I am a huge fan of reproducibility. But this mechanism is not the way to test it.

One swallow does not make a spring. With a belief like 'one replication is enough', I'm not sure Young actually appreciates how large sampling error is under usual significance levels or how high heterogeneity between labs is.

4
ryanobjc 21 hours ago 2 replies      
As a complete outsider reading the attitudes behind the original scientists, it seems to me that they resent the oversight and hate to do extra work. In defending their practices they fall back on "expert work" and essentially are arguing that what they are doing is too complex for anyone else to do and they should be left alone to continue to do it.

And from their point of view, it seems all very reasonable. But from the rest of humanity who is being asked to materially support them, and waits for their conclusions to make the world a better place, it seems ... frankly... lazy and selfish. 30 emails, wow! 2 weeks of a graduate student's time -- these are the people who are the least paid right? Below minimum wage even? The demands on their time seem so low, yet the complaints are so high, that one can't help but wonder if the concern really is that their results are too 'magical' and irreproducible and they just fear other people learning about it.

I've seen this behavior in professional settings, and ultimately it comes down to a lack of confidence in oneself, the tools and technology and the quality of work being done. Careers are at stake, but is the alternative to just give people a free pass?

5
jessriedel 23 hours ago 0 replies      
It seems like a sensible check is for this collaboration to include original studies, like the one mentioned in the article lede, that have already been replicated elsewhere. (Ideally they would keep the relevant members of the collaboration blind to this fact.) Then when you say "we failed to replicate X% of the studies" you also say "of the subgroup that had already been replicated, we failed to replicate Y%". If Y isn't much smaller than X, you know the replication collaboration is probably botching this.
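
As a back-of-the-envelope illustration (all counts hypothetical):

    # Compare the failure rate on previously-replicated "control" studies (Y)
    # with the headline failure rate over all studies (X).
    controls_failed, controls_total = 3, 10
    overall_failed, overall_total = 25, 50
    Y = controls_failed / controls_total
    X = overall_failed / overall_total
    print("overall failure rate X = %.0f%%" % (100 * X))  # 50%
    print("control failure rate Y = %.0f%%" % (100 * Y))  # 30%
    # If Y is not much smaller than X, the replication effort itself, not
    # the original studies, is the likelier source of the failures.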
6
cwyers 11 hours ago 1 reply      
> Jeff Settleman, who left academia for industry 5 years ago and is now at Calico Life Sciences in South San Francisco, California, agrees. You can't give me and Julia Child the same recipe and expect an equally good meal, he says. Settleman has two papers being replicated.

Uh, Julia Child WROTE FREAKING COOKBOOKS. The entire point of Julia Child was that she tried to develop recipes in such a way that another cook could produce an equally good meal. Now, yes, if I went into a boiler room at Goldman Sachs and picked 10 guys at random, I doubt that most would be able to duplicate the recipe. If I picked 10 professional sous chefs at random and none of them were able to make a dish as good as Julia Child's from her recipe, I would start to have my doubts about the recipe.

By the same token, I don't expect rank amateurs to be able to duplicate state of the art cancer research. But if labs run by pharma companies and academic institutions are having the failure rate at reproducing research that the article claims, I think it's more than reasonable to start questioning the paper that documented that research, if not the research itself.

7
nmrm2 22 hours ago 9 replies      
I can't state how strongly I disagree with the conclusion that papers should be providing excruciating detail about protocol just because "pharmaceutical companies can't reproduce key cancer papers [without the help of the original scientists]". Science has rarely been done like this.

It would be like Google complaining that they can't copy pseudocode verbatim out of a paper and have a highly performant algorithm. Or Microsoft complaining that a static analysis defined in a paper wasn't accompanied by a production-ready implementation.

Producing protocols that literally anyone could replicate without expending effort is not the business of Science.

Replication should focus on the veracity of the underlying truth claim, not the economics of reproducing the results.

8
azernik 20 hours ago 0 replies      
> It's unrealistic to think contract labs or university core facilities can get the same results as a highly specialized team of academic researchers, they say. Often a graduate student has spent years perfecting a technique using novel protocols, Young says.

Then they need to spend the time documenting those protocols.

My dad worked in biological research, and his attitude has always been: if you don't write it down, you might as well not have done the work at all. ESPECIALLY in research.

9
chris_wot 19 hours ago 1 reply      
So hold on a moment. These researchers are doing experiments so badly that they can't find the actual procedures they used to get their results? And now they are tracking down old postdocs and lab technicians just to pick their brains as to what they actually did?!?

How the heck did this stuff get through peer review? Surely I'm missing something critical?

10
lettergram 22 hours ago 0 replies      
Interesting... Over the past year I was tickling the idea of making an organization/company/website which would automatically give tax dollars to research. The idea being, you can maximize the amount you write off in taxes and donate to research you desire (i.e. more NASA, fewer children killed around the globe).

The idea stemmed from the notion that people want research to be public AND reproducible. The funds would go directly to research groups, and as an incentive, reproducibility would have bounties based on what people were willing to donate. Because virtually every research group with public research is supported by a non-profit, no one loses additional money, but more funds go towards public-interest research GROUPS, not organizations with bureaucracy.

Somewhat off topic, but this seems another reason for me to start the project.

11
x0054 18 hours ago 1 reply      
I tend to agree that the biology papers often lack proper documentation of procedures and methodologies. This is a wonderful effort to reproduce some of the key experiments. That being said, I think it's also very important to look at the quality and qualifications of the labs doing the reproductions.

I don't have any direct link to cancer research, so I can't speak with authority on the subject, but I have been involved in the past with a company working in the Preimplantation Genetic Diagnosis field.

The basics of their procedure is to create one or more human embryos via IVF, incubate the embryos for up to 6 days, then either freeze them or transplant them into the prospective mother. On day 3 or 5 of incubation the embryo is biopsied, and the genetic material is tested to make sure there are no aneuploidy defects. We were also able to test for some other types of genetic abnormalities. This is for people who are having problems becoming pregnant.

In any case, some time in the mid 2000s there were 3 papers published in Europe claiming that performing biopsy on Day 3 is extremely detrimental to the embryo, and their conclusion was that PGD with Day 3 should not be performed. The experiments were conducted by people who were unskilled in micro manipulation.

They did follow proper protocols, and I am sure they did their best to replicate proper procedures. But micro manipulation is as much skill as it is knowledge. For instance, I can write a detailed procedure on how to shoot a compound bow, and you can follow that procedure exactly. But, without practice, you are not going to hit the bullseye on the first try.

Because we were in the business of providing services to doctors, not publishing papers, we constantly tracked our embryo mortality rates, birth rates, and accuracy of testing. The better our results were, the more business we would get. And we couldn't fake the results, because the clinics ordering the test would be the ones recording all of those statistics for us.

Anyway, long story short, none of our data agreed with the papers claiming that Day 3 biopsy was detrimental to the embryo. In fact, quite the contrary: many of our statistics suggested that Day 3 biopsy and Day 4 or Day 5 transfer would result in better implantation rates. But the papers were published, and referenced, and then it became "common knowledge" that Day 3 biopsy is bad, and the medical industry moved on to Day 5 biopsy and embryo cryopreservation, and so has the company I worked with.

To the company I worked for it's all the same, money is money. Day 3 or Day 5 biopsy, they make money all the same. But the patients are now more limited. From the stats we have seen, it doesn't look like Day 4 or 5 biopsy is worse for the embryo, but being frozen isn't a walk in the park. With Day 5 biopsy you have to freeze the embryo in order to allow time for the test results to come back.

Anyway, it's my 2 cents. Reproducibility is important, but I think it's just as important to change the incentives of those who publish papers. If your goal is to be published, then of course your research will suffer. It's the publish-or-perish mentality in academia that is the problem, I think.

12
platform 11 hours ago 0 replies      
I agree with the overall sentiment, the scientific community working on cancer cure(s) is failing us, the patients and their families.

And they are failing us because of some fundamental gaps in how the research, and the subsequent review/dissemination/presentation of findings, is done. I suspect there are multiple failures in the process. The standards of scientific proof and repeatability used by mathematicians, physicists and chemists are not followed.

The net result is the following disappointing statistic:

"...In 1971, President Nixon and Congress declared war on cancer. Since then, the federal government has spent well over $105 billion on the effort (Kolata 2009b)....Gina Kolata pointed out in The New York Times that the cancer death rate, adjusted for the size and age of the population, has decreased by only 5 percent since 1950 (Kolata 2009a)." [1]

And this was just the US federal government investment, not counting private donations and private company research. Today the federal investment is $5 billion annually [3]. I do not mean to sound totally discouraged, as the screenings have clearly helped many to detect cancers before they metastasized, and I would say the results show that that part of the research is working well.

However,for the cancers that can be rarely be detected before the spread (eg pancreatic cancer and others) -- the investment our country and other societies have put in -- simply has not payed off.

What worries me is that our research quality gates are not able to improve the QoS of the underlying research.

And with my 'management hat' on, I am reaching for this quote by Einstein:

"Insanity: doing the same thing over and over again and expecting different results.

The OP paper is not the first one pointing at the lack of reproducible results, and it is not just cancer research: "...But it may also be due to the current state of science. Scientists themselves are becoming increasingly concerned about the unreliability -- that is, the lack of reproducibility -- of many experimental or observational results..." [2]

There needs to be a bit of a revolution in the science of cancer research and in the way money is allocated to it. Clearly the current model does not work and is likely encouraging pseudo-science to prosper.

[1] http://www.csicop.org/si/show/war_on_cancer_a_progress_repor...

[2] http://www.forbes.com/sites/henrymiller/2014/01/08/the-troub...

[3]http://blogs.reuters.com/stories-id-like-to-see/2014/09/09/t...

13
anonbanker 22 hours ago 0 replies      
"the group clarified that they did not want to replicate the 30 or so experiments in the Cell paper, but just four described in a single key figure. And those experiments would be performed not by another academic lab working in the same area, but by an unnamed contract research organization."

sounds like someone wants to quietly weaponize this.
