What is the root problem? People on both sides of the debate agree (if given the option) that the government probably never should have messed with marriage, at least not as the cultural/religious thing that it is.
In a nation where we care so much about the separation of church (broadly defined to include ideologies that may not be formal religions) and state, I don't understand why we're seeking only to expand that connection.
What should happen is the government should stop defining marriage of any form (leave that to religion or personal tradition), and simply define all these rights under civil union (or a similar phrase with no significant religious/cultural attachment).
"It is now clear that the challenged laws burden the liberty of same-sex couples, and it must be further acknowledged that they abridge central precepts of equality . . . Especially against a long history of disapproval of their relationships, this denial to same-sex couples of the right to marry works a grave and continuing harm. The imposition of this disability on gays and lesbians serves to disrespect and subordinate them. And the Equal Protection Clause, like the Due Process Clause, prohibits this unjustified infringement of the fundamental right to marry." (page 22, from the coverage on SCOTUSBlog)
For others, the opinion also reasserted that people who are really mad about this can continue to be mad and vocal about it, as guaranteed by the First Amendment.
"Finally, it must be emphasized that religions, and those who adhere to religious doctrines, may continue to advocate with utmost, sincere conviction that, by divine precepts, same-sex marriage should not be condoned. The First Amendment ensures that religious organizations and persons are given proper protection as they seek to teach the principles that are so fulfilling and so central to their lives and faiths, and to their own deep aspirations to continue the family structure they have long revered. The same is true of those who oppose same-sex marriage for other reasons. In turn, those who believe allowing same-sex marriage is proper or indeed essential, whether as a matter of religious conviction or secular belief, may engage those who disagree with their view in an open and searching debate." (page ~32)
The full majority opinion [PDF]: http://www.supremecourt.gov/opinions/14pdf/14-556_3204.pdf
Which, unlike same-sex marriage, is an institution with deep roots both in America (the Mormons were forced to give up this sacrament as a condition of statehood) and in the majority of world cultures, where it ranges from condoned to celebrated.
Without getting unduly personal, let's say that I have a stake in that question being resolved. I know several triples living quietly among us; they face the same kind of problems (child custody, hospital visitation, inheritance rights) as same-sex couples faced prior to this decision.
What the polygamists of the nation lack is a powerful lobby. <shrug> One may hope that nonetheless, reason and freedom will prevail here as well.
EDIT: nation, not world. Worldwide the situation is different. America is suffering from its Christian legacy here. Most Christian countries are adamant about denying this right to their citizens.
But the idea that "the Constitution guarantees a right to same-sex marriage" is pretty laughable.
Does anyone really believe this right was in the Constitution for more than two centuries, only to be discovered recently? In reality, public opinion and culture changed, and 5 justices decided to change the law.
When I was a kid, gay people were practically lepers.
Eleven years ago, Dave Chappelle just outright said "gay sex is just gross, sorry it just is", and it was considered funny and acceptable. (Not harping on him specifically, just pointing out what it was like in 2004.)
Seven years ago Prop 8 passed, if barely, with some caveats about lack of understanding.
And now? SCOTUS upholds gay marriage and it's socially reprehensible to mock homosexuality. It's a strange and very positive feeling watching a country's world view shift like this.
Unfortunately it's still legal in many states to discriminate against employees on the basis of sexual orientation. Hopefully that's next to be fixed.
This talk about "church's definition of marriage", etc. is a red herring, and just a couched way of saying "we don't like homosexuality and homosexual behavior".
I got married in India. In a ceremony presided over by a local priest. There was no "church" involved. But guess what? No Christian here (in the US) has ever doubted the authenticity of my marriage.
And then I got divorced in the US. The courts here had no problem recognizing my marriage, even though it was performed in some other country, by some unknown religious authority. The officials had no hesitation in breaking up this marriage. Why don't we require the Church's blessing to break up a marriage (I am aware that Catholics have a certain process of appealing to the Pope, but not all churches do)?
If you don't support the idea of the government getting involved in marriage, you shouldn't support the idea of government-approved divorces either! Go to your church and get a divorce!
Polygamy is the next hurdle for society.
The judges should declare all marriage between consenting adults legal!
I understand the social consequences. But what legal rights have been granted now that did not exist previously?
I think in the long term all less efficient, less happiness-producing systems will lose out anyway. All you need to do is ensure that your system protects the minority enough so that they can freely compete and coexist with all the rest.
But I'm a libertarian, so I think all laws regarding sexual relations between consenting adults should be repealed whether gay or straight...
Edit: To clarify, I support marriage equality and believe this is great news that deserves celebrating.
Not just because I'm a lesbian (I'm happily single, so no marriage for me regardless), but because it's a fundamental right that shouldn't be denied to anyone. It really warms my heart that everyone can finally marry the people they love.
> Lawyers for the four states said their bans were justified by tradition and the distinctive characteristics of opposite-sex unions.
Take a dollop of tradition says it should be so, and sprinkle on distinctive characteristics to taste.
Mary Bonauto argued it perfectly, I thought.
The other day I was watching Eddie Murphy's "Delirious" special. It's widely considered one of the best standup specials ever and it's Eddie in his prime. But he spends the first ~5 minutes just spewing anti-gay jokes. Not hateful stuff, but just saying over and over how he's scared of gay people, etc. And he was probably the biggest star in the country at the time and at the peak of his abilities as a comic. I don't think that would go too well now (even if it is comedy).
Anyway, congratulations to anyone who was previously unable to get married and can now do so. It's a real victory for the good guys.
0 - https://en.wikipedia.org/wiki/Eddie_Murphy_Delirious
The government should not enforce marriage contracts, they should enforce legal contracts. Marriage is a private matter, not a government matter.
They shouldn't officially recognize marriage at all, nor should they discriminate on marital status for tax purposes. All people should be equal in the eyes of the law.
People should be free to enter into legal contracts with whomever they want.
The ruling that had vastly more wide-reaching effects this week is that they upheld the terrible "Affordable" Care Act. This act is a capitalist abomination of much more well thought-out socialist single-payer plans. Now that health insurance companies have a state granted monopoly, there's no reason to bring prices down or change anything.
Any social conservatives on HN -- that is to say, both of you -- should keep in mind that if you're worried about how this affects the sanctity of marriage, that institution has long since been sullied by a) allowing government to get involved with it, b) easily-obtainable divorces and c) that whole Henry VIII business. Same-sex couples can't possibly do any more damage than that.
Of the threats to American culture or even Western Civilization as a whole, the SSM boogeyman pales in comparison to a feckless electorate, unaccountable government with Big Brother aspirations, crushing debt and even Islamic extremists.
That's why my reaction is "meh": as a "problem", SSM isn't even on the radar.
> As late as October, the justices ducked the issue, refusing to hear appeals from rulings allowing same-sex marriage in five states. That decision delivered a tacit victory for gay rights, immediately expanding the number of states with same-sex marriage to 24, along with the District of Columbia, up from 19.
> Largely as a consequence of the Supreme Court's decision not to act, the number of states allowing same-sex marriage has since grown to 36, and more than 70 percent of Americans live in places where gay couples can marry.
An earlier SCOTUS decision would have taken away our ability to show consensus on the issue.
I'm glad that I was able to personally vote "No" to an amendment in MN that would have banned marriage rights.
Only 60% of US citizens support same-sex marriage? In my area it is more like 90-95%; this was surprising and sad for me to hear.
Can someone sense the creepiness in this? It tells you how/who/why you should love.
It still doesn't include certain groups and it could revoke such 'rights' in other circumstances.
Do I have to wait another century?
15 years ago we solved the Y2K problem. Now we've got to solve the SQL2Gays problem!
Some countries where gay marriage is illegal: Germany, Italy, Australia, Japan, Austria, Switzerland, Greece.
Hacker News, you disappoint me.
I'm a Sublime license holder, but I use Atom as much as I can, because the more open source can win, the better.
However, yesterday I was doing some complex regexes (porting a random SQL dump file into a seeds.rb), and Atom kept dying, whereas Sublime was pretty much instantaneous.
I'm not doing the usual "Atom is slow" drum beating, but saying some undertones of the announcement make me worry a bit. I hear discussion of things like Electron and "social coding" as the future, and I'm hoping that means that no one considers 1.0 to equate to the core editing experience being finished. It's not, and I hope the Atom team continues to iterate before moving on to new features.
Being able to open files larger than 2MB isn't sexy, but it's necessary. Having to hard-kill my editor because the save dialog is trapped on my other full screen session that it won't let me get to deserves more than a "but it's open source" response.
tl;dr: congrats team; your core users want the best editor possible over bells and whistles
I usually install software in my user folder on the work laptop, as I don't have enough privileges. This time the installer worked, but why override the questions to the user, like install location, etc.? There's a standard for Windows installers; why did they ignore it? Not cool.
Atom is my favourite editor for coding in, and it just keeps getting better.
I introduced my team to it today (pre 1.0 release, this is a nice surprise) and they were surprised by how pleasant the experience was - just a few minor hiccups. We've tried a bunch of editors and usually stick with Sublime because it's easiest to use while pairing, but I think that will change now.
Sorry for the tough HN crowd, you can never please them.
Here's to Atom 2.0 <3
It's super easy to hack on and contribute to.
Also! I was able to create the colorscheme of my dreams in about 15 minutes, thanks to the dev tools integration.
E.g. I can't type '@', '\', and ''. Yes, I can't write metadata annotations or escape some characters.
The ease of finding and installing themes and plugins is unparalleled.
Considering trying it for a week or two as my daily driver (with vim mode, of course.)
However, one thing that stands out to me, the file size of Atom.app is 203MB!! How in the world can a text editor be that large? Compare that with MacVim, which is about 27MB.
The installer is almost 10 times as large as the Sublime Text installer.
Please leave "web technologies" where they belong.
Later in my search for an editor that handles EJS, I rediscovered Atom. It really has improved since it first started. AFAIK, Atom and Sublime are the only editors that handle EJS. I also use Atom to edit JS, JSX, gradle, and FTL which work well as well. Still I stick to IntelliJ for most programming languages since I haven't found a way to get code completion, reference jumping, etc to work on Atom.
Very impressive work from the Atom team and the contributors!
I started using Atom a year ago, but at that time it was very unstable and the performance sucked, so I switched back to Vim.
This 1.0 still leaves something to be desired: some of the essential packages are still not updated for the 1.0 API (vim-mode, etc.), and it still slows down significantly when processing large files, but as they say, it's now a good foundation to build upon.
This issue is still present in the current release. It seems like a minor annoyance but when it happens it really kills my productivity.
I wanted to download this but after clicking every link I still hadn't seen a way to do it anywhere...
Obviously if I go to the homepage now the first thing I see is a big download link, which is great.
I think a 'download' link on the site though would be good since if anyone links ANYWHERE else it's hard to find.
But the biggest issue for me is battery usage: Atom cuts my battery life on my RMBP15 by 2 hours compared to Sublime Text. I am mostly working from remote places, and good battery life is vital for me.
 - https://www.evernote.com/shard/s21/sh/cc73487c-08c9-4937-ac6...
I do need to give Visual Studio Code a fair shot. Heard a lot of good things about it.
Then I remember trying to give it a try once again a few months ago, but I gave up because I'd heard so many horror stories about performance issues.
Now I'm willing to give it yet another try because of the vim-bindings and performance improvements. Is it in a workable state?
And in case you're wondering about the video, well, it's this 50-year-old documentary, "The Home Of The Future: Year 1999 A.D.": https://www.youtube.com/watch?v=0RRxqg4G-G4
...but realizing the full potential of Atom is about more than polish. We're considering questions such as: What does super deep git integration look like? What does "social coding" mean in a text editor? How do we enable package authors to build IDE-level features for their favorite language?
Things I'm interested in ---> A hackable, fast, extensible editor.
Things I have no interest in at all ---> 'Super Deep' github integration, 'social' coding in my text editor.
Don't get me wrong, Atom's a great piece of work, and making it extensible for building custom tooling is really great, but what on earth are you talking about?
I hope this is just 'and now we're going to make some plugins' talking...
I really can't get my head around it. It's such a non-issue for me.
I care so much more about general performance post-startup. I wouldn't even bring startup speed up as an issue as long as it's in the few-seconds range, which it always was for me using Atom.
Is there no recent files menu? Or am I just missing where they placed it?
I do like the find/replace UI compared to ST3, but the lack of a recents menu and it choking if I accidentally click a large file just aren't making me feel the need to swap to this.
The editor still seems crude, though. The new install used some old packages from a previous install that I thought was uninstalled!? It also called home to report a bug without asking for permission. Then it froze after I had uninstalled the old package.
That mid century video is hilarious.
Incredible promo video though!!!
no. please stop spamming unrelated stuff in the hn comments
I don't know anything about Atom, but I'm willing to bet there's really nothing this software is doing that prevents you from sleeping at night.
This article neatly demonstrates that resumes are not necessary and that not using them can unlock new sorts of candidates.
However, I don't think there's a conclusion to be made about the actual method used here. I suspect that it worked because it was different, not because it carried a fundamentally strong signal. If everyone did this, project descriptions would be gamed even more than resumes; it would select for people who prepared for the selection process more than anything else.
This reminds me of various captcha strategies I've seen used by small forums to great effect: solving some math, typing a word into a text box, choosing a popular character's picture, etc. They all work, perfectly. But only because spammers don't care about the small fry: it's not worth their time to modify their bots for your little site. If any given captcha becomes used widely, or your forum grows big enough, they will bypass it trivially.
Now, an essay like this isn't quite as bad as a captcha, but the idea is the same: it works because it's new and different. If everybody used it, it would probably be a step back.
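To make the "small fry" point concrete, here is a rough sketch of the kind of one-off arithmetic captcha a small forum might roll by hand; the `make_captcha`/`check_captcha` names are hypothetical, not from any real forum software:

```python
import random

def make_captcha(rng=None):
    """Build a trivial arithmetic challenge of the kind small forums use.

    Returns (question, expected_answer). Worthless against a bot written
    to target it, but effective precisely because nobody bothers to.
    """
    rng = rng or random.Random()
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    return f"What is {a} + {b}?", str(a + b)

def check_captcha(expected, submitted):
    """Accept the answer, tolerating surrounding whitespace."""
    return submitted.strip() == expected

question, answer = make_captcha(random.Random(7))
assert check_captcha(answer, f"  {answer} ")
```

The whole "scheme" is a dozen lines, which is exactly why it stops working the moment anyone has an incentive to look at it.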
Ultimately, I think the real moral is that more companies should do their own thing, even if that thing is not great in the abstract. Being different carries a value of its own, and it breeds biodiversity that's healthy for the system as a whole. (Of course, many of the things companies try are really bad for various reasons, but that's a different story)
In particular, most people have a bunch of "red flags" they look for with, at best, cursory rationale: everything from passing on people who didn't go to the right school to those who have breaks in their work history, based on "common sense" or "experience" rather than anything meaningful. Most of these criteria seem counter-productive.
I also think this is really true for college admissions and especially the admissions essay. A project blurb for hiring is more or less the same idea in a new context.
I have to say though, that in my experience, these experiments in sourcing work quite well when your hiring is small. The moment you hit some sort of scale, it becomes very very difficult, if not impossible to run and rely on such experiments.
E.g. in the first growth phase at Box, we were tasked with hiring 25 engineers a quarter. At that scale, the company deals with too many resumes and too many stakeholders in the hiring process. And at that point, you also have a group of people explicitly looking at resumes, less involvement from actual hiring managers, deadlines to meet, land to grab etc. Not saying one thing is better than the other, just that hiring at scale is an entirely different game.
The other thing, which is implied in the article, but may get lost if the reader isn't careful: regardless of how a candidate is sourced, the interview bar still remains the same. i.e. AJ also must have had to clear same or similar technical interviews like other engineers that got hired there.
It would be great if the majority of companies used both the resume and the cover letter effectively. It feels like most companies that require a cover letter only do so to screen out the laziest 10% who can't be bothered to write up a generic 1 page essay filled with ass-kissing and vague jargon.
The cover letter is just a relic from the olden days when the application process was slower and more formal. There were fewer applicants for each position, so HR probably had more time to read/screen.
This study presents an interesting alternative: Let people submit some text along with their resume on any topic of any length, and see how their personality comes through in the writing. Probably wouldn't work extremely well at a large company, but it seems like it served KeepSafe quite well.
When I interview, I tend to spend most of the time asking in-depth questions about the projects I find most interesting on the resume. What was easy? What was hard? X sounds like it would be a problem, how did you solve it? What was fun? What was head-bang-on-the-wall miserable? Generally this gives a sense as to whether or not there's any bullshitting going on, and gives a sense for whether or not the candidate has a good head for thinking about hard problems.
Finally, I'll ask a few questions to probe for "difficult-to-work-with" red flags and finish with a few fairly easy "technical challenges" that offer opportunity for the candidate to either walk away having solved the problem, or walk away having solved the problem and demonstrated understanding of the solution from top to bottom.
I don't really care where anyone went to school. It doesn't mean anything. Really, going to school at all doesn't mean much. I need to see what you've done outside of that to make any meaningful evaluation. It doesn't matter if it's a huge project. You can give me a couple 10-line things that do something useful and I'll still get to see how you name things, format code, use built-in libraries, etc. Then we can chit chat about project management and how much you love or hate it.
The benchmark was performance in a long form coding interview for TripleByte, whereas Aline's is the final offer, so not exactly apples to apples.
For example, I don't do well in whiteboard interviews, which is odd because I normally don't have a public speaking issue. It feels like there's some muscle memory attached to coding that isn't well replicated with poor handwriting in a room full of people.
Whiteboard lines of code are simply not the manner in which developers work once hired. That is the reason for the disconnect between speaking well about projects (easy to verbally explain and sketch) and the programming portion (bizarre.)
Right the industry is doing the equivalent of interviewing lawyers by asking them to write a legal brief on a white board.
We're testing the wrong thing: a proxy for the work, when we could easily test the work itself.
I much prefer work sample tests rather than whiteboard Q/A as it better replicates the actual job. Give me a few hours with problems I would actually face on the job, my dev environment, internet access, and a set of problems that truly reflect the work, and I find it much more natural.
Is it too much to ask that an interview measure skills the job actually requires, in an environment that emulates the work?
A few months ago I launched a side project, doing (of all things) resume review and revision services. When my clients want a review of a resume that I know won't get results, and I ask "Give me more to work with", the types of things I hear are eerily similar to the "awesome stuff" quotes in this post. I try to incorporate those things into the resume when possible.
Is it the resume itself that is the problem, or is it that candidates are just less inclined to include additional details (that may seem irrelevant) that could differentiate them from others? Some resumes will list accomplishments that make it rather clear of their qualifications, but everyone doesn't have that luxury.
When a candidate doesn't have a long list of work accomplishments, do they think to include this type of content that might get our attention?
The difficulty we had was not seeing a strong correlation between talking about projects and doing well at programming during an interview.
> It was AJ, a candidate that Zouhair Belkoura, KeepSafe's cofounder and CEO, readily admits he would have overlooked had he come in through traditional channels.
This was the story of my job search three years ago. It still kinda is.
That's true, but a good cover letter can go a long way toward helping with this. Most cover letters are generic, bland, and obviously copy-pasted from a template. (Or more often from a previous application, sometimes with info about the previous company left in!) A cover letter that talks about something exciting you've done recently, and ideally how it might be related to the job, or even just how it demonstrates skills you'll use in the job (and describes exactly how), is awesome in comparison. A letter like that would absolutely get you an interview with me, almost regardless of experience. One of our current co-op students actually had almost no programming experience on paper; he had actually switched out of a theatre degree iirc. But his cover letter was awesome (the theatre degree probably not being coincidental). Got him the interview, which got him the job, and I haven't regretted it. Just a co-op of course, but the point stands. The cover letter is probably the most important part of your application. Take the time to write a good one.
It comes down to whether or not the people doing the recruiting all have the same subjective opinion.
Edit: Clarifying that this is for non-engineering roles.
This lady claims that the company is fighting for candidates with Google, although the only thing they do (if I read it right) is provide an encrypted version of Dropbox. How does this require world-class engineers? I've coded a file syncing app quite fast as a personal project once, and I don't think I could call myself even a regular developer. I do not believe such an application would be even remotely as complex as anything Google does.
Seems your example changed the incentives, got useful information in return, and that led to a positive result. Unsurprising in hindsight. I'm going to send your article to a few people to see if I can get any to try that approach.
I am a little puzzled, though, about why others seem to find resumes so opaque. It seems like resume-reading is a lost art. A resume is usually a document that someone has spent a lot of effort on to make themselves look good. If you learn to read them, that can tell you a lot about the author. (Note: searching for buzzwords is not "reading.") A resume should not be regarded as simply a collection of facts - of course you'll be misled if you do that; a resume should be regarded as a document of self-expression. After a while, you can see useful patterns in what people put in resumes - at least for more-experienced applicants. Almost every resume suggests a bunch of next questions, which can be asked in a phone screen or interview to get a pretty good idea of what a person is about.
It's worth recalling that absolutely all software engineers at all software companies in the world, from the first ones around 1955 up to 2002, were hired without benefit of LinkedIn, StackOverflow, or GitHub. Almost all of these engineers submitted resumes, which were reviewed prior to offering interviews. Yes, there were hiring mistakes in the old days, but I don't see a huge number of people talking about how the hiring process now is so much easier, smoother and more foolproof than it used to be.
Huh? The companies this woman hires for don't look at GitHub? Not looking at public code that someone has published is more broken than relying on resumes. If someone has published code and it doesn't suck, I'll probably bring them in for an on-site, period. I may even tell them, "We're going to talk about file foo.c in your code, where you implemented feature Z. So be prepared."
And, I suspect with startups it was more a case of "How many years were you in government? That would make us so unhappy that we would leave. Why didn't you?" That's a different way of asking "Is this really the place for you?"
As a hiring manager in a startup, when I knew I only had 9 months of runway without more funding, I'd feel REALLY bad about taking someone with a family away from their very stable job. As someone who has recruited employee single digit, I often have made a point to meet the family when recruiting someone--even if I have to fly to them. I need both the prospective employee and their partner to understand that the big probability is that the company won't be around in 24 months, there won't be any payoff, and a new employment search is likely to be the result. Yeah, there is a small probability that we'll survive and an even smaller probability that we'll get some money. It's a really delicate balance for me, at least, to properly sell the company (Startup! Options! Novel!) and reality (Bankrupt! Flameout! Layoffs!).
I'd say I'm batting about 50%. For every employee I scare off, I absolutely convince one to join. Funnily enough, every single one who didn't run away said the same thing: "My wife told me I had to work with you." They were stunned that someone so important (Hah! Management in a startup is a good way to understand how unimportant you are really quickly ...) would take the time to make sure the family was informed properly about the risks and rewards.
I was a help desk pleb at a well known inkjet/scanner/camera company 15 years ago, and this company "extended" their Clipper database to record third-party cartridges, but recorded them in .ini format. That's right, one file per record, in key=value pairs. I was bored and accidentally mentioned to the guy whose job it was to copy and paste the data from each of all 90,000 ini files into an Excel spreadsheet that Perl could do it, and I'd even use references to hashes to do it. He had no idea about that last bit, but I did it for him on the proviso that he didn't tell anyone, and reduced 10 weeks of work to 30 seconds. They unfortunately made me employee of the quarter but neglected to tell me, so I missed my awards ceremony.
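The original Perl isn't shown, but the one-file-per-record merge is only a few lines in any scripting language. A rough Python equivalent (hypothetical `parse_ini`/`collect` helpers, assuming flat key=value files with one record per file) might look like:

```python
import csv
from pathlib import Path

def parse_ini(text):
    """Parse a flat key=value file (one record per file) into a dict."""
    record = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith((";", "#", "[")):
            continue  # skip blanks, comments, and section headers
        key, _, value = line.partition("=")
        record[key.strip()] = value.strip()
    return record

def collect(folder, out_csv):
    """Merge every .ini record in `folder` into a single CSV table."""
    records = [parse_ini(p.read_text())
               for p in sorted(Path(folder).glob("*.ini"))]
    # Union of all keys seen, since records may have different fields.
    fields = sorted({k for r in records for k in r})
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(records)  # missing keys become empty cells
```

Pointed at a folder of 90,000 such files, this produces one spreadsheet-ready CSV, which is essentially the ten-weeks-to-thirty-seconds trick described above.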
Not saying the process described in the article is bad, even though I believe anything can be gamed, but I don't really see a big difference. The main change is the way recruiters looked at what they got; resume or essay wouldn't have changed much, I think.
Maybe off topic but if companies want the best people, maybe THEY should write the essay explaining why people should join instead of sitting in their high tower waiting for minions to come.
Worked out really well. Not sure it's to be duplicated, but for me it went fantastically.
However, how can we conclude anything from a procedure that only examines the hired population and none of the unhired?
Even if not using GapJumpers itself, you can follow the concept by requesting solving a problem or submitting a piece of original technical content along with the resume.
1. Github a/c
2. StackOverflow a/c
3. Their blog
4. Anything they made online
If a candidate fails to submit a link for any of the above, then just don't interview them. I would guesstimate this simple check filters out 70% of the junk resumes and probably 20% of the good resumes. It can scale like crazy and be expanded even more (for example, use APIs to get their profile information and rank resumes).
I can just see this guy going out to Web devs, System devs, DBAs etc. and them all disagreeing because they're looking for different things (and value things differently).
This article starts out with an air of science and ends with a completely unproven conclusion.
While I do agree in my gut that resumes are not an amazing filter, she has completely failed to present evidence that her alternative interview process is better.
And in fact, while KeepSafe still has the no-resumes option open, they are now accepting resumes again -- I do not have great confidence that the alternative system was anything more than a PR move by the company.
Many of SBCL's optimizations are fine grained selectable, using internal SB-* declarations. I know I was at least able to turn off all optimizations for debug/disasm clarity, while specifically enabling tail recursion so that our main loop wouldn't blow up the stack in that build configuration. These aren't in the main documentation; I asked in the #sbcl IRC channel on FreeNode.
You can directly set the size of the nursery with sb-ext:bytes-consed-between-gcs, as opposed to overprovisioning the heap to influence the nursery size. While we've run in the 8-24GB heap ranges depending on deployment, a minimum nursery size of 1GB seems to give us the best performance as well. We're looking at much larger heap sizes now, so who knows what will work best.
While we haven't hit heap exhaustion conditions during compilation, we did hit multi-minute compilation lags for large macros (18,000 LoC from a first-level expansion). That was a reported performance bug in SBCL and was fixed a while back. Since the Debian upstream for SBCL lags the official releases quite a bit, it's always a manual job to fetch the latest versions, but quite worth it.
Great read, and really familiar. :-)
I wonder if they're still hiring Lispers. I once passed on the opportunity to work in their Kiev office, but I might give it a shot again.
"We've built an esoteric application (even by Lisp standards), and in the process have hit some limits of our platform. One unexpected thing was heap exhaustion during compilation. We rely heavily on macros, and some of the largest ones expand into thousands of lines of low-level code. It turned out that SBCL compiler implements a lot of optimizations that allow us to enjoy quite fast generated code, but some of which require exponential time and memory resources. "
I used CL in a production environment a while back for a threaded queue worker and nowadays as the app server for my turtl project, and I still have yet to run into problems. It seems like you guys managed to push the boundaries and find workable solutions, which is really great.
Thanks for the writeup!
I may be in the minority, but that would drive me mad. I assume they're not routinely jumping between those stacks multiple times a day, but even so, is there really so much benefit that it's worth keeping track of how to do things in that many different environments?
Does anyone here have any experience with the GCs of Allegro or LispWorks or any other commercial Lisp implementations?
Which is exactly why I feel Lisp doesn't see much use elsewhere :(
Edit: It's still not advice I would pay for, though.
it's for assessing a project on day one, when you join, especially for "rescue mission" consulting. it's most useful for large projects.
the idea is, you need to know as much as possible right away. so you run these scripts and you get a map which immediately identifies which files are most significant. if it's edited frequently, it was edited yesterday, it was edited on the day the project began, and it's a much bigger file than any other, that's obviously the file to look at first.
we tend to view files in a list, but in reality, some files are very central, some files are out on the periphery and only interact with a few other files. you could actually draw that map, by analyzing "require" and "import" statements, but I didn't go that far with this. those vary tremendously on a language-by-language basis and would require much cleverer code. this is just a good way to hit the ground running with a basic understanding which you will very probably revise, re-evaluate, or throw away completely once you have more context.
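A minimal sketch of the "which files matter" map described above, assuming git: rank files by how often they appear in the commit history. The throwaway repo and filenames here are fabricated for the demo.

```shell
#!/bin/sh
# Build a tiny demo repo, then rank its files by edit frequency.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
for i in 1 2 3; do
  echo "$i" >> core.c
  git add core.c
  git -c user.email=a@b -c user.name=demo commit -q -m "touch core $i"
done
echo x > util.c
git add util.c
git -c user.email=a@b -c user.name=demo commit -q -m "add util"

# Each commit lists the files it touched; count and sort descending.
# The most-edited file (core.c here) comes out on top.
git log --name-only --pretty=format: | grep -v '^$' | sort | uniq -c | sort -rn
```

Combined with file size and first/last-touched dates, this one-liner gives a surprisingly useful first map of an unfamiliar project.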
but to answer your actual question, you do some analysis like this every time you go into an unfamiliar code base. you also need to get an idea of the basic paradigms involved, the coding style, etc. -- stuff which would be much harder to capture in a format as simple as bash scripts.
one of the best places to start is of course writing tests. Michael Feathers wrote a great book about this called "Working Effectively with Legacy Code." brudgers's comment on this is good too but I have some small disagreements with it.
My comment from that thread:
I do the deep-dive.
I start with a relatively high level interface point, such as an important function in a public API. Such functions and methods tend to accomplish easily understandable things. And by "important" I mean something that is fundamental to what the system accomplishes.
Then you dive.
Your goal is to have a decent understanding of how this fundamental thing is accomplished. You start at the public facing function, then find the actual implementation of that function, and start reading code. If things make sense, you keep going. If you can't make sense of it, then you will probably need to start diving into related APIs and - most importantly - data structures.
This process will tend to have a point where you have dozens of files open, which have non-trivial relationships with each other, and they are a variety of interfaces and data structures. That's okay. You're just trying to get a feel for all of it; you're not necessarily going for total, complete understanding.
What you're going for is that Aha! moment where you can feel confident in saying, "Oh, that's how it's done." This will tend to happen once you find those fundamental data structures, and have finally pieced together some understanding of how they all fit together. Once you've had the Aha! moment, you can start to trace the results back out, to make sure that is how the thing is accomplished, or what is returned. I do this with all large codebases I encounter that I want to understand. It's quite fun to do this with the Linux source code.
My philosophy is that "It's all just code", which means that with enough patience, it's all understandable. Sometimes a good strategy is to just start diving into it.
After that, if I don't have a particular bug I'm looking to fix or feature to add, I just go spelunking. I pick out some interesting feature and study it. I use pencil and paper to make copious notes. If there's a UI, I may start tracing through what happens when I click on things. I do this, again with pencil and paper first. This helps me use my mind to reason about what the code is doing instead of relying on the computer to tell me.

If I'm working on a bug, I'll first try and recreate the bug. Again, taking copious notes in pencil and paper documenting what I've tried. Once I've found how to recreate it, I clean up my notes into legible recreate steps and make sure I can recreate it using those steps. These steps are later included in the bug tracker. Next I start tracing through the code taking copious notes, etc, etc. yada yada. You get the picture.
Set a breakpoint, burn through the code. Chrome has some really nice features - you can tell it to skip over files (like jQuery) you can open the console and poke around, set variables to see what happens.
Stepping through the code line by line for a few hours will soon show you the basics.
I use a large format (8x11 inch) notebook and start going through the abstractions file by file, filling up pages with summaries of things. I'll often copy out the major classes with a summary of their methods, and arrows to reflect class relationships. If there's a database involved, understanding what's being stored is usually pretty crucial, so I'll copy out the record definitions and make notes about fields. Call graphs and event diagrams go here, too.
After identifying the important stuff, I read code, and make notes about what the core functions and methods are doing. Here, a very fast global search is your friend, and "where is this declared?" and "who calls this?" are best answered in seconds. A source-base-wide grep works okay, but tools like Visual Assist's global search work better; I want answers fast.
Why use pen and paper? I find that this manual process helps my memory, and I can rapidly flip around in summaries that I've written in my own hand and fill in my understanding quite quickly. Usually, after a week or so I never refer to the notes again, but the initial phase of boosting my short term memory with paper, global searches and "getting my hands to know the code" works pretty well.
Also, I try to get the code running and fix a bug (or add a small feature) and check the change in, day one. I get anxious if I've been in a new code base for more than a few days without doing this.
One thing I do to familiarize myself with a code base is to look at how the data is stored. Particularly if it's using a database with well-named tables, I can get a rough idea of how the system works. Then from there I look at the other data objects. Data is easier to understand than behavior.
The other is watching the initialization process of the application with a debugger or logger. Along those lines, if you're lucky (my opinion) and the application uses dependency injection of some sort, you can look to see how the components are wired together. Generally there is an underlying framework to how the code pieces work together, and that generally reveals itself in the initialization process if it's not self-evident.
As such my first steps are:
1. tidy/beautify all the code in accordance with a common standard
2. read through all of it, while making the code clearer (split up if/elsif/else christmas trees, make functions smaller, replace for loops with list processing)
While doing that I add TODO comments, which usually come with questions like "what the fuck is this?", and make myself tickets with future tasks for cleaning up the codebase.
By the end of it I've looked at everything once, have a whole bunch of stuff to do, and have at least a rough understanding of what it does.
I just cannot believe people praising 'Unit Test'-ing. Fellow programmers, how exactly do you unit test a method / function which draws something on the canvas for example? You assert that it doesn't break the code?!
I see some really talented people out there who write unit tests as proof that their code works without issues, that it's awesome and it cooks eggs and bacon, etc. They write such laughable tests you cannot even tell if they are joking or not. They test whether the properties/attributes they are using in methods are set or not at various points in the setup routine. Or whether some function is being called after an event is triggered.
My point is this: unit testing can only cover such tiny, tiny scenarios and mostly logic stuff that it is almost useless for understanding what is going on in the big picture. Take for example a Backbone application like the Media Manager in WordPress. Please tell me how somebody can even begin to unit test something like that.
Unit testing is a joke. And sometimes a massive time consuming joke with a fraction of a benefit considering the obvious limitation(s).
I always start by gauging how much source code there is and how it's structured. The *nix utility "tree" and the source code line counter "cloc" are usually the first 2 things I run on a codebase. This tells me what languages the applications uses, how much of each, how well commented it is and where those files are.
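When tree or cloc aren't installed, a rough version of that first-pass sizing can be improvised with find and wc. The sample files below are invented purely for the demo.

```shell
#!/bin/sh
# Crude cloc substitute: count files and lines per extension to see
# which languages dominate a source tree. Demo tree is fabricated.
set -e
src=$(mktemp -d)
printf 'int main(void) { return 0; }\n' > "$src/main.c"
printf '#include "a.h"\nint f(void);\n'  > "$src/a.h"
printf 'all:\n\tcc main.c\n'             > "$src/Makefile"

cd "$src"
for ext in c h; do
  files=$(find . -name "*.$ext" | wc -l)
  lines=$(find . -name "*.$ext" -exec cat {} + | wc -l)
  echo "$ext: $files file(s), $lines line(s)"
done
```

This misses comments-vs-code ratios, which is where the real cloc earns its keep, but it answers "how big is this, and in what?" in seconds.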
The next thing I usually do is find the entry point of the program. In my case this is usually an executable that calls into the core of the library and sets up the initial application state and starts the core loop and routine that does the guts of the work.
Once I have found said core routine I try to get a grasp of what the state machine of the program looks like. If it's a complicated program this step takes quite a while, but it is very important for gaining an intuitive understanding of how to either add new features or fix bugs. I like to use pen and paper to help me explore this part, as I often have to backtrack over source files and re-evaluate what portions mean.
Once I have what I think is the state machine worked out I like to understand how the program takes input or is configured. In the case of a daemon that often means understanding how configuration files are loaded and how the configuration is represented in memory. Important to cover here is how default values are handled etc. I actually prioritise this over exploring the core loops ancillary functions (the bits that do the "real" work) as I find it hard to progress to that stage without understanding how the initial state is setup.
Which brings us to said "real" work. Hanging off of the core loop will be all the functions/modules that are called to do the various parts of the program's function. By this time you should already know what these do, even if you don't know how they work. Because you already have a good high-level understanding at this point, you can pick and choose which modules you need to cover and when to cover them.
1. The Mile High View: A layered architectural diagram can be really helpful to know how the main concepts in a project are related to one another.
2. The Core: Try to figure out how the code works with regard to these main concepts. Box-and-arrow diagrams on paper work really well.
3. Key Use Cases: I would suggest tracing at least one key use case for your app.
This allows you to go down any code rabbit hole, figure stuff out, then get back to where you were. If you can't do those things it will take much longer to understand how things are interconnected.
Some others have mentioned recency/touchTime as another signal. For large complex codebases, that may or may not always work.
1. Be sure that you can compile and run the program
2. Have good tools to navigate around the code (I use git grep mostly)
3. Most apps contain some user or other service interaction - try to pick some easy bit (like a request for capabilities or some simple operation) and follow it to the end. You don't need a debugger for it - grep/git grep is enough; these simple tools will force you to understand the codebase deeply.
4. Sometimes writing UML diagrams works -
- Draw the diagrams (class diagrams, sequence diagrams) of the current state of things
- Draw the diagrams with a proposal of how you would like to change
5. If it is possible, use a debugger, start with the main() function.
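The grep-only tracing in step 3 can be sketched like this; the handler names, files, and "request" are all hypothetical stand-ins for whatever keyword your real request uses.

```shell
#!/bin/sh
# Sketch: follow one request end-to-end with nothing but grep.
# The two source files below are invented for the demo.
set -e
src=$(mktemp -d)
cat > "$src/routes.c" <<'EOF'
/* GET /capabilities -> handle_capabilities() */
void handle_capabilities(void) { build_capability_list(); }
EOF
cat > "$src/caps.c" <<'EOF'
void build_capability_list(void) { /* ... */ }
EOF

# Step 1: where does the request enter the codebase?
grep -rn 'capabilities' "$src"
# Step 2: chase the callee to its definition in another file.
grep -rn 'build_capability_list' "$src"
```

Repeating step 2 for each callee walks you down the call chain by hand, which is exactly the "forced deep understanding" the comment is after.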
For the ajax data source thing, I would try to modify or extend the existing data source code to add the behavior you are looking for. As you mess around with it trying to figure out what you need to change, you will encounter the areas of the code that you need to understand.
With this sort of strategy you can avoid having to fully understand all the code while still being able to modify it. You might end up implementing stuff in a way which is not the best, but you will probably be able to implement it faster. It's the classic technical debt dilemma: understanding the complete codebase will allow you to design features that fit in better and are easier to maintain and enhance, but it will take a lot longer than just hacking something together that works.
Michael's code looks clean and well organized. Shouldn't be terribly difficult for someone proficient at JS.
Once I've found and fixed a few things, or if the code base is particularly small or clean that I can't find bugs to fix, I'll set about hacking in the feature I'd like.
I usually start by doing it in the most hacky way possible. That sounds like a bad approach but it narrows the search of how to implement it and means I'm not constraining myself to fit the code base that I don't yet appreciate.
In hacking that feature I'll often break a few things through my carelessness. In then trying to alter my hacked approach so it no longer breaks stuff I'll become more aware of the wider code base from the point of view of my initial narrow focus. This lets me build up the mental model.
Eventually I'll be comfortable enough I can re-write the feature in a way more consistent with the wider code base.
I don't normally start by trying to "read all the code" because that guarantees I won't understand much of it (I'm not quick at picking up function from code). I might have a skim if it is well organised, but I find the "better" written a lot of stuff is, the harder it is to grok what it is actually doing from reading it. To me, reading good code is often like trying to read FizzBuzz Enterprise Edition.
I've worked on many legacy systems: last year I was implementing new features in a VB6 code base; this year (at a different job) I am helping migrate from ASP WebForms to a more modern system. I've found that starting by trying to fix an issue is the best way to dive into the code base.
Use good source control so you're never "worried" about changing anything or worrying that you might lose your current state. Commit early, commit often, even when "playing around".
I keep the applications I want to study in a YAML file (https://github.com/tony/.dot-config/blob/master/.vcspull.yam...) and type "vcspull" to get the latest changes.
You can read my ~/.vcspull.yaml to see some of the projects I look over, by programming language. You can set up your config any way you want (perhaps you want to study programming language implementations, so have ~/work/langs with cpython, lua, ruby, etc. inside it).
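A minimal sketch of such a per-language study config, written to a temp file rather than ~/.vcspull.yaml. The directory-keyed layout follows vcspull's config format as I understand it, but check the current vcspull docs before relying on it; the repo URLs are just illustrative.

```shell
#!/bin/sh
# Write a vcspull-style config: a target directory keyed to the repos
# to pull into it. Layout hedged against current vcspull documentation.
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
~/work/langs/:
  cpython: "git+https://github.com/python/cpython"
  lua: "git+https://github.com/lua/lua"
EOF
cat "$cfg"
```

With a file like this in place as ~/.vcspull.yaml, a single `vcspull` run keeps every study repo up to date.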
As you read the code and encounter terms/words you don't know, write them down. Try to explain what they mean and how they relate to other terms. Make it a hyperlinked document (markdown #links plus headings on GitHub work pretty well); that way you can constantly refresh your memory of previous items while writing.
Items in the glossary can range from class names / function names to datatype names to common prefixes to parts of the file names (what is `core`? what belongs there?)
Bonus: parts of the end result can be contributed back to the project as documentation.
1. If it's on Github, find an issue that seems up your alley and check the commits against it. Or the commit log in general for some interesting commits. I often use this approach to guide other devs to implement a new feature using nothing more than a previous commit or issue as a reference and starting point.
2. Unit tests are a great way to get jump-started. They function as a comprehensive examples reference--having both simple and complex examples and workflows. Not only will they contain API examples, but they will also let you experiment with the library, using the unit test code as a sandbox.
* Read the README.
* Install it and start using it with a couple of sample cases. That will give you an idea of what it does.
* Read the test suite. This will give you a better idea of what the library does.
* Look at the directory structure. This should tell you where things are.
* Start reading the core files.
* Start looking at open issues. Try to solve one by adding a test and changing the code.
* Submit a pull request.
While static typing helps a lot with this kind of exploration and navigation, I don't know of any IDEs or other tooling for any language that would really help you with it. Sure, you can probably generate UML or something, but it usually requires an additional tool and the output is pretty static. You can't just zoom in from a package-level view to an interface level and then keep zooming until you are eventually shown the line-by-line implementation of a specific function.
I've been thinking about this lately, and I've come to the conclusion that the way we think and reason about code is pretty far from the way our tools present it to us. I tend to think in terms of various levels of abstraction and the relations between units, yet the tools just show me walls of text in some file system structure (that may or may not mirror the abstractions) and hardly any relationships.
Sure, this is not the best practice, and unsuitable for many, but it's what works for me.
- find . -type f
- find . -name '*.ext' | wc -l (get an idea of complexity)
- git log (is this thing maintained?)
- find . -name '*.ext' | ctags -L -
- find main entry points, depending on platform and language
- vim with NerdTree enabled
- CTRL-], CTRL-T to jump to browse tags in vim
Generally a lot of find, grep and vim gets me started.
We become fast friends and feel like we really understand each other.
But days pass, and each encounter feels less magical. It's almost like we have nothing in common. Like we're from two completely different worlds: one where it's stuck in the past, and one where I'm ambitious and excited about the future.
After a while we don't really speak to each other anymore, and after some pretty ugly fights at work that get too personal... I rewrite it.
From my experience, there are really two ways that learning a new codebase can happen. One is that there's an existing test suite that's fairly comprehensive, and you can learn a lot by examining the tests, making changes to add features / make bug fixes, and then validate that work by rerunning the tests and adding new ones. That's really a great place to be as someone unfamiliar with a new codebase. The other is that there are no tests, and you inevitably need to rely on people familiar with the code, and make peace with the idea that you're going to write bad code that breaks things as you learn the depth of how the project works.
1. Just make sure I can build project;
2. Play around with services/application (just run, send some requests, get response);
3. Pick up simplest case (for example, some request/response);
4. Find places for breakpoints (for debugging) connected with this simplest case (for example, somewhere execution stops when I send a request) and set them in the debugger. Usually I find a place to put a breakpoint by just searching for a keyword associated with my request;
5. Play around with these breakpoints while performing simplest case (for example, sending request) and try to find out call graph;
6. Try to change code and see what happens;
After I do this stuff several days/weeks, I become more and more familiar with the project.
A good way to get the hang of the code base was to read it (usually using a tool like sourcegraph, pfff, open-grok, doxygen, or javadocs). Although a lot of people have argued that code is not to be treated like literature, in this case there was no choice.
The second step was to see if my assumptions about what the code does are correct. This is usually achieved by adding log statements, writing sample apps, and debugging in general.
Repeat the steps above, over and over again.
1. No matter what you do, you absolutely need to document everything you understand / misunderstand about the code base.
2. Never underestimate value of having a different pair of eyes look at code you have hard time reasoning about.
3. Be in constant search of resources (like books and blogs) available on the code/topic of your interest. You'd learn an amazing amount by reading through other people's analysis. Stack Overflow is a great start. Heck, you can even ask well-thought-out questions on Quora/Stack Overflow.
4. Hang out on related IRC channels / community mailing lists. For things written in esoteric languages such as OCaml, I found these to be pretty helpful.
5. You could blog about it, share the information you know over email lists, set up wikis; and people who know better will correct you. It's a win-win.
1) Read docs for how to USE the library if they exist
2) Review example code that describe how a person would use the library to accomplish tasks.
3) In order to start diving in, find a specific example that does something interesting, then hop in from there. Read the code within the methods / functions the user calls, then the functions / methods called inside those, etc.
4) As you dig deeper you may start finding that you understand, or you'll start building up your own hypotheses like "If I change X to Y in this function then something different should happen when I call it". Try it out, and see if your hypothesis is correct.
After a few iterations of doing something like this you'll probably start getting an idea of how the code is structured and where you'd need to go in order to make the changes you'd like to make, or add the features you want to add.
Every code base takes time to digest. Sure, the information passed before your eyes, but is it committed to memory?
This can be especially useful for event driven code (looks like SlickGrid is jQuery-based, so that definitely applies here); you can start a recording profile, carry out the action you're interested in, then stop recording, and you can then find out exactly which anonymous function is handling that particular click or scroll or drag.
I just skim through all the sources; then, somehow, I am able to point to the approximate file and line of code where a specific question might be answered.
This might sound "out there", but I realized during college that I had the ability to recall the approximate location of specific information I needed from a textbook if I just skimmed through the whole book at the start of the semester.
For years I did this out of intuition; then, about 10 years ago, I took a course named "photoreading" and to my surprise they were teaching my "ability", but with clear steps so anybody could use it effectively.
For example, I would start by looking up a basic example for that codebase and, for each of the function calls, go through the files and see what is happening. This gives me an idea of how the code base is written and how it works. It also gives a clear understanding of the level of separation/specificity of the different functions.
Disclaimer: not very experienced so there might be better ways to familiarising ones self with a new codebase, this is just one way of doing it and it has worked for me in the past.
A couple of things that I typically do:
- Start with a fully working state, i.e. setup your environment, make sure tests (if there are any) are passing. If you can't get things to work properly, that's your first issue to investigate and fix.
- Don't try to understand all of the code at once. You don't need it yet. I'm assuming you want to take over the project for a particular issue. So just focus on that and ignore the rest of the code. If you ask any senior developer about something in their project, there is a great chance they will not remember the exact details, but know where in the code to look at. Aim to get at that level, not memorizing how everything works on the lowest level.
- Don't make any changes to code that you don't understand. I have a recent example of this. Yesterday I was trying to find a bug in the Phoenix database, which was failing to start after an upgrade. I had never seen the code in my life. After some debugging I realized it was doing something with an empty string that shouldn't be empty. The obvious "solution" is to add a check for whether the string is empty and be done. Don't do that. Understand exactly why the problem is happening, and only make a change like that after you are sure of all the implications. This has two effects: you are not introducing new bugs, and you are learning about the codebase. In the end, the fix in my example was just a simple "if", but without understanding how it was ending up with an empty string, I might have caused more problems than I fixed.
- Use the VCS a lot when figuring out why something is done the way it's done. Use "blame" to see when things have been changed, read through the logs, etc. This is one of the main reasons why I don't like people rebasing/squashing their commits before merging. There is so much information they are throwing away that way.
- Adopt the coding style of the existing code. Don't try to push your style, either by having inconsistent style in different parts of the code or re-formatting everything. It's just not worth it.
- Don't be afraid to change things that need changing. There is nothing worse than making a copy of some module, calling it v2, and then having to maintain two versions. If you are afraid to make a change in the existing code, make yourself familiar with that part of the code first.
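The VCS-archaeology point above can be demonstrated on a throwaway repo; the file, commit messages, and config values here are invented for the demo.

```shell
#!/bin/sh
# Use git blame and git log to recover *why* a line is the way it is.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email a@b
git config user.name demo
echo 'timeout = 30' > config.ini
git add config.ini
git commit -q -m 'initial config'
echo 'timeout = 300' > config.ini
git add config.ini
git commit -q -m 'raise timeout for slow backend'

# Who last touched the timeout line, and in which commit?
git blame -L1,1 config.ini
# What story does the log tell about this file?
git log --oneline -- config.ini
```

The second commit message explains the "magic" value 300, which is exactly the context that gets destroyed when commits are squashed away before merging.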
You first have to localize a region (function) you want to study; then you reach one of its executions with a breakpoint, or a conditional breakpoint.
Then, you inspect:
- the callstack: under what conditions the function was called
- the parameters / local variables
- the subfunctions: in both tools, you can manually call any (reachable) function, try different parameter values and check the result. Pay attention, though, to the side effects!
The first few mods are inevitably disgusting hacks, so don't pick anything you want to keep for your first couple of goals. It is pretty easy to go back and do them right once you've got your head around the rest of the project if you do end up wanting to keep them though.
TL;DR: Start with the minimum exposed surface area of the project (the API), and dig through those functions first. Definitely know the initialization sequences the library needs.
This is my approach concerning JS projects or for dealing with other peoples code in general.
First, I make a mental model of what I want to do. !important. Then I write the smallest wrapper needed to start fleshing out the points where "separation of concerns" happens.
At this point I should have an idea of what the other person's libraries expose as an API. I also should have an idea of what can be done with an unmodified library, and what would need patching.
Then comes monkey-patching the lib at the individual function level, with a healthy dose of TODO markers and NotImplemented method signatures.
By this point I should have a good picture of what goes on in the library apart from what gets exposed and would probably have forked a branch by now.
This strategy has been useful not just for JS projects but for bigger codebases of Java/Scala libraries like Lucene Core/Solr or the Play framework, Django in the Python realm, and to limited success with research code releases like Stanford CoreNLP.
We generally approach it with heavy customer/owner involvement at first. We need to know what the application's intended purpose is. It is sort of like a lightning BA session. We get what the application should do, and what it isn't doing properly, out of this session (and, more importantly, what it should be doing instead).
Our first step: get it into a repo.
Now that we have an understanding of what the application's intended purpose is, we can dive into the code. We don't have any analysis tools (but if there are some that people could recommend, I'm all ears) outside of our IDE (Visual Studio). We generally look for the last-modified date as an indicator of what needed work most recently. Of course, we don't have file history so we don't know exactly what changed, but it gives us a rough idea of what was worked on and when.
Next we usually try and use the application in our development environment. We chase each action a user takes in the code to determine what is the core/central part of the application. After that, we try to determine the cause of the problem (and while we are at it, we generally do a security review of the code).
It takes time, and is painstakingly nuanced and very boring. But I'm not sure what other options we have in such cases. As I said, I'm all ears as to what other might do in these situations.
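Lacking file history, the last-modified signal mentioned above can at least be pulled straight from the filesystem. A minimal sketch, assuming GNU find (the `-printf` directive is a GNU extension) and using fabricated files:

```shell
#!/bin/sh
# Rank files newest-first by mtime: a crude stand-in for "what was
# worked on most recently" when there is no VCS history.
set -e
src=$(mktemp -d)
touch -d '2020-01-01' "$src/old.cs"
touch -d '2023-06-01' "$src/recent.cs"

# %T@ is the mtime as a sortable epoch timestamp, %p the path.
find "$src" -type f -printf '%T@ %p\n' | sort -rn
```

Beware that mtimes are fragile: a bulk copy or checkout can flatten them all to the same moment, so treat the ranking as a hint, not evidence.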
Going a step further still would be to add to the user documentation as you go...
Do something small and iterative, and go out from there... for that matter, just getting a proper build environment is hard enough for some projects... automate getting the environment set up if it's complex. I've seen applications with 60+ step processes for getting all the corresponding pieces set up.
First, we read the Native Client papers (http://www.chromium.org/nativeclient/reference/research-pape...) to understand how Native Client sandboxes untrusted code. We then looked at the tests in the Native Client source repository to see how to run untrusted code within a Unix process. We haven't yet been able to debug executables via GDB, for reasons we don't quite understand - so at present we:
1. Set NaClVerbosity to 10 and trace the system calls and functions invoked in the tests
2. Run "grep -r" in the src folder to find the source files for each of the functions invoked, then read and understand the code for each
3. Insert our own calls to NaClLog in the source code to read the state of variables and to validate our hypotheses of paths of execution within Native Client
For example, just this afternoon we found out how to send data via inter-module communication instantiated from the trusted code to the untrusted code. We first thought this wasn't possible - and that communication had to be initiated from the untrusted code, handled in the form of a callback function in the trusted code. However it simply turned out we had set the headers incorrectly in that the first four bytes of the header should be 0xd3c0de01. What's crazy is that we haven't yet understood what these bytes mean - so we're back in the Native Client source code to try and see why it works.
This probably sounds like a rant about Native Client and the Native Client developers. However, the complete opposite is true. The folks on the Native Client Discuss forum have been very helpful and have been more than happy to answer our questions. Quick shoutout to mseaborn: thank you for your help!!!
If you have access to logs from a production service/component, I find TextAnalyzer.net invaluable. I take an example 500 MB log dump, open it in TextAnalyzer.net, and just scroll through the logs (often jumping around, following code paths, etc.) while keeping the source code side by side. This lets me understand the execution flow, and is typically faster than attaching a debugger. If it's a multi-threaded program, the debugger is hard to work with - logs are your best friend. You are lucky if the log has thread information (like a threadId, etc.)
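As a toy sketch of why thread information helps, here is the log-untangling step in Python. The log format and the tid field are invented for illustration:

```python
import re
from collections import defaultdict

# Sketch: untangle an interleaved multi-threaded log by grouping
# lines per thread ID, so each thread's flow reads top to bottom.
log = """\
2015-06-26 10:00:01 [tid=12] request received /api/users
2015-06-26 10:00:01 [tid=97] flushing cache
2015-06-26 10:00:02 [tid=12] querying database
2015-06-26 10:00:03 [tid=12] response sent (200)
"""

by_thread = defaultdict(list)
for line in log.splitlines():
    m = re.search(r"\[tid=(\d+)\]", line)
    if m:
        by_thread[m.group(1)].append(line)

for tid, lines in by_thread.items():
    print(f"thread {tid}: {len(lines)} lines")
```

Without a thread ID column there is no reliable way to do this split, which is why logs without it are so much harder to follow.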
In your case, frozen columns seems to be the harder feature, so I would start with the ajax data source. I'd start with a simple SlickGrid example and get it to run. Then I'd go find how SlickGrid sets up its data source and expand that piece of code to add an ajax data source. Once the ajax data source was finished, I'd dig into frozen columns.
If you are working on a new codebase and worry about bugs, you just give yourself more stress. Bugs (that are not yours) are expected. If they aren't blocking your task, ignore them. Most likely, they aren't relevant to what you are trying to do.
I include function names and the names of the variables passed as parameters, but no braces or other syntax. I almost always omit branches/variable decls/error checking. I include all interesting function calls along the path, but omit any branches/function bodies that lead off the desired path. Callbacks get inlined as function calls with additional notation. If the process has separate steps that aren't a single call/callback tree, I start a new tree with the note "then later..."
To do this, I have to start from the line of code that enacts the outcome and determine the backtrace with a combo of debugger stack traces and examining the code for branches/callbacks of interest.
But, when it's complete, I'll have the start-to-finish process of some complicated task in the code --usually on a single screen of text. It's a tremendously better use of my short term memory to scan over that than to constantly bounce around the actual code base.
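For a hypothetical save flow, such a call-tree note (all names invented) might look like:

```text
handleSaveClick(form)
  validate(form.fields)
  serialize(form)            // callback: onSerialized(payload)
    onSerialized(payload)
      api.post("/save", payload)
then later...
onSaveResponse(resp)
  updateCache(resp.record)
  render(resp.record)
```

The indentation mirrors the call depth, and the "then later..." marker starts a new tree for the asynchronous continuation.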
This'd make a great contribution already: SlickGrid's codebase is somewhat poorly documented, which is a barrier to the involvement of interested developers.
As you write the docs, weak spots in the existing implementation will come to your attention, helping you figure out what to fix first.
One downside is that writing down and structuring your knowledge in a way that's easy for others to grasp is a challenge in itself, though arguably a useful exercise.
Once I know what the data is I can look at the code with an eye towards maintenance of data integrity. I might still need some "playtime" to grok the system but the one truism of large software is that data is always getting shoved from one big complicated system to another, and I can usually identify boundaries on those systems to narrow the search space.
(the exception to this is if you have code that leaks global state across the boundaries. Then much swearing will occur.)
If the problem is some bug and there are stack traces, that is my starting point: the debugger and a few breakpoints chosen from the trace. I follow the stack, and from there I start learning how the code is structured, then move on to the next bug, and so on (fixing them, of course). For code where I need to add features, things get a little more tricky, but there is always some entry point - a web-service invocation, some web page - and I try to understand what it is currently doing, again using the debugger to follow the calls and how the data is changed (sometimes even going into libraries).
Reading the docs if there are any is also a good place to start.
Once again, use the debugger a lot, makes it easier to understand than just reading the code.
As you start to add to a project the IDE can also prove valuable in discovering how everything fits together, since it will provide smart and helpful completions with docstrings, method signatures, types etc. This can really help you start writing new code a lot faster.
Also, an IDE will usually also have a decent UI for running the code with a debugger attached, which can be incredibly useful for understanding the changing state of a running program.
While it is focused on CPython, most of the techniques are applicable elsewhere. It also mentions a great article by Peter Seibel (http://www.gigamonkeys.com/code-reading/) that discusses why we don't often read code in the same way we would literature.
Essentially, as the complexity of software has grown, people have been forced to take a more experimental approach to understanding software, even though it was created by other people.
One thing I do NOT recommend is changing the code style, unless you're ready to take full ownership of the project. It can make it much harder for the project owner to merge in and if there are any lingering PRs those will typically need work to merge in properly.
If you're looking at a large Go codebase with many packages, I find it helpful to visualize their import graph with a little command.
Here are the results of running it on the Consul codebase:
$ goimportgraph github.com/hashicorp/consul/...
The first read-through is not about comprehending everything. It's about exposing your mind to the codebase and getting it to start sinking into your subconscious. It's kinda like learning a new piece on the piano.
1. Access some data in the highest level component from one of the lowest level components
2. Access some data in one of the lowest level components from one of the highest level components
In a lot of cases, good architecture will prevent one or both of these from being possible, but identifying how data flows through the app seems to be a good way to understand the general architecture, limitations and strengths of most apps. These two tasks give concrete starting points for tracing the data flow.
> What tools and techniques do you use for exploring and learning an unknown code base?
I'd try to fix it using the same style used in the codebase. This way anybody else reading, maintaining, or using it won't have to make sense of a new style. Pay attention to how each method is defined. They are very readable. Very few traces of complex one-line statements.
Most importantly, be patient. You won't be any good with it in less than 2 weeks of constant tinkering. Good luck.
Then I run the app and put it through its paces, while watching the output in another console.
If there's some code that doesn't make sense, I use console.log() more heavily in that section, to help me fully understand what it does. Once I have that level of understanding, I then write some comments in the code and commit them so that other contributors may benefit in the future.
I think reading each file or reading the data structures is more difficult because you have no familiarity as to what is going on and you have no knowledge of why things are structured as they are, so it'd end up like reading a math paper straight down: memorize a ton of definitions without knowing why, until you finally get to the gist of it.
Then I prefer to jump into fixing any existing issue. Working on an issue teaches a lot: more fixes, then features - rinse, lather, repeat.
While this post talks about fixing compiler bugs, the overall steps are very much replicable: http://random-state.net/log/3522555395.html
I search for strings that appear in the frontend (or generated HTML source, or whatever), and then I use a search tool (git grep) to find where it comes from. And then I use the same search tool again to trace my way backwards from there to where it's called, until I find the code that interests me.
And then I form a hypothesis how it works, and test it by patching the code in a small way, and observe the result.
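A plain-Python sketch of that "search, then trace backwards" loop (git grep/ack/ag do the same job much faster; the file names and contents below are invented for illustration):

```python
import os
import tempfile

# Recursively search a source tree for a needle string.
def grep(root, needle):
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if needle in line:
                        hits.append((path, lineno, line.rstrip()))
    return hits

# Demo on a throwaway tree: find a UI string, then grep again for the
# name of the function that produces it to work backwards to callers.
root = tempfile.mkdtemp()
with open(os.path.join(root, "view.py"), "w") as f:
    f.write('def render_greeting():\n    return "Welcome back!"\n')

hits = grep(root, "Welcome back")        # where the string lives
callers = grep(root, "render_greeting")  # who produces it
```

Repeating the second step (grep for each enclosing name) walks you up the call chain from the visible output to the code that interests you.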
Oh, and don't forget 'git grep'. Or ack, or ag, or your IDE's search feature.
Programming with unit tests really helps. And it points out where certain parts are too entangled and bound to implementation.
Start from main() and from the one click event (or any end-game action). Try to connect the two.
Generally I find it hard to just start reading through packages, source, functions, etc. and find it much easier to try to solve some sort of problem. By tracking and debugging a particular issue through to the end, I find I learn a lot about the codebase.
People really do learn quite differently and everyone needs to find their mode of learning - there is no one single true way. This is one of the most important skills in software development, IMO. Once you learn how you learn you can apply it to most new contexts.
I write stuff down because, for me, the process of writing seems to be the most effective way to learn.
The Ruby application server I looked at was for doing social network feeds. Posts/Likes/Comments go in, feeds come out.
I followed some common code paths for things such as posting a comment and getting a feed. I would write the stack trace down on paper as I went.
It also helped that I happened to know that this Ruby server used Wisper and Sidekiq. That way I didn't overlook single lines of code such as 'publish: yada yada'.
I'll start reading the files using any of the strategies mentioned here and looking for things I can clean up: formatting, simple refactors, normalizing names.
These are all things that are comparatively easy and safe to do, but they force you to reason about the code you are reading. Asking yourself what you can refactor or fix the naming for is a decent forcing function for actually understanding the code.
Some people use graphical tools to visualize a codebase (e.g. codegraph). It can help you understand what pieces of code are related to each other.
Simply digging through code, tests or reading commit messages in an unfamiliar code base takes at least an order of magnitude more time.
EDIT: tried call graphs too; better than reading through code, but they still require you to understand and filter out a lot of unnecessary information.
I'd like to print out a big graph and stick it to the office walls so I'll have a good view of the logical structure.
You do not need to familiarize yourself with the full codebase at the start. It's too time-consuming and mostly not worth the effort. Set up an objective and go for it slashing your coding axe around until it works.
(Unless you have a special interest or you are expected to familiarize yourself with the codebase.)
My typical strategy is to get the project running, then just get to work. Start fixing bugs, and adding requested features. Use the code around you as a guide on what is right and wrong within that company, and forge forward. When you are unsure of something turn to grep, find some examples, and keep going.
Take the extreme programming approach. Don't try to familiarize yourself with a new codebase all at once. Start small. Work on a small ticket. It will, organically, help you assimilate what's happening.
Try to fix a bug and you'll soon find yourself having to learn how the code involved works, and with a goal your focus will be better than just reading through the code flow.
This is probably not the best way to approach it, but I am somewhat ADHDish and I need a clear task to avoid perpetually diving around the codebase.
something usually comes up.
Then I write unit tests.
Like changing some function's type from:
Text -> Text -> IO ()
to:
ServerHost -> Path -> IO ()
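The same trick works outside Haskell. Here is a Python translation of the idea (not the original Haskell), using typing.NewType so a checker like mypy flags every call site that passes the arguments in the wrong order; all names are invented:

```python
from typing import NewType

# Distinct named types for two parameters that would otherwise both be
# interchangeable strings. At runtime these are plain str values; the
# benefit is purely for static type checkers.
ServerHost = NewType("ServerHost", str)
Path = NewType("Path", str)

def fetch(host: ServerHost, path: Path) -> str:
    return f"GET http://{host}{path}"

request = fetch(ServerHost("example.com"), Path("/index.html"))
# fetch(Path("/index.html"), ServerHost("example.com"))  # mypy error
```

Changing the signature and then fixing every error the checker reports is a quick way to enumerate all the code paths that touch that function.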
In any language I'll try to read the project like the Tractatus.
In stuff that isn't Haskell? Break stuff and run the tests.
- Get a functional dev environment set up where you can mess around with things in a risk-free manner. This includes setting up any dev databases and other external dependencies so that you can add, update and delete data at will. There's nothing that gives more insight than changing a piece of code and seeing what it breaks or alters. Change a lot of things, one at a time.
- Dive deep. This is time consuming, but don't be satisfied with understanding a surface feature only. You must recursively learn the functions, modules and architecture those surface features are using as well until you get to the bottom of the stack. Once you know where the bottom is you know what everything else is based on. This knowledge will help you uncover tricky bugs later if you truly grok what's going on. It will also give you insight as to the complexity of the project (and whether it's inherent to the problem or unnecessary). This can take a lot of time, but it pays off the most.
- Read and run the tests (if any). The tests are (usually) a very clear and simple insight into otherwise complex functionality. This method should do this, this class should do that, we need to mock this other external dependency, etc.
- Read the documentation and comments (if any). This can really help you understand the how's and why's depending on the conscientiousness of the prior engineers.
- If there's something that you really can't untangle, contact the source. Tell him what you're attempting, what you tried, exactly why and how it's not working as you expect, and ask if there's a simple resolution (I don't want to waste your time if there's not). You may not get an answer, but if you've done a lot of digging already and communicate the issue clearly you might get a "Oh yeah, there's a bug with XYZ due to the interaction with the ABC library. I haven't had time to fix it but the problem is in the foo/bar file." You may be able to find a workaround or fix the bug yourself.
- When you do become comfortable enough to add features or fix issues, put forward the effort to find the right place in the code to do this. If you think it requires refactoring other things first, do this in as atomic a manner as possible and consult first with other contributors.
- Pick a simple task to attack first, even if it's imaginary. Get to more complicated stuff after you've done some legwork already.
There are other minor things but this is generally my approach.
My dayjob is with a Ruby on Rails consultancy. Said dayjob involves familiarizing myself with a lot of different codebases. My strategy here is rarely to try to digest the whole codebase all at once, but rather to focus on the portions of code specific to my task, mapping out which models, controllers, views, helpers, config files, etc. I need to manipulate in order to achieve my goal.
The above strategy tends to be my preference for most complex projects. The less I have to juggle in my brain to do something, the better. I tend towards compartmentalizing my more complex programs as a result. For simpler programs (and portions of compartmentalized complex programs), I just start at the entry point and go from there.
Languages with a REPL or some equivalent are really nice for me, especially if they support hot-reloading of code without throwing out too much state. Firing up a Rails console, for example, is generally my first step when it comes to really understanding the functionality of some Rails app. For non-interactive languages, this typically means having to resort to a debugger or writing some toy miniprogram that pulls in the code I'm trying to grok and pokes it with function calls.
For some non-interactive languages, like C or Ada, I'll start by looking at declaration files (.h for C and friends; .ads for Ada) to get a sense of what sorts of things are being publicly exposed, then find their definitions in body files (.c/.cpp/etc. for C and friends; .adb for Ada) and map things out from there. Proper separation of specification from implementation is a godsend for understanding a large codebase quickly.
For a rigorously-tested codebase, I'll often look at the test suite, too, for good measure. When done right, a test suite can provide benefits similar to specification files as described above, giving me some idea of what the code is supposed to do and where the entry points are.
That is not a sensible comparison. When you scale something mass changes as the cube of dimension. Strength changes as the square of dimension. So small things are inherently stronger with respect to their mass.
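The square-cube arithmetic can be sketched numerically (a toy calculation, not from the article):

```python
# Square-cube law: scale a creature's linear size by a factor k.
# Mass (volume) grows as k**3; structural strength (cross-sectional
# area of limbs) grows as k**2; so strength relative to mass goes
# as k**2 / k**3 = 1/k, favoring small creatures.
def strength_to_mass_ratio(k):
    strength = k ** 2
    mass = k ** 3
    return strength / mass

# A creature 100x smaller is ~100x stronger relative to its mass.
ratio_small = strength_to_mass_ratio(0.01)
ratio_big = strength_to_mass_ratio(1.0)
print(ratio_small / ratio_big)  # 100.0
```

This is why an impact that would flatten a large animal is survivable for a mosquito: the mosquito's strength-to-mass ratio is enormously higher.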
Interesting article, but in the span of one paragraph here we have confused velocity, acceleration, and pressure - and there are similar errors in the following one. For an article about physics, I would expect this to at least be proofread.
The Gell-Mann Amnesia effect: http://harmful.cat-v.org/journalism/
>> [Hu] and Dickerson constructed a flight arena consisting of a small acrylic cage covered with mesh to contain the mosquitoes but permit entry of water drops. The researchers used a water jet to simulate rain stream velocity while observing six mosquitoes flying into the stream. Amazingly, all the mosquitoes lived.
The researchers used simulated rain drops on six mosquitoes. There are more than six species of mosquitoes. They controlled for wind effects (which are part and parcel of rain). So they excluded horizontally travelling raindrops. My immediate reaction to the conclusion that mosquitoes can fly in rain was "Really? Not always". Here is a methodologically lacking and wholly unscientific anecdote: I have lived in Johannesburg my entire life, where mosquitoes are quite prevalent during the summer months. When it is raining heavily (it is usually quite windy as well), the local species of mosquito that feeds on humans does not present a problem, as the number of airborne mosquitoes tends to zero.
Has anyone actually done any research on dragonflies being hit by raindrops, or is this just speculation?
> If you want to see this for yourself, take a look at Hu's video
What? Nothing like that happens in it.
Big enough to shrug off a raindrop hit, or small enough to surf along the surface tension until it can slide off?
I don't get it, the scientificamerican blog that they are quoting has the right units, where did they come up with this?
>"Why, one species even secretes an enzyme to dissolve the organic matter in blood leaving only the iron in haemoglobin. Then another enzyme causes the iron atoms to join to form biological drill pipe! These structures are known to be as much as 6 inches in diameter and to extend a mile deep."
Is there something to it, or did he just go on the internet to tell lies?
So if the mosquito's weight is insignificant compared to that of the heavier and denser water drop and that's what keeps it from having the force transferred, would this equally apply to hailstorms? (Where our mosquitoes are pelted by small hail balls the size of raindrops)
Anyone who lives in a mosquito heavy area knows that mosquitos (like almost all airborne insects) go into hiding during heavy rain and/or wind.
Facebook also employs Bryan O'Sullivan, an epic Haskell library writer (Aeson, Attoparsec, Text, Vector, and on and on http://hackage.haskell.org/user/BryanOSullivan). Bryan also co-authored the "Real World Haskell" book.
So Facebook has hired two prolific Haskellers and probably others I don't know about.
I dislike the underlying premise, the adverts, and (especially) the "real names" policy.
But... between great bits of Open Source like React, cool infrastructure projects like this, and a technical culture which seems a whole lot more open than many other big companies, it's getting kind-of hard to go on hating. Walk back a bit from the obsession with open plan offices, and I might just cave...
I would like to know more about this. What is a request exactly? An API call? If so, when an existing policy is changed, do the memoization tables have to change as well? How are the memoization tables shared? If this is running on a cluster, I would imagine that lookups in a memoization table could be a bottleneck to performance.
Is it plain impossible to pick the best-fit language without implementing a solution in the first place and fleshing out the requirements and challenges that are specific to the problem space? Or do the problems evolve fast enough that no matter how well you design the system, it will need to be deprecated in a few years?
That they could use the system dynamic linker makes me think they're using some form of relatively basic dlopen/dlsym/call-method procedure, or something along those lines. That's fine, though the use of "hotswapping" evokes the image of some more elaborate DSU mechanism.
1. They have CORE Haskell contributors on their payroll to deliver this type of project (what this means is that no... Haskell isn't any better than other languages, it's just that they have people who know Haskell very, very deeply, down to the compiler level...)
2. In-house custom language eventually does not scale (the EOL is much much much shorter than other programming languages), plan for that :)
As a mere mortal programmer who knows a little Haskell, my takeaway is that if you want to run Haskell at web scale for a large userbase, you need the language's primary compiler author to help build the application and to modify the Haskell compiler to make it performant. And you also need your team led by a 20-year veteran Haskell expert who is one of the language's handful of luminaries and wrote a plurality of its main libraries. What are the rest of us to do, who aren't at Facebook?
> [600 points] Why only web development matters (http :// nautil.us medium wordpress theverge gawker .com)
We're disrupting the 1gorillion dollar [insert industry] sign up for our beta to check it out.
We just need your name, address, credit card, and birth date. To verify you're a human.
 and we store all of this in clear text files on our server.
 which was written using [insert new hipster language] by some guy who's been programming for 3 weeks.
 but we promise not to use your data to mine the shit out of you and sell it to advertisers.
I liked this one.
"Ask HN: Why won't VCs invest in our dating app, and why is it because we're women founders?"
" We taught 13 women from Sierra Leone node.js"
> [450 points] Why I have private Github repos at my startup but everyone else should give away their software for free.
It's so true ;~;
 theatlantic.com theverge.com blog.tumblr.com
An important difference between HN and other forums I frequent(ed), however, is that instead of taking offense and going on the defensive when 'attacked' by 4chan, they recognize the joke and find it funny. That alone puts HN lightyears ahead of those other organizations.
And regardless of why, it's exactly this kind of self-awareness and identity that enables people to discuss ideas without feeling threatened by them, something which has held back both social and scientific progress in the past.
How many people know who Terry Davis is?
I wouldn't be surprised to see this in a few years.
I actually like this layout. It's fast and easy to read, renders ok on mobile and is lightweight. I'm grateful that the maintainers didn't switch to an over-the-top look-at-my-framework.js thing just to make it look modern at the detriment of usability.
> Anonymous 06/26/15(Fri)17:51:52 No.48697104[600 points] [meta] 4chan technology board satires hacker news, hilarious.
I know people that don't read HN because it's too virulently sexist, so having 4chan see it as too SJW is interesting.
Will we end up with two "social justice" realities, like we have with vaccination, creationism/evolution and climate change, where it's entirely possible to spend your entire browsing time on sites that agree with your opinions on everything?
And goddamn is this hilarious.
HN has become host to feminist shilling and corporate endorsements, on top of the already flawed content model that encourages disengagement to the point where people are just reposting headlines and treating HN as a comments section for the article itself.
Either way, there's something to be said for constructive criticism like this, and HN can potentially learn from this.
It won't. But it could.
Having now read it, I've come to the conclusion that the blog gives the wrong impression about the implications of having a custom language. Readers of the blog post come away with the idea that Wasabi was so full of "o.O" that someone was moved to write a book about that. In reality, the book is simply documentation of the language features, with callouts for weird interactions between VB-isms, ASP.NET-isms, and their language.
You should definitely read the "Brief And Highly Inaccurate History Of Wasabi" that leads the document off. It's actually very easy now to see how they ended up with Wasabi:
1. The ASP->PHP conversion was extremely low-hanging fruit (the conversion involved almost no logic).
2. Postprocessing ASP meant PHP always lagged, so they started generating ASP from the same processor.
3. Now that all their FogBugz code hits the preprocessor, it makes sense to add convenience functions to it.
4. Microsoft deprecates ASP. FogBugz needs to target ASP.NET. They can manually port, or upgrade the preprocessor to do that for them. They choose the latter option: now they have their own language.
It's step (3) where they irrevocably commit themselves to a new language. They want things like type inference and nicer loops and some of the kinds of things every Lisp programmer automatically reaches for macros to get. They have this preprocessor. So it's easy to add those things. Now they're not an ASP application anymore.
Quick rant: if this had been a Lisp project, and they'd accomplished this stuff by writing macros, we'd be talking this up as a case study for why Lisp is awesome. But instead because they started from unequivocally terrible languages and added the features with parsers and syntax trees and codegen, the whole project is heresy. Respectfully, I call bullshit.
I disagree. First of all, Wasabi solved a real problem which doesn't exist anymore: customers had limited platform support available and Fog Creek needed to increase the surface area of their product to cover as many of the disparate platforms as possible. Today, if all else fails, people can just fire up an arbitrarily configured VM or container. There is much less pressure to, for example, make something that runs on both ASP.NET and PHP. We are now in the fortunate position to pick just one and go for it.
Second, experimenting with language design should not be reserved for theoreticians and gurus. It should be a viable option for normal CS people in normal companies. And for what it's worth, Wasabi might have become a noteworthy language outside Fog Creek. There was no way to know at the time. In hindsight, it didn't, but very few people have the luxury of designing a language which they know upfront will be huge. For example, Erlang started out being an internal tool at just one company, designed to solve a specific set of problems. Had they decided that doing their own platform was doomed to fail, the world would be poorer for it today.
I also find this kind of phrasing weird:
> The people who wrote the original Wasabi compiler moved on for one reason or another. Some married partners who lived elsewhere; others went over to work on other products from Fog Creek.
It's like the author of this article goes out of their way to avoid saying that some people left the company, period. It also wouldn't surprise me if some of these defections were caused by Wasabi itself. As a software engineer, you quickly start wondering how wise it is to spend years learning a language that will be of no use once you leave your current company (yet another reason why rolling your own language as a critical part of your product is a terrible idea).
Scroll down to the part "Fear, Uncertainty, and Doubt by Joel Spolsky" from September 01 (permalink 404s)
1) Maintenance nightmare
2) No one likes programming in a proprietary language, as it dead-ends your career
3) Company leavers take vast amounts of knowledge away with them, and it's impossible to hire in to replace that knowledge
All technical debt decisions should be made based on what the business hopes to get from the debt. Considerations for the alternatives, when the debt is retired, etc. should all play a part. To globally say "this is a terrible idea" like so many in this thread are doing totally ignores these types of factors in favor of "it's bad computer science" and thus misses the point of technical debt in the first place.
The way I read this article, creating Wasabi a decade ago was not a mistake, given what they were doing and what was available at the time. Not open-sourcing Wasabi was a mistake, though.
When I learn a new, perhaps hyped-up computer language, I soon run into difficulties, no matter what the merits of the language. The difficulties are lack of tooling: e.g., no debugger (or only a rudimentary one), no static analysis, no refactoring, no advanced code browser.
If the language is successful, these things come with time.
When you develop an in-house language, you'll never get the advanced tooling that makes for a great software development experience. This, for me, was why I was surprised by Joel Spolsky's announcement of an in-house language.
(Although, to be fair, these things didn't really exist for VBScript or for PHP at the time Wasabi came to be.)
It's listed from a Google search, but from just clicking around the Fogbugz site I can't even find the page/pricing for on premise installation.
I wonder to what extent the generated C# code depends on C#'s dynamic typing option. I ask because the original VBScript was certainly dynamically typed. So by the end, to what extent was Wasabi statically typed, through explicit type declarations or type inference? And how much did the compiler have to rely on naming conventions such as Hungarian notation?
It's funny how they teamed up 2 years later to build stack overflow (and again used an MS stack)
While I hesitate to endorse a language based on VBScript, it seems like the extensions they added to it were pretty nice. I mean, if you're inclined to use a VBScript style language, Wasabi wasn't horrible, and given Spolsky's following and Fog Creek's mindshare, it seems at least possible it could have become a useful thing rather than a legacy thing to be replaced. I mean, at the very least, it's probably not worse than PHP. (Granted: my opinion of PHP is very low). Maybe it's for the best though, the world is probably better off without new wasabi projects.
Programmers just love to change good working code into the new style or new language. One has to always view the impulse to change skeptically.
When this has been mentioned previously, there is a strong "they did a crazy thing with Wasabi" reaction, but progress depends on doing crazy things that just might work sometimes.
Related blog post from a couple of years ago.
Commercial reasons dictated why Wasabi was created and retired. Bravo Joel.
Dang, it's significantly less entertaining but I'm afraid he's right.
Is it dumb to assume that it's normal for upvotes and comments to increase at a similar rate?
It's one of those things that people do because it seems like the path of least resistance (and in the short run, is), but it inevitably snowballs into a pit of technical debt. Spolsky knew this quite well (he'd written eloquently on the subject).
...and yet he still did it. His defence was that it was the easiest option in the short term, and he was probably right, but it doesn't matter. People only do stupid stuff that seems smart; saying "this stupid thing seems smart!" is only a defence if you have no idea that it's actually fundamentally stupid. Of all the people in the world, Spolsky is one of the least able to mount this defence.
Contemporaneously with his decision to go all in on Wasabi, he wrote a scathing condemnation of Ruby for being slow, unserious, obscure; he suggested that a serious company shouldn't opt for Ruby because it was risky, and that choosing it would put you at risk of getting fired.
Was he right? In 2006, maybe? I mean, he turned out to be wrong, but I don't think it was entirely obvious that Ruby was a serious choice 15 years ago. Of course, he wasn't writing 15 years ago, but even nine years ago, a very conservative, safe approach to choosing a technical stack very possibly did militate against selecting Ruby, for all the reasons he outlined. But those arguments applied twice as hard to Wasabi. You don't get to argue that there "just isn't a lot of experience in the world building big mission critical web systems in Ruby" (and hence you shouldn't use Ruby), and then turn around and use Wasabi for your big mission critical web system.
Of all the people in the world, Spolsky probably had the best understanding of why Wasabi was a stupid, short-sighted decision. He did it anyway. And it was stupid and short-sighted. Rarely is someone so right and so wrong about the same thing at once.
(And yes, Fog Creek is still around, and so is FogBugz. But I don't buy for a moment that Wasabi was actually a good choice. They survived it, but they didn't benefit from it.)
Edit: Spolsky has written too much about why writing something like Wasabi is a terrible idea for me to link it all. Besides, a lot of it has been linked in other comments. But I don't think I can express strongly enough that my anti-Wasabi position is simply repeating the things that the guy who signed off on developing it and using it in production wrote. ...then he decided to write a new language, because apparently Ruby was too slow to possibly use to generate a graph, and there was literally no alternative for graph generation other than writing your own compile-to-VBScript/PHP language. Words fail.
Then instead of cutting their losses, they doubled and tripled down on it until they had their own language and sophisticated tools around it. By this time, Django and Rails had already been started, and several decent cross-platform web frameworks, such as CherryPy, were years old. Even PHP would have been a better choice. One of these could have been phased in a piece at a time to minimize disruption.
Did I get this right? Because there are so many WTFs that I must have missed something.
Yes, modularity is important. However, in some cases, this philosophy has resulted in the "tangled mess held together by duct tape" kind of systems architecture that no one dares to touch for fear of breaking things.
I think Unix philosophy is struggling with a fundamental dilemma:
On one hand, creating systems from programs written by different people requires stronger formal guarantees in order to make interfaces more reliable, stronger guarantees than interfaces within one large program written by one person or a small team would require.
On the other hand, creating systems from programs written by different people requires more flexible interfaces that can deal with versioning, backward and forward compatibility, etc., something that is extremely difficult to do across programming languages without heaping on massive complexity (CORBA, WS-deathstar, ...)
I think HTTP has shown that it can be done. But HTTP is also quite heavyweight. It doesn't exactly favor very small programs. Handling HTTP error codes is not something you'd want to do on every other function call.
In any event, I think Unix philosophy is a good place to start but needs a refresh in light of a couple of decades worth of experience.
It's also free online:
Except that a program-to-program interface based on formatting and parsing text is anything but clean.
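To make that concrete, here's a tiny toy example (entirely my own, not from the book) of how a text-based program-to-program interface bites you: a consumer that parses whitespace-delimited output silently breaks the moment a field contains the delimiter.

```python
# Imagine a tool that prints "<size> <filename>" per line.
line = "1024 My Vacation Photos.tar"

# A careful consumer knows the format and splits exactly once...
size, filename = line.split(" ", 1)
assert size == "1024"
assert filename == "My Vacation Photos.tar"

# ...but a naive consumer splitting on all whitespace mangles the name.
fields = line.split()
assert fields[1] == "My"  # the filename has become three separate "fields"
```

Nothing in the text stream itself tells you which parse is correct; that contract lives only in documentation (or folklore), which is exactly the complaint.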
Anyway, I've been with my company for a few years and soon my contract expires and I'm going to study the field our company is in and get a degree in that, then I'm going to apply for a position doing our core business. I would still like to be involved with the software my current position is touching on, though, if possible. (Our company has 1000+ employees and several different sub-sections, so even though I might get back into the company, it's not a given that I'll be working with the group of people I am now even though I'd like to.)
I also sometimes think that if possible, perhaps I'd like to work for that other company in our neighbouring country for a few years and be on the dev team of the software. After all, I have experience from the user side which the dev team has not and the dev team has seen some of the tools I've made and a couple of the guys seemed to think that some of that stuff was pretty decent.
> Rule of Diversity: Distrust all claims for one true way.
Although, does the Python rule "There should be one-- and preferably only one --obvious way to do it." contradict this one?
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
I wonder what Rob Pike has to say about OOP or Java; I wish I could listen to it.
Also it says that text is a good representation of data, but I think he meant it as an intermediary. I don't think XML or HTML are really good choices when you see all the CPU cycles spent parsing them.
> Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
I prefer this rule to the "no premature optimization" rule.
Which means that no program is more un-UNIX-y than Emacs...
As a happy subscriber to Unsplash since it first launched here, I'm glad that the team ignored the comments and kept making this.
Is this an engaging and inspiring story? Yes, and it's great that people feel free to share their stories (successes and otherwise) here.
Having read it, do I have a better idea how likely this strategy is to work for any given person/company? No, and I don't know that anything short of an exhaustive longitudinal study would help there. (There's some mention of studies on creative hobbies, but it's a bit of a leap from there to repeatable ROI.)
If I'm working on a business for pharmacists, I'm not sure my side project playing around with neural networks is going to get me the right eyeballs.
Honestly, HN sometimes (but not always) feels like it is made up of a bunch of gold-diggers, clinging to the hope of one day making a big breakthrough, without proportional effort. It has a very shallow feel to it.
- What type of manager quiz are you: http://www.staffsquared.com/what-type-of-manager-are-you-qui...
- Timesheet calculator: http://www.staffsquared.com/timesheet-calculator/
- Maternity calculator: http://www.staffsquared.com/calculator/maternitycalulator/
...and much much more.
Generally these "side projects" take a few days to put together from concept through to launch. They're very minimal overhead, and they drive good numbers that convert to trials to the site.
The best side projects don't just link back to the website you're actually selling, but somehow draw users in. A good example of this is http://invoiceomatic.io/ by FreeAgent. They grab you by giving you the opportunity to create an invoice; next thing you know you're knee deep in creating a FreeAgent account... it works.
Andrew recently posted an interview with the founder of betalist.com, which was also born out of desperation and as a side project. Marc talks about his betalist experiment and the impact it had.
I would love to see other examples and write-ups about this. Was it accidental or strategic? I'm the sole developer of our product right now, but we're also struggling with marketing at the moment. How much does it make sense for me to put the effort into such side projects?
 http://mixergy.com/interviews/marc-kohlbrugge-betalist/ https://medium.com/beta-list/how-i-tricked-techcrunch-into-w...
There's an entire industry that revolves around this idea.
We call the people that work in that industry "publicists".
cueyoutube.com has been a good source of SEO juice for workingsoftware.com.au, but beware tools that have a maintenance overhead: YouTube updated their API weeks ago and I haven't had time to fix it.
The idea reminds me of Vaynerchuk's Jab, Jab, Right Hook strategy http://www.forbes.com/sites/danschawbel/2013/10/11/13-memora... (sorry for the popups)
Anyway, after all the fuss and after gathering all the necessary documents to open an account and get a card, I thought that a lot of people were in a similar situation.
I created a wordpress.com blog and listed all the necessary documents. I created the PDF the bank required but didn't even bother to offer until you went there in person (so, an extra trip), with fillable fields and all, and uploaded it there. The whole thing. It was so frustrating to me that I went overboard and listed other options, comparing other card providers in the specific context of the country and how each one could be used differently.
After that, I got proper hosting and redirected the .wordpress there. There were about 300 people daily on the site. Not much, but that adds up to roughly 100k people a year reading a very long post. The post alone had more than 700 comments (I changed to Disqus) and I replied to 99.99% of them; the remainder was spam. Soon, other readers were answering the questions of new readers. They also sent me different documents to attach to the article. The site was linked to from a whole bunch of geek sites in the country. Sites to buy cars linked to it, too (they were interested in buying car accessories).
Oftentimes, people I knew would read the article, then read the author's name and laugh because they knew me personally. Another reader contacted me, said good things, and asked if I was related to a certain author and Gynecology professor (my uncle); he said my uncle had spared his wife a couple of years of prison time (she was to be jailed for a medical mistake, but my uncle apparently filed a report saying it wasn't one, and the investigation was reopened). Others said it would be cool to meet IRL for coffee, etc. Others said I should monetize it.
The site ranked #1 on Google for "MasterCard Algérie" (it's not anymore, as I was too busy to renew the hosting, etc., but the wordpress.com blog ranks 7th).
It all came about because I was too frustrated by the paperwork and the 18th-century style of doing business the banks have.
The point is: it might not seem like a big thing (I mean, it's only a darn card, right?), but you never know how bad the itch is for someone else. A good indication is how bad it is for you, though. It doesn't matter if it's not revolutionary, only that it needs violent scratching.
Good luck with your projects.
The content is sometimes a little repetitive (just how many shots of amazing, beautiful scenery could one need?), but it has become an absolute go-to for me.
The article is old, but I still like hearing about Unsplash.
Too bad the article only focuses on the success stories. I for one would be really interested to know how many failed/abandoned side projects they created and how they relate to the successful ones.
 - https://news.ycombinator.com/item?id=1772357
The Nigerian Teenagers Who Built Crocodile Browser: https://news.ycombinator.com/item?id=9787010
One of the teens found his/her way to HN, but all that's in the thread is nitpicking about their website. It would be great if he/she got some engagement and encouragement.
I think this demonstrates 2 very major problems with SSL Certificates we have today:
1. Nobody checks which root certificates are currently trusted on your machine(s).
2. Our software vendors can push new Root Certificates in automated updates without anyone knowing about it.
More content on this rant: https://ma.ttias.be/the-broken-state-of-trust-in-root-certif...
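On point 1, you can at least inspect what your machine currently trusts. A minimal sketch with Python's stdlib (what this returns varies by platform and by how the OS exposes its trust store, and may be incomplete on some systems):

```python
import ssl

# Load the platform's default trust store and list the subjects of the
# root certificates this machine currently trusts.
ctx = ssl.create_default_context()
for ca in ctx.get_ca_certs():
    print(ca.get("subject"))
```

That only shows you the list, of course; the hard part the comment raises is that almost nobody reviews it, or notices when an update quietly extends it.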
EDIT: I googled the first two. "GDCA TrustAUTH R5 ROOT" and "S-Trust Universal Root CA" are both new certificates (~November 2014). The latter is in Firefox already, and is a new SHA-256 root certificate to eventually replace a SHA-1 certificate for an existing CA.
Maybe the system could be changed to one where domains can only be signed by the DNS registrar, or something.
It's pretty crazy that any CA can issue a certificate for any domain. And paying for more validation doesn't help you at all, since it won't keep the "bad guys" from getting an illegitimate one from a cheaper/lazier CA.
I don't mean the simple ability exposed in most browsers to add/remove certs. That still assumes one set of trust that is used globally, which is completely incorrect.
Maybe I don't trust $COUNTRY to handle their root certificate for most uses. Currently we handle that case by removing the cert completely. Trust, however, is not a simple boolean value, and maybe I do trust that certificate for $COUNTRY's official government pages. I should be able to specify that I trust some certificate for some domain (or other, non-domain based use!), but not for others.
As another example, consider a local Web of Trust. Whenever Web of Trust is brought up, people complain about the difficulty of key exchange. Well yes, that's a difficult problem, but there is no reason that it has to be solved for all use-cases before anybody starts using it. Maybe a circle of (usually physically local) friends want to have secure communications. They can share a key in person easily, and so it should be easy to give access to a private forum by simply sharing a key/cert on a USB disk.
We can currently approximate those cases, but it is not well supported, and is certainly not something that most users would be expected to be able to do. We can fix some of that with a better UI, but I'm suggesting a far more fundamental change, because actually solving problems like key sharing will not be easy, and I suspect they will only be solved once we have infrastructure in place. HTTP was successful because it did not require that everybody implements the full, fairly complex specification. Instead, we had a fluid, extensible protocol that allowed anybody to extend it, and that allowed for the development of a wide variety of software.
The problem with traditional PKI (at least as implemented) is that it assumes that we can assign an absolute trust value to anything. In reality, trust is relative, and may in fact have multiple values at the same time. Until software is designed around those realities, it will always be inflexible and insecure for any use case where the needed trust assumptions do not reflect the assumptions made by the authors of the software.
Unfortunately, I'm an old-style UNIX nerd who is fine with using GPG, and I'm not sure what the UI for a dynamic-trust system would even look like. sigh
Essentially he's proposing side-loading a new application under a custom URL scheme, so that browsers will launch a helper app that handles web applications with the following URL format:
web: publickey @ ipaddress / capability
He's planning to develop the helper app based on a sandboxed Node.js and Qt application, which just uses a TCP session to communicate with the server.
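A rough sketch of how such a helper might split the proposed URL format. The field names and function here are my own guesses; the proposal only gives the shape publickey @ ipaddress / capability:

```python
def parse_web_url(url: str) -> dict:
    """Split a hypothetical 'web:<publickey>@<ipaddress>/<capability>' URL."""
    if not url.startswith("web:"):
        raise ValueError("not a web: URL")
    rest = url[len("web:"):]
    # Everything before '@' is the server's public key; after it, host and capability.
    publickey, _, hostpath = rest.partition("@")
    ipaddress, _, capability = hostpath.partition("/")
    return {"publickey": publickey, "ipaddress": ipaddress, "capability": capability}

parsed = parse_web_url("web:abc123@192.0.2.7/read-mail")
# parsed["ipaddress"] == "192.0.2.7"
```

Putting the public key in the URL itself is the interesting part: the client can authenticate the server end-to-end without consulting any CA.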
I think the author might be a bit behind the news on Tunisia, though.
> A flaw in this system is that any compromised root certificate can in turn subvert the entire identity model. If I steal the Crap Authority's private key and your browser trusts their certificate, I can forge valid certificates for any website. In fact, I could execute this on a large scale, performing a man-in-the-middle (MITM) attack against every website that every user on my network visits. Indeed, this happens.
> HPKP is a draft IETF standard that implements a public key pinning mechanism via HTTP header, instructing browsers to require a whitelisted certificate for all subsequent connections to that website. This can greatly reduce the surface area for an MITM attack: Down from any root certificate to requiring a specific root, intermediate certificate, or even your exact public key.
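For reference, the mechanism is just an HTTP response header. A sketch per the HPKP draft (the pin values below are placeholders, not real hashes):

```http
Public-Key-Pins: pin-sha256="<base64 SPKI hash of current key>";
                 pin-sha256="<base64 SPKI hash of backup key>";
                 max-age=5184000; includeSubDomains
```

Note that the spec requires at least one backup pin: if you pin only your current key and then lose or rotate it, returning visitors are locked out until max-age expires.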
related previous articles:
Firefox 32 Supports Public Key Pinning (188 points by jonchang 304 days ago | 100 comments): https://news.ycombinator.com/item?id=8230690
About Public Key Pinning (72 points by tptacek 43 days ago | 5 comments): https://news.ycombinator.com/item?id=9548602
Public Key Pinning Extension for HTTP (70 points by hepha1979 242 days ago | 28 comments): https://news.ycombinator.com/item?id=8520812
We want an open, yet secure web, anonymous at best. With the current setup, that is not so easily possible. Letsencrypt might help, but even with that, there is still someone you need to beg for a signed cert.
Maybe we need to think ahead.
Maybe I should, but I am not going to individually double check every root certificate. I don't think I have the means to do so either.
When I was in school back in the '60s, I had the chance to see the healing effects of lithium before it was approved here in the US in 1970. I saw a man in a florid manic state dramatically improve in two weeks' time; it was kind of magical, and it left a lasting impression on me.
A few years later I happened to be walking in town, and a man stopped me. "I know you. You were one of those students there when I was in the hospital." Only then did I know who he was. I asked how he was doing. He said "I'm doing quite well. Lithium saved my life and I'm still taking it."
Since then I've had the responsibility of treating many people with mood disorders, and I didn't forget what I'd learned. Anyway, lithium is still a godsend for many people, but of course it really isn't a magic bullet, nothing is.
Like all medications it can produce bad effects. I've seen that happen too. Renal failure is a risk, as the article points out. Careful monitoring can prevent some bad outcomes, though not all. Doing what's best requires utmost dedication by patient and doctor to the cause of stability and quality of life.
In the words of Spinoza, "all things excellent are as difficult as they are rare." Success is possible, we just have to find the courage and strive to get there.
Notably, there is enough lithium in the groundwater in certain areas of the US that this "study" has been happening for a long time. El Paso, Texas has high naturally occurring lithium in the groundwater, and is widely reputed to have less violence than comparable cities with less lithium in their water. I haven't read the whole thing, but remarkably, a recent paper seems to have shown this to be true, at least for suicide mortality.
Lithium in the public water supply and suicide mortality in Texas (Blüml et al., 2013)
There is increasing evidence from ecological studies that lithium levels in drinking water are inversely associated with suicide mortality. Previous studies of this association were criticized for using inadequate statistical methods and neglecting socioeconomic confounders. This study evaluated the association between lithium levels in the public water supply and county-based suicide rates in Texas. A state-wide sample of 3123 lithium measurements in the public water supply was examined relative to suicide rates in 226 Texas counties. Linear and Poisson regression models were adjusted for socioeconomic factors in estimating the association. Lithium levels in the public water supply were negatively associated with suicide rates in most statistical analyses. The findings provide confirmatory evidence that higher lithium levels in the public drinking water are associated with lower suicide rates.
Edit: I just realized that the Op Ed linked from the main article mentions the same evidence, although without reference to that particular paper: http://www.nytimes.com/2014/09/14/opinion/sunday/should-we-a...
Aside: If you are a person who uses the word "bipolar" as a synonym for moody or indecisive, I hope reading this will help you understand what actual bipolar mania looks like.
So about eight months ago I started taking a low dosage of lithium, in the form of drops added to my drinking water, in a dosage that amounts to about 2-3 mg per day - similar to the amounts in naturally occurring high-lithium drinking water. (In comparison, therapeutic doses are several hundred mg per day.)
Anecdotally, it might just be placebo effect, but I do feel it has had some effect. I have always been a bit anxious, particularly socially, and I feel that has diminished over this period. However, this experiment coincides with a better exercise regimen, and also simply growing older, so it's difficult to 100% attribute the effect (if any) to the added lithium. It would be very interesting to see more studies on this.
If anyone's interested you can buy these drops as 'trace mineral drops' from the Great Salt Lake.
Coke, on the other hand, lost its cocaine content much earlier, in 1903.
Since then I've taken valproate, which works well and which, so far, I tolerate well. However, there is significant risk to my liver, so I take regular blood tests to watch for that.
Lately I've been feeling physically ill, as if I have been poisoned. I don't know the cause, but I will request a liver function test this week.
I was pleasantly surprised.
Claiming that you don't believe in God is as annoying as those sect guys knocking at your door. I do believe in God, and I find the use of lithium in treating these illnesses a hope (with potentially dead serious side effects: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4456600/), but not the potential meaning of life (?! how can you compare lithium with the notion of God?!).
Well, they kinda are. Therapeutic doses of lithium are disturbingly near toxicity levels.
Please, everyone, let's stop talking such nonsense and missense. There's a framing error at play here, at a very fundamental level, and a whole field has gone down this rabbit hole for far too long. There is no such illness called "manic depression"; there is a symptom complex called "hope-despair spectra dysregulation disorder". The phrase "manic-depression", like the word "harassment", is a confusing misnomer almost deliberately invoked by a language switcheroo, mostly by professionals who are never trained in the original humanisms from which the word originated and is imparted and imported.

As with harassment, which is more clearly expressed as "exhaustion", the term "mania" is more clearly expressed as an assessed "unreasonable and/or extreme hope, leading to reckless energy or cognitive chain investments or behavioural drivers". The term "depression" is simply a prolonged despair, wherein a person is seen to be desperate for air.

Psychiatrists and psychologists who speak of manic depression as something more than a persistent "hope-despair dysregulation" are usually, in my experience, blowing smoke, and owe a duty to assess whether the hope-despair complex is the result of illogic, illmotion, or both, and whether that illogic, illmotion, or both is exogenous or endogenous. The postulations in the DSM are not credible, as the Director of NIMH, the National Institute of Mental Health, asserts in pointing out that the field of psychiatry is terrible at identifying causes, and dresses up symptom complexes and symptomologies to look like mechanical medical dis-eases. There are very few diagnoses that psychiatrists can do, and calling "hope-despair spectra dysreg disorder" (or manic depression, as the DSM calls it) a diagnosis is, in my humble opinion, a fraudulent claim. It's not a diagnosis... it's a symptosis, or [symp]tomosis.
It's also completely imprecise and inaccurate, rather like saying "@phren0logy has a cough" rather than "@phren0logy has a rhinovirus".
Hope-Despair Dysregulation Disorder (HD3), from:
Manic = a state of prolonged hope
Depression = a state of prolonged despair
It's only natural that We should have evolved, have had revealed, been given, and overwritten and at times overridden environmental and social expectancies, and that those should alter the pattern of our hope and despair. The persistence of these patterns can, in the eyes of another, be seen as "abnormal" and an "unwanted deviance from socially integrated expectancy patterns". The response pattern from terrapists is to feed a salt pill to the patient as a placebo, in place of a more obvious sugar pill, so that the patient returns regularly for talk therapy sessions or has a few weeks to stabilize their native sense of the statistics of life, wearing out their own misweighting of cued and observed probabilities. But... this same effect would happen if they were fed Na2CO3, or NaCl. Lithium, i posit, has no effect other than as an off-grid placebo pill to give terrapists time to try to figure out the root cause and failure modes in cognition. i do not yet believe the statistical effects of natural experiments; i have not come across a convincing study, and it's my belief that non-publication bias against disconfirmations of lithium's environmental effects will explain the rest.
As for what to do with people who are thinking about survival rather than thriving, and considering survival failure: tell them they are on the hope-despair dysregulation spectra, and ask them to consider how many years they have left until they reach 100 years old, and set that as their new age. 22? Your real age is not Your chronological age (cage) of 22; it is Your survivor age (sage) of 78. Reinforce it by teaching them the Periodic Element that their survivor age corresponds to, in this case Platinum, or Pt, and ask them to go for physical therapy by going out for a long run with a friend; or, if they have legal woes instead of psychiatric woes, arrange for them to speak with whoever is the cause of their woes in a safe space, rather than aggravating or papering over the lack of ethical calmunity care.
Lastly, read Seligman's Flourish with them, and other works of positive, social, and cognitive-bias psychology. The attempt to use diagnostic language in a root-cause-agnostic way is fraudulent; please stop doing it. It causes far more damage than psychiatrists and other psycholory specialists take responsibility for, particularly as families, calmunities, and institutions abuse the indeterminate, soft, nearly unfalsifiable nature of psychiatric labels as a means of social control over those they consider inconvenient gadflies suffering from too much institutionally-wrought despair.
Also, the DSM Criteria are foolish to apply against certain classes of the population. For instance, with hope-despair dysreg, one of the symptoms is written up as "Flights of Ideas", with some modifiers. Intellectuals and designers cultivate the capacity to undergo "flights of ideas". That's what these people do. Why would You count that as a bullet point toward psycholore.ical sympagnostics, when it is part of their professional duties? That just weakens the whole meaning of the sympagnostic for that whole sector of the population.
Those are my 2 calming sense on the problem. PERMA, Exercise, Resiliency Training, Socialization, Uninterrupted Purpose, Daily Progress all add up to an end to depression; talking to a blank face of a false friend with no power to convene the social world to determine and test the reality described may help tune, slow, or stop survival fail, but only for a time. Inverted ages (Pb-Ar) and Fundamental, sustained purpose mixed with calming human life stage activities will stabilize most, on a complete review of their ethics, i.e. their character. Lastly, if there's loneliness or a reflective solitude involved, You'll want to review and perhaps fix that as well, as the case requires.
Best wishes, everyone. Let me know if You're ever in need of a call to point out how many Years You'd be sacrificing should You go "Canary" prematurely. Reach out to me; i can help you Flag Sentinal instead of losing Your life to self-organized survival fails.
Sent without much editing.
A life partner, one not afraid to get their hands dirty doing what needs to be done for a vision outside the mainstream paths, grow with them through the fails and the folly, and experience two lives in one lifetime. Find someone to remind us of why to be humble through the successes that can blind one to appreciation and the efforts of others. Someone who does not see us for success gotten, but for the inevitable happiness and richer gain from a partnership with an equal of good character.
Children, to see the world through innocent and new eyes. To see value in things we take for granted, and to give us a reason to think of the future as a prospect even though much of our prospecting years may now be behind us.
It has never been a better time to be poor. What we risk with all the other wins beside family is a life without riches. With others we can know ourselves better, be more whole, understand what makes us human and what drives us to betterment of ourselves and humanity.
The ups and downs of life are the most enriching experience: to know genuine empathy and share sincere hopefulness. One can find such experience in family.
While your life might totally suck beyond what you could ever have imagined, after a point there is a sort of humor to be found in the absurdity of it all.
Failure becomes an afterthought, because it simply means ending up right where you currently are - the bottom. This is the classic "nothing to lose" dynamic working in your favor.
Counter-intuitively, the opportunity costs involved seem to matter less and less the more your career and family prospects dwindle. In a weird way, that can be liberating.
However, such thinking can be extremely destructive. It's essentially tantamount to starving yourself as a means of motivation, while at the same time applying a Martingale betting system to life planning. Certainly not for everyone, and seeking out such a state is ill advised. I view it more as a way to cope with a situation one finds themself already in the midst of.
Despite this, a person still needs to have hopes and desires, just like everyone else - even if such things are in direct conflict with any hardcore apathy they may harbor.
Truly believing in what you're trying to achieve, as well as the world of possibilities that it will open to you, is essential - even if it does represent a sort of cognitive dissonance. At the same time, you have to not care about the outcome. The ability to have simultaneous belief in conflicting points of view is an incredible tool to have.
To quote an old trading maxim: "You can't win if you have to."
It is something that you can noodle around on (like a guitar, an electric bass, or even MIDI keyboard software)... go to a pawn shop and grab an old instrument (put on new strings if necessary) and just play a little every day.
Music helps relieve stress, lets your mind relax, brings you to a creative space, and in a few months of noodling every day or every other day you'll find that your experience of music will have changed and that you'll be making beautiful sounds. Very gratifying, very healthy, and very easy. The juggler must learn to use both hands, the artist both eyes, and the entrepreneur both brains.
Exercise helps, but I'd reinforce the need for a balanced, supportive social network outside that. Choose sports with other participants (e.g. racquet sports, or join a team). Not only will the discipline of not letting others down keep you motivated to keep going, but you'll get friends who don't give a damn about your business (in a good way - no need to pretend to be killing it all the time!) and can keep you grounded in normal reality.
If you find success early, whether you believe it or not, put some money away (if you can). And if you find yourself in your middling years and great success eludes you despite your prior success, at least think seriously about moving the needle from the risk side to the responsibility side.
Source: I didn't heed this advice.
Your strategy won't work for everyone, or even most people, because "just do it", whilst very compelling short term, tends to lose effectiveness over time. Instead, seek out something that you really enjoy that coincidentally provides great exercise, removing any problem of motivation and long-term commitment. If hanging out at the gym really is your thing, that's great, but if not, don't beat yourself up; get into something else instead - cycling, karate, trapeze, dance, roller derby, yoga - there are lots of alternatives. The main thing is to make it fun; then you won't need the "I must do this" self-discipline every day, you'll go out of your way to do it.
And as others say, explore creative and cultural activities too. Your mind does not thrive on pure coding and hustling alone, it needs its own free-form exercise too.
While not dead certain, I may have just found a woman who wanted to marry me in 1985, but her grandmother did not approve of me.
I miss her so. I'm going to write her soon; if it's her, I will go visit, but now she is far too old to bear my child.
Several times I have been kissed by beautiful women who made it plain they were mine for the asking but each time I pursued the impossible dream.
My ex did not want to have children. I know why but cannot tell you. Deciding to be with her was one of the most difficult decisions of my life. Now I feel I chose wrong.
Hey Anne, whatever happened to Cheryl? She was one of our vendors back in the day.
She died of some very rare cancer. I am completely convinced that's because I did not kiss her back. Not that I was not interested, but that I was painfully shy then.
He who hesitates does not get to swim in the gene pool.
I've written lots of code and I've made lots of money. Some of my products were huge hits.
Consider Homer's Iliad and Odyssey. What code that any of us write will last that long?
There are 3 things that I practice and feel are guaranteed wins.
1. Fitness - Damn right. Several people have given the reasons, and it's as simple as putting half an hour, 5 days a week, into moving your body. It keeps you physically fit and mentally robust. At times I have realized that my mental activeness is directly proportional to my exercise and fitness level.
2. Family and Friends - Yes, they are the support mechanism. Taking some time out to talk to one of the people closest to you (friend/mom/wife) will help you shift your focus from your business to the people that matter to you. Believe it or not, at some point you will realize that relationships and people matter the most, and investing in them is a worthwhile guaranteed win.
3. Fun - This part is just having fun, doing something that you love. The best part is you can easily fit it into your daily schedule. Just after your office hours, spend 30 minutes unwinding with a hobby that you love (e.g. reading a book, writing in a journal, playing that guitar, singing songs, playing basketball). Life is short, and ultimately everyone makes money so that they can have fun or do what they love, so why not do it every day? It helps you completely detach yourself from your day at the office. Fun every day keeps stress away.
I'm 100% AGAINST doing it for a "guaranteed win" :-)
First of all, there may be a direct relationship between effort and win when you're an unfit twenty-something, but if you get an injury - or just as your body gets older - you may struggle to get back to the level you were at before.
But, even more importantly, if you do sports with an "I have to win" attitude, you'll start comparing yourself to others, and you'll always find someone who is better than you. Just don't start to be competitive. You're doing the sports for fun. Learn to enjoy sport for its own sake, and you may be able to take that attitude towards other things in life.
(I've got a failed startup behind me, and one of the things that kept me sane was regularly going bouldering. Pro-Tip: Get a yearly membership as a birthday or xmas present from your parents or so. Even if you're in serious financial troubles, your membership will be paid for. Huge relief!)
This is the magic pill we've all been looking for, and it's been in front of our eyes this whole time. Exercise and a good diet were the answer all along.
You need a lot of these little victories to have a successful business. But that doesn't stop them from being victories. And when you set your scale small enough, some of them are basically guaranteed.
One thing I learnt the hard way is that the best way to ensure personal success is to plan your life's goals as if you have a 9-5 job with a secure income. Major investments (housing, loans, etc.) and relationships should not be put on hold till the next company milestone. Having a spouse kind of forces this upon you, but getting a supportive parent/mentor to help you plan your life apart from your startup will really help.
A friend once told me that a guaranteed win for them was cleaning their apartment.
For me, exercise is a good one. Also reading. Reading great books on a regular basis is a life goal for me, and if I just put in the time on that one I make progress.
Sport releases a lot of endorphins that stimulate the state of happiness and reduce stress. This is a proven scientific fact. So there's something magic in it for your self-esteem and positivity. Everybody starting up should keep an exercise routine, and startup accelerators should include it in their activities...
It really helps. If, like me, you don't have kids, I think it's one of the best decisions you can make for keeping your sanity during difficult times. Zen meditation and yoga (which mixes both meditation and sport) are other great ways to keep stress at bay.
Just like exercise, the positive effects of meditation compound by putting in the time!
So as an alternative, I'd suggest: Move every day. This can be:
- putting up a basketball hoop and shooting a few hoops
- walking for 45-60min.
- go climb with your significant other
- play with your children (no consoles, actual physical moving)
- do a yoga class
All of these things might not sound like fitness, but they give you a great balance when done every day.
>Sleep well, eat well and get fit. 100% guaranteed wins.
>Also, stop smoking/drinking. Hard battles, guaranteed wins.
In our use case, we've mostly had to deal with handwritten text, and that's where none of them really did well. Your next best bet would be to use HOG (Histogram of Oriented Gradients) along with SVMs. OpenCV has really good implementations of both.
Even then, we've had to write extra heuristics to disambiguate between 2 and z, and s and 5, etc. That was too much work and a lot of if-else. We're currently putting our efforts into CNNs (Convolutional Neural Networks). As a start, you can look at Torch or Caffe.
And for comparison, an OCR application with Tesseract inside: It has a dramatically lower text recognition rate: http://blog.a9t9.com/p/free-ocr-windows.html
(Disclaimer: both links are my little open-source side projects)
When I compared Tesseract to Abbyy, the difference was night and day. Abbyy straight out of the box got me 80%-90% accuracy of my text. Tesseract got around 75% at best with several layers deep of image pre-processing.
I know you said open source, and just wanted to say, I went down that path too and discovered in my case, proprietary software really was worth the price.
I have a hobby project where I scrape Instagram photos, and I actually only want to end up with photos with actual people in them. There are a lot of images being posted with motivational texts etc. that I want to automatically filter out.
So far I've already built a binary that scans images and spits out the dominant color percentage, if 1 color is over a certain percentage (so black background white text for example), I can be pretty sure it's not something I want to keep.
I've also tried OpenCV with facial recognition but I had a lot of false positives with faces being recognized in text and random looking objects, and I've tried out 4 of the haarcascades, all with different, but not 'perfect' end results.
OCR was my next step to check out, maybe I can combine all the steps to get something nice. I was getting weird texts back from images with no text, so the pre-processing hints in this thread are gold and I can't wait to check those out.
This thread is giving me so many ideas and actual names of algorithms to check out - I love it. But I would really appreciate it if anyone else has more thoughts about how to filter out images that do not contain people :-)
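The dominant-color heuristic described above fits in a few lines of pure NumPy. A minimal sketch, with a synthetic toy image standing in for a real scraped photo and the threshold left to the caller:

```python
import numpy as np

def dominant_color_fraction(img):
    """Fraction of pixels occupied by the single most common color.

    img is an HxWx3 uint8 array. A very high fraction (one flat
    background color) suggests a text/quote image rather than a photo.
    """
    pixels = img.reshape(-1, img.shape[-1])
    _, counts = np.unique(pixels, axis=0, return_counts=True)
    return counts.max() / len(pixels)

# Toy image: black background with a small white block, like white-on-black text
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:10, :10] = 255
frac = dominant_color_fraction(img)
print(frac)
```

In practice you might quantize colors first (e.g. right-shift each channel a few bits) so JPEG noise doesn't split one flat background into many near-identical colors.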
If you want to do text extraction, look at things like Stroke Width Transform to extract regions of text before passing them to Tesseract.
It's pretty sad considering that OCR is basically a solved problem. We have neural nets that are capable of extracting entities from images and big-data systems that can play Jeopardy. But no one has used that tech for OCR and put it out there.
As you mentioned ocrad.js I assume you search for something in js/nodejs. Many others already recommended tesseract and OpenCV. We have built a library around tesseract for character recognition and OpenCV (and other libraries) for preprocessing, all for node.js/io.js: https://github.com/creatale/node-dv
If you have to recognize forms or other structured images, we also created a higher-level library: https://github.com/creatale/node-fv
Our particular application was OCRing brick and mortar store receipts directly from emulated printer feeds (imagine printing straight to PDF). We found that Tesseract had too many built-in goodies for image scanning, like warped characters, lighting and shadow defects, and photographic artifacts. When applied directly to presumably 1 to 1 character recognition, it failed miserably.
We found that building our own software to recognize the characters on a 1 to 1 basis produced much better results. See: http://stackoverflow.com/questions/9413216/simple-digit-reco...
With the caveat that none of the stuff was handwritten.
 http://godoc.org/gopkg.in/GeertJohan/go.tesseract.v1 https://bitbucket.org/zaphar/goin/
Examples here: http://funkybee.narod.ru/
Apologies, I am not sure if it's open source.
Unfortunately the native Linux version is a bit pricey: http://www.ocr4linux.com/en:pricing
Otherwise I would use the command line version to help me index all my data.
However, despite that short and distracted development window, we still managed to squeeze out compiler performance improvements that should result in 30% faster compiles for crates in the wild. You can expect even more progress on the compilation speed front for 1.2, due to be released August 6, along with a slew of stabilized and much-demanded stdlib APIs.
Let me also mention that tickets have just gone on sale for the first official Rust conference, a small one-day event in Berkeley: http://rustcamp.com/ We'll be using the experience from this proto-conference as a guide for larger events in the future, so even if you can't go we'd love to get your feedback as to what you'd like to see from a Rust conference.
I don't feel like the docs empower me enough to write multi-threaded code comfortably without the borrow checker spitting at me. Is it just me, or does anyone else feel this way?
I started using Rust exclusively for hobby projects after the 1.0 release, to force myself to learn the language, but I found myself running into compiler panics on nearly a daily basis.
Admittedly, the code the compiler tended to crash on was very macro-heavy, but a goal of rust is safe macros. (And even if the macro-expanded code was invalid, the compiler should just report an error, not crash).
There are currently 207 open issues regarding ICE (Internal Compiler Error): https://github.com/rust-lang/rust/labels/I-ICE
One question though. When is Cargo going to be available on FreeBSD (and other BSD's)? I think these platforms are significant in the server space where Rust would be highly relevant. Having Rust usable there would likely augment uptake of the language.
Open-source Swift is only a few months away, and it already has a much bigger user base and is backed by the biggest company in tech. Rust and Swift share many common traits and kinda look alike as well.
Why should anyone pick Rust over Swift when Swift will be able to do everything Rust can do and is also positioned as a systems programming language? And Swift will also be a full-stack programming language and you will be able to program apps and backends with it.
I feel like Rust is about to get killed. And killed quickly.
LE: indeed, it's quite clear from the mentioned article (http://dslab.epfl.ch/pubs/cpi.pdf). So this provides great exploit protection.
Google will be able to do likewise for NaCl apps.
- Most real-world exploits these days are based on use-after-frees, heap buffer overflows, and other heap-related weirdness, rather than stack buffer overflows. It's nice that SafeStack mitigates that attack vector though (but if you disable stack canaries in favor of it, you actually reopen the door to exploit certain types of vulnerabilities...)
- A (the most?) common method to proceed from memory corruption to return-oriented programming is to redirect a virtual method call or other indirect jump to a stack pivot instruction. SafeStack alone does nothing to prevent this, so it doesn't prevent ROP.
- However, the general code-pointer indirection mechanisms described in the paper, of which SafeStack is an important component, could make ROP significantly harder, because you would only be able to jump to the starts of functions. This guarantee is similar to Windows's CFG (although the implementation is different), but SafeStack makes it harder to bypass by finding a pointer into the stack (either on the heap or via gadget).
- In practice, interoperation with unprotected OS libraries is likely to seriously compromise the security benefits of the combined scheme, because they will store pointers into the real stack, jump directly to code pointers on the heap, etc. JIT compilers are also likely to be problematic.
- In addition, there are more direct ways for an attacker to work around the protection, such as using, as gadgets, the starts of functions that do some small operation and then proceed to a virtual call on an argument. The larger the application, the more possibilities for bypass there are.
- Still, "harder" is pretty good.
Edit: By the way, the point about function-start gadgets calls into question the paper's claim that "CPI guarantees the impossibility of any control-flow hijack attack based on memory corruptions." Also, if you want to guarantee rsp isn't leaked, it isn't enough to keep all pointers near it out of regular memory: they also have to be kept out of the stack itself, because functions with many (or variable) arguments will read them from the stack - at least, I don't see a claim in the paper about moving them. So subverting an indirect call to go to a function that takes more arguments than actually provided (or just changing a printf format string to have a lot of arguments) will cause whatever data is on the stack to be treated as arguments. Ditto registers that either can be used for arguments or are callee-saved. That means frame pointers have to be disabled or munged, and any cases where LLVM automatically generates temporary pointers for stack stores - which I've seen it do before - have to be addressed.
If you do move non-register arguments to the safe stack then the situation is improved, but you still have to watch out for temporaries left in argument registers.
As they say, never read the comments.
Edit: And yes, God Mode was for developers. The media should chill and learn to nerd out every once in a while...
So taxi services had no phone dispatch over there? Like, you couldn't ring the taxi booking line and have them route a taxi to you?
This has been a thing in Sydney for as long as I've been alive --- it probably started up in the 70s. It wasn't great, and sometimes the taxi wouldn't show. But it existed.
Was this not available in SF? Elsewhere in the US...?
Story time: 2 years ago I started to learn frontend web development from various online courses. I had zero technical background and no intention to make money. Within a few months I learned enough to create my first WordPress theme. It was terrible, but it worked. I made a few more themes and never did anything with them - I made like 6 themes and just deleted them a few months down the road. Then I decided to submit one theme to WordPress.org just to see if it would get approved. Themes go through review, and someone would evaluate my code - that's what I was after. Long story short, my themes have now been downloaded over 1,000,000 times and I have turned it into a 6-figure-a-year business.
While I haven't sold a single theme yet, apparently you can recommend hosting and premium plugins that go along with your themes and make a decent income.
For those interested you can visit site at: https://colorlib.com/
If you're interested, I recommend listening to his interviews on the ChangeLog Podcast (https://changelog.com/159/, https://changelog.com/130/, https://changelog.com/92/) where he talks about how he monetised those products.
I'm pretty sure that succeeding in this is not going to be a matter of how many "stars" your project has, but rather a function of the dollar value of the problem your software solves and how well you market the paid product. You should start that marketing now by providing a link to your project :).
Hundreds of tutorials and thousands of posts and mentions later, GitHub eventually contacted me and politely asked me to take down the exchange rates repository, because they were being hammered by people requesting the data - only at this point did it occur to me that I'd created something of genuine value, and (6 months of fretting and tail-chasing later) I opened up a paid option.
For me the key thing was: I never intended to create a business. It was (and is) a labour of love. We've since grown to be the industry-leader for our area - "good enough data" for the startup and SME market - and count Etsy, KickStarter, WordPress and Lonely Planet among our clients.
Although it's no longer truly open source, 98% of our users are still on the Free plan, which will very soon be expanding to include all features (so, no more limiting by price tiers) - this is how I still feel so passionate about it.
I can't wait to publish the next steps in our journey - where we're opening everything up to the community and marketplace. I don't like where the industry is heading (competitive, closed, secretive) and we've chosen to move towards transparency and sharing.
I like businesses built on a core of open source community, because they're in service to the people who are actually building the products, rather than those in the traditional 'upper levels'. This means there's really no "sales process" (which I'm massively allergic to) - apart from the occasional grilling from the accounting department, who may find it hard to trust a business based on open source principles.
Seriously though, the fact that you're asking about GitHub stars is... telling. There are plenty of popular repos on GitHub that I'd refuse to pay any money for, but more to the point, GitHub is a terrible customer experience because it's not for selling software to people. Thus the only thing 'GH stars' tells us is... the number of people that have starred a particular repo.
For a purely open-source project, look at OpenSSL. It's probably in production across a significant portion of the entire internet. But until Heartbleed came along and it came to light that the OpenSSL project was severely underfunded, it was limping along with little sponsorship.
Red Hat is probably the most famous open source business, but unfortunately, if you look at their practices, they're abiding by the letter of the GPL but not entirely the spirit, which they've decided is necessary from a business perspective. So if you're looking to build a lifestyle business based on your open-source project, the question for you is: how comfortable would you be with a lifestyle business based on an entirely proprietary project?
Is this a pipe dream? Chances are, yeah. But do dreams come true? Sometimes :)
* Paid support: you now have an incentive against improving the documentation. Conflict of interest.
* Selling binary builds: your software can no longer be easily recommended and shared by others.
* "Premium features": I'd rather call this 'crippleware'. You're intentionally crippling the 'community version' to give people an incentive to pay you money. That's certainly not in the spirit of open-source.
Frankly, I don't feel software is a thing that should be sold at all. You're always going to be intentionally (and artificially) restricting something to make that business model work - after all, software is infinitely reproducible at effectively zero cost.
Instead, if you absolutely must make a business out of it, offer something like a hosted service (that people can easily replicate themselves, in the spirit of open-source). That way you add something to be sold, rather than taking something from the existing project and asking money for it.
The better option is to accept donations, and put some serious effort into getting those going. I don't really understand why people will spend weeks drawing up a business plan, but for donations they just slap a 'donate' link in the footer of the page without thinking, and then complain after a few months that they're barely getting any donations. Accepting donations requires the same amount of thought to make it work.
EDIT: No bulletpoint lists on HN? :|
Here's what I would suggest. Reach out to a few people from these organizations and ask them for their general views on your project and what their major pain points are. I'm sure they will be more than happy to talk to you about it if they're indeed already deriving great value from it. Once you have established a good rapport (i.e. warming up the lead), set-up a call with them to pitch your vision for the paid product. A call is crucial because email and text can only communicate so much - you can get a far better idea of their domain and problems through a quick 45 min call. Scheduling this should not be a major problem once you have established a good email channel previously.
If you can get about 8-10 people interested in exploring your paid offering you have something that's promising. After that you can think about scaling the business with self-service etc.
Every company that emails you asking you for help is a sales lead. "I'm happy to implement/configure this for you. I charge $X per hour, and what you're asking for is about Y hours of labor." To the client, X*Y is often cheaper than the opportunity cost of pulling an engineer away from other work. Also, don't be afraid to make X >> $100 if nobody else offers the same expertise.
You can do as much of this as you want, without changing anything about how the software is licensed or packaged.
Would you rather make $300k a year writing and selling proprietary software or fail/make $20k a year providing value-added services for an open-source project? This isn't a tough choice for me, but for some people the ideal of open source trumps all.
It's much easier to write software that fits a simple business model, than to figure out how to shoehorn a business model onto an open source project. You can tell which route is the pragmatic one: start from scratch, optimize for easy monetization.
That said, don't let any of this discourage you. It's absolutely possible to create a cool lifestyle business based on an open source project (or anything really). The only way to know if you can do this is to seriously commit to it. If you don't have to support a family it's probably a risk you can take.
2) "some guy" from one of the podcasts I can't think of at the moment who forks Ruby and keeps a stable, supported version for his corporate clients, while dealing with patches & upgrades on his own to his fork
In both of these cases, you can see that it isn't actually "how many users", but "how many corporate users who need a service that keeps software X completely stable and managed".
So I think, from what you've said, that it would be a good thing to look into. You may want to contact the guys from http://www.tropicalmba.com/ for a few pointers - this is right up their alley and they are very willing to discuss.
I would start with a services plan and a beautiful website. If you are not a designer, do hire one. Good design and quality can go a very long way towards achieving your dream.
If your project is an app, you can also go the SaaS (software-as-a-service) way. But beware: building a multi-tenant platform, working on user onboarding, and marketing your site can be 2x to 3x the work of your original project.
The good thing is that once you are there (1-2 years is a reasonable timeframe), this will only grow.
Another quantitative indicator you can use is the number of downloads on the relevant package manager (npm, PyPI, etc.). These indicators only tell you how big your audience is, not whether your project could provide income. However, having a big audience increases the chances of success - you have a good position here.
Check out MinoHubs (https://www.minohubs.com). We provide tools that will help you get started with monetizing your project in several ways very quickly. Let us know if you have any questions or if you're missing anything.
If you make it easy for people to give you money and you have a useful product then there usually will be a small percentage of users who will pay. But I'm a firm believer that to make money you have to put effort into sales.
So I am very sceptical about monetization of community-driven momentum (the moment you close the code, it begins to stagnate and die). Unless you are the leader in your niche, the chances of earning a living from a project are close to zero.
Wordpress or other themes is a different kind of offer.
Others have discussed business models, so it should be a simple calculation of how much you can charge for a model vs. how much it will cost you to run your business; if you can live with the remainder after taxes etc., then I would say do it.
 - https://github.com/joeblau/gitignore.io
I think Nathan's book describes an excellent outline for success in this regard. Basically you can look into "tiers". You give away "free" resources: blog posts and the liberally licensed open source software itself. You can also offer paid resources: a book, perhaps screencasts that accompany the book for a premium price. Workshops and onsite training provide another tier. At the very top is consulting at ultra-premium rates.
This is oversimplified, but this is the idea. The "free" stuff is critical, and builds the basis and "proof" for the paid offerings.
a) 100k/year is not 'reasonable', it's huge in micro-ISV terms. The modal revenue for independent software vendors is $0 (stats from payment processors). Getting there in 2 years is even less common.
b) The number of 'stars' has correlation r=0 with the ability to monetize. I'd even speculate: the more stars, the more ingrained in people's heads it is that 'this is open source, ergo free', and the harder it will be to make money. If you're selling support for something only slightly popular, customers will have no other options - but for support on WordPress, you have to compete with other devs, with $2/hour rentacoder people, with Amazon selling 'teach yourself WordPress in 60 minutes' books, etc.
c) What incentive do companies have to pay for it? You have to be able to articulate it clearly before going down this path. Your wording makes me think you are thinking 'this is my secret sauce, I can't give it away', which is (frankly) very naive. If your idea is even a little bit good, you'll have to stuff it down people's throats to get them to accept it. I don't know of any business that makes money on OS software (on the software itself, not the consultancy around it) without having a dual-licensing model - GPL or commercial. And if that were really a cash cow, Trolltech wouldn't have been sold a dozen times (or so) over the last 10 years...
To judge whether your product has commercial potential, you need to
1) Describe your customers. In detail. Not just 'anyone using a database', but 'medium scale accounting businesses with on-premises case database' (idiotic example, of course). You need to do this in such a way that you can derive a strategy from it on how to find them. E.g., accountants meet at industry events, where you could book a booth.
2) Identify a sample of, say, 100 of them, using the methods from step 1.
3) Start calling them. Preferably after you've taken a sales class (just 2 days will give you life changing insights, I promise). Not emailing, actually talking to people. For the 100 from step 2, identify how many would buy your product.
4) Get sales promises, 10 or 15 or so. People who you can excite so much about your product that they promise (informally) 'yes, I'll buy this if you offer it within 6 months' or whatever.
5) If you can't even do this, you have no (OK, 'barely any') hope of succeeding.
The main skill you need to succeed at an endeavor like this is marketing. The quality of your software, or its popularity amongst the crowd that uses GitHub, is of secondary importance at best. That's not to say you can succeed selling people a crap product, snakeoil style; just that software development is 10 or 20% of your life as a software vendor, at best (or worst, depending on your perspective...)
There are excellent comments in this thread btw. I've bookmarked it.
100,000 stars. Even if you are starred by your average to not-so-average (high or low) developer, there is no clear monetization plan. But given the popularity, ads and affiliates might bring you $1 per star (assuming a star represents 40-50 visits/year).
Think about it. If you have a bridge between Queens and Manhattan, I'm sure the US, NYC, and its people are willing to finance you, pay you, or buy you out - for whatever big price you ask (it might even be unreasonable relative to the cost of creation).
But if you have a bridge between Antarctica and the French Southern and Antarctic Lands (just randomly picked), it'll certainly be an amazing and well-known artefact, but I'm not sure how you are going to finance it (especially since tourism is not huge there).
I started making money in 2011 with a GUI for an open source command line tool. The open source software was available completely for free, but compiling it was a hassle, using it was annoying because of a few serious bugs, and you had to use it from the command line.
I made money by making an existing, free tool available to people who didn't want to use the command line or compile their own software. I charged a modest fee (initially just $5), and people gladly bought my app to solve their problem. Nobody cared about the fact that it was open source and they could have solved their problem for free; they just considered my app a fair deal.
(I still sell this app, MDB Viewer for Mac, but I've since completely rewritten the open source library it depended on)
My company is based on Open Source software (Webmin, Virtualmin, Cloudmin, and Usermin), and it sounds sort of similar to your situation. When we started the company based on this stuff we had several major companies in our target market (and we chose our target market, and chose to focus on Virtualmin rather than other features of Webmin, because that market already knew us and used our software in visible ways) using at least one of our projects, almost a million downloads a year, and my co-founder and I had both been making a decent living doing contract work based on it and writing about it.
Today, Webmin has about 1 million installations worldwide (and has grown to ~3.5 million downloads per year). We make enough money from our small proprietary extensions to Virtualmin and Cloudmin to support three modest salaries. It is not $100k/year for any of the three people working on it, though it's not an outrageous dream to think we could get there... we've had much better years than we're having this year or last year, however, so revenue is not necessarily growing like gangbusters, despite our userbase roughly tripling in the time the company has existed and still growing at a comfortable clip annually. Ours is sort of an open core model, though the core is very large (~500kloc) and the proprietary bits are very small (~20kloc), which may be part of our revenue problem.
I think there are some things you're probably underestimating (not to say it should discourage you, I'm just trying to open your eyes to some challenges you will face that you might not expect):
When you sell Open Source software, support is your biggest value add, even if you don't want to be a support company. Support costs a lot of time and money to do well. Time and money have to be balanced between making things better and supporting existing users on the current version (true of proprietary as well, but proprietary vendors don't have a million people using the free version and expecting help). Growing the free user base (which can be a side effect of having people working full-time on it) can paradoxically lead to less time and money for making the software better. We fight this battle all the time. To make our current customers and OSS users exceedingly happy with our level of support is to severely limit our ability to deliver next-generation solutions. We run on such a shoestring, and compete with such huge players, that it's always a struggle to deliver both (and we fail plenty).
So, plan to hire someone to help you support the software, eventually. If we were comfortable leaving our forums to "the community" and not bothering to have an official company voice present every day helping answer the hard questions, we could increase our own salaries by a lot (we pay our support guy more than we pay ourselves), but I don't know that we'd continue to see the growth we've seen in our user base, which we also value. We make things because we want them to be used, not just because we want to make money.
Get used to having demands thrown at you every day. The level of documentation, completeness, and rapidity of development expected of a product is vastly different from that of an Open Source project, or at least the way you have to respond to it is different... even for users of the Open Source versions. We have over 1,000 printed pages' worth of documentation, plus a couple hundred pages of online help, and still get complaints about our documentation regularly. And we have more "features" than any other product in our space, including the two big proprietary competitors, yet we still get feature requests all the time (and it's harder to say no than to just implement them, which can hurt usability). A million users generate a lot of feedback. It's a very high volume of demands to answer to. Ignoring them pisses people off, saying no pisses people off, and saying yes often risks making the product worse or more complex for the average user, hurting long-term growth. Even saying "Not right now" pisses people off. You're going to piss a lot of people off, even if you're just trying to make the best software you can and make a decent living.
I think what I'm trying to say is, think about it for a while before committing to your plans. If you currently have steady income, hang on to it while you sort out a few details.
Try to firm up what your users would pay for your value add. Try to figure out how many of your users would pay for your value add. The only sure way to do this is to actually have users pay you something for your value add.
Try to figure out how you will automate support (hint: you can't, because automated support almost always sucks; even Google has awful automated support, and they're good at almost everything). At least figure out how you will streamline it and offload it. Have an active forum already? If not, start one. Have a community of people talking about your software already? If not, build one. If an Open Source-based business were a Pokémon, community would be its special ability. So start cultivating it now, even before money is coming in.
Cachet (https://github.com/cachethq/cachet) has 2.6k stars but I know that the amount of installs is actually far higher.
Revenue is the number of people willing to pay you times the amount they're willing to part with. Neither of those can be inferred from the number of stars on a GitHub page (stars probably correlate most with the number of people willing to pay, but there's going to be a scaling factor there that will vary dramatically based on the nature of the project).
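To make that concrete, here's a toy back-of-the-envelope version of the formula. Only the structure (payers times price) comes from the comment above; every constant (users per star, conversion rate, price) is an invented assumption for illustration:

```python
# Toy revenue estimate from GitHub stars. All numeric factors below are
# assumptions, not data; the point is that each multiplier is a guess
# that varies wildly from project to project.

def estimated_revenue(stars, users_per_star=10, pay_rate=0.005, price=50):
    """stars -> users -> paying users -> annual revenue (all hypothetical)."""
    users = stars * users_per_star   # stars typically undercount installs
    payers = users * pay_rate        # only a tiny fraction ever pays
    return payers * price            # revenue = payers x price

# 2,600 stars (Cachet's count at the time) under these made-up factors:
print(estimated_revenue(2600))  # 26,000 users -> 130 payers -> 6500.0
```

The interesting part is the sensitivity: nudging `pay_rate` or `users_per_star` by a small factor swings the estimate by the same factor, which is why star counts alone tell you so little.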
How do you envision your open source lifestyle business once it is up and running? (do you want to be paid to develop software or are you hoping to make a business that "runs itself"?)
There's your answer
You need to provide a service connected to your project; this holds regardless of how many people use your project.
Was it worth it open sourcing your product?
Did you get lot more leads and exposure?
I'm afraid of open sourcing because I'm not sure it will do anything for me, and that I'd be giving away years of work for free.
This was done via ClickJacking and here are the offending scripts/html:
<iframe id="cksl7" name="cksl7" src="http://cobweb.dartmouth.edu/~hchen/tmp.html" style="border:0px;left:-36px;top:-17px;position:absolute;filter:alpha(opacity=0);z-index:99999;opacity:0;overflow:hidden;width:1366px;height:705px;"></iframe>
You can unlike their page here: https://www.facebook.com/randomdirectionsblog
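For completeness, the standard server-side defense against this kind of hidden-iframe clickjacking is to forbid framing via response headers. The header names and values below are the real, documented ones; the helper function wrapping them is just an illustrative sketch:

```python
# Minimal sketch of the two standard anti-clickjacking response headers.
# A page served with either of these cannot be loaded inside a hostile,
# transparent <iframe> like the one shown above.

def anti_clickjacking_headers():
    return {
        # Legacy but widely supported: refuse all framing.
        "X-Frame-Options": "DENY",
        # Modern equivalent via Content Security Policy.
        "Content-Security-Policy": "frame-ancestors 'none'",
    }

headers = anti_clickjacking_headers()
print(headers["X-Frame-Options"])  # DENY
```

Attach these to every HTML response (most web frameworks have middleware for exactly this), and the opacity-0 overlay trick stops working entirely.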
Perhaps this dilemma can be viewed as a typical example of classical economic 'homo economicus' vs. behavioral economics theories. Classical economic theory would say any rational human would obviously choose 20 over 10 nuggets for the same price. But behavioral economics typically takes into account other factors that classical models ignore to better explain our seemingly "suboptimal" decisions.
I think maybe 10 nuggets is a reasonable number for one person or two children to eat, whereas 20 is obviously too much. In a fast-food situation where it's unlikely that leftovers would be saved, people may be choosing fewer nuggets to adhere to the very rational belief that food should not go to waste.
McDonalds has been running national promotions for $5 20-piece McNuggets. While franchise stores aren't always bound to follow national promotions (don't know about the McDonald's franchise agreement), consumer pressure is usually enough to get most franchises to do so.
The phenomenon where the 20-piece costs the same as the 10-piece occurs when the 10-piece was already at or above the price point of the 20-piece promotional price. If it was above, you'll usually see the price adjusted to match the larger quantity promo's price, but rarely see it lowered below.
The franchise will get a rebate against their royalty fees to corporate for the 20-piece, in order to maintain a specific profit level above base food cost. They don't get a rebate against the sale of the 10-piece, so they have no incentive to make it a more attractive offer, as doing so eats into their own margin. National promotions usually have brutally aggressive pricing, particularly if your store is located in a high cost of living area.
You'll see slightly different pricing behavior at a corporate store, but only about 18% of McDonald's locations are corporate-run.
Now, why McDonald's is choosing to aggressively market 20-piece chicken nuggets is something only they know, but may have something to do with the 40% increase in beef prices recently.
Sources:

1. Personal experience managing several different Domino's Pizza stores. Many promotions would be run at break-even or below if it weren't for corporate taking a haircut on their ~10-15% royalty fees, meaning the margins aren't sustainable for non-promo items.
2. http://www.aboutmcdonalds.com/mcd/investors/company_profile....
3. http://www.pricingforprofit.com/pricing-strategy-blog/strate...
4. http://www.bloomberg.com/news/articles/2015-01-15/burger-war...
I've found similar tricks are used in vending machines, particularly with the different varieties of crackers. Inevitably, there will be one set near the top of the machine with the other expensive items, priced accordingly. However, there is also a half row of ever-so-slightly different ones further down, priced at 2/3 the price.
It's all just a matter of satisfying the demand that exists at lower price levels without having to lower the price for everyone. Setting prices using price elasticity assumes you can only offer a single price to all parties. However, cost-conscious shoppers are already going to spend more mental cycles looking for a good deal. By making the more desirable pricing just a little hard to find, you can give them a better deal without butchering your overall profit margin.
If the primary cost of the nuggets isn't in the material, but in the equipment-time and labor-time, then it would make sense that 6 would cost less (can often be fulfilled from leftovers), 10 and 20 would cost similarly (require cooking one new batch), and 40 would cost more (requires cooking two new batches serially.)
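That intuition can be written down as a toy cost model where marginal cost is dominated by the number of fresh fryer batches an order triggers. Every constant here (batch size, per-batch cost, leftover threshold) is invented purely to illustrate the shape of the argument:

```python
import math

# Toy model of the batch-cost intuition above. All constants are invented.
BATCH_SIZE = 20      # assume one fryer batch yields 20 nuggets
BATCH_COST = 1.50    # assumed labor + equipment time per fresh batch
LEFTOVER_POOL = 6    # assume small orders are filled from an existing batch

def marginal_cost(order):
    """Approximate marginal cost of an order, counting only fresh batches."""
    if order <= LEFTOVER_POOL:
        return 0.0                            # served from leftovers
    batches = math.ceil(order / BATCH_SIZE)   # fresh batches required
    return batches * BATCH_COST

for n in (6, 10, 20, 40):
    print(n, marginal_cost(n))
```

Under these assumptions the model reproduces the pattern the comment describes: 6 is nearly free, 10 and 20 cost the same (one fresh batch either way), and 40 costs double (two batches).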
I was surely confused by this pricing, but after reading this article it all makes sense now.
I imagine many people looking at the menu and thinking there is a discrepancy or perhaps an error in the menu, something to surely take advantage of. Most people might not have planned to purchase an ICEE with their meal, but then again who can turn down a "free" large ICEE for a buck? I didn't...
If so, the answer might be that McDonalds price the items this way because empirically this is what yields optimum profit and nobody knows a deeper answer.
In fact even if the pricing model was suggested by someone based on a psychological theory, if the empirical finding was that it was worse, the change would be reverted. So, even if a human suggested the change, they could be only accidentally right --- and again, nobody would be able to tell you the truth of this.
Not many are buying Chicken McNuggets for their entire family because 68% of sales are for a 10pc meal that feeds at most 2 people. The reality is that people are buying chicken nuggets for themselves and/or another person. Not many can stomach 20 nuggets at once, and certainly no one wants leftover McDonald's.
(Granted, as the author concedes, I may be taking that sales data too seriously.)
To me, this seems like a symptom of McDonald's collapsing in the States. I wonder if the author's mistake was searching for a lesson from a nationally failing franchise.
Provocative article, nonetheless! Good share.
The reason 20 pieces cost the same as 10 pieces is to make the customer think they're getting a deal, which will lead them to gravitate toward that item over other items on the menu that are actually lower margin for McDonald's.
That people still buy the 10 piece item at McDonald's is just an example of the fact that though firms are rational, individuals are idiots.
They also employ other gray marketing tactics. For example, on the combo menu they'll put the price for the "small"; however, if you neglect to specify a size when ordering, you receive the "medium" by default, which results in your total being more expensive than you thought it would be. I guess because of their disclaimers, it's not legally false advertising.
And if you can reel in 2-3 people for a "healthy" chicken entree, you can sure bet you'll sell some insanely high-margin soda and fries x2 or x3, which I'll bet pushes the total average ticket margin higher than the 10pc alone.
From there they just kept stepping the price up and discovering it was the same as the 20 pc and that's the equilibrium we are in today.
10 for $4
50 for $6
650 for $8.50 - a big plastic canister of all different sizes and colors
(This is from memory and these may not be the exact quantities and prices, but I'm not far off.)
I only needed a few, but naturally I bought the canister of 650. At the rate I use them this is probably a lifetime supply!
I really don't know. It may be weird, but this sort of pricing definitely isn't rare.
Chicken McNuggets are likely produced by machines, since they all come in the same n shapes (n a small integer). Whether I order ten or twenty, I feel like the McDonald's employee opens a bag of them into a fryer without counting them out. Maybe they always cook the same number in a batch to keep the result uniform. Maybe the difference in cost for the two box sizes is trivial. And maybe the other half of the batch has a non-trivial probability of being thrown out, because another order of 10 McNuggets won't come within the "safe" food serving time, so they may as well give it to me for an extra $0 to make it feel like I got a deal.
Or maybe it's just something Marketing thought up, even though it reduces their profits. I'm under the impression that in other countries (Canada?) the retail price of 20 McNuggets is greater than 10.
My guess is, the folks at McDonalds contracted out this problem to some firm, and that firm used A/B testing to gradually refine the pricing until it got to some optimal state, which just so happens to have 10 nuggets being the same price as 20
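A crude sketch of what that A/B refinement process might look like, with a completely made-up demand curve (none of these numbers are real McDonald's data):

```python
import random

random.seed(0)  # deterministic for the example

# Invented demand curve: probability a visitor buys the 10-piece at a
# given price. Purely hypothetical, chosen so demand falls with price.
def buy_probability(price):
    return max(0.0, 0.9 - 0.15 * price)

def ab_test(prices, visitors=10_000):
    """Split traffic evenly across candidate prices, keep the top earner."""
    revenue = {}
    for p in prices:
        sales = sum(random.random() < buy_probability(p)
                    for _ in range(visitors))
        revenue[p] = sales * p
    return max(revenue, key=revenue.get)

print(ab_test([3.0, 4.0, 5.0]))  # winning price under this toy demand curve
```

Run repeatedly with new candidate prices bracketing the current winner, and the process converges on whatever the local revenue optimum is, without anyone ever articulating a theory of why that price works, which is the commenter's point.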
If you don't see this at fast food just look around the next time you are at the grocery store. More and more the general shopping public assume buying more equals less cost per piece without checking the math.
At KFC, if you place your order as "2 breasts, 2 legs, 2 wings, 2 thighs," you pay 4 dollars less than placing it as "8 pieces chicken only." The same exact order, but when they punch it in as 8 pieces it costs more.
That's incredibly bold.
How can I trust a business when it hides behind an anonymous registrar? If something goes wrong with my order, I'd have no way to even determine who is behind the company.
Of course, the free speech argument is mostly irrelevant. There are plenty of ways to share anonymously: on other people's domains, over Tor, or using just IP addresses. If my privacy were important, I wouldn't rely on GoDaddy to protect it.
Our privacy online and off is already being deeply threatened on many other fronts. If you think this proposal is bad for our privacy and bad for our internet, please take a moment and email your thoughts to the working group.
I wonder if a decentralized type of DNS, like blockchain-based DNS, will ever take off. If we even have an acceptable alternative right now, I suppose the first meaningful step towards adoption would be baking support in to a major browser.
Alternatively, one could enter information that looks plausibly valid but is in fact completely invented. How often does one receive articles in the mail or phone calls to the whois contact points anyway? As far as I've experienced, any communication is to the email address. I suppose it depends what the penalties are if you're somehow found out.
I do not, however, like that companies can be totally anonymous on the Internet. It's not like the average person checks out the people behind a company before they buy some commodity from them, but I do whois a domain if I'm suspicious, and a common finding is that most use anonymous registrars. Even serious companies use anonymous registrars nowadays, which is weird; or maybe I'm the only one who thinks it's important to know who the people behind a company are before you do business with them.
I think there are some major misunderstandings around what ICANN are doing with WHOIS privacy.
ICANN have pretty much always required that registrants provide registrars with accurate contact information. ICANN required that registrars periodically escrow this data with an escrow provider (Iron Mountain, usually, though there are now more).
When you use registrar-provided WHOIS privacy, the registrar is still able to escrow the correct contact information. This is not the case with third-party WHOIS privacy providers. The difference now is that, due to the demands of law enforcement agencies, they're now requiring that information be validated and verified.
Third-party WHOIS privacy services always existed in a legal grey area, whereas registrar-provided WHOIS privacy did not. Even before the 2013 RAA came in, you were risking having your domain being taken from you by using a third-party provider and providing their contact information to your registrar as it meant that the registrar had inaccurate contact information and thus could not provide accurate information to the escrow provider.
Before the LEAs got all antsy about this, the WDRP emails you get from your registrar, giving you a list of domains and their WHOIS data and a warning of the consequences of providing inaccurate data, were the most ICANN required in practice. It was an honour system, and the requirement to provide accurate data - which has always been a requirement - wasn't actively enforced. All that's changing now is that ICANN are actively enforcing a part of the registrant contact they previously had been laissez-faire regarding.
The requirement on third-party WHOIS privacy providers is to normalise their situation so that they have the same requirements to record information correctly and escrow it that domain registrars have already had for ages. And it's not that onerous a requirement: actually implementing an EPP client is orders of magnitude more difficult than writing the code needed to do data escrow: https://www.icann.org/en/system/files/files/rde-specs-09nov0... - you can implement that in an afternoon. The accreditation process for a WHOIS privacy provider is nowhere near as horrible as it's being made out to be. All you need to do is show that you can accurately escrow data.
Everybody's so late to the party on this one. The registrar constituency in ICANN fought pretty hard against this. If you think what ICANN are requiring now is bad, the LEAs were demanding much crazier stuff during the negotiations. If you're an EU citizen or using an EU registrar, you're even better off, as EU data protection law meant that some of the requirements of the RAA were illegal in the EU, so EU-based registrars are able to get an opt-out of certain requirements of the RAA. We still do have to validate, verify, and escrow contact details associated with domains we manage, however.
>10yr NFSN client
For example, individual owners of Canadian .ca domains can have their contact info hidden, whereas corporations can't. Similar policies are in effect in a number of other countries, as well as .eu.
Will these countries need to change their policies so that individuals who have ads on their blogs will have their contact info exposed? Will they have to change the way they respond to requests for disclosure?
Or does the ICANN policy only apply to gTLDs?
This is bad. Very bad. The NameCheap email probably gave a lot of people the wrong first impression about what ICANN's proposal really means. Seriously, it sounded like they were just complaining about their bottom line. And since a lot more people use NameCheap than NearlyFreeSpeech, not many people are going to read the more thorough analysis and urgent call to action that the NearlyFreeSpeech article contains.
If anyone around you has read the NameCheap email, please tell them to forget about it. Tell them to read this article instead.
WHOIS is an extraordinarily valuable protocol with a heritage dating back to the ARPANET days. As an example, for quite a while we've had this ideal of the semantic web we're trying to move towards, but in practice each website is its own special snowflake with more concern given to legacy rendering in Internet Explorer than making sure that contact information is easily findable and semantic. But it's mostly okay, because if I really need to contact someone there's this almost 40-year-old protocol which gives me unfettered access to information such as a technical contact email and an address.
Many registrars don't seem to pay much attention to the quality of their WHOIS records and most people or businesses probably don't give it a second thought or check the records after registering a new domain. But they should; and I applaud ICANN for their efforts to uphold the quality and integrity of WHOIS.
That said, the right to freedom of speech implies that one should have the ability to disseminate ideas with complete anonymity. ICANN's proposal would completely undermine this, which is unacceptable.
I think there is space for a middle ground, where ICANN can ensure that the WHOIS records aren't what amounts to a blatant lie in the case of anonymous registrations (i.e., the registrar providing their own details as the contact information). The current situation is pretty bad: if I want to contact the owner of such a domain, all I can reasonably expect is for any email sent to be blackholed by the registrar. I'm not talking about attempting to deanonymise the owner of such a domain, merely the idea that a domain is a named endpoint with an owner who is contactable through freely available means.
Imagine if ICANN created a new class of domains where it was made explicit in the WHOIS that the owner wished to remain anonymous, but nonetheless provided accurate information such as a pseudonym and a means of contact without violating their privacy. This means of communication could be some form of email hosted by a trusted third party, or potentially something more esoteric such as a GPG-encrypted message embedded in the bitcoin blockchain.
This would preserve the correctness and utility of the WHOIS database while respecting the rights I believe ICANN have a responsibility to uphold.
Individuals have privacy rights. Businesses do not. The EU is very clear on this. The European Privacy Directive covers individual privacy. The European Directive on Electronic Commerce covers business privacy online. They're very different.
We have built a ride share matching system:
Our goal is to make use of unused seats in cars by matching drivers and passengers. We do not put any extra cars on the road (as Uber does); we match passengers to drivers on their daily commute.
Our Biggest Problems:
1. We are quite successful at converting people who are posting to online classifieds etc. We have strategies to get these users. We grew week on week by as much as 50% but then we hit a negative growth week. In order to continue growing, we need to expand into other areas.
We also have an un-balanced market. Demand for rides is much higher than rides offered.
2. We have not figured out how we will make money yet. Our initial hypothesis was a transaction fee.
The passenger and driver will be traveling together on a regular basis. Once we match them, we run the risk of being disintermediated.
You could make an argument that a payment directly into the bank account is more convenient than cash, though I am not sure that is a strong enough incentive.
3. Our churn/attrition is very high; there is no need for the system once people start traveling together. We have churn built into our model.
May be a bit out of your area, but I'm curious about your thoughts. Even though we are in "Silicon" Valley, there has been very little new semiconductor investment. I'm the co-founder and CEO of REX Computing (http://rexcomputing.com), a new semiconductor startup working on a super energy-efficient processor architecture. We've actually raised seed funding, and I'm interested in what you think about the semiconductor space in general (and its future in the valley), plus ideas on how we can thrive as a very low-level hardware company in a primarily software world.
Edit: One other thing I should note is that we are also big on software! We're utilizing a lot of open source projects to help build up our compiler and other software tools. Obviously hardware without software to run on it is pretty useless.
Pitch: We're replacing congress with voting software. We are running 70 candidates in the 2016 Congressional elections on our platform, if any of them get voted into office we'll take all bills before congress and put them on our site where each voter in that district gets one authenticated vote.
Question: In your experience, what's the most effective way for B2C company to educate users that you even exist?
I know that signups and conversions are an art, but more than all of that, just telling people that you have something new that they may not be searching for but could still dramatically improve their life. We will take any demographic that will have us, so we aren't picky on that front.
We have 100% week-over-week growth during election cycles, and 10-20% when it's not, so we know the message is received, we just want to get more people in the top of the funnel.
The potential market for automated micro-farming (backyard farming) is huge, but it will take a long time to reach its potential. My question is, at what point would AutoMicroFarm (http://automicrofarm.com/) become attractive to investors (both YC and others)? Would 10% weekly growth for a year be key, or something else?
Two and a half years ago, we AutoMicroFarm founders had an interview with you, and you decided not to invest, saying it was difficult to see how AutoMicroFarm would generate the kind of growth startup investors are looking for. However, YC invests with an infinite time horizon and is not afraid of risky-looking companies (http://blog.samaltman.com/new-rfs-breakthrough-technologies).
So what would YC or other investors like to see before investing?
We are building a platform that allows anyone with a mobile phone to earn a living by performing discrete tasks.
Our platform aims to break down complex jobs into easily actionable items, that can be performed easily by anyone, anywhere.
The first vertical we are applying this to is reservations. The Loft Club (https://useloft.com) is a service that makes reservations for you at amazing restaurants every month on your preferred day, saving you the decisions and the hassle. Through our platform, we centralize restaurant recommendations and assignments, before farming out the logistics of making and managing them to our agents.
Our question is: should we work on building out the generic platform and expand quickly into other verticals, or focus on building out The Loft Club and owning this space first? We already have customers paying us for The Loft Club with the mild publicity it has received thus far.
Zhuang and Derrick
ZeroDB is an end-to-end encrypted database that lets you operate on data while it's encrypted.
Demo video: https://vimeo.com/128047786
Question: We want to sell to large enterprises (financial services, healthcare, saas providers, etc.). The common advice is to start with SMBs/startups and get traction that way before going upmarket to enterprises. How can we balance that with the fact that what SMBs are asking us for is very different from what enterprises have told us they'd like in a fully-baked product?
We are about to finish building a table reservation system (think OpenTable or SeatMe but on steroids). Although it's a "Me Too" product, we will offer features that our competitors can't or won't, e.g. bigger API control for restaurants, various hooks to extend the service such as food delivery, pre-paid reservations, and ticketing for tables to name a few. Essentially, we will offer an iPhone/iPad app for restaurants to manage their reservations/orders, while their guests can use an iPhone-app/Android-app/Search-Engine/Restaurant's-Website to make those reservations.
I have a question about a launch/pricing strategy:
- Is it sane to do a freemium model for our product? For example, restaurants would be able to download our Manager app in the App Store for free, but it would have limited offline features. If restaurants want to accept online orders, then they must get that feature via in-app purchase. If a restaurant wants to incorporate discount cards, then it's a different in-app purchase. This logic applies to all the different features.
- Or should we go through a regular sales process, i.e. sign up restaurants one-by-one, charge them via check/credit-card/etc and escape Apple's 30% cut?
Thanks in advance and I hope it will be helpful for other startups that are in a similar position.
We're MinoHubs (https://www.minohubs.com), and we build commercial and community tools for software projects. The barrier to building a successful software project is high: apart from writing the code, you need to build a community and potentially set up some commercialisation (backing, licenses, support, etc.), which isn't an easy task.
We provide customizable hubs that give projects:
- Paid support - ability to offer on-demand consultation to businesses and developers.
- Licensing - ability to sell one time and recurring licenses to businesses and developers (coming soon).
- Backing - monthly contributions. In return, backers get more visibility in Discussions.
- Powerful discussions with voting.
- Announcements - emails and notifications to project followers.
From Kevin's initial feedback (https://news.ycombinator.com/item?id=9746206), we understand that we need to be better at:
1. Leading the user through things to do after creating a hub.
2. Showcasing the benefit of using MinoHubs.
We're working on those right now.
Our challenge is that, as Kevin also pointed out, we have a lot of features, but we're also trying to appeal to an audience that would use different combinations of them: open source software, commercial software, or projects that just want to use the community features.
How do we reconcile that users want a wide variety of functionality with the issue that this might present too many features for us to convey concisely?
Coincidentally we started sending out beta invitations last night to the first group of people on our list before a planned public launch next month. Our recommendation engine is built on Google App Engine, which should (we hope) allow us to scale. My office-hour question for Sam and Kevin would be: What advice do you have for us at this stage?
What we do:
We're building a community of vehicle data (http://shadenut.com). Mechanics and DIYers will be able to look up any piece of information they need to work on their car directly from their phone while still under the hood (ex. torque specs, TSBs, fluid types/capacities, etc). As a developer I've seen the positive effect that StackOverflow has had on our industry as a knowledge base and am trying to do the same for the automotive industry. The data will be crowdsourced and 100% free to use.
Our biggest problem is that the product is not really usable unless it contains EVERY piece of technical data about a model, after which it becomes tremendously useful. The most common feedback we get from technicians is that they'd love to use it and contribute once it's a complete database (as long as it's accurate), but they wouldn't switch from the paid competitors until then. The data is all available, but there's simply too much of it for a small team to manually import. Our current strategy is to start with a select few models and incentivize technicians to make their own entries.
However, I'd love to hear how Kevin and Sam would solve this or from others in the HN community who have faced similar problems.
I have a solid engine, nothing launched yet.
My question is: why would you not fund this project right now? What could I do to improve my odds of getting funding?
Pitch: We're ATLAS, and we plan to launch extremely small cubesat payloads (<100 kg) into low Earth orbit on demand.
1) How many numbers would we need to crunch to convince angels to invest? Rockets of this size aren't something we can bootstrap without a little financial support.
Right now, the only way to get a cubesat into orbit is by ridesharing on bigger rockets as a secondary payload. The problem with this is they're not assured to reach a preferred orbit and are at the mercy of the scheduling of the primary payloads. NASA currently has a backlog of ~50 cubesats that need to get into orbit, as well as the many upcoming launches (including SpX this Sunday). We are currently working on the RFP for the Venture Class Launch Service however we may not have the resources to fully complete it by the deadline (13 July). We plan to market this service to Universities as well as hobbyists and government space agencies.
Help needed - We think that either the government or certain enterprises will pay for this data (since they already are spending money on such technology). What is the best way to validate these channels?
ACe here from Painless1099 (www.painless1099.com). We automate tax withholding/filing for anyone earning 1099 income (think: freelancers and Uber drivers.)
We're thinking through growth specifically right now and are chewing on whether to go the B2B route or the B2C route. Different implications for both regarding scale and revenue obviously. We'd be stoked on a bit of help figuring out which to tackle first and how to make headway!
My strategy thus far has been to find startups that need help doing their devops and help them automate their deployment using Tasqr. Finding customers this way has been slow, but I've gotten to learn quite a bit about how the product fits within a continuous integration workflow. I am starting to feel a little financial pressure to change my approach and scale my outreach, though my existing users, unsurprisingly, really like my "do things that don't scale" approach :-)
What are some signs that it's the right time to scale and chase bigger chunks of the market?
Today, this market niche is fragmented with high search costs for consumers. My marketplace will make it much easier for consumers to find and buy the products in this niche. Once off the ground, the marketplace will be a key source of customers for the merchants.
Consumers like the product, but the merchants are difficult to get on board. I find that the prevailing view is that the status quo is 'good enough'; merchants are conservative, and very few are early adopters.
Things I'm doing to address this:
1) Price for growth -- pricing based on a small flat fee that the merchant pays per transaction to align with value delivered, with first X transactions free (obviously I would prefer to charge a % of revenue, but that's a very, very hard sell with these particular merchants).
2) Provide the merchants with tools that help them run their business (i.e. give them reasons independent of the marketplace to use the app)
3) In-person visits to merchants. These are valuable for many reasons, and are only partially a sales call. These visits will always be something I do, but they don't scale enough to create a marketplace.
What strategies and tactics do you suggest to get merchants into the marketplace, to build up the supply-side?
We built ObjectiveFS, it's like Dropbox, but for servers. We have users running our shared file system in production, and are getting great feedback.
Our current challenges are user growth and upcoming competition from Amazon EFS.
We would like your feedback on what we can do on our website (http://objectivefs.com) or additional things we can do to get more people to start our free trial and to address the Amazon EFS competition.
About a year ago I created an eSports app that lets fans of the game DotA follow and watch their favorite teams live and on the go. It's been super fun seeing my side project grow and have users in the community volunteer to help with designs and language translations.
The biggest tournament of the year (http://www.dota2.com/international/announcement/) is coming in a few months and I would love to talk about different ways to capitalize on this.
I'm thinking about starting a company that extends the open-source concepts to the biology/pharmaceutical sphere. The concept is to make a modular, drug-producing microbial strain, and release that to researchers/industry under a permissive licence (e.g. bio equivalent of GPL). Monetization comes about by offering manufacturing services which cut the pain of 1) scaling and 2) getting "good manufacturing practice" regulatory clearance for clinical testing, and ultimately full consumer product.
Do you think there's VC interest in funding these sorts of ideas, which have a somewhat riskier business model and will ultimately extract a lower margin, but have a chance of changing the "way things are done"?
We are Tiedots (http://tiedots.co), a networking platform that provides you tailored information about other attendees every time you go to an event. This way we reveal the most valuable leads and suggest the best way to approach them, saving you time and increasing your business opportunities.
The biggest challenge so far is building a solution that can provide relevant connections. Accuracy is key, and we're working on a semantic web solution, since we've been testing the solution manually with around 100 event attendees.
How would you determine relevance? Any other ideas?
PS: We're not a matching solution. Networking is about leads, not matching.
The problem: our focus is divided between two types of customers (we've even created a separate landing page for each type):
1. Product/Engineering teams - they get asked a new question every day that attempts to keep tabs on the health of the project. Questions are a combo of post-mortem style questions (but asked as you build product) and around prediction markets (larger n make better predictions) (https://www.getsubcurrent.com/product)
2. HR/Employee Engagement - users get asked a well researched question every 2 weeks. Instead of long, annual surveys, you now get to keep a pulse on morale and culture. Participation is higher since it only takes a single click in your already existing tools to respond. (https://www.getsubcurrent.com)
We have a number of customers using our free beta - most are using it for option #2. A very small number have connected with #1, and while we think it has a lot of promise, we haven't talked to enough people to know how it might need to change to achieve product/market fit. We are at a crossroads of needing to pick one to focus on, because the distraction is making it difficult for either to progress.
We built a prototype last week and went to stores and a retail conference this week. We are having problems convincing stores to adopt our product, as they are very concerned with shoplifting.
I am the creator of https://www.spqrs.com
The goal of Spqrs is to have a platform for the debate of ideas. As Sam noted in https://twitter.com/sama/status/610494268151431168 most smart people are wary about commenting on sensitive issues.
This is unfortunate as the Internet is a great place to discuss issues with people who may have a vastly different view, and in the process really examine why you think the way you do.
Spqrs provides a service similar to Twitter but allows you to follow hashtags as well as people, and has a 1000-character limit instead of 140, better allowing you to make a point. The defining feature is that all usernames are pseudonyms, so you can avoid threats to yourself or your livelihood based on what you say.
Right now this is a paid service. I am planning on charging $9/year. My biggest problem right now is finding subscribers. Any feedback and suggestions would be appreciated.
We're from Mise. We are an online marketplace and meal delivery service for the signature/best dishes from professional chefs in the Bay Area. There's a face and a story behind each dish. We do free weekly delivery to the SF Bay Area, including San Jose and the Peninsula.
We operate on a revenue share model. Chefs source their own ingredients, pay for kitchen rental, cook dishes, and earn 70% of everything they sell. We apply our 30% towards delivery, personalized packaging, copy, and kitchen administrative fees.
We'd love to talk about obtaining that 10% week-over-week growth. We're launching in 2 weeks and have a lot of orders (but a good amount are on us and going out to influential members of the community). How do we grow that into paying customers? And is it too risky to keep giving out free product when you also have to balance the returns of the suppliers/chefs themselves?
I'd like to discuss two things:
1- Signal vs. noise in online communities: since you've been heavily involved in Reddit (Sam), I'd like to know your opinion on curating content vs. having an algorithm sift through the noise. We see companies like Netflix and PH (allegedly?) combining big data with human curation and having a lot of success, so my question is: should online communities invest in curation or big data? Is there a trend towards one or the other?
2- Monetisation: I'm having a hard time monetising this community. Since I'm looking to bootstrap it, it's crucial that I get monetisation right, so I was wondering if you had any tips on monetising communities?
We provide a free solution for event organizers who are tired of doing customer service for their attendees: they embed our widget as easily as embedding a YouTube video. Other conferences have already hopped on, like Traction Conf; check it out in action: http://www.tractionconf.io/accommodations
Let's just say that getting more events one by one isn't hard and our next step is to partner directly with ticket providers (SeatGeek, Ticketmaster) and do some sort of revenue share deal on accommodation sales.
Brief overview: I would like to market local, weekend yoga retreats to professionals in the oil and gas, energy, and finance industries. No long absences from work or family, and no long-distance travel. The startup is set up in Houston, but can operate in any city in the USA. Lots of different marketing and advertising methods are being tried (online groups, directories, online forums, linkedin, etc.), but I am pretty much throwing things to the wall and seeing what sticks. Any suggestions are appreciated. Thanks!
startup: bodyhugs: hug your body with movement and care
local health and wellness retreats in the USA
The team right now is investing itself in areas such as product, marketing, and architecture. We want to launch a product so we can begin testing our hypotheses, but we also want to go to battle sufficiently prepared. The latter requires significant effort in team building, which would detract from a product launch. What should we do?
Problem: People generally buy web hosting once every few years, and there are very few channels beyond Google and word-of-mouth to capture people at the moment they are considering purchasing. The competition on Google is astronomical (one of the highest PPC areas at ~$20/click), and organic rankings are filled with spam sites touting the highest-paying companies with very good SEO (hello, CNET). I've been trying for years to get my SEO up to that level without success; I'm stuck on the 2nd/3rd page and have been for ages. It's like purgatory. I've tried to build other sources of traffic through PPC and CPM, and none have really panned out that well. I've tried creating great content, and it has been OK in some niches; for example, my blog has become the go-to source for high-performance WordPress hosting information. I'd like any ideas on what I should try next or how to improve what I'm currently doing.
I am the founder of https://ManualTest.io. My app automates manual testing: it generates and runs integration tests by recording and replaying users' actions on their websites. Manual testing is still done every day (by developers or not), so this could be a huge time saver for them.
My app recently became available on the Chrome Web Store, but it has only a few users, despite quite positive feedback. My question is: I am not sure whether the lack of users is because (1) I haven't done enough marketing/SEO/etc. to get it in front of people, (2) I need to build more features before users would find it useful enough to try, or (3) it is just not something users want.
If it's (1), I should stop coding and start focusing on letting people know about my product. If it is (2), I have a few very useful features still waiting to be done, but they could take at least weeks to complete. Without enough initial users, I am not sure which features users want most, or whether the current set of features is enough for now, in which case I should focus on letting users know about it (so it is (1)).
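For readers curious about the record-and-replay approach, here is a minimal sketch of the idea: user actions are captured as structured events, and a replayer later dispatches them to a browser driver. Everything here (the event names, the `FakeDriver` stand-in) is my own illustration, not ManualTest.io's actual implementation or API.

```python
import json

# A recorded session is just an ordered list of structured events.
# Real tools capture these from DOM listeners in a browser extension.
RECORDING = json.loads("""
[
  {"action": "visit", "target": "https://example.com/login"},
  {"action": "type",  "target": "#email", "value": "user@example.com"},
  {"action": "click", "target": "#submit"}
]
""")

class FakeDriver:
    """Stand-in for a real browser driver (e.g. Selenium); logs each step."""
    def __init__(self):
        self.log = []
    def visit(self, target, value=None):
        self.log.append(("visit", target))
    def type(self, target, value=None):
        self.log.append(("type", target, value))
    def click(self, target, value=None):
        self.log.append(("click", target))

def replay(recording, driver):
    """Dispatch each recorded event to the driver method of the same name."""
    for event in recording:
        getattr(driver, event["action"])(event["target"], value=event.get("value"))
    return driver.log

steps = replay(RECORDING, FakeDriver())
```

In a real tool the driver would be a headless browser, and assertions would compare replayed page state against the recorded run.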
The goal is to scale it across the US (and beyond, to any English-speaking area). Doing cost-effective marketing is key. How do we get the word out? How do we improve the site and experience so much that players tell their friends?
We put effort into SEO, building a large database (the largest?) of string and racket specs with the goal of attracting some of the core fans of the sport (we've started to get some clicks a day). There are a ton of other "social" and related ideas; the question is how to select which one(s) will work best.
Question: We have an awesome product that schools love. It's time to sign up as many middle schools and high schools as possible before the new year rolls around. As a one or two man show, what advice could you give about approaching these schools and how to expose our product to this market? Thanks!
Background: I'm building freedom.biz, which is currently a course for retail business owners who'd like to take their business to the next level. I sent out a survey to those on my interest list, and it became clear that I couldn't personally fulfill all their needs. However, I know people who can.
I'd love to build a company that connects vetted experts with the business owners who need them. I've seen startups in this realm, but they all feel generic and unfocused. What do you think would be a competitive advantage in this realm? What would you like to see that's not out there right now?
Hey guys, our platform unbundles apps' and websites' most essential features and transforms them into interactive Cards, similar to Google Now Cards. However, the entire architecture is designed to be an open platform, meaning anybody could come and build these interactive Cards using our language, REL. We believe that by unbundling applications/web services we can seamlessly start connecting different pieces of the web together and create a more fluid and unified internet experience: an experience with an intelligent fabric that grows with our needs, preferences and expectations to help us make the right decision at the right time. I'd love to hear your thoughts on Relevant. Thanks!
I think that the next big step in medical technology will be the rise of software making medical decisions. This doesn't pose a big technological problem, but it does pose a large regulatory problem.
I have experience getting cloud-based software through the FDA as a "medical device" and have overcome some of the most common hurdles.
I have an idea on how (and the ability to implement) a product to reduce the regulatory and monetary barrier to entry for this type of software.
My question is, in the current market, does this have any shot of getting funding? The product could never be used without the approval of the FDA, an expensive process. Is this a non-starter for most funds?
Question: how do we sell this to investors? I'm a designer and the CEO. I'm good at making a product that people love but I'm bad at fundraising. We are out to make only high quality products, another hard sell to investors because high quality takes more time (money).
We have a great product that fans will absolutely love, we just need help getting it out there.
We made a trivia gameshow for mobile devices. Everyone plays simultaneously once a day at 11 AM PT / 2 PM ET. Players see the exact same questions, so it's like a live, multiplayer, interactive version of traditional TV gameshows.
We have ~300 MAUs and ~50 DAUs and solid retention, but to get that up to 1,000 DAUs should we be more focused on trying to trigger organic sharing within the app or top line from press, blogs, reddit, Facebook ads, etc? Is our user base too small to even know whether our organic sharing is really working?
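On measuring whether organic sharing is "really working": one standard yardstick is the viral coefficient, K = invites sent per user × invite conversion rate, where K > 1 means each user brings in more than one new user. A minimal sketch, with made-up numbers (these are not this app's real figures):

```python
def k_factor(users, invites_sent, invites_converted):
    """Viral coefficient: (invites per user) * (invite conversion rate)."""
    invites_per_user = invites_sent / users
    conversion_rate = invites_converted / invites_sent
    return invites_per_user * conversion_rate

# Hypothetical example: 300 MAUs sent 150 invites, 30 of which converted.
# 0.5 invites/user * 20% conversion -> K = 0.1, well below self-sustaining.
k = k_factor(users=300, invites_sent=150, invites_converted=30)
```

With a user base this small, the confidence interval on the conversion rate is wide, which is part of the answer to the question: tracking K over a few hundred invites gives a rough signal, but small samples make week-to-week swings mostly noise.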
Pitch: Our application decreases labor costs by precisely scheduling hourly employees to fulfill business demand. By preventing over and under-scheduling, we've been able to show a 10% decrease in labor costs with early customers.
Question: What wisdom do you have about "go-to-market" strategy in the retail space? We have startups as clients that are eager early adopters, but to cross the chasm to sustainable growth it seems that we will have to focus on retail companies and going to many trade shows. What can we do now to prepare?
We are working on a new version of http://www.muusical.com. The new version is a significant change: it is going to be a free Spotify, powered by a crowdsourcing platform where users add the music and metadata.
I would love feedback on a strategy for approaching investors who are wary of music startups.
If you're interested, read more here too: https://angel.co/muusical
iOS Download link: https://itunes.apple.com/us/app/i-boating-gps-nautical-marin...
Our biggest pain point: Distribution
Memaroo is a web research dashboard, designed to make iterative web searching organized and more efficient. Memaroo records your search history into different projects, which can be accessed from anywhere -- so you can search for things on your phone and then continue your research later on your desktop. Projects can also be shared with other users, allowing collaborative searching and result sharing.
Memaroo is an improvement to an established search paradigm. But people are comfortable in that paradigm, despite its flaws.
How can I get potential users to break their existing search habits and try something new and possibly better?
We are CodePicnic (codepicnic.com), a platform for sharing, running and showcasing code through a browser. We help people and businesses improve their demos, API documentation, or anything that needs their users to run and try some code online.
We'd love to improve our "getting there" process. We've been interacting with users here on Hacker News, Reddit, Product Hunt and other sites, getting better and increasing our usage, but we still feel it's not enough right now. Our first users love us, the service and the potential, but perhaps there's something glaring we aren't doing well in order to become better known. It's a long process, we get it, but the more we learn, the better.
I also believe this is an important matter that many other startups would love to learn about.
One of the problems I am facing is how to pay them back, and how to check whether a view was genuine or was done by some bot.
Sorry to all if this comment is inappropriate.
Office Our provides a portal for investors to interact with their top 5 potential investments. "Separate the wheat from the chaff."
At Office Our an investor creates a post (or "bulletin") inviting the community to pitch their startup. Users vote on the "wheat" and the top 5 earn the right to receive a response. The investors can then manage their bulletin and follow-up outside of our platform.
For this we charge a simple, flat $5 fee to each investor per potential investment per month on an annualized basis in the form of credits which are distributed by each of the user's votes in batches of baker's dozens. "Investing - simplified"
I look forward to your feedback.
The code delivery pipeline consists of issues, IDE, code hosting, CI, code review, configuration management, Continuous Delivery (CD) and a PaaS service. Code hosting is a first step and getting all the rest right is a lot of work. Services working on getting the IDE right are Koding, Nitrous.io, Cloud9, CodeAnywhere, Codio and CodeEnvy. And I suspect that GitHub Atom is running in a web-browser so they can effortlessly offer it online in the future. For configuration management you want to integrate with Chef, Puppet, Ansible, Salt and Docker.
At GitLab we offer CI and CD via GitLab CI. We hope for a multi-cloud future where organizations will deploy to different cloud providers. They will use PaaS software that spans the different IaaS providers. Cloud independent PaaS software offerings are CloudFoundry, OpenStack, OpenShift, Kubernetes, Mesos DCOS, Docker Swarm and Flynn. We want to ensure that GitLab is the best option to do code collaboration upstream from these offerings.
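As a purely illustrative example of the CI and CD steps in such a pipeline, a minimal GitLab CI configuration might look like the sketch below; the job names and `make` targets are assumptions for illustration, not a prescription:

```yaml
# Minimal illustrative .gitlab-ci.yml: build, test, then deploy on master.
stages:
  - build
  - test
  - deploy

build_job:
  stage: build
  script:
    - make build

test_job:
  stage: test
  script:
    - make test

deploy_production:
  stage: deploy
  script:
    - make deploy
  only:
    - master
```

The point of a multi-cloud PaaS target is that only the `deploy` step changes per provider; the rest of the pipeline stays identical.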
Pretty sure this product is just so you can store your code/repo for your project using Google's cloud services. It's part of the larger whole of their cloud offering.
This article is only slightly more sensible than claiming that S3 is a GitHub competitor because you can git clone over HTTP.
Does Google have too many siloed product managers? Maybe you can only advance up the corporate ladder by releasing new products, and fuck all if they get killed later, because you got your promotion?
No clue what the cause is. It just seems weird looking on from the sidelines.
Food for thought...
Hint: source code is not GitHub's value, just like books were not Amazon's. GitHub's true value is something Google is profoundly bad at.
For my personal projects, I'm fine with my own git server on a cheap VM, but for work I've been really happy with GitHub's issue tracker and org membership administration (with which we use OAuth heavily for internal tools, several of which queue background computation). GitHub issues are much easier for tracking job submissions from analysts than integrating with the company email, and developers prefer it.
I looked at GitLab a year ago. I liked it, but it was a little funky (avatars weren't working; obviously not mission critical, but little shit like that erodes confidence -- either support a feature or don't, but never half-ass it, because I'm not going to admin a server when I've got a chorus of analysts bitching about it being hacky). GitLab people: this was a DigitalOcean prepared-image version of GitLab, in case you're listening.
Version control is really the most minor and easy to replicate part of Github's value proposition.
That's not a feature of this product, it's just how git works.
No one seemed to like it. Heck, Google didn't seem to like it enough to give it any love, just check out the UI on the site: https://code.google.com/hosting/search?q=label%3aPython
> This Beta release of Cloud Source Repositories is free and can be used to store up to 500 MB of source files.
No thanks! I still remember what happened with AppEngine.
2. No built-in issue tracker or wiki (?)
Dropbox? There's a clone. Pinterest? Clone. Everything. Then they dogfood it, and if there's more interest they gather up more resources to inevitably pitch the idea to Marissa Mayer, who then plays with it, designs the business case for it, and approves a proper budget for it.
If the product is good, then the news leaks or they launch it. After a while, if the Google audience doesn't like it, they cut it loose.
Which goes to say... any time some investor asks you what happens if Google comes into your space, you should say: good.
Are they saying it is not secure?
Note: Cloud Source Repositories are intended to store only the source code for your application and not user or personal data. Do not store any Core App Engine End User Data (as defined in your License Agreement) in a Cloud Source Repository. To use a hosted Git repository with a Cloud Source Repository, you must first open an account with GitHub or Bitbucket (independent companies separate from Google). If you push source code to a Cloud Source Repository, Google will make a copy of this data, which will be hosted in the United States.
When a movie like Iron Sky has no problem being shown in German cinemas with the swastika left untouched, because it's clearly art, then it should be fairly obvious to Apple that banning historically accurate representations in historically accurate interactive art is far overreaching; if not legally, then ethically.
It's like these companies just woke up suddenly, had a conference call and without a hint of discussion, analysis or feedback started enforcing moral revisionism. It reeks of dishonest, cheap PR.
What's next on the agenda?
The saddest part is that this has totally taken over the discussion of the shootings in South Carolina. The US is unique in that 9 people gunned down in cold blood somehow turns into a discussion about a flag?
"Our position is thoroughly identified with the institution of slavery - the greatest material interest of the world. Its labor supplies the product, which constitutes by far the largest and most important portions of commerce of the earth. These products are peculiar to the climate verging on the tropical regions, and by an imperious law of nature, none but the black race can bear exposure to the tropical sun. "
It astounds me that there are large numbers of whites in this country who think the Civil War was about anything other than slavery. The Confederate flag represents an evil institution and the evil intent of white Southern power brokers 155 years ago.
I don't have an opinion as such on Apple's decision but let's not pretend that the Confederate flag is anything other than a symbol of overt racism.
This is plain ridiculous.
Can someone get Taylor Swift on the case please?
I don't understand the ban on the historical apps - the Civil War did happen, and the Confederate flag was used.
Are Nazi flags and symbols banned from WW2 games?
When I saw it, I wasn't surprised. There are Civil War games where it makes total sense for the flag to show, and there are probably a few tasteless "The south will rise again" things that never should have been allowed on in the first place.
But it takes a lot of people and time to figure that out for each app on the store, and when it comes to this kind of stuff, Apple doesn't like spending lots of people and time.
Blanket bans are so much easier to implement.
Quite disappointing, but not surprising. And they'll probably reverse parts of it within days. Or new games will slip through and people will forget about it.
Or do we expect monopolistic censorship to be the new norm of the future? Disgusting.
I totally get and agree with removing the Confederate flag from state flags in the United States -- the Confederacy failed. But must we deny that an important part of US history happened?
The only reason these seem incompatible is that iPhone owners can only get apps from Apple.
If you want both Apple and users to have freedom of choice, lock-in is the real enemy.
(Side note: lock-in also goes hand-in-hand with DRM, which goes hand-in-hand with surveillance: if the user isn't allowed to see what code they're running, and the software company isn't allowed to disclose what the government made them do, then the user can't know how their device is bugged. Cory Doctorow explains nicely how fighting lock-in and DRM is good for political freedom, too: https://vimeo.com/123473929)
While what Apple says about privacy is admirable, end-user control is still a serious problem.
If anything it will feed into the paranoid narratives advanced by those who truly believe in the symbolism of the Confederate battle flag, triggering the Streisand effect, or a close relative of it.
Is it time for HN to review whether the upvotes-vs-comments penalization-heuristic still makes sense? It's feeling a little ad hoc and brittle to me.
It's Apple's playground; they can do whatever they want. This seems ham-fisted to me, but I understand the desire to simply eradicate all evidence. They're not in the business of historical accuracy. They're in the business of selling stuff, and bad feelings about imagery interfere with selling stuff.
Pretty stupid, anyway.
Removing "unnecessary" references (ones without any historical context or other legit justification) might be more reasonable.
American Civil War games are not racist, nor is representing the Civil War or any other conflict in a game. What is next? Scrubbing history books of any offending flags or words?
I feel this is another sign that we are headed towards a culture that does not tolerate anything that might offend anyone, an intolerance of intolerance.
If the flag is shamed out of the public zeitgeist, in the same way that the n-word or the c-word have been, then that is a symbolic victory for those on the side of civil rights.
Whatever you think about the actual meaning or symbolism or historical context of the flag is beside the point.
Arguments about the importance of "heritage" fall flat because symbols do not teach history; they simply stand in for particular narratives.
I suspect this is a poorly handled auto ban of anything with the flag and not the intended result.
That said, this is silly.
Apple should respect their developers and let them use historical symbols used by the opposing sides.
For me though, this is nothing new for Apple, and it's why I don't like their software in general. As RMS said, roughly, "Apple puts the user in a prison. It is a beautiful prison though."
Some people embrace the beautiful prison for its simplicity and ease of use. I suppose that's their choice, but what will they do when they wake up and realize they hate the new warden, after they are so tied into the ecosystem?
I still think those who embrace FOSS now will be at a huge advantage as time progresses and the nanny mentality of companies such as Apple and Microsoft becomes more prevalent, and users of those will be at a large disadvantage. RMS is a man ahead of his time and only time will prove him right. As a matter of fact, I think that's part of the reason why MS is trying to get more ground in the open source community, because they understand that FOSS is actually becoming a threat these days, and they are trying wildly to stop the haemorrhaging.
In Apple's defense though, I do view them as the lesser of two evils, and would gladly push OSX/iOS on users rather than Windows/Windows Mobile. At least it's unix under the hood, and we can see most of the source code.
Islam, on the other hand, while associated to some degree with terrorism (largely by the definition of "terrorism", some would say), surely does not exist for the express purpose of cultivating terrorism. Moreover, even to the extent Islam is associated with terrorism in the public consciousness, it does not also convey a message of hate to a particular minority group, as do symbols of the confederacy.
There will be people who disagree with both of these points, but I submit that the former point is much more compelling than the latter. And, more to the point, the public reaction simply reflects, I think, widespread agreement that this is the case.
It seems like the solid flag is now pretty much only used for shock value and racist advertisement, without the historical context.
I liked what Dave Winer had to say about his experience with it:
Though Nintendo won't be happy and will shut down your video, since you use their IP, as has happened many times before. Some months ago, someone did the exact same thing with the Unity engine, and his YouTube video and website vanished within two days.
Nintendo's upcoming NX console (successor to the Wii U) will hopefully be more powerful than the PS4/X1 at the end of 2016. And hopefully we get nice reboots of Super Mario 64, Mario Galaxy, Mario Kart and Zelda.
Edit: I found the HN story from 3 months ago: https://news.ycombinator.com/item?id=9276605 -> https://roystanross.wordpress.com/super-mario-64-hd/
-- the website now reads as follows: "The project is no longer playable, or downloadable in any form. I received a copyright infringement notice on both the webplayer as well as the standalone builds. Which is fair enough, really. In light of Nintendo recently making a deal to release some of their IPs on mobile platforms, it's probably not in their best interests to have a mobile-portable version of Mario 64 sitting around. In any case, I didn't really expect for this project to get so popular, and was hoping it would function primarily as an educational tool and a novelty. (...)"
But this is really just a typical game mod. Someone made some models for Mario and coins and put them in various UE4 tech demo scenes.
That's not to say it isn't cool. It just isn't extremely interesting from a technical perspective, beyond the amazingness of UE4 in general.
- all the environment assets were taken from the Unreal marketplace
- all the character actions were scripted using blueprints only
What about trademark? Could the author sell it?
See what happened to Super Mario 64 HD (an attempt at a remake with unity): https://roystanross.wordpress.com/super-mario-64-hd/
It's odd that Toru Koremura's reaction to lonely elderly deaths was to go into the corpse-cleaning-up business, rather than starting a social outreach program for the lonely elderly. I understand there's dignity in how we treat the dead, but it doesn't solve the problem at all.
This article scares me.
This is probably especially challenging with people who don't have relatives or the relatives never check up on them.
In other words, by abandoning our elderly, we're sending a clear message to our children that human life has no inherent value if you can't produce. No wonder children in industrialized nations can't seem to feel a sense of belonging.
She comes home with many different stories. Some people are incredibly wealthy but their family doesn't want anything to do with them so they basically wait to die alone. Some people are poor and mentally ill and are basically living in dumps with rodents running around. Some people do have good situations in which they are cared for and can die surrounded by loved ones.
My wife is the only person I've met who wants to die young (around 40 or 50). Death is usually hidden from our lives, especially when we're young, and only comes up infrequently.
While some people who are poor do use their service thanks to government subsidies, a lot of people don't, and there is no public service for the elderly. Stories like this will become much more common over the next 20 years as a huge proportion of the Japanese population passes on.
The "epidemic of loneliness" has spread to many places. If you haven't read "The Lonely American" I implore you to do so if you care about this topic at all. An incredibly sobering read.
"In today's world, it is more acceptable to be depressed than to be lonely, yet loneliness appears to be the inevitable byproduct of our frenetic contemporary lifestyle. According to the 2004 General Social Survey, one out of four Americans talked to no one about something of importance to them during the last six months. Another remarkable fact emerged from the 2000 U.S. Census: more people are living alone today than at any point in the country's history: fully 25 percent of households consist of one person only.
In The Lonely American, cutting-edge research on the physiological and cognitive effects of social exclusion and emerging work in the neurobiology of attachment uncover startling, sobering ripple effects of loneliness in areas as varied as physical health, children's emotional problems, substance abuse, and even global warming. Surprising new studies tell a grim truth about social isolation: being disconnected diminishes happiness, health, and longevity; increases aggression; and correlates with increasing rates of violent crime. Loneliness doesn't apply simply to single people, either: today's busy parents 'cocoon' themselves by devoting most of their non-work hours to children, leaving little time for friends and other forms of social contact, and unhealthily relying on the marriage to fulfill all social needs."
Here's an article for Britain:
could i die alone without any great effort to do so having grown up on the internet?
i have seen promotional material for teaching seniors computer skills.. and there seem to be so many similar services and efforts that a search was unable to lead me to the specific video..
the internet is filled with a mess of stuff but within there are many communities covering very many interests
why imagine yourself alone?
>For none of us lives to himself alone and none of us dies to himself alone.
I'm not joking. I keep wondering if I should introduce my grandma to such a game, but I live far away and they all seem too complicated. Would be happy about suggestions.
 I don't want to pick on anyone, but there are already some great examples in the thread.
Also, Safe Browsing, DRM, search suggestions, Telemetry and Health Report can be disabled in the preferences UI. No need for sensationalist about:config protips for that.
There is a bug opened (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=748897), but as far as I know, no simple solution exists yet. You can change the user agent with an extension to keep it identical with the most popular Firefox version, but then you have to manually keep it up-to-date.
http://kb.mozillazine.org/Browser.safebrowsing.enabled
"Firefox 2.0 incorporates the Google Safe Browsing extension in its own Phishing Protection feature to detect and warn users of phishy web sites."
Mozilla's new "social" features don't have a turn-off option in the Preferences. You can disable them by going to "about:config", creating the preference "social.enabled" (it doesn't even exist by default) and setting it to false. Mozilla provides no easy way to do that. This add-on takes care of those convenient little omissions.
Obviously, Mozilla is doing all this to tie users to their mothership and make it harder for them to leave. It's not like users were crying out for "Pocket" integration in the browser.
"media.peerconnection.enabled = false": WebRTC leaks your IP when you use Tor/VPN; test it with ipleak.net.
"beacon.enabled = false": blocks https://w3c.github.io/beacon/ analytics.
I'd also recommend add-ons such as uBlock and NoScript if you use a VPN.
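For convenience, the prefs mentioned in this thread can be collected into a user.js file in your Firefox profile directory, which Firefox applies at every startup. This is only a sketch: the pref names are the ones quoted in the comments above (current for 2015-era Firefox) and may be renamed or removed in later versions.

```javascript
// user.js -- drop into your Firefox profile directory; applied on startup.
// Pref names as discussed in this thread; verify them in about:config first.
user_pref("media.peerconnection.enabled", false); // disable WebRTC (IP leak through Tor/VPN)
user_pref("beacon.enabled", false);               // disable navigator.sendBeacon analytics
user_pref("social.enabled", false);               // disable the "social" integration
```

Note that user.js overrides any change you later make in about:config, since it is reapplied on each restart.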
It is important to remember that malicious websites and malware in general may harm your security and privacy in extreme ways: malware can compromise PII, website credentials and financial information, and can use your webcam and microphone to photograph/film/record you for blackmail/revenge-porn purposes, and so on.
For context, please see the relevant Mozilla bugs about SafeBrowsing privacy concerns, linked at the end of this comment. tl;dr: Firefox must set a cookie for SafeBrowsing, but it uses a separate cookie jar for SafeBrowsing, so Google cannot tie the SafeBrowsing activity to anything else you do related to Google or their services (which is the biggest concern here). They can learn a limited profile of your browsing activity, along the lines of "Random user x often uses their browser between 9am and 5pm on M-F".
The Safebrowsing implementation is specifically designed to be privacy-preserving. It uses a Bloom filter to implement fast lookups in a minimally sized hash table of known malicious URLs. The only time a full URL you browse (actually various hashes of multiple prefixes of the full URL, including the full URL) is sent to Google is when a prefix of it collides with a known malicious URL, in which case the URL must be sent to Google to resolve whether the URL you are trying to visit is actually malicious or just a false positive from the Bloom filter. Yes, the hashes are unsalted, so it would be possible for Google to check whether you were trying to visit some pre-determined URL ("were they trying to visit www.thoughtcrime.org?"), but only if it collided with a known malicious URL.
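A toy sketch of that prefix scheme (the URLs are hypothetical, and this simplifies the real protocol, which hashes multiple canonicalized URL expressions; the flow is the same):

```python
import hashlib

def url_hash(url):
    """Full SHA-256 digest of the URL (the real protocol hashes canonicalized expressions)."""
    return hashlib.sha256(url.encode()).digest()

# Hypothetical blocklist. The browser never stores the full list:
KNOWN_BAD = {"evil.example/payload", "malware.test/dl"}
full_hashes = {url_hash(u) for u in KNOWN_BAD}          # held server-side
local_prefixes = {url_hash(u)[:4] for u in KNOWN_BAD}   # 4-byte prefixes shipped to the client

def check(url):
    d = url_hash(url)
    if d[:4] not in local_prefixes:
        return "safe (local miss)"  # the overwhelmingly common case: nothing leaves the machine
    # Prefix collision: only now is the hash sent to the server to disambiguate.
    return "malicious" if d in full_hashes else "false positive"

print(check("evil.example/payload"))
print(check("benign.example/home"))
```

So Google only ever sees hashes whose prefix happens to collide with a known-bad entry, which is the property described above.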
It would be helpful to know what the average rates of collisions and false positives are to get a sense of how much of an average user's browsing history is leaked to Google through Safe Browsing - can anybody from Google comment?
Links:
https://bugzilla.mozilla.org/show_bug.cgi?id=368255
https://bugzilla.mozilla.org/show_bug.cgi?id=897516
https://code.google.com/p/google-safe-browsing/wiki/SafeBrow...
Isn't reader an offline functionality?
If you have WebRTC enabled, any website can determine both your local IP address (e.g. 192.168.1.1) and your globally-addressable IP address. The combination of these is essentially unique, and can even be better than cookie tracking or browser fingerprinting.
It's possible to disable WebRTC in Firefox, but AFAIK not in Chrome/Chromium.
As for Firefox Hello and Pocket integration, you can turn these off if you want, but I'm 99% certain that they don't actually send any data about you unless you actually use them.
- Reader mode is confirmed not leaking data. No need to disable it.
- There is a way to stop leaking the browser history to Google while keeping Safe Browsing.
* both tested using Fiddler
https://history.google.com/history/ and https://plus.google.com/settings/endorsements etc.?
The political candidates are the worst.
As a daily bicycle commuter and motorcyclist, the only rule I follow is that I am invisible when on two wheels. So I ride in a way that makes me safe, and that usually means doing things that most people would probably find dangerous.
In over 15 years of daily commuting (yes, all through the winter, too) I've been hit half a dozen times. The majority of those accidents were intentionally caused by the car driver; only a couple were truly faultless. None of them were the result of the driver not seeing me; they were all the result of the driver behaving badly.
A reflective jacket or spray isn't going to do ANYTHING if the driver decides that they own the lane and they're okay mowing you down to get it. That to me is the big flaw with any conspicuity safety measure, it relies on drivers actually being aware of the road around them and honoring your use of it. At least around here in DC, those two things are seldom present.
Most riders are foolishly naive about their safety. Traffic laws aren't going to keep your head from bouncing off a hood, and a reflective vest isn't going to make the driver put down their cell phone and pay attention to the road.
There are two schools of thought to making cycling safer:
1) Make cyclists brighter and more armoured.
2) dedicated infrastructure.
Option 2 is much more costly and harder politically, but is the only school of thought worth taking seriously. Look at places such as Amsterdam and Copenhagen, where cycling is common and safe (1). Do they rely on helmets and glowing things? No they don't. Lots of ordinary people cycle in regular clothes on dedicated, separated cycle lanes.
Yes, you'll be safer if you stand out by being brighter than everyone else. But new and interesting ways to ramp up the brightness wars are a frivolous distraction from what cyclists in London need. You should not need to "look like cross between Darth Vader and a Christmas Tree" (2) in order to ride a bike.
At lot of the current infrastructure is terrible:
Advanced stop line? You mean that white mark on the road with a minicab over it.
Cycle "superhighway?" You mean that blue stripe underneath the buses and trucks.
Instead of this stunt, maybe they should focus on building cars, and particularly trucks, that are not unsafe by default. All this talk of blind spots obscures the basic fact that this is first and foremost an engineering problem, and most importantly, you cannot turn the defects of your vehicle into the responsibility of other road users.
If your vehicle isn't safe, it cannot be driven. The solution is certainly not to tell everyone else to just watch out because you can't see shit left and right and man is this thing large and heavy.
Woah there. Hold up right there.
The safety of ALL road users is on the backs of ALL road users.
It's not uncommon in London, when one of the (on average) one-a-month cyclist deaths is reported, to see comments such as "the cyclist was wearing a helmet".
Yet the helmet didn't save the cyclist, because the cyclist was crushed by a fully loaded construction HGV tipper truck.
This idea that cyclist safety is 100% their responsibility is part of the root cause of the problem.
Cyclists are one of the most (if not the most) vulnerable demographics of road users there is, and it should be the responsibility of other road users to help protect them.
Failing that, it should be the responsibility of those who provide roads to ensure that the infrastructure itself protects them (segregated cycleways).
But creating an idea in which "Cycle safety is the cyclist's responsibility" is plain disgusting when every damn month another cyclist is in a morgue, regardless of whether or not the cyclist wore high visibility clothing, had lights, wore a helmet, etc, etc.
And there is my issue with Volvo's "Life paint"... it shifts the blame for the continued stream of fatalities onto the cyclist.
Do you want to know where the real problem is? Try this: of the 8 fatalities on London roads this year, 7 were caused by HGV construction vehicles, even though such vehicles make up less than 5% of all London vehicular traffic.
Here's one from Monday... this week! http://www.standard.co.uk/news/london/cyclist-26-killed-in-b...
Being covered in reflective spray paint will do nothing against a system that pays HGV drivers by the job count and doesn't enforce the many existing rules about vehicle safety, driver training... and in the recent case where a driver was convicted, the company that hired him didn't even check that he had a valid licence.
Perhaps if Volvo really wanted to make a big difference to the safety of cyclists, they'd get heavily behind the proposed designs for safer HGVs for cities: http://lcc.org.uk/articles/lcc-challenges-construction-indus...
A can of fluorescent paint is not going to help much. Most of these accidents happen during the day anyway.
The fucktards driving at night in full-black clothing, without lights and reflectors, music blasting in their ears and wearing no helmet on the road, instead of the bike lanes, will not take notice of the spray (or the fact that their behavior is endangering themselves).
Now guess which group of bikers gets hit by cars more often?
(Disclaimer: I had multiple last-second-saves with said fucktards while peacefully driving around)
An example is http://www.amazon.de/Reflective-Stickers-Tapes-Motorcycle-Co...
Speaking as a cyclist, I would like to paint my bike with the permanent variant and perhaps my clothes with the temporary one.
What's the research say?
Cyclists need front and rear lights, and front and rear reflectors. On top of that the most useful reflectors a cyclist can have are on the pedals and on the wrists. These help when a cyclist is turning; and the pedal reflectors clearly show drivers that they're approaching a cyclist.
More than that and you risk the "Christmas Tree Effect" - it's tempting to think that more is better, but you risk just confusing the driver who then doesn't take appropriate safety measures.
Being able to see them better is great but even if you know exactly where they are you still don't know what they're going to do because they don't follow the same rules of the road.
If Volvo understood cyclists better, they'd choose a quote like, "I'm a driver and I hate life paint. Who do you think you are looking all flashy and important?" You gotta work with the tribal dynamics, not against them.
However, none of these make cycling (especially in London) safe. I wouldn't cycle in London any more as it's just too dangerous, but I did for years. I always wore high vis and a helmet and obeyed the rules of the road, and I still had far too many close calls and incidents with other vehicles.
If you want to see how tragic just one of very regular London cyclist deaths is then this is on iPlayer for the next week: http://www.bbc.co.uk/iplayer/episode/b05y18wv/an-hour-to-sav...
Furthermore, Volvo say that cycle safety is the cyclist's responsibility. There's more: Lifepaint is one of the many products that can aid visibility but cannot prevent accidents caused by the individual or other road users.
Also, wtf did they do to the site to make the text not selectable?
I need some help getting MP to work with WebRTC. If you're interested: https://github.com/klaussilveira/ioquake3.js
There is a node.js multiplayer server included in the repo that works half-decent. :)
(I'm playing with just the keyboard so far... but I can see the end coming soon.)
Ahh, the good ol' days.
shoots a couple of shotgun blasts
jumps into the lava
yup, good 'ol quake
Great stuff here. Thanks for sharing.
I just spent 20 minutes watching the demo run through :-)
i can hear the zenimax lawyers stampeding now though...
Works great on an Acer C720 chromebook. Fantastic!
People there are re-doing the same experiment over and over until it gives them the result they want, and then they publish that. It's the only field where I've heard people saying, "Oh, yeah, my experiment failed, I have to do it again." What does it even mean that an experiment failed? It did exactly what it was supposed to: it gave you data. It didn't fit your expectations? Good, now you have a tool to refine your expectations. But instead, we see PhD students and post-docs working 70-hour weeks on experiments with seemingly random results until the randomness goes their way.
A lot of them have no clue about the statistical treatment of data, or about making a proper model to test assumptions against reality. Since they deal with insanely complicated systems, with hidden variables all over the place, a proper statistical analysis would be the minimum expected to extract any information from the data; but no matter, once you have a good-looking figure, you're done. In cellular/molecular biology, nobody cares about what a p-value is, so as long as Excel tells you it's <0.05, you're golden.
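The "repeat until it works" failure mode is easy to quantify. Here's a minimal simulation (my own toy example, assuming a fair-coin null effect and the usual normal approximation for the test) showing how rerunning a null experiment until it "succeeds" inflates the false-positive rate well past the nominal 5%:

```python
import random

random.seed(0)  # deterministic for the demo

def z_score(n=100):
    """One null 'experiment': n fair coin flips; return |z| for the heads count."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return abs(heads - n / 2) / ((n ** 0.5) / 2)  # sd of heads is sqrt(n)/2

def significant(z):
    return z > 1.96  # two-sided p < 0.05 under the normal approximation

def publishable(max_tries):
    """Rerun the same null experiment until it 'works' or we give up."""
    return any(significant(z_score()) for _ in range(max_tries))

trials = 2000
honest = sum(publishable(1) for _ in range(trials)) / trials  # near the nominal 5%
hacked = sum(publishable(5) for _ in range(trials)) / trials  # several times higher
print(honest, hacked)
```

With up to 5 retries the chance of a spurious "significant" result is roughly 1 - 0.95^5, about 23%, even though every single experiment is pure noise.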
The scientific process has been forgotten in biology. Right now it's basically what alchemy was to chemistry.
I'm very happy to see efforts like this one. Sure, they might show that a lot of "key" papers are very wrong, but that's not the crux of it. If there is a reason for biologists to make sure that their results are real, they might try to put a little more effort into checking their work. And when they figure out how much of it is bullshit, they might even try to slow down a little on the publications and go back to the basics for a little while.
I'm sorry about this rant, but I've been driven away from a career in virology by those same issues, despite my love for the discipline, so I'm a bit bitter.
STK33, for example, is definitely implicated in cancer through a wide variety of mechanisms. It is often mutated in tumors, and multiple studies have picked it up as having a role in driving migration, metastasis, etc.
This doesn't mean we can make good drugs to it.
Making drugs is hard - they need to be available in the tissue in the right concentrations, often difficult to achieve with a weird-shaped, sticky molecule. They need to have specificity for the tumor, they need to have specificity for the gene target(s) of interest. They need to be effective at modulating the target.
More importantly, though, the drug is modulating a target (gene) that is involved in a biological system that involves complex systems of feedback control, produces adaptive responses, and otherwise behaves in unexpected ways in response to modulation.
In my experience this is usually underappreciated by most drug discovery strategies, which merely seek to "inhibit the target" as if its involvement in the tumor process means we can simply treat it as an "on-off" switch for cancer. This assumption is asinine, and of course will (and does) lead to frequent failure. STK33 is not an on-off switch, and attempting to treat it that way will likely result in a drug that does nothing.
One swallow does not make a spring. With a belief like 'one replication is enough', I'm not sure Young actually appreciates how large sampling error is under usual significance levels or how high heterogeneity between labs is.
And from their point of view, it seems all very reasonable. But from the rest of humanity who is being asked to materially support them, and waits for their conclusions to make the world a better place, it seems ... frankly... lazy and selfish. 30 emails, wow! 2 weeks of a graduate student's time -- these are the people who are the least paid right? Below minimum wage even? The demands on their time seem so low, yet the complaints are so high, that one can't help but wonder if the concern really is that their results are too 'magical' and irreproducible and they just fear other people learning about it.
I've seen this behavior in professional settings, and ultimately it comes down to a lack of confidence in oneself, the tools and technology and the quality of work being done. Careers are at stake, but is the alternative to just give people a free pass?
Uh, Julia Child WROTE FREAKING COOKBOOKS. The entire point of Julia Child was that she tried to develop recipes in such a way that another cook could produce an equally good meal. Now, yes, if I went into a boiler room at Goldman Sachs and picked 10 guys at random, I doubt that most would be able to duplicate the recipe. If I picked 10 professional sous chefs at random and none of them were able to make a dish as good as Julia Child's from her recipe, I would start to have my doubts about the recipe.
By the same token, I don't expect rank amateurs to be able to duplicate state of the art cancer research. But if labs run by pharma companies and academic institutions are having the failure rate at reproducing research that the article claims, I think it's more than reasonable to start questioning the paper that documented that research, if not the research itself.
It would be like Google complaining that they can't copy pseudocode verbatim out of a paper and have a highly performant algorithm. Or Microsoft complaining that a static analysis defined in a paper wasn't accompanied by a production-ready implementation.
Producing protocols that literally anyone could replicate without expending effort is not the business of Science.
Replication should focus on the veracity of the underlying truth claim, not the economics of reproducing the results.
Then they need to spend the time documenting those protocols.
My dad worked in biological research, and his attitude has always been: if you don't write it down, you might as well not have done the work at all. ESPECIALLY in research.
How the heck did this stuff get through peer review? Surely I'm missing something critical?
The idea stemmed from the notion that people want research that is both public AND reproducible. The funds would go directly to research groups, and as an incentive, reproducibility would carry bounties based on what people were willing to donate. Because virtually every research group with public research is supported by a non-profit, no one loses additional money, but more funds go towards public-interest research GROUPS, not organizations with bureaucracy.
Somewhat off topic, but this seems another reason for me to start the project.
I don't have any direct link to cancer research, so I can't speak with authority on the subject, but I have been involved in the past with a company working in the Preimplantation Genetic Diagnosis field.
The basics of their procedure: create one or more human embryos via IVF, incubate the embryos for up to 6 days, then either freeze them or transplant them into the prospective mother. On day 3 or 5 of incubation the embryo is biopsied, and the genetic material is tested to make sure there are no aneuploidy defects. We were also able to test for some other types of genetic abnormalities. This is for people who are having problems becoming pregnant.
In any case, some time in the mid 2000s there were 3 papers published in Europe claiming that performing biopsy on Day 3 is extremely detrimental to the embryo, and their conclusion was that PGD with Day 3 should not be performed. The experiments were conducted by people who were unskilled in micro manipulation.
They did follow proper protocols, and I am sure they did their best to replicate proper procedures. But micro manipulation is as much skill as it is knowledge. For instance, I can write a detailed procedure on how to shoot a compound bow, and you can follow that procedure exactly. But, without practice, you are not going to hit the bullseye on the first try.
Because we were in the business of providing services to doctors, not publishing papers, we constantly tracked our embryo mortality rates, birth rates, and accuracy of testing. The better our results were, the more business we would get. And we couldn't fake the results, because the clinics ordering the test would be the ones recording all of those statistics for us.
Any way, long story short, none of our data agreed with the papers claiming that Day 3 biopsy was detrimental to the embryo. In fact, quite the contrary, many of our statistics suggested that Day 3 biopsy and Day 4 or Day 5 transfer would result in better implantation rates. But, the papers were published, and referenced, and then it became "common knowledge" that Day 3 biopsy is bad, and the medical industry moved on to Day 5 biopsy and embryo cryopreservation, and so has the company I worked with.
To the company I worked for it's all the same; money is money. Day 3 or Day 5 biopsy, they make money either way. But the patients are now more limited. From the stats we have seen, it doesn't look like Day 4 or 5 biopsy is worse for the embryo, but being frozen isn't a walk in the park. With Day 5 biopsy you have to freeze the embryo in order to allow time for the test results to come back.
Anyway, that's my 2 cents. Reproducibility is important, but I think it's just as important to change the incentives of those who publish papers. If your goal is to be published, then of course your research will suffer. It's the publish-or-perish mentality in academia that is the problem, I think.
And they are failing us because of some fundamental gaps in how the research, and the subsequent review/dissemination/presentation of findings, is done. I suspect there are multiple failures in the process. The standards of scientific proof and repeatability used by mathematicians, physicists and chemists are not followed.
The net result is the following disappointing statistic:
"...In 1971, President Nixon and Congress declared war on cancer. Since then, the federal government has spent well over $105 billion on the effort (Kolata 2009b)....Gina Kolata pointed out in The New York Times that the cancer death rate, adjusted for the size and age of the population, has decreased by only 5 percent since 1950 (Kolata 2009a)." 
And this was just the US federal government's investment, not counting private donations and private company research. Today the federal investment is $5 billion annually. I do not mean to sound totally discouraged, as clearly the screenings have helped many to detect cancers before they metastasized. And I would say the science results show that that part of the research is working well.
However, for the cancers that can rarely be detected before they spread (e.g. pancreatic cancer and others), the investment our country and other societies have put in simply has not paid off.
What worries me is that our research quality gates are not able to improve the QoS of the underlying research.
And with my 'management hat' on, I am reaching for this quote attributed to Einstein:
"Insanity: doing the same thing over and over again and expecting different results."
The OP paper is not the first one pointing at the lack of reproducible results, and it's not just cancer research: "...But it may also be due to the current state of science. Scientists themselves are becoming increasingly concerned about the unreliability (that is, the lack of reproducibility) of many experimental or observational results...."
There needs to be a bit of a revolution in the science of cancer research and the way money is allocated to it. Clearly the current model does not work, and it likely encourages pseudoscience to prosper.
sounds like someone wants to quietly weaponize this.