I can't help but picture a large, fluorescent-lit room of jolly old British "trainers" in safari khakis running around admonishing misbehaving AI for telling bad jokes, all the while trying to juggle placing calls to the DMV and restaurants to make reservations for 700 million messenger users.
So it can spend my money on my behalf?
Tweet is here: https://twitter.com/sama/status/636586179970752512
It still won't be 90% though, but YC companies are widely known to not be representative. It's the highest profile accelerator, so it attracts the best talent. Just like people who graduate from Ivy League schools earn more on average, but mostly that's because they attract people who are above average to begin with.
Total Investment: ~$131,600,000
Total Companies Value: >$65,000,000,000
YCombinator's 7% Value: $4,550,000,000
Total Return: 3357%
Annualized Return: 42.5%
Obviously, it costs more than the initial investment, but these are really nice numbers if compared to a mutual fund, etc. Did I make any mistaken assumptions?
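The arithmetic above is easy to sanity-check in a few lines of Python. The 10-year horizon is my assumption (YC started in 2005), not something stated in the comment:

```python
invested = 131_600_000                 # ~$131.6M total invested
portfolio_value = 65_000_000_000       # >$65B total value of companies
yc_stake = 0.07 * portfolio_value      # YC's ~7% stake: ~$4.55B
years = 10                             # assumption: YC started in 2005

total_return = yc_stake / invested - 1
annualized = (yc_stake / invested) ** (1 / years) - 1

print(f"total return: {total_return:.0%}")   # ~3357%
print(f"annualized:   {annualized:.1%}")     # ~42.5%
```

So the quoted figures are internally consistent, given that assumed time horizon.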
> Oh, another important stat: about 300 of the companies we have funded have shut down.
This is a hard gig. Even after you get into the top accelerator in Silicon Valley.
It would allow prospective applicants to think about whether to apply, and hopefully keep the pile a manageable size for the people reading it.
I assume YC is doing great for themselves, so I'm wondering, what is the rule of thumb to say an investment was successful? (for YC, knowing they invest cheap and early.)
Is it once a company reaches $10M? $1M? When there is an exit?
It would be interesting to know for how many companies YC would say the investment was a success (either it already made money, or is expected to once there's an exit).
Does this include acqui-hires?
Also, would you know how many companies rejected by YC are worth more than $1 billion? Zero or non-zero? :)
But just remember that, even though these numbers look amazing, you can create a business via other means and still possibly achieve whatever sort of success you're after -- you don't have to join an accelerator, or get prestigious seed or venture funding.
Or at least, I think that you shouldn't have to.
Good to see the "90% of startups fail" statistic blown out of the water with some facts!
Number of companies we offered to fund yesterday for the first YC Fellowship: 32
This reads like a scene straight out of Hackers or some other campy tech movie. Life imitates art.
The one positive thing this hack has done is really give serious ammo to the battle for online privacy, because the demographic hit by this hack is the most politically & economically powerful demographic in the world....
Edit: found it in "Ashley Madison 2nd dump 20 GB"
Perhaps that's all Zu is? A bot, or a covert chat channel of some kind. Perhaps prime numbered words from every third tweet contain the real message, or something like that?
As he himself admits:
> It is possible that Zu is instead a white hat security researcher or confidential informant
Jeez, how about talking to the police and letting them do their job, or at the very least censoring the name.
This is just a witch hunt.
It's curious - we are all being affected by the new digital pollution
I know a lot of young lawyers who'd love to take on a case like this one. But matching people who need help to lawyers who have the time and institutional resources to take on these projects pro bono isn't easy when you're talking about folks wronged in rural Louisiana.
Also, I think there's something of a hesitancy for business law firms to add suing municipalities to their pro bono docket. Which is a shame--a few New Orleans firms building up a pipeline of pro bono suits against places like Woodsworth could do a lot of good.
EDIT: If you're interested in (potential) justice porn, it's worth following Fant v. City of Ferguson. The DOJ gave a lot of ammunition to plaintiffs' lawyers when it concluded that the city was systematically using fines and other police actions as revenue sources, particularly against poor black people. It's Woodsworth writ large. The complaint is here: http://www.nytimes.com/interactive/2015/02/08/us/ferguson-co.... The docket is available here on Justia: https://dockets.justia.com/docket/missouri/moedce/4:2015cv00.... With her opinion last month on reconsideration, the judge has effectively denied the city's motion to dismiss. Dispositive motions are due May of next year, so we'll likely see by then whether this is granted class status. If so, things will get interesting.
The state should fund the court for hearing those cases (but not enforcement).
Then we'll see if cities really care about safety or about money.
According to IDOT, there were 2,334 traffic stops in Thornton in 2014. This town's population? About 2,300. That's one stop per man, woman, and child in this town! This is naked profiteering and pretty much all small towns in strategic areas turn into police-led profit centers. The real question is why isn't anyone stopping this?
Published on Aug 22, 2012. Google Tech Talks, November 9, 2006.
ABSTRACT: This is not your father's fusion reactor! Forget everything you know about conventional thinking on nuclear fusion: high-temperature plasmas, steam turbines, neutron radiation and even nuclear waste are a thing of the past. Goodbye thermonuclear fusion; hello inertial electrostatic confinement fusion (IEC), an old idea that's been made new. While the international community debates the fate of the politically-turmoiled $12 billion ITER (an experimental thermonuclear reactor), simple IEC reactors are being built as high-school science fair projects.
Dr. Robert Bussard, former Asst. Director of the Atomic Energy Commission and founder of Energy Matter Conversion Corporation (EMC2), has spent 17 years perfecting IEC, a fusion process that converts hydrogen and boron directly into electricity producing helium as the only waste product. Most of this work was funded by the Department of Defense, the details of which have been under seal... until now.
Dr. Bussard will discuss his recent results and details of this potentially world-altering technology, whose conception dates back as far as 1924, and even includes a reactor design by Philo T. Farnsworth (inventor of the scanning television).
Can a 100 MW fusion reactor be built for less than Google's annual electricity bill? Come see what's possible when you think outside the thermonuclear box and ignore the herd.
Google engEDU. Speaker: Dr. Robert Bussard
Their next step is burning D-T fuel (needs 10x temperature increase). Their goal is burning H-B fuel which requires much higher temperatures, but has numerous advantages.
Update: "Tri Alpha is backed by Sam Altman, among other things." -> not at all.
The video was quite nicely done. I recommend watching.
Caveat: As with all fusion companies, they're only 10-20 years from being ready to market : )
1. The NIF is and always has been a sideshow for the Nuclear Weapons development research that goes on there.
2. The US's last serious investment was the TFTR back in the 90s. http://www.huffingtonpost.com/2015/01/20/fusion-energy-react...
3. IMO ITER is a joke, too large scale.
3 billion degrees... that blows my mind.
Though there is a certain appreciable irony in having two totally different things named unity.
But there's something that has been bothering me for a while about games for Linux compiled with the Unity SDK.
Does anybody know what kind of dependencies a game written with the Unity SDK has? I'm pretty clueless about that stuff but I'd still like to know what it would take to get a game running on a minimal Linux install (like a naked Arch or Gentoo). Ubuntu obviously comes well equipped for the task but I don't really care about that.
So where do the graphics come from? Does it need some special libraries apart from OpenGL? Where do fonts come from? How does it interface with hardware, i.e. does it need X, or does it come with its own drivers for keyboard, mouse, gamepad?
I fear that it is necessary to install half of Ubuntu to get the games running but - as I said - I don't really know anything about that.
Or are there multiple game engines called Unity (on top of the existing confusion between the game engine and the Ubuntu thing)?
EDIT: I think the announcement is about Linux support in the SDK, not the actual software developed with Unity?
I think we all know the future of this then.
Qasar will help scale our organization and operations as we tackle bigger and more ambitious projects
> I hate to be a buzzkill, but the linked article is very sensationalist ... it entirely misses the point of the research published by the Anastasiadis lab (pubmed link ). Being able to stop or revert transformed cells in vitro is not new, we've been able to stop or revert tumor cells for decades....
> Tldr of this is that this is unexciting unless you're a researcher studying E-cadherin/B-catenin, and means very little to someone outside of the cell biology field.
/u/squaresarerectangles provided a link to the original Nature Cell Biology paper, reproduced in citation .
I feel like cancer is an umbrella term for several different little monsters.
It would then seem prudent to get your own DNA sequenced ASAP. As time goes on, you collect more & more DNA damage & your original DNA probably becomes harder & harder to find.
We added the qualifier "potentially" from the article's first paragraph.
On the topic of on-boarding -- super important to get right! The idea of making your people the best they can be is lost, I think, on our generation. My grandfather-in-law was a mechanical engineer at Chrysler back in the day when they had a special school they ran to train their new recruits. When he started he had no idea of how cars worked but they picked him up from school and gave him the training he needed to become an important figure in their company.
Focusing on hiring the best because hiring someone mediocre will damage your business is negative thinking that can kill your on-boarding, and thus your culture and business. I've worked at places with terrible on-boarding and it was a fight just to get some direction and support for your work. Getting the attention of management was something you wanted to avoid which led to people becoming complacent about their work and its quality.
Great article. On-boarding is important to get right.
I worked for one company for a month. From the get-go they had 100+ staff but no onboarding. Nobody knew how to set up the development environment or even get the application working on the company laptop. Nobody could show me how to VPN to client sites or show me how the software worked. I was meant to support it...
I was in my manager's office every day letting her know hey I really need assistance here, nobody seems to have time, nothing is working, I'm not learning anything and this needs to improve.
She'd tell staff to assist and they just wouldn't. Rinse, wash, repeat the next day.
A month in we are in a meeting and some difficult software issue comes up and it's assigned to me to fix. I mention that I don't have it running, don't know how to use it, have no method of finding who the customer contacts are to call them and will sound stupid not understanding anything, but also haven't been shown how to connect in. That I'm happy to shadow someone else and learn it.
It felt shameful but I knew it was the right thing, I had a year of extensive help desk experience under my belt including programming and other development, bug fixing, accounting, you name it, I was even team lead. I was used to dealing with multi million dollar clients and had no qualms about doing so... but this place was dysfunctional.
Anyway the boss stared me in the face in the meeting and called me a liar, to shut up and get to work. I was stunned. I repeated myself and she turned to my coworkers who claimed they had "showed me everything". I hadn't spent more than 5 minutes with them in the entire month and had been in her office every day. She told me to start pulling my weight and stop lying.
I walked very calmly to the office, printed out a resignation, left my keys, and walked out the door. I didn't get paid (a funny side effect of walking out of a job) and the move left me penniless and ALMOST homeless.
Luckily I got a job just in the nick of time shortly after, at twice the pay, and was treated like a human being. I removed that place from my resume and rarely discuss it because it's so humiliating.
I still remember the recruiter calling me screaming how unprofessional I was - he only cared about his lost bonus. I explained I was extremely professional but that after being humiliated and called a liar in a meeting and having none of the promised training, or anything remotely capable of making me a functional employee, there was no recovery.
Now I take onboarding very seriously.
None of my projects are at the point where you can just "vagrant up and go", but the next-best thing has been READMEs in the relevant repositories with exact lists of "Type this, type this, type this, type this. You now have a fully-working system running on localhost and you should be able to type this to get a full green test suite. If you can type this and it does not come out green, fixing that is more important than anything Patrick is doing right now."
Here's, for example, what we have for getting someone up and running on Appointment Reminder (in preparation for me soon no longer being the engineer who keeps all of that system in my head): https://gist.github.com/patio11/a0b1063c5d33b5748da6 Feel free to steal ideas in terms of level of detail or useful things to include. (A lot of the magic is in the rake commands like "setup_site", which take care of "All that crufty configuration stuff which you would otherwise need a senior engineer to do for you prior to actually seeing the page render correctly on localhost:3000.")
Quick hack which helped us on making sure this guide was actually accurate: we had two engineers coming into the project at the same time. I wrote down everything I thought they needed to know. Engineer #1 implemented to the document I had written, filled in the blank spots where he needed to ask questions, and then we committed the readme. Engineer #2 then had to do it off the readme without asking me any questions. Given that he was actually able to do this, we have high confidence that there is not at the moment anything rattling around in my head which is absolutely required to get up-and-running and documented nowhere else.
Heddleston's point about having less-senior people do the mentoring is especially apt. While I've received support from everyone in the team, the most helpful person has actually been an intern with only a few years of programming experience under his belt. While I have over a decade more general programming experience than he has, he knows a lot more about the specific domain, and has been an excellent teacher. I feel that this arrangement has been beneficial for both of us: as I've gained domain knowledge, he's gained confidence and depth in his own knowledge.
I think that a big part of this is not setting expectations too high, no matter how senior the new employee is. While I have a fair amount of experience, the expectation in my new position, both from me and from the team, is that it will take me a while to get up to speed (especially given the complexity inherent to the role), even though I'm not coming in as a junior dev. Therefore, I don't feel the need to pretend that I'm an expert in something that I'm not, and there's no ego hit when I'm being mentored by someone who is technically my junior.
In my city, it seems like most everyone is looking for senior devs, to the point that they leave positions unfilled for months rather than hiring someone they don't consider senior enough. This is madness, and damaging to the industry as a whole. We need to focus more on efficiently developing talent, and Heddleston seems to have some great ideas for doing so.
"Here's Jim. He's our Widgitsoft guy. He's the only one who works on Widgitsoft. The system is entirely undocumented because Jim knows everything. Yeah, development has been slower than expected lately. Yeah, it would be nice if we could bring on a contractor when necessary, but getting to that point would be a lot of work, and we'll always have Jim."
Would love to find some examples of great cultural onboarding where it's not just the "what" of the work that a new hire learns, but also the why and how, to avoid implicit assumptions and biases from day one...
I originally thought that I was just unlucky, but when I asked around I found that I had an above-average experience, at least among people near me who started recently. Unlike many other folks, I was actually able to get a username/alias, a computer, and an office. I don't know what folks without a username do, since they can't even start clicking through the necessary EULAs to get permissions to do stuff, let alone do any actual work.
I'm not sure how it is in other parts of the company, but if you imagine that it's similar and do some back-of-the-envelope math on how many people get onboarded and how much losing a month (or more) of time costs, it comes out to N million dollars for a non-trivial N. And that's just the direct cost of the really trivial stuff. It's hard to calculate the cost of the higher-level stuff that's mentioned in the article, but I suspect that's even more expensive.
Kind of crazy...
I have been involved with several patent suits (on both the plaintiff side and the defendant side) and as an engineer, I have to admit that there has never been a time when I've read the statement of the problem the patent says it's going to solve and not thought of the solution myself, well before the patent presents the same solution. In other words, every single litigated software patent I've been asked to review has been BLATANTLY obvious. And I'm no genius. I've talked to other engineers and they've all said the same thing. I just explain a problem domain, and they usually give a solution that comes under the claims of the litigated patent.
This is not to say that there aren't non-obvious software patents. It's just that those never seem to get litigated, because they aren't some obvious concept sitting at the nexus of a well-trodden path the industry is following.
I can't describe or link the specific patents I've been involved with, for obvious reasons, but the stuff I'm talking about sounds like things as follows:
"Receiving at a server a data packet, the data packet comprising a user identification number and a merchant identification number
retrieving a record in a database referenced by the user identification number
determining if the record in the database contains an authorization entry corresponding to the merchant identification number
responsive to the record in the database containing an authorization entry corresponding to the merchant identification number, transmitting a second data packet, containing an authorization token, to a server operated by a merchant."
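To make the obviousness point concrete, the quoted claim boils down to a few lines of Python. All names here are illustrative, not taken from any actual patent:

```python
def authorize(db, user_id, merchant_id):
    # "retrieving a record in a database referenced by the user identification number"
    record = db.get(user_id)
    # "determining if the record ... contains an authorization entry
    #  corresponding to the merchant identification number"
    if record and merchant_id in record["authorized"]:
        # "transmitting a second data packet, containing an authorization token"
        return {"authorization_token": "issued"}
    return None
```

Which is roughly the amount of thought any working engineer would need to arrive at the same design.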
I am not lying to you. This is how stupid each of these patents has been. Sometimes even worse.
Nobody not involved in these litigations understands how bad it is. And this is coming from someone who has made at least enough money to buy several luxury cars providing consulting services to this particular legal industry. In other words, I have a financial interest in things remaining this fucked up. And I'm still telling you, it's really fucked up.
All Apple did was acquire the inventors of capacitive touch and work a bit on the UI. And while it's valuable to be the first company to recognize the importance of a capacitive touch screen, that isn't a basis for a patent, and Apple got enough benefits anyway.
The whole "but on a computer" patent needs to go away. "Sliding a latch from one position to another to open but on a computer" should not be patentable.
Just so I understand what happened, can someone summarize German patent law? Is it the same 3 tests as in the U.S., i.e. statutory, novel, non-obvious?
It seems there are more patents that fit this description.
My favorite issue this year.
In parts of applied math for business, there are a lot of talks and papers of the form "Problem X: A Y Approach".
That seems to suggest that there is something really promising about Y. Instead, more appropriate would be "Solving Problem X". If Y was involved, then fine; if not, still fine; that Y was involved really doesn't mean much.
Then also in computing there are talks and papers of the form "Problem X via Programming Tools A and B". So, for the part "A and B", we can substitute Python and Julia, Fortran, C and C++, C# and C, C# and C++, C# and Visual Basic, Common Lisp, anything Turing equivalent, etc.
To me, Quantitative Economic Modeling is a big enough subject and quite challenging. That some of the computing was done in Python and Julia instead of C, C++, C#, Fortran, Algol, Folderol, etc. strikes me as nearly irrelevant. That is, I see nothing about Python and Julia that promises especially good results on the main challenges of the very challenging problem of Quantitative Economic Modeling.
Where am I going wrong?
By running everything through the VPN, you'd be able to have TCP connections that didn't break when the network switched, since your device's public IP address would be in a datacenter somewhere.
Also with a VPN you'd be able to send voice traffic over both a carrier connection and the wifi connection at the same time to avoid dropouts.
There is something similar called Multi-path TCP (MPTCP) which uses latency to decide which TCP path to send traffic over.
In fact, Apple uses this tech for Siri to reduce latency on voice queries.
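On Linux, MPTCP is exposed as an ordinary socket protocol, so trying it takes only a protocol number. A minimal sketch, assuming a 5.6+ kernel; the `socket.IPPROTO_MPTCP` constant only appeared in Python 3.10, so this falls back gracefully:

```python
import socket

# IPPROTO_MPTCP is 262 on Linux; use the stdlib constant when present.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def stream_socket():
    """Open an MPTCP socket if the kernel supports it, else plain TCP."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP: an MPTCP-capable peer interoperates
        # with ordinary TCP anyway, so falling back is safe.
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)
```

Either way the caller gets a normal stream socket; the multipath behavior (if available) is negotiated transparently with the peer.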
I'm hoping the new Nexus 5 (both LG and Huawei versions) is compatible with Google Fi, although I did call the Fi support number (and talked to a real human) and she said they haven't heard anything about support for the Nexus 5.
For when I have to wrangle lots of files at once (like during interactive rebase to clean up history before push) I have a git watch alias that shows a high-level overview of changes that refreshes with inotify:
[alias]
    watch = "!clear; inotifywait --quiet -mr -e modify,move,create,delete --format \"%f %e\" @/.git . | \
        while read file; do \
            clear; \
            git status --short; \
            git --no-pager diff --shortstat; \
        done;"
- Mikko Nylén's comment in http://web.archive.org/web/20130127054804/http://asemanfar.c...
My main question is around the use of color. I'd argue the error states - conflicts, diverging branches, etc - should be the ones in red, since those are the issues you want to call the most attention to.
Getting rid of any chartjunk is the other big thing. Using four characters of every prompt just for `git:` is not reasonable. And as much as I like the idea of being warned about untracked files, I fear that in most real situations you end up with random scratch files in the same directory. My prompt would always say `7A` at the end, wasting more space (and mental effort!).
echo $SHELL | egrep -o '[a-z]+$'
Looks good though. I'm definitely going to use this on my dev boxes.
I wonder if this one is any faster. Waiting for a bash prompt in large repos can be frustrating.
Will be interesting to see what happens to FutureAdvisor.
A Course in Computational Algebraic Number Theory: http://bit.ly/1heah8l
This is also a nice complementary book.
The software or hardware engineer "tinkers" with something simple. And then, a lightbulb goes off and the "toy" looks like it is well-suited to solving a particular problem.
Larry Page wasn't looking to solve a "problem of inefficient advertising expenses." He was satisfying an intellectual curiosity about applying the citations (e.g. Erdos #) in research papers to web pages. (One could argue that you could reformulate "The Problem" to be "retrieve more relevant weblinks than AltaVista" but for my example, I refer instead to the "ad dollars problem" because that's the one that pays Google's bills.)
Maybe it depends on the person. One type of person sees a "problem", then he/she deconstructs that into components and try to make a viable business. That's definitely where a lot of B2B businesses get started.
Another type of person simply tinkers and experiments and "solves problems" as a side effect.
The hidden problem exists because there is a disconnect between the people who fill out those timesheets and the people who consume them.
The people who fill them out can't use the data. As a result, the data is almost never correct and largely useless, and the people who consume it end up picking the wrong patterns out of bad data.
The only reliable unit of measurement for time tracking is days spent. Anything else is measuring a largely made up number.
Furthermore, they feed this idea that cramming at the end of a project is a good way to shorten the time. Since you are tracking hours, not days, it's a short step to just upping the hours without upping the days as a shortcut. But none of these apps give any data on the quality of those hours spent.
I've developed a few small software things that have saved people hours a week.
For example, I built a free tool [1] that lets you export a tagged subset of bookmarks from Pinboard into a nice format for inclusion in a webpage or Mailchimp newsletter. The person I built this for wasn't really complaining about "gee, it takes me a couple hours to collect all these links, format them, etc." but when I saw their process, sheesh.
1 - http://www.bigbadassresourcelist.com
I describe really interesting problems as "fish don't know they live in water". The people who have the problem and deal with it every day don't even recognize that it's solvable or that pain reduction is possible.
It is my guess that there is a potential goldmine of problems we simply don't know of because the people who are exposed to them aren't connected with the people who have the opportunity and willingness to solve them. Perhaps the real power of diversity in business isn't hidden in gender but in age.
I actually think that the older generation is far better at utilizing the younger, than vice versa.
Maybe if we could start CPU architecture from scratch, arrays with sizes would make more sense? Maybe we could even do it so that memory segmentation would no longer be needed: https://en.wikipedia.org/wiki/X86_memory_segmentation
Haven't figured out the details, but maybe we can even gain some performance that way.
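As a purely software-level illustration of the idea (not a hardware proposal): an array that carries its size can reject every out-of-bounds access, which is the check the comment imagines the architecture doing for free.

```python
class SizedArray:
    """Toy bounds-checked array: the length travels with the data."""

    def __init__(self, values):
        self._data = list(values)
        self.size = len(self._data)

    def __getitem__(self, i):
        # The check hardware would do on every access in the
        # hypothetical size-aware architecture.
        if not 0 <= i < self.size:
            raise IndexError(f"index {i} out of bounds [0, {self.size})")
        return self._data[i]
```

Languages like Go and Rust already ship this as slices; the open question in the comment is whether pushing the check into silicon could make it cheaper than segmentation or paging tricks.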
I know how I would answer this question but I am curious how others would answer it.
EDIT: s/possible/known to occur
Of course, you can't, as an ordinary user, start system executables with elevated privileges and set LD_PRELOAD.
Although, if anyone reading this has never heard of LD_PRELOAD, take a look at a better resource than this link because it can be a powerful debugging and testing tool.
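For anyone new to it: LD_PRELOAD is just an environment variable consulted by the dynamic loader, so a launcher can inject it in one line. The interposer path below is hypothetical; substitute a real shared object built for the purpose.

```python
import os
import subprocess

def run_with_preload(cmd, preload_lib):
    # ld.so maps preload_lib before the program's own libraries,
    # letting it interpose functions such as malloc() or open().
    env = dict(os.environ, LD_PRELOAD=preload_lib)
    return subprocess.run(cmd, env=env)

# run_with_preload(["./target"], "/path/to/interposer.so")  # hypothetical paths
```

If the listed library doesn't exist, the loader prints a warning to stderr and runs the program normally, which makes this safe to experiment with.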
The Contiki OS is a lot of fun to work in. It's amazing just how much you can do with sensor-like hardware. However, the Cooja simulator was very unreliable. I'm not sure if the two projects are affiliated, but the simulator was the single worst part of programming with Contiki for me.
1. This was an academic project for an MSc class I took at UCL. It was also my first significant C project, so the source code might not be idiomatic (https://github.com/georgerobinson/citiesio/tree/master/citie...). The extended report (http://www0.cs.ucl.ac.uk/students/g.robinson/citiesio.pdf).
Great OS to experiment with, but I would not recommend building a business on top of Contiki today.
Roughly what he said might be summarized as follows (sorry if I misunderstood his intention): Tree Style Tab is useful because the add-on changes the behaviors of the tab globally. That way, it can cooperate with other tab-related add-ons whose authors didn't intend to make the add-ons work with TST. Therefore providing the Sidebar API doesn't help because you can't expect add-on authors to write code just to make add-ons work better with TST.
When Firefox loses the extensions which require XUL (whether fundamentally or merely to avoid rewrites) I doubt its upsides will outweigh its downsides for me anymore. They may surprise me, and even if they don't they still might increase usage with other people, but right now I am sad.
Would be great to see all browsers support a common extensions API. I hope WebExtensions is that API.
Looks like they realised that to keep moving forward, clean up technical debt and fix longstanding issues, they'd unfortunately break almost all extensions in the process. And if that's the case, they might as well switch to a better API while they're at it. It's painful, yes, but Firefox is still a slow, unstable and memory-guzzling behemoth, where all your extensions break on every update. This won't change if they stick with single-process, XUL, XPCOM and the existing extension model.
I am optimistic. I think Firefox can and will survive this. Look how well Apple's Mac OS to OS X transition went. Mozilla are willing to help people port extensions. And their timeline is probably unrealistic, but it can be pushed back. In two or three years, Firefox will be a world-class browser again, and we will look back at the panic we had now and laugh.
I get that some people may love XUL and whatever, but having never used it, I can't really comment. All I do know is that Jetpack (which Firefox has been pushing) is a sorry excuse for a way to build extensions after you've worked with Chrome. I really hope WebExtensions makes my life easier soon.
Being compatible with Google Chrome isn't very important. Extensions aren't used much on Chrome. I have the same extension for Chrome and Firefox, and Firefox usage is 100x greater.
Maybe I'm just lazy, but I was very surprised to find that Firefox would create such a technology when more "open web friendly" solutions were possible. I'm sure it's just that XUL is a vestige of a time when the web was still young, but I am going to wait until Web Extensions are released before I attempt to write any extensions for Firefox.
Are there any reasons to keep XUL around (other than lack of apps that were written with it)? It doesn't seem like there are any unique features that couldn't be reproduced using something like chrome's model...
If not I'll keep using the last version with proper extension support for a long time, until an alternative comes.
I guess it's good that they acknowledge that they didn't get input from those that write extensions before they made this decision.
Reading that made me extremely happy! I'm glad to see cross compatibility is still a goal for browsers.
One reason I've liked and used Firefox is that, especially with extensions, it has been more "my client".
Amidst all the noise over this change, I read into it that -- to some degree -- it is becoming less my client.
Which, to me, seems like another step in Chrome's direction, where I've felt that the client is increasingly the advertiser's client. (And the DRM pushers' client, etc.)
Being my client is what, for me, Firefox has had going for it.
It's my PC. On which I wish to use my client, handling and presenting data in the manner in which I want it handled.
And I'm sure the extensions to the WebExtensions API will provide enough to do what needs to be done.
100% of funds (except for transaction costs that go to miners) get returned to the users who play.
This is probably totally different from search over internal codebases, however.
I must confess... I was originally looking from a vanity point of view and have mixed feelings to see that searchcode was mentioned in the references but not linked.
I think something like this with rosetta-code snippets is very doable (a weekend project, assuming you're good with your editor's programmability).