The tool should work efficiently without you ever looking at the settings, but if you need to, you should be able to change anything you like.
Not sure it applies to consumer interfaces. Ideally it shouldn't. But for tools that you use to do day-to-day work, absolutely.
Client: "Now, this TODO list app is great, but we want something more like Facebook and Linkedin combined"
This is exactly how you get feature creep and is a really bad idea for fixed-bid projects. But of course it depends on how big a pivot it is and how big a contract it is.
"Can I meet and work directly with the designers and developers on my project?"
Sure, you can meet and get advice from them, but you may not work directly with them. There are some pretty crazy clients out there, and it is the management's job to keep them from driving developers insane. That is why management gets the "big bucks". They have to deal with the crazy.
As a developer, do I really want the client to be able to dominate all my time when I have multiple projects on the go at the same time?
> Can I adjust my feature set and specifications as we go?
I could easily imagine someone unaware of this hook running a test that creates a bunch of user entries, sending emails all over the place without even realizing it.
It's like someone read http://en.wikipedia.org/wiki/COMEFROM and took it seriously.
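A minimal sketch of the hazard being described, in plain Python rather than the library under discussion (the hook names and registry here are made up for illustration): a lifecycle hook fires on every record creation, so a test that only wants fixtures quietly "sends" email.

```python
# Illustrative sketch of an implicit after-create hook (names invented).
hooks = {"after_create": []}

def after_create(fn):
    """Register fn to run on every record creation."""
    hooks["after_create"].append(fn)
    return fn

def create_user(name):
    user = {"name": name}
    for fn in hooks["after_create"]:
        fn(user)  # runs whether or not the caller knows it exists
    return user

sent = []

@after_create
def send_welcome_email(user):
    sent.append(f"welcome, {user['name']}")  # imagine a real SMTP call here

# A test that just wants three fixture users still fires the hook:
for n in ["a", "b", "c"]:
    create_user(n)

print(len(sent))  # 3 emails "sent" without the test author asking for any
```

This is the COMEFROM flavor of the complaint: control transfers into the hook from a registration site far away from the code that appears to run.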
Here's Python source to a monadic Lisp interpreter, which I wrote to follow along with the paper "Monad Transformers and Modular Interpreters". I think this is a much simpler implementation of monads in Python than the one provided in the ValuedLessons article. https://github.com/dustingetz/monadic-interpreter
Studying this implementation will teach you why nobody actually uses monads in Python for non-toy projects. A literal port of this code to Clojure would feel so much more idiomatic and not hacky at all.
Here are some smaller code dumps demonstrating the fundamental concepts behind that monadic Lisp interpreter:
EDIT: I just tried out the code and it doesn't support multiple returns. Isn't that pretty much the thing that defines the continuation monad, or am I just not getting it?
So, while it's a nice implementation for some kinds of monads, it's not nearly general enough to be (IMHO) called "monads in Python". It is possible to implement fully general monads in Python, but you need nested lambdas, and there is no getting away from that.
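For the curious, here is what the nested-lambda approach looks like in a minimal continuation monad, my own sketch rather than code from the linked repo. It also shows the "multiple returns" behavior mentioned above: a monadic value receives the rest of the computation as a continuation `k` and may invoke it more than once.

```python
# Minimal continuation monad with nested lambdas (illustrative sketch).
# A monadic value is a function that takes a continuation k and decides
# how (and how many times) to invoke it.

def unit(x):
    return lambda k: k(x)

def bind(m, f):
    # Run m, feed its result to f, and continue with k.
    return lambda k: m(lambda x: f(x)(k))

# "Multiple returns": resume the rest of the program twice, once with 1
# and once with 2, concatenating whatever the downstream pipeline yields.
ambiguous = lambda k: k(1) + k(2)

pipeline = bind(ambiguous, lambda x: unit(x * 10))
print(pipeline(lambda v: [v]))  # [10, 20]
```

The nested lambdas are exactly the part that feels unnatural in Python; in a language with lightweight closures and tail calls this reads far more cleanly.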
Still, very nice hack and worth spending an evening (or two) to understand how it works.
But that doesn't make it less interesting ;)
Use the right tool in its intended way: don't try to retrofit constructs into languages where they don't fit, just for an accessory characteristic. (Every line of an imperative language is conceptually a kind of monad, so the article is just trying to convince us that yet another form of hidden goto is good, via an ad-hoc extension in a language where more than half of the need for monads is missing.)
Looks fantastic though, congrats.
This is feedback I heard from fellow developers as well.
I'm curious as to whether the maintainers are planning to simplify the list of dependencies and installation steps in the upcoming versions.
So it now supports both MySQL and Postgres, even though MySQL is preferred. Does anyone know what that means for a Postgres guy like me? Does "MySQL is preferred" mean that it has known bugs on Postgres? Is it usable?
An example: the CS team has access to issues/wikis only, while the dev team has unlimited access.
This is our only deal breaker with GitHub.
If you know of another git-based system/service that does this, please let me know.
To anyone who is using this, how would you compare 4.0 to the current GitHub, in terms of UI/Features?
I'm thinking of moving from a third party host to GitLab. What should I be aware of?
This is a hacker in the truest sense of the word. He builds stuff just because he loves building stuff. He is a student at a university here in Jamaica. He isn't building it to be cool, or chasing the latest web 2.0 fad of the week. He didn't know about HN until I introduced it to him, and he is young (say 19/20).
He built http://wapcreate.com - a WAP site creator.
That's right, WAP...not iOS or Android optimized HTML5 sites. Good, old fashioned WAP sites.
The most amazing part of the story, though, is that he is running it on 2 dedicated servers in Germany; it's a hack job (PHP, a bit of Ruby & a bit of Java). But once he picked up traction, he got so much traffic that he hasn't been able to keep the servers online.
In the first image - http://i.imgur.com/yEbyh.png - you will see that he got over 1.5M uniques. The vast majority of the time period covered here (the last year) was either very low traffic - pre-traction - or servers offline due to excessive traffic.
In the 2nd image - http://i.imgur.com/Pu8da.png - you will see that about 1.2M of those visits were in the 3 month period of June 1st - Aug 31st. His servers melted down towards the end of August and he ran out of money to pay his server bills. He eventually got it back up again a few weeks later, and the traffic spiked again and the servers crashed a few weeks later again.
In the 3rd image - http://i.imgur.com/HJ4gy.png - you will see that the vast majority of the visits are from Asia (even though he is 1 guy in the rural areas of Jamaica).
In the 4th image - http://i.imgur.com/JSQ48.png - and perhaps the most striking, you will see the diversity of devices that the visitors are coming from. Most of them are "feature phones", i.e., a multitude of versions of Nokia phones. Notice that this is just 1 - 10 of 508 device types.
He presented at a conference I went to, here in Jamaica, and he and I started speaking. I am helping him figure out how to proceed in a sustainable way. i.e. getting this thing stable, and then generating revenue.
After speaking to him for many weeks, I finally realized how insane his accomplishment is. Apparently, in all of this, he had been creating his site on computers that were not his. He either used his school computers, or borrowed machines from people. His Aunt is buying him a 2nd hand Thinkpad for Christmas - for which he is EXTREMELY stoked.
So while we are all chasing the billions doled out by Apple on the App Store and the newest, sexiest SaaS app idea with fancy $29/mo recurring revenue, with our cutting-edge MacBook Pros and iPads - here is one guy using borrowed hardware, making a creation app for a technology that we have long since forgotten, generating crazy traffic and usage, and struggling to even make a dime from his creation.
The world is a funny place, and this internet thing that we live on - is massive. As big as TechCrunch & HN are, there is so much more out there.
If you think you can help out in any way, either donating computing resources or anything else that can help us get this site back online and helping him start to generate revenue from this - then feel free to reach out to me.
P.S. If you want to dig into it some more, check out what some of the fans of the site are saying on its FB page. I am not trying to trick you into liking the page. It only has ~1500 likes, but you can see passionate users commenting (both complaining about the downtime and praising some of the features).
I find that tools that point various ad servers at localhost help, and tools like Adblock Plus that load the crap but keep it off the page help too; but even more so, it's Adblock Plus's filters that let me shitcan all the crap.
One of these days I want to write an extension similar to Adblock Plus that seeks out and removes jQuery crap. A lot of the reason I can't read pages anymore seems to be jQuery slideshows, jQuery toolbars, jQuery popups and the like.
I am pretty sure that if we graph this out, we'll find the end of the web occurs sometime in 2018, when page designers and their bosses and engineers and marketing pukes have so larded down pages that the net runs out of available bandwidth and any page takes 4:33 to load.
The first and easiest way is to go to Content > Site Speed > Overview. By default this will show you a chart of page load time over time.
First, to get enough data, change the time scale to a full year. Underneath the date picker there is an icon with 16 dots in a 4x4 arrangement, with some dots filled in. Click on that and move the slider all the way to the right. This will ensure higher precision and will capture some of the slower page loads.
At the bottom, in the 'Site Speed' section instead of 'Browser' select 'Country/Territory'. It will change the data from pages to countries. Now click on 'view full report' and you will get a world map with page load times.
It will look something like this:
The site I just did it on doesn't have enough data, but if you have a fairly popular site you should see a nice variation in page load times.
Google have a post about this on their Analytics blog with much better maps and more information:
Their maps look a lot better.
As metric groups, I have - in this order: DNS Lookup time, Avg Server Connection Time, Avg Server Response Time, Avg. Page Load Time.
This then gives you a pretty report where you can immediately see which visitors are getting slow responses, and you can drill in further to see which connection types, browsers, or devices are slow. I was surprised that my light page, with compressed CSS and everything static cached, was still taking ~20 seconds to fully load from 30% of countries.
Too many sites are guilty of having pages that are just far too heavy - like they only test from their 100Mbit city-based connections. I am in Australia with a 1st world internet connection at 24Mbit and I avoid theverge.com's desktop site because of the page load.
Edit: If anybody can work out a way to share custom reports in Google Analytics, let me know - I would be interested in sharing reports with others, for specific cases such as this.
One of the experiments I did a while back was creating a "satellite Internet connection from halfway around the globe" simulator on a dedicated wireless SSID. Basically, I created a new SSID and used Linux's traffic control/queueing discipline stuff to limit that SSID's outbound throughput to 32 kbps, limit inbound throughput to 64 kbps, add 900 milliseconds of latency on sent and received packets, and randomly drop 4% of packets in/out. Very, very few sites were even remotely usable. It was astonishing.
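The shaping described above can be sketched roughly like this; this is my own reconstruction, not the exact commands used. The interface name "wlan1" is hypothetical, netem's `rate` option needs a reasonably recent iproute2, running the commands requires root, and inbound shaping would additionally need an ifb device, since tc only shapes egress directly.

```python
# Sketch: generate the tc/netem commands for a "satellite link" SSID.
import subprocess

def netem_cmds(iface, rate_kbit, delay_ms, loss_pct):
    """Build a single egress qdisc: rate cap, fixed latency, random loss."""
    return [
        f"tc qdisc add dev {iface} root netem "
        f"rate {rate_kbit}kbit delay {delay_ms}ms loss {loss_pct}%"
    ]

def apply_cmds(cmds, dry_run=True):
    for cmd in cmds:
        if dry_run:
            print(cmd)  # inspect before touching the box
        else:
            subprocess.run(cmd.split(), check=True)  # needs root

# Outbound 32 kbps, 900 ms added latency, 4% loss (inbound 64 kbps
# would go on an ifb device redirected from the interface's ingress):
out_cmds = netem_cmds("wlan1", 32, 900, 4)
apply_cmds(out_cmds)
```

The 900 ms is applied per direction, so the round trip ends up close to the ~2 s satellite figure once queuing is included.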
I think one of the most useful products that could ever be created for web developers is a "world Internet simulator" box that sits between your computer and its Internet connection. (Maybe it plugs into your existing wireless router and creates a new wireless network.) It would have a web interface that shows you a map of the world. You click a country, and the simulator performs rate shaping, latency insertion, and packet loss matching the averages for whatever country you clicked. Then devs can feel the pain of people accessing their websites from other countries.
(Thinking about this for a minute, it could probably be done for about $30 using one of those tiny 802.11n travel routers and a custom OpenWRT build. It would just be a matter of getting the per-country data. Hmmmmm...)
However once Chris controlled for geography, he was able to find that there was a significant improvement.
Moral of the story: run randomized A/B tests, or be very careful when you are analyzing the results.
(of course, this is unrelated to YouTube, but to the general sentiment of the article)
You're right. And missing the point.
A large chunk of the world is permanently bandwidth starved, and most of the mobile world lives on transfer caps and long latencies. If those are plausible scenarios for your audience, you need to reduce the quantity of invisible bits you are sending them.
What's an invisible bit? Any bit that does not directly create text or images on the screen is invisible. Anything that runs client-side. Some of it may be necessary, but most of it is probably boilerplate or general-purpose when what you actually need is more limited. Reduce!
Even personally it matters to me :) since I live in a 50 year old house that has slow DSL (YouTube buffers constantly) and I live in the center of Tucson, AZ.
I recall Google doing a study showing that increasing the speed of browsing increases use of the internet. Page weight will always matter to someone.
Where did I get the inspiration to go against the current industry trends? HN's simple yet functional HTML setup. My god, it features a million tables, but the damn thing just works beautifully. By the way, Nuuton also uses tables. :)
The low bandwidth experiment has been educational. On Firefox/Ubuntu, you get the little status bar at the bottom that shows the requests. Some pages have a lot of those, and take ages to load. Distro-hopping is feasible (I'm trying out different interfaces), a CD-ROM downloads overnight quite easily. Software updates are a killer (go and have dinner, listen to some music...).
As many here provide Web content and applications, just try a command line Web browser on your site...
At two minutes, are there people out there with 6.6 kilobit links connecting to YouTube?
I suspect there might be a bit of hyperbole in this article as well, because, even if there were people connecting on ultra slow links, the average latency of those connections is likely to be wiped out by the tens of millions of broadband links.
Frankly, I'm stunned that they run this at no cost.
I was in Myanmar for a bit and their internet was so slow that I couldn't check even news or email -- no need for a China-style firewall.
Large swaths of South America have broadband access. I guess South Africa does too. Being a little more specific would be helpful.
Bad network coverage is a problem that can be addressed many ways. Shameless plug: I've been working on an offline Wikipedia that by its nature is always available and fast, everywhere: http://mpaja.com/mopedi I'm sure there are many unexplored niches where the CPU power in your hand can be put to good use without needing the network.
More seriously: my kids' school is always using its smart whiteboards to play YouTube videos and DVDs, particularly during the pre-Xmas slacking week (the one that teachers think everyone is entitled to in the UK). How does this fare with such terms? Would they sue a school?
TL;DR: The term "lending" has been on most records in the last 20 years and actually refers to public rentals, or something to that effect, and has nothing to do with letting your friend borrow your CDs.
I would think that having more people hear your music would be a good thing. Controlling or attempting to control who hears it echoes the Metallica 'Down with Napster' stuff. Either enjoy people listening earnestly to your music, or, well, you're just a really interesting marketing project and not musicians IMHO.
I pay, when I can. I have enough money, and if I pirate, then I'm disenfranchising myself because, in pop culture, money is a vote. If I don't pay (vote) then I can't complain about garbage being produced because I'm a non-contributor. Piracy was OK when I was a college kid with very little money, but now that the cost of content is trivial in comparison to the time it costs me to watch something, I feel like I should take the legit route.
However, I don't buy cable. It's too expensive given that most of the channels I'll never watch, and Time Warner Cable is the epitome of Suck. Why should I pay so much for such terrible service? I am not going to "vote for" TWC just to watch Game of Thrones, which only requires a cable subscription because HBO was beaten into submission by the bad guys.
So I say: until HBO will take your money directly, pirate on.
I understand that HBO has some sort of proprietary online video watching service, but speaking as someone with no TV and no desire to pay for basic cable merely to enable my paying for HBO, I do wish HBO would take Gabe's quote to heart.
> Game of Thrones
> Big Bang Theory
> How I Met Your Mother
> Breaking Bad
> The Walking Dead
Well, except for the last one ;) I kid, I kid.
It would be really good for me and people like me. I have a mortgage that's taking a huge chunk out of my income and will do for the next 20 years while I'd get about half what I paid if I sold the house now.
Inflation would be bad for the people living off their savings, generally the elderly.
But then I think they in turn benefitted from an inter-generational wealth transfer in the 70s when there was double digit inflation for most of the decade.
If you deal with a company that is much larger than yours that made a mistake or did something you don't agree with, publicity is a means of last resort, not your first avenue for redress. And if you truly believe Canonical pirated your game, then you should sue them.
This is an excellent reminder of why I prefer open source to closed source; projects like Arch and Debian would never suffer from this.
The one mentioned in this article, for example, looked like a basic Flash game. So as an outsider to this indie gaming world, I can understand why they might have superficially rejected it; it's probably a great game if you give it a chance.
Just thought that someone should point this out because the comments have so far been the opposite.
They're not like the fire department, where they sit around waiting for something to happen. They're supposed to get out there and get proactively involved in all kinds of things from white supremacists to greens.
As a libertarian I enjoy a good rant about state security as much as the next guy, but I prefer to do so from an informed position. There's enough real things to worry about without going on about the FBI doing what they're supposed to be doing.
If there was a movement of people planning to protest Google or Facebook, I would expect the FBI to warn them if they had solid information it was going to happen. In fact, if they were aware of large scale protests against an all but convicted child killer, they would still have a responsibility to inform and protect. We protect criminals and saints equally in this country.
Second, in my mind there is no question that on both "sides", police and protesters, individual people broke laws. Protests bring out the worst in some of the police officers under pressure and some of the protesters. So the FBI and the agencies they coordinated with would have been failing at their job not to monitor and report in an effort to protect the employees of the businesses.
You might not like the protection big banks got, but they should receive it; just like the most heinous criminal receives a lawyer to defend themselves, access to protection from danger (vigilantes), etc.
So if we can step the emotions back a bit and use a critical eye on both sides of the protests, I think we will see an FBI that jumped to conclusions but did their job.
And finally, I find it surprising in a start up forum that promotes agility and a lack of bureaucracy as the ideal that we are so quick to suggest more of it to an already bureaucratic, slow government.
This advice does not have to be sane, or efficient, or indeed have any consideration towards the interests of the company other than "prevents legally actionable mistakes". A few days ago HN saw an article about setting goals and perverse incentives. This is a simple example.
Hypothetically, someone was reviewing the Sony USA employment contract and saw that there were, perhaps, non-video-game related developments which might be valuable. Then they asked the legal department "Please supply contract terms that give us as much as possible." And after an hour or two of research, they did.
The surprising thing to me is that they tried to change language for existing employees out of cycle. If they did it during a regular review cycle, even fewer people would have noticed.
This makes me pro-regulation and anti-market, because unfortunately I see exactly zero ways in which the market can make contracts better. What are you expected to do in this situation - quit?
Yes, it felt a bit 'nuclear' dropping such a charged statement like that, and even when I bring it up as an example in conversation, some people cringe - a 'sex tape' analogy might be less offensive to some, but the basic premise still stands. Any company that wants to claim ownership of every piece of content or code I 'create' needs to understand what that really entails. It might actually give some people license to work on legally questionable stuff (not child porn so much as, say, banned crypto), knowing that they don't really 'own' it and thinking someone else might be responsible for the consequences.
In the first case, the corrected terms got applied to everybody in the company, but in the second, I believe I'm the only one who is protected, thanks to that written note.
I always use the analogy of an English teacher writing a book in his spare time. He would actually be encouraged to do so, weighing how this would reflect nicely on the school he works at, etc.
"The reasonable person (historically reasonable man) is one of many tools for explaining the law to a jury. The "reasonable person" is an emergent concept of common law. While there is (loose) consensus in black letter law, there is no universally accepted, technical definition. As a legal fiction, the "reasonable person" is not an average person or a typical person. Instead, the "reasonable person" is a composite of a relevant community's judgment as to how a typical member of said community should behave in situations that might pose a threat of harm (through action or inaction) to the public. The standard also holds that each person owes a duty to behave as a reasonable person would under the same or similar circumstances. While the specific circumstances of each case will require varying kinds of conduct and degrees of care, the reasonable person standard undergoes no variation itself. The "reasonable person" construct can be found applied in many areas of the law. The standard performs a crucial role in determining negligence in both criminal law (that is, criminal negligence) and tort law. The standard also has a presence in contract law, though its use there is substantially different. It is used to determine contractual intent, or if a breach of the standard of care has occurred, provided a duty of care can be proven. The intent of a party can be determined by examining the understanding of a reasonable person, after consideration is given to all relevant circumstances of the case including the negotiations, any practices the parties have established between themselves, usages and any subsequent conduct of the parties."
Although not around a sex tape, I thought about a computer virus released from my Earthlink corporate email account. If I sent it out, the virus technically belonged to Earthlink and not me. However, after talking to a lawyer about it years later, he explained there are ways the corporation could get out of the clause.
See http://www.leginfo.ca.gov/cgi-bin/displaycode?section=lab... if you don't know what I'm talking about.
(That said, Sony probably does enough different things that the difference does not matter much to most people.)
I don't think this is correct. I think if Daniel plays his optimum strategy, Nick will get the same payoff no matter what he plays.
I think this is a fairly general result: if one player is playing the optimal strategy, then once the other player has eliminated the options he should never play, it doesn't matter how his choices are distributed among the remaining options.
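A small numeric check of that claim, using rock-paper-scissors as a concrete zero-sum game (my example, not one from the thread): against the row player's optimal mixed strategy, every pure reply yields the column player the same expected payoff, so how the column player mixes among them doesn't matter.

```python
# Payoff matrix for the row player; rows/cols = rock, paper, scissors.
A = [[0, -1,  1],
     [1,  0, -1],
     [-1, 1,  0]]

optimal = [1/3, 1/3, 1/3]  # row player's minimax (optimal) strategy

# Column player's expected payoff (the negative of the row payoff,
# since the game is zero-sum) for each pure reply.
payoffs = [
    round(sum(optimal[row] * -A[row][col] for row in range(3)), 10)
    for col in range(3)
]

print(payoffs)  # [0.0, 0.0, 0.0] - identical for every reply
```

This is the indifference property of minimax equilibria: the optimal mix is chosen precisely so the opponent cannot gain by favoring any surviving option.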
1. Static Games of Complete Information
2. Dynamic Games of Complete Information
3. Static Games of Incomplete Information
4. Dynamic Games of Incomplete Information
This segmentation covers all possible types of games. It's great because then you only have to decide if the game is static vs. dynamic and whether it's a game of complete vs. incomplete information (remember, perfect/imperfect information is not the same as complete/incomplete information). If you can answer those 2 questions, then you know what kind of equilibrium is relevant. For example, if it's a game of incomplete information (meaning that there is a move of nature, or equally, that the players don't necessarily know the types/payoffs of the other players) then you know that you are playing a Bayesian game, and hence the equilibrium (if it exists) will be some kind of a Bayesian Nash equilibrium.
You can always express a game of incomplete information as a game of imperfect information (see: Harsanyi transformation). However, here's something to think about: What do you lose when you transform a game from extensive form (a tree) to strategic form (a matrix)? The answer: Timing.
I lived in Shanghai last year, and Chinese Internet surveillance is unreal. I could use Gmail chat to talk about Tiananmen Square, but as soon as I did, all of my Google apps would suddenly become unavailable. I can only assume that when I used certain keywords, my every chat was being monitored. A VPN was the only way I could access YouTube, Twitter, Facebook, and even some Google searches.
But the reality is that 90% of the young population of Shanghai didn't really care what the "great firewall" did, because EVERYONE used a VPN. I saw more people watching YouTube in China than I do in the States, even though Chinese versions of these platforms exist. Some platforms, like RenRen (Facebook-like but more similar to Russia's VKontakte), were popular, but most people just used the US-built versions. Now most of them won't be able to.
This absolutely terrifies me. I was literally minutes away from being on a bullet train from Shanghai to Beijing that killed "x" people. Chinese authorities cite incredibly low numbers for a train traveling at 300 km/h. Most non-state observers cited hundreds of deaths. China slowly grew its number from 20 to 40.
It's illegal for foreigners to talk about the "Three Ts" with Chinese nationals - Tibet, Taiwan, and Tiananmen Square. But previously the youth learned through their VPNs letting them access the outside world. With that shut down, the government might as well be burning books.
Any suggestions of software that would deploy images to various cloud services on behalf of users? I don't think China would be able to block all of ec2 and Rackspace, though they do sometimes seem to throttle ec2.
I have thought about this idea for some time. The marketplace operator could take something like a 30% cut. Any private individual could sell their internet connection to the Chinese and earn some bitcoins in the process.
There could be some rules to stop the Chinese government from knowing which IPs operate in the market. For example, someone could buy a certain VPN/IP address on a recurring basis, and others couldn't purchase that specific IP; that way the government would have no way to know how that specific connection is used.
And of course, Bitcoin isn't a very easy or well-established payment method, so bring in the resellers/market makers from China. These could (with some easy-to-use software/API) resell these VPNs to Chinese individuals.
http://en.wikipedia.org/wiki/Great_Firewall_of_China
http://www.guardian.co.uk/technology/2011/may/13/china-crack...
Sensors and bigger stripes would be nice.
(It's a great idea, and especially if she never saw one before, shows a great thought process.)
I have to add that generally a 10 year old is not going to give you the next big app idea. 10 year olds will have no more creative ideas than you or me. But just like the lottery, it could happen.
http://www.artlebedev.com/everything/air-zebra/
And the implementation is in the works - laws being passed to make those required.
Does it turn off immediately when there is no one in the crosswalk?
The question shouldn't be "how can we make crosswalks safer for pedestrians" but "why are there cars and roads where pedestrians walk". Otherwise we end up with things like the "bicycle lane".
We need to take a step back and look at the primary purpose of acquiring investors: taking capital from them now in exchange for future cash flows to them later. To maximize this in favour of our business, we need to take the most money for the minimum amount of equity sold.
Now look at the primary purpose of the management team: to use the assets and resources placed in their care to create the highest future cash flows possible. This includes the tasks that Chris is conflating with investors: securing deals, finding the best lawyers and accountants, getting meetings with difficult people. The management team is then compensated directly for their efforts.
The above is how it works in traditional companies. The investors invest capital and decide on the management team. The management team actually runs all facets of the business. In the startup world this relationship is not as simple, as the founders are both the primary shareholders and the management team itself.
What Chris is proposing is not as outlandish as it sounds - he is proposing joining the investors as part of the management team by selling shares to them for cheaper. If a share is worth $10 on the open market, but we sell it at $5 in exchange for valuable help from investors, then we are doing something very simple: we are paying these investors to be part of the management team, and in this case we are paying them $5 per share. This is a good option to take if the skills they bring are worth the $5, and a very bad option if we could hire better/more skills by taking the $10/share and then directly hiring on the market.
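The arithmetic behind that framing is worth making explicit (the share count below is made up; the prices come from the comment): the discount times the shares sold is the cash-equivalent fee you are paying the investor for their help.

```python
# Implicit fee paid by selling shares below fair value.
fair_price = 10.0     # what a share is worth on the open market
sale_price = 5.0      # discounted price for the value-add investor
shares_sold = 100_000  # hypothetical round size

implicit_fee = (fair_price - sale_price) * shares_sold
print(implicit_fee)  # 500000.0 - cash-equivalent paid for the investor's help
```

The decision rule follows directly: take the discount if the investor's help is worth more than that fee, otherwise sell at fair value and hire the skills on the market.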
TLDR: Nothing to see here - you can pay people in equity or in cash, and the choice is as difficult as it ever was.
On the other hand Zacharias's blog describes observing and responding to feelings that there was a disconnect between the funding features available to his YC class and his entrepreneurial gut.
As a "pre-cofounder" (I love the term) he was in a position to take a different course. He also enjoyed the advantage of friendships with potential investors.
I can't help but see what he describes as extending some of the fundamental YC processes beyond the point where YC kicks the baby birds out of the nest. YC works because of the trust founders place in the partners. It works because founders don't worry about the investor screwing them over, and it works because founders can spend more energy building rather than negotiating terms.
This appears to be what Zacharias did. He was careful about who he sold his company to and conscientious about the price of shares which are likely to be worthless.
It was a personal strategy - right for Zacharias. PG is right that it is poor as a general strategy for YC companies.
I can understand the reasoning but are the 'best' of this pool as good as those from the dollar-pool? Wouldn't the really good/useful ones be trying to enter the dollar-pool anyway? (NB I'm not aware of the back-story yet)
Edit: Just read both pieces and I agree that they're both right. I see this as a difference between Angel vs VC. Angels sometimes get involved because they can imagine having a useful impact on their portfolio. If an Angel takes a tiny slice of a company which also has VCs and 'YC valuations' then they may feel they have no real 'clout' or ownership in the company (not everyone wants a board seat). Even though the economic argument may be to take-whatever-you-can-get that doesn't make it fun or worth your time.
I will say that one nice thing about pricing on the higher side is that it requires more conviction on the part of your investors. Thus, you end up with investors who are committed to your company despite a higher relative price. (All investors are price-sensitive; it's just a question of degree.) So by pricing higher you end up with the same result as CZ and DS are seeking, except you keep more of the company.
And as for the price-sensitive investors, there are many reasons, aside from them just being "smart": a) they're trying to raise a fund and need lower valuations, "better" numbers, for potential LPs (since LPs are usually investors with a more traditional mindset on finance, and thus more price-sensitive), b) their existing investors want to see lower valuations, "better" numbers, c) they're able to get "good deals" (I'm not mocking this, I'm just noting it's a value judgement, not something concrete) at lower valuations, d) they care more about potential multiples on their own fund(s) than about investing in the outright best companies, and any number of other reasons.
Hunting for the cheapest relative startup isn't necessarily "smart" (nor is investing in say uncapped notes), and investing in "expensive" startups isn't necessarily dumb (nor is haggling over price).
Red investor oceans represent all the industries in existence today – the known investor space. In red oceans, investor boundaries are defined and accepted, and the competitive rules of the game are known. Here start-ups try to outperform their rivals to grab a greater share of investment demand. As the funding market space gets crowded, prospects for high valuations and rapid investment are reduced. Products become commodities or niche, and cutthroat competition turns the ocean bloody; hence the term red oceans.
Blue investor oceans, in contrast, denote all the industries not in existence today – the unknown investor market space, untainted by start-up competition. In blue oceans, investment demand is created rather than fought over. There is ample opportunity for high valuations and rapid investment. In blue oceans, start-up competition for funding is irrelevant because the rules of the game are waiting to be set. The blue investment ocean is an analogy for the wider, deeper potential of investor space that is not yet explored.
(Adaptation of the Wikipedia article on blue ocean strategy, applied to start-up investment)
In your article the "guilder-investing angels" are the blue ocean for investment in start-ups.
From that position I was able to observe fairly closely how the GNU project was being led, technically, in the days before the Linux kernel had had any real impact.
RMS's technical leadership was, I think, not very skilled. Let me explain what I mean:
If you were working on a program and sought his advice, he was very good at zeroing in on the issues and giving excellent advice. And sometimes if you were working on a program and he noticed something he didn't like about your approach, his criticisms were very good. People used to tell stories about how good a programmer he was and those stories were basically all true. He was sharp and I assume that, in spite of his age, he still is.
The problem was that he showed no effective capacity to really lead the larger meta project of pulling together a complete OS. He tried -- with projects like autoconf and documents like the GNU coding standards. And he kept a list of programs that, once we had those (he reckoned) along with a kernel -- GNU would be "done". That was about the extent of his "big picture" for project management.
Mainly, he concentrated on advocating for the idea of software freedom. I think the gambit was that if enough people demand their freedom, the project of organizing a GNU project would become easier. I don't think this gambit worked.
That was never a clear enough, coherent enough, or informed enough vision of the complete GNU project and, consequently, GNU has never really successfully gelled. You can grab some "100% libre" distributions, these days, but only barely. There is no sustainable culture and technical organization there ("yet", I hope).
The RMS failure I see is a failure at being a community organizer of GNU programmers. A lot of people got the vague idea of a GNU project. Many of us were happily recruited to the goal. But everyone I worked with at the FSF, including me, kind of went off in various incoherent directions -- doing what we guessed would help and that seemed interesting to us. We never "pulled together as a team" and, in the GNU project, that still doesn't happen.
The GNU project gradually accumulated a heck of a lot of very good "parts" but could never gel. The first three world-changing releases (GDB, GCC, and Emacs) really startled people. The various shell/text utilities in those early days spread because they were often usefully a little bit better than the proprietary "native" equivalents shipped by Sun, DEC, AT&T, etc. People sat up and took notice, but behind the scenes the project of setting up a lasting "complete OS" effort that would promote software freedom for all users never quite came together.
The "open source" people -- whom I also later worked for, because I made a mistake in trusting them at their personal word to me -- seemed at first like they might help bring resources to the problem. In fact, what they mostly concentrated on was creating proprietary products using the free software "parts" from the incomplete GNU project. In the early days they sought to monopolize some of the key labor for the GNU project (and they succeeded, because they paid much better than RMS and many of those particular hackers didn't really give a shit about the freedom of users). As the "open source" industry matured, it perfected its model: a perpetually incomplete and inadequate free software OS as a source of inspiration for enthusiastic youngsters, realized in practice as a perpetually freedom-denying set of proprietary OS products. Companies like Red Hat and Canonical realized that they could exploit the deficit of community organizing to charge high rents for libre software, so long as they didn't care seriously about the freedom of users. That's what they did and what they do.
So in my view, RMS was not good (and still is not good) at leading the GNU project -- but the real tragedy is brought on by the glad-handing, deep-pocketed, "open source" rentiers who place concern for their own profit above the freedom of the community.
Okay, I am skipping all the rant about the FSF not funding software projects to pay developers, or the GNU brand not being "hot" -- both feel a bit silly. The hotness of a brand is transient, and in reality only a handful of brands inspire users and developers. I can't see how GNU would be more or less hot than, say, Gnome, KDE, or Apache, each of which has a large number of projects under it. As for funding, since when did any of those organizations actually fund the projects? Their role is to help set up funding systems, assist with tax declarations, and provide further legal help.
Thankfully, the last link at the end (http://lwn.net/SubscriberLink/529522/854aed3fb6398b79/) sheds some light on what the actual issues are: copyright assignments being US-only, the question of who the "owner" of a community project is, Nikos' feeling that he isn't getting any tangible benefits from being under the GNU name, and lastly a request for more transparency in the GNU project's decision process.
As for those reasons, there are two I agree with and two I don't. Firstly, copyright assignments being US-only is bad and shows an inflexibility a non-profit foundation should not have. Its role is to help projects, so it should be as flexible as possible and allow copyright assignment in either the US or the EU. Second, as for who the owner of a community project is, the answer should stare the developers in the face: it should always be the community (developers and users) that "owns" the project and decides its fate. If Nikos' announcement had included a decision by the community (preferably reached in a transparent manner), it would have been hard for GNU to object. Third, Nikos' feeling that he isn't getting any tangible benefits from GNU is his to have, but legal assistance is something many projects value. If a project has no need for legal assistance, no need for help in creating donation systems, and doesn't feel threatened by lawsuits against individual developers, then a foundation such as GNU, Apache, or a similar organization is not going to provide much tangible benefit. Fourth, in regard to more transparency in the GNU project's decision process, I can only agree with Nikos. The cornerstone of a community is transparency, and GNU should be fully aware of this. If discontent is growing because of a lack of transparency, it should be addressed and fixed with high priority.
I can't help seeing posts like this and worrying about the future of open source, and wishing I had the chops to do more to help.
As I write I'm downloading a Raspberry Pi image for my son's hardware. I'm getting him an Arduino, a soldering iron and a book for Christmas. I'm looking forward to learning along with him. I don't claim to understand the particular flows of code or inspiration, but I don't see how those projects happen without open source.
I also don't see how the Pi happens without industrial scale chip production. As I understand the matter, the Pi was developed by Broadcom staff on their own or 20% time, and its production occurs on interstitial time on production lines that could never be justified by a $25 SOIC. Pi is basically a cheap add-on to a massive industrial base.
Of course one point of vision is describing a realizable potential not apparent to the rest of us. But vision can and does proceed despite deviations from its perfect realization -- and sometimes is corrected by those deviations. I deeply disagree with RMS' politics, I'm deeply grateful for his technical contributions. I hope the community can always find a way forward.
* GNU leadership seemed very stubborn from the beginning.
* GNU software is really great.
* Gnome is the new GNU.
I hope they don't lose more momentum, or the wide variety of software they write and maintain will suffer, too.
I have to say, cases like this really point out the flaws in copyright assignment. It just doesn't make sense from a developer's perspective. If you put in the work to create the code, why would you allow someone else to control the licensing and the name? With proprietary software, the reason is clear-- in exchange for money. But with open source or free software, you really have nothing to gain from copyright assignment, and a lot to lose.
If you disagree with whatever the GPLv4 ends up being (or v5, or v6...), your only option is to fork the codebase and choose a new name. Experience has shown that renaming the project loses most of the userbase (think OpenOffice vs. LibreOffice.) This just isn't right. Developers should have a say in how their code is used-- they should be consulted when the code is going to be relicensed.
...bluntly asking: why? (In any closed-source C++ project, if someone writes a "style guide and coding standard" document and the project manager supports it, people start writing "compliant" code -- grunting or moaning at first, but they do -- and then it becomes part of "company culture" and people find it natural to write code by it. I believe it was like this with Google's C++ style guide too. Why does it have to be harder for an open source project?)
I'll be honest: I could not understand what Mr Bonzini is trying to say any more than I could understand Mr Stallman's antics in the recent YouTube clip. With all due respect, what are these people on about? What is the problem? Clearly and succinctly, please.
This is a fallacy. People use this term "priced out" as if it meant some sort of process, but it means nothing more than that the investor thought the startup's stock was too expensive. And it is very stupid to let valuation decide which startups you invest in, because the variation in outcomes between startups is orders of magnitude greater than the variation in valuations. I.e. there is no value investing in startups.
What we have here is a case of anecdotal evidence. A founder happened to get some investors who hadn't invested in other startups because they felt the valuations were too high, and those investors turned out to be really helpful. But there are other investors who are willing to invest at high valuations who are helpful, and investors who seek out low valuations who aren't.
Everything is going to go wrong: optimize for having people around you that are going to help you out of THOSE times, because when it's going well help will chase you down. When it's going bad, you better have backup.
It seems to me that if the angel investors in question are really able and willing to do productive work, you could get a similar result by simply paying them additional equity as an incentive after allowing the market to set the company valuation in the natural way. In other words, same as you would for any other early stage employee. No need to artificially interfere with the valuation and set it low to attract them.
I understand all hires are much more expensive in the USA, but why not aim for bootstrapping if you have some money, hire cheaper Filipino developers to help out at the start, and then go all shiny and hustla!!
Guys, get over it. Can some of you please post some inspirational articles on how you created a bootstrapped company?
It sounds pretty good, similar to an old tape recording.
My colleagues and I can't publish in non-indexed (or weakly indexed) public journals, since we wouldn't be able to publish the same research results in a 'good' journal or conference later (it's no longer 'original, unpublished research'). In essence, publishing here would mean throwing away many months of work, since the work itself and its citations would be disregarded.