This guy is not necessarily a troll. Of course this is just speculation, but if a project he had been working on for a few years got cancelled, I could understand his frustration pretty well, even if the decision to cancel turned out to be valid from a business point of view in the end. I don't think it's fair to stick labels on people (whether on the "troll" or on Steve Jobs) without knowing the whole story.
I know one of the early engineers who wrote the low-level software for that device. He was one of the more arrogant engineers I've known and basically dismissed the iPad because it didn't have enough "power." When he showed me the Kno tablet, I said "I couldn't even fit that thing in my backpack, let alone on any desk. You're never going to sell this thing to people." He insisted that power was more important.
And it turns out he was wrong because he was thinking like an engineer. Kno scrapped that idea and decided to build exclusively for the iPad. http://techcrunch.com/2011/04/08/kno-bails-hardware-30-milli.... Good on them.
And for what it's worth, the market seems to have proven Steve right. Nowadays, we can see some of Apple's competitors resorting to "different" in an attempt to gain traction. Doesn't seem to be working for them, either.
This is a lesson Microsoft needs and has never really learned, neither under Gates nor Ballmer. The bizarre approach in Windows 8 that has all kinds of UI doing the same thing with no clarity around development platform sounds exactly like what Jobs talks about with people going in 18 different directions.
What a great insight, it's really striking a chord with me right now.
I will read through the App Engine docs every day or so to figure out what cool thing I can make out of the APIs provided. Maybe I should forget that and just think to myself 'what would I want to use?'.
It also underscores the importance of being able to translate tech "pieces" into compelling products. I'm an app developer. When reading documentation for the latest release of iOS or Lion SDKs, and seeing all of the new APIs, I feel like a kid with a brand-new box of Legos. The challenge (and art) is in combining these technologies to build something actually catchy.
Here's to one of the greatest capitalist visionaries of our lifetime though. In jean patches no less.
Yi has taken a wonderfully pragmatic approach to implementing a text editor. They were working on an incremental parser framework to power the editor. The framework was inspired by Parsec and aimed at parsing incomplete code while typing. Emacs has similar parsers, but in Emacs they're implemented as a messy pile of Emacs Lisp, while the Haskell parser in Yi tries to do it with an easy-to-read domain-specific language.
I also like how Yi has a very flexible frontend. They ship with Vim- and Emacs-like configurations to get started.
The editor itself has a great fluid feel. Looks-wise, the default theme is nice, and "Soda Dark", which seems to be a community favourite, is gorgeous.
If you're new to 3.1, the following resources will help you to get started:
I would have found the experiment more convincing if it had been used to validate the basic assumptions of the theoretical model instead (e.g. the statistical distribution of the baggage loading and seating times).
It looks like it's the same physicist, and the same algorithm. Furthermore, HN had pretty much exactly the same discussion.
Plus ça change, plus c'est la même chose. (The more things change, the more they stay the same.)
There's another submission from over a month ago here:
That submission describes how ...
American Airlines undertook a two-year study to try and speed up boarding. The result: the airline has recently rolled out a new strategy, randomized boarding.
This submission from 1300 days ago - http://news.ycombinator.com/item?id=111416 - is a paper from Arxiv, suggesting that boarding times can be cut by a factor of 4. Guess who it's by - yup, our favorite physicist again. So he's been at this for 3.5 years. There are just 5 comments on that submission.
This latest paper is here: http://arxiv.org/abs/1108.5211
That was linked to from this submission: http://news.ycombinator.com/item?id=2943615
It was also referenced in the article pointed to in this submission: http://news.ycombinator.com/item?id=2943003
All in all, a popular topic that's been going for 3.5 years from this one physicist at least.
Despite his perseverance, it hasn't been adopted on any of the flights I've been on.
So here's a list of some of the previous HN items on this topic:
I think it's really cool, though in the article I read before, one major obstacle to implementing this is that you'd be splitting up groups boarding together (even parties of just 2 people travelling together).
I feel like that might be a tough message to try to explain to everyone at the airport, since in general people are worried about everyone in their party making it on the plane safely and with all their stuff. Gate agents have enough worried customers as it is.
In all seriousness, the boarding problem only got worse once airlines started charging for bags, since people started carrying on more and more. I read somewhere that Southwest actually saves more money by offering free checked bags and cutting boarding time than it would make by charging for checked bags.
In fact, I'd be hard pressed to think of a worse way to board a plane. And yet somehow every time I fly that's how it happens. Maybe it's just that my company chooses horrible airlines.
The article mentions that assorted methods of boarding were tried, though it only goes into detail about "the Steffen method". I wonder what the difference between blocks-from-the-front and the obvious improvement of blocks-from-the-back is.
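If it helps to make the comparison concrete, here's a rough Python sketch of how the two orderings could be generated for a toy single-aisle cabin. My understanding of the Steffen method (window seats first, back to front, skipping every other row so adjacent boarders are two rows apart) is secondhand, so treat the details as an approximation rather than the paper's exact algorithm:

    # Toy single-aisle cabin: 12 rows, seats A-F (A/F window, C/D aisle).
    # These numbers are made up for illustration.
    ROWS = list(range(1, 13))
    SEATS = ["A", "B", "C", "D", "E", "F"]

    def back_to_front_blocks(block_size=4):
        """Board in blocks of rows, rear block first, any order within a block."""
        order = []
        for start in range(len(ROWS) - block_size, -1, -block_size):
            block = ROWS[start:start + block_size]
            order += [(row, seat) for row in reversed(block) for seat in SEATS]
        return order

    def steffen_like():
        """Approximation of the Steffen order: windows, then middles, then aisles;
        within each wave, take alternate rows from the back so neighbours in the
        queue are two rows apart and can stow bags at the same time."""
        order = []
        for wave in ("A", "F", "B", "E", "C", "D"):
            for parity in (0, 1):   # even rows from the back, then odd rows
                order += [(row, wave) for row in reversed(ROWS) if row % 2 == parity]
        return order

    print(steffen_like()[:4])   # [(12, 'A'), (10, 'A'), (8, 'A'), (6, 'A')]

The point of the alternating rows is that everyone in a wave can stow their bags simultaneously instead of queueing in the aisle behind one another.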
Not entirely joking!
Ryanair has an average turnaround time (the time between when the plane lands and when it takes off again) of 25 minutes. I've been passed through the gate and been waiting at the door before the plane I was to travel on had even landed.
Getting passengers to board when (and only when) it's actually their turn: priceless
(and, I suspect, far trickier)
I skimmed the paper and didn't see any mention of how to get passengers to obey gate agent instructions, though, which would be a prerequisite of implementing an effective boarding method. Perhaps this should be re-tested by airlines randomly selecting sold out flights with identical planes to try these methods.
As much as someone wants to say "we've studied this already, and this is the best way to do it!", you'd have a hard time convincing me that any major airline knows how to make good decisions about anything.
This really isn't a smart industry, in many respects.
Furthermore, there is no point getting too geeky about a complicated boarding sequence if passengers are going to get unhappy over it.
Better to do a Steve Jobs and keep things simple.
What you do need is the business savvy to understand that boarding is ordered by passenger value (first and business class first, followed by premium, platinum, frequent flyers, etc.). These passengers pay a high premium to board and deplane first. Airlines are not going to lose these valuable passengers for a gain whose magnitude is uncertain at best.
"And, more importantly, the lower energy range from 114 to just under 145 billion electron volts, a region of energy that Fermilab has determined, through earlier experiments, may harbor the Higgs, has not been ruled out. But the Higgs is quickly running out of places to hide."
The article's main flaw is its assumption that, because the remaining mass window is "small", it decreases our chances of finding the Higgs. This is not the case for several reasons. Most importantly, it has been known since the planning stages of the accelerator that a Higgs with such a low mass is more difficult to find, in the sense that it requires running the experiment for longer, collecting more statistics, before we can decide whether or not it exists. So it comes as no surprise that we first have conclusive results about the higher mass range. It just happens that the Higgs, if it exists, doesn't have a high mass, so we keep looking.
It is expected that in 1-2 years we will have enough statistics to either discover or rule out the Higgs in the remaining mass window.
The second mistake the article makes is in claiming that not finding the Higgs is somehow a bad thing. That it means the LHC was a waste of taxpayer money. I would say quite the opposite. If the LHC finds the Higgs and nothing else, then it will only confirm our existing model and we will learn nothing new about the world (except for the value of the Higgs mass). This is the worst possible outcome. On the other hand, not finding the Higgs would be an extremely exciting result, since it would open the way to less well-explored ideas about the origin of mass. The goal of the LHC is to teach us about the world, not stroke physicists' egos and tell us how clever our existing theories are.
This is infuriating. A negative result is a successful experiment. Don't hamper efforts to fund science with the argument that science might figure out it was wrong.
Europe explored the universe where America did not.
More context here:
I'm a little surprised by that comment. The SSC would have been almost 3x more powerful than the LHC. I still feel like particle physics has been set back decades.
They have 95% confidence that it's not in the 145-466 GeV range.
They haven't searched the 114-145 GeV range. There's still plenty of work to be done, and sensationalist headlines only serve to misinform.
So if it turns out that it doesn't exist, where do we go from here? What are the alternative theories? It's been a while since I went into particle physics at all deeply, so I don't know all the leading theories and their pros and cons.
Also, could ECC memory for clients solve this problem?
''At the time GIC released the data, William Weld, then Governor of Massachusetts, assured the public that GIC had protected patient privacy by deleting identifiers. In response, then-graduate student Sweeney started hunting for the Governor's hospital records in the GIC data. She knew that Governor Weld resided in Cambridge, Massachusetts, a city of 54,000 residents and seven ZIP codes. For twenty dollars, she purchased the complete voter rolls from the city of Cambridge, a database containing, among other things, the name, address, ZIP code, birth date, and sex of every voter. By combining this data with the GIC records, Sweeney found Governor Weld with ease. Only six people in Cambridge shared his birth date, only three of them men, and of them, only he lived in his ZIP code. In a theatrical flourish, Dr. Sweeney sent the Governor's health records (which included diagnoses and prescriptions) to his office.''
The reverse of that phenomenon is that, given a data set in a high-dimensional space (even 3 dimensions, if each dimension has more than a few bits of entropy), it will cover the dimensions very sparsely (even if it's large!), and therefore it's relatively easy to recover specific details of the sample from the aggregate statistics.
edit: Well, I was hoping this might be a new insight, but in fact there's a good 2005 paper exploring that connection in much more detail: http://www.vldb2005.org/program/paper/fri/p901-aggarwal.pdf
For the busy HNer, it's not even necessary to click on the link to get the key idea from the article.
The ZIP codes in Israel are assigned per street, not per city. Given Israel's population of just under 8M, I believe a very high percentage of people (over 95%) can be uniquely identified.
 - http://blog.y3xz.com/post/7846661044/data-mining-the-israeli...
I've been pondering a useful way to have /<yourname> in a URL, so that everyone with that name can use a URL containing it without collisions. Of course, I always end up with something like website.com/a3fx/<yourname>, which is arbitrary and ugly. With this stat, however, it seems we have something close to a non-colliding, pretty, meaningful addressing scheme, i.e. website.com/<dob>/<gender>/<zip>/<yourname>. Sure, it's a bit long, but it provides assurance that you're getting who you think you're getting.
My point is that this shouldn't be THAT surprising. I suspect full name and gender would uniquely identify a fair portion of the US as well. We're not as homogeneous as some societies, and I think this proves it...
What I'd find amusing would be how much of the population is uniquely identifiable by browser+plugins+os+resolution.
What I'd be really interested to see is why this works, and what it tells us about the distribution of population by ZIP code. I'd imagine the places where this doesn't work as well are the most densely populated ZIP codes, where the likelihood of duplicates on the given key increases, but I would never have guessed that the accuracy would be anywhere near 87%. (Maybe there are a lot more ZIP codes than I thought? Maybe they used ZIP+4?)
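A crude back-of-the-envelope model (my own assumption of uniform distributions, not real census data) shows why the number comes out so high: there are far more (ZIP, birthdate, sex) combinations than there are Americans, so most occupied combinations hold exactly one person.

    import math

    # All of these figures are rough, round-number assumptions.
    population = 300_000_000          # US population
    zip_codes  = 42_000               # roughly how many ZIP codes exist
    birthdates = 365 * 80             # ~80 plausible birth years
    sexes      = 2

    bins = zip_codes * birthdates * sexes     # ~2.5 billion combinations
    lam  = population / bins                  # average people per bin

    # Treat bin occupancy as Poisson: the chance that nobody else shares
    # your (ZIP, DOB, sex) bin is exp(-lambda).
    p_unique = math.exp(-lam)
    print(f"{p_unique:.0%} uniquely identified")   # ~88% with these numbers

Real ZIP code populations and birth years are anything but uniform, which is presumably why the measured figure (87%) comes in a little under this naive estimate, and why dense urban ZIP codes drag the average down.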
Area_ID + DOB + Order Number + Checksum
For order number: Men are assigned to odd numbers, women assigned to even numbers.
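If the format really is Area_ID + DOB + order number + checksum (I'm only going off the description above, so consider this hypothetical), then sex falls straight out of the order number's parity:

    def sex_from_order_number(order_number: int) -> str:
        # Per the scheme described above: odd order numbers are men, even are women.
        return "male" if order_number % 2 == 1 else "female"

    print(sex_from_order_number(17))   # male
    print(sex_from_order_number(24))   # female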
ZIP codes with large populations are probably closer to 50% than 87%, and obviously the reverse is true as well. I wonder what the population size for a ZIP code would have to be to get really close to 100%. Just throwing some numbers into a calculator, I'd guess 15-20k people would be damn close, so 10k is probably just about a unique identifier.
Sex, gender or age is allowed.
When someone steals your laptop, they are initiating force. In doing so, they give you the right to use methods, such as this software, to recover the laptop. But that right only extends to recovering the laptop, not to unnecessarily violate the privacy of the thief. While the thief does owe the laptop owner compensation for the crime, this is something that is determined via court, not by the victim of the crime.
So, yes, you have the right to hack into your laptop, turn on the webcam, collect evidence necessary to locate and recover the laptop, but you don't have the moral right to exploit the thief beyond that.
I remember the Defcon presentation had some privacy-obscuring bits for the thief when he took nude shots of himself in the shower. That's appropriate. Shaming the thief by showing their face is reasonable, but only if you know they aren't an innocent party. Sharing the thief's nude photos with the police, or with any other third parties, when they are not necessary for recovery, is a violation of privacy.
At least morally. Who knows whether a government in the US will hold the police accountable for any immoral actions.
I can't comment on the law but I really don't understand people who are ok with this kind of behavior. Thieves don't suddenly become fair game for any treatment just because they are thieves.
Uh, no she doesn't. Buying a working laptop for $60 easily constitutes knowing the laptop was being sold for less than its true value. Sorry, I don't have the link, but this whole knowingly-receiving-stolen-goods thing was explained fairly well in some HN comments a couple of months back.
A scary thought? That's why it's wrong, because it's scary? I've grown to expect slightly more sophisticated arguments from the people I agree with.
This is pure B.S. right? "I'm eager to see what a jury will think."
A jury is composed of randomly selected individuals. So he's saying "I'm eager to see what a group of random people will think." Clearly doesn't make sense.
Basically what he's saying is, "A group of random people are clearly going to side with us."
I didn't realize that this was a use-case anyone was interested in, but it sort of makes sense. I've added a parameter you can pass to RH.org to deal with the gravatar pull on my side, so you don't have to.
This adds a bit of server load, since I need to make a bunch of requests, but it's not THAT bad.
If you pass gravatar=yes, it will make a pull to the gravatar URL for that address. If something exists there that isn't the default, it will issue a 301 over to the site.
Otherwise, it will return the robohash you requested. It also passes the size param over to Gravatar, just to be nice ;)
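For anyone curious how a "does a real Gravatar exist?" check can be done: Gravatar supports d=404, which makes the default image come back as an HTTP 404 instead of a placeholder. A minimal sketch of the flow (my guess at it, not RoboHash's actual code):

    import hashlib
    import urllib.error
    import urllib.request

    def gravatar_url(email: str, size: int = 80) -> str:
        # Gravatar hashes the trimmed, lowercased email with MD5.
        h = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
        # d=404 asks Gravatar to return 404 rather than a default image.
        return f"https://www.gravatar.com/avatar/{h}?s={size}&d=404"

    def avatar_for(email: str, size: int = 80) -> str:
        url = gravatar_url(email, size)
        try:
            urllib.request.urlopen(url)     # 2xx means a custom avatar exists
            return url                      # the service would 301 here
        except urllib.error.HTTPError:
            return f"https://robohash.org/{email}?size={size}x{size}"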
They're intentionally ugly. The theory is that an ugly default avatar makes the user more likely to upload their own image.
Each pair of hex digits covers 256 values, and I count 16 pairs, but I can think of only 8 elements in the photos (eyes, ears, nose, mouth, head, body, arms, background).
If it's those 8 elements and each is derived from four hex digits of the hash, then you need 65,536 or so different ears.
I'm guessing the solution to this is to reduce the hash further... so you get more repetition where two texts produce the same robot, but apparently they've not made it too bad.
How far did robohash reduce it? Or how far should one reduce it? How many images does a robohash art set have?
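My guess at the reduction (speculation, not RoboHash's actual source): you only need a few hex digits per part, reduced modulo the number of images in that part's set, so individual parts collide often but whole-robot collisions stay rare.

    import hashlib

    # Hypothetical part sets; the real counts depend on the art set.
    PART_COUNTS = {"body": 10, "face": 10, "eyes": 10, "mouth": 10, "accessory": 10}

    def robot_parts(text: str) -> dict:
        digest = hashlib.md5(text.encode("utf-8")).hexdigest()   # 32 hex chars
        parts, offset = {}, 0
        for name, count in PART_COUNTS.items():
            chunk = digest[offset:offset + 4]     # 4 hex digits = 65,536 values
            parts[name] = int(chunk, 16) % count  # reduce to the images available
            offset += 4
        return parts

    print(robot_parts("hello@example.com"))
    # With 10 images per part there are only 10^5 distinct robots, so different
    # strings can occasionally map to the same robot.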
It's a fun trick, but it shouldn't be possible.
An excellent bonus - my robohash looks exactly like Bender from Futurama :^)
Edit: I'm also clueless when it comes to vimscript. It would be nice to see some stuff on that.
Are there plugins/gems for something like this or was this made from scratch?
Not sure I know what it is, though. I really like the VS debugger and programming in C# and C++ (both with VS). So, what exactly will I give up if I switch to vim?
With the ban on VPNs, steganographic techniques that make encrypted traffic look like regular traffic will become more and more common. The troubling thing is the fact that these techniques are somewhat hungry for bandwidth.
At my university, students are required to browse through an authenticated proxy (which we have to sign in to using our university IDs), which logs our browsing history. This is done so that they can comply with the PTA's requirement that an ISP should be able to provide browsing history of all users for the last 45 days upon request.
Never mind that it's trivial to get around that proxy; all it actually does is mess up stuff like Windows updates, gaming, etc.
Or is the plan that the punishment for stepping outside the lines be enough to keep people from experimenting with these technologies?
VPNs work too, so far. I'm on one right now. As to why: the filtering system the government is using is so brain-dead that there is basically one Juniper router and a couple of Cisco routers (last time I looked) through which the entire country's traffic is routed.
Using a VPN makes web browsing much faster, with no annoying "waiting" moments - which I presume is the routers locking up under massive load.
The day VPNs are blocked is going to be a sad day indeed. I am going to explore alternatives to VPNs. Way back in the days of super-slow dial-up, I used services that would take a link and email you the page or the entire site in a zip file, depending on the command you sent.
Why would a tech company even consider spreading/outsourcing to Pakistan after this?
Though of course they could just tap into the local last mile...
How can any global company now do business in Pakistan? Surely there is some kind of back door in there.
You can basically build and browse profiles like these - http://igeek.at/deryldoucette
We have a bunch of apps on there right now, but we wanted to put it out on HN and see what people think. If you want to see some app added, please let us know. We'd also appreciate any feedback. This is a side project right now, but if there is a lot of interest, we will certainly keep working on it.
I suspect the fact that git exhibits a deep comprehension of that history is one reason git pretty much took over the mindshare for DVCS in record time. Where it took years for Subversion to oust CVS, and numerous DVCS systems had been plodding along for years, git was the obvious leader seemingly overnight.
So many projects have vastly over-engineered interfaces and component architecture and such, with a huge variety of interdependencies, often proudly, as though it is a benefit.
It's super nice having the dependencies specified by semver ranges in a package.json and installed locally in a project's node_modules directory, so that libraries can't step on each other's toes.
The "Don't use Bundler version X with RVM version Y" can be specified directly in the package.json and concurrent versions of module dependencies even work without incident in the same project.
NPM is basically encapsulated into one command (npm), and there are no ways for modules to modify the way Node itself works, without the permission of other modules, plus NPM automatically resolves dependencies and ensures that things Just Work.
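For anyone who hasn't seen one, a made-up package.json looks roughly like this: each dependency gets a semver range, and npm installs every one of them into the project's own node_modules rather than globally:

    {
      "name": "example-app",
      "version": "0.1.0",
      "dependencies": {
        "express": "~2.5.0",
        "underscore": ">=1.1.7 <1.3.0"
      }
    }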
I spent a few weeks "interviewing" cofounders and working with a few, before choosing one, starting to build a product, and applying to (and scoring an interview with) YC. But during that time I realized that this person was not the person I wanted to start a startup with. Thankfully we didn't get in, and we parted ways. Since then, I've decided that having a partner would be awesome, but I'm not wasting time looking for someone (especially with the very small pool of people who want to work on wedding startups)... instead I'm focusing on building as fast as possible on my own (product: http://weddinglovely.com, launched and making small bits of revenue).
I'm applying again for YC this round as a single founder, and I would encourage people not to rush out and find last-minute cofounders just to apply. Spend time now finding a cofounder, then apply for S12 after a good few months of working together; and if you can't find the right cofounder, then start building yourself (especially if you're not technical).
The biggest problem was that we didn't really have a template for making decisions. Up to that point, our only goal had been getting into an accelerator, which we somehow expected to solve all our problems. And I'd already decided that I'd rather lead a startup than get a PhD, so I was very focused on that goal, while they were more focused on schoolwork. So the way decisions were made before the summer was that I'd say "I did some research, and think we should do this" and they'd say "OK." It was very unilateral. Once we started working together full-time, though, that wasn't tenable, and there was a very fine line I had to walk, to provide both autonomy and a sense of direction. And we never could come to an arrangement that made everyone happy.
That's not to say that I would have been more successful as a single founder (I had pretty minimal dev experience at the time), but there's a very good reason that one of the YC application questions is "Please tell us about an interesting project, preferably outside of class or work, that two or more of you created together." If you don't have a good answer for that, my advice is to hold off on doing YC and just figure out something substantial that you can build together.
Like in so many things, it's best to minimize the hand-wringing and make a decision based on your personal working style, who's available to join you, and how badly you want to be funded by an entity with a strong bias against solo founders.
The absolute best way to prevent this is to have customers demanding your product now (or by a certain date). The next best is either to be dependent upon the startup for income, or to have other people pushing (and pulling) you forward (or pushing them, at other times). Not sure which is better.
I've been focused on credibility and positioning. Waiting for the right co-founder. Forming strategic relationships, staffing up and making key investments. Book deal pending, just acquired a software company.
However, I'm not sure what the odds are for successful single founders. And at least some acquire co-founders (like dropbox).
In my experience the best way is to start alone and actively promote your work to others. Pretty soon people who are serious about working with you will present themselves.
I find it quite difficult to agree with this statement based on personal experience. For me, it is quite easy to try things to see if they work, which often leads to going round in circles until a solution becomes obvious. I found that when working with other people, it is a lot easier to put ideas out there, and you usually get the weak points pointed out to you right away. Also, having said that, I am finding that most of my dev activity on my current project involves stripping out functionality that simply turned out to be unnecessary or half-baked. I feel that having a co-founder would be a great help in avoiding some of that unnecessary work in the first place.
I'm a few years out of university and have moved around a lot. Most of my trusted friends nowadays are non-technical, which makes my candidate pool nearly empty. I agree completely that getting on board with someone you don't know well is a bad idea.
It seems to me that all the rules about needing a team to get funded are restricting startups to the realm of college/uni students, and there are probably a lot of opportunities being missed.
If I've missed this suggestion in another thread, please let me know.
In the meantime, I've made a google spreadsheet so please add your details if you want to find other HN readers taking part. http://bit.ly/pLCRzg
Edit: Also found this on reddit - http://www.reddit.com/r/aiclass
I'm likely to sign up for the ML one as well.
On a side note, I'm deciding whether to take this class or the ML one. In my line of work, I do believe that the ML class would be more beneficial, but the AI one seems way more interesting.
I was wondering if someone can comment on the suitability of this book?
It works under Chrome though.
As far as I am concerned, publicly funded research papers should (must) be freely available. If the public is funding it, then the public has a right to the fruits of that investment. And newspapers must be able to link to or reference a source when they quote or review academic literature (in fact, I think it should be required by law).
A very simple solution would be for authors or institutions to make copies freely available on their websites. I can only assume that they are not allowed to, due to copyright imposed by the journals.
It's ironic that the invention of the www was driven by the need for an easy way to freely distribute and share academic literature.
P.S. There's also a strong case for privately funded research to be made public too. Companies who make product claims based on privately funded research for example absolutely must make this research ("research") available for the public to review. It is notoriously hard to get pharma firms to cough up the papers which support their claims for the latest wonder drug.
In Natural Language Processing / Computational Linguistics, the professional society (Association for Computational Linguistics, ACL) was its own publisher, with no profit motive, and so authors for its conferences and journal never signed over copyright (merely granted permission to ACL to publish the work). For years, it was quite standard for nearly all of the authors to post PS or PDF versions of their papers on their own websites. Then ACL started accepting PDF instead of camera-ready, and just posted the PDFs themselves; and then they started scanning the back-catalogue.
The result of this is that the vast majority of all NLP/CL papers ever written (excluding only those published elsewhere, e.g. in AAAI, and a very few missing proceedings from fifty years ago) are available online, for free, in PDF, at http://aclweb.org/anthology-new/ .
This is how science should be.
His argument was the following: In many fields such as laboratory science, research is expensive; one has to apply for grants and then spend the money, and these departments have large budgets, and this all looks good to deans. If a department is going through a lot of money, then it must be prestigious, important, and doing good work.
I heard a joke once that mathematicians are the second-cheapest academics to hire because all we require is a pencil, paper, and a wastebasket. But, in fact, we require online access to all these journals, for which we have to spend a ton of money. Spending all this money makes us look good to our deans, and lends prestige and the look of importance to our department, and allows us to compete with other departments for resources.
I think it's a bunch of BS, frankly, but it's the one time I heard the existing system defended, so perhaps it's worth bringing up.
SSRN makes posted PDFs available for free download. The Wikipedia entry says that "In economics, and to some degree in law (especially in the field of law and economics), almost all papers are now first published as preprints on SSRN and/or on other paper distribution networks such as RePEc before being submitted to an academic journal."
Quality and prestige metrics: SSRN ranks posted papers by number of downloads, and it also compiles citation lists; if I successfully find Paper X at SSRN, I can look up which other SSRN-available papers have cited Paper X. (Sounds like a job for Google's PageRank algorithm, no?)
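It really is a natural fit: treat each citation as a link from the citing paper to the cited one and iterate. A toy sketch of PageRank over a citation graph (illustrative only; it says nothing about how SSRN actually computes its rankings):

    # Toy citation graph: edges point from a citing paper to the paper it cites.
    citations = {
        "A": ["C"],
        "B": ["A", "C"],
        "C": [],
        "D": ["C", "A"],
    }

    def pagerank(graph, damping=0.85, iters=50):
        rank = {p: 1.0 / len(graph) for p in graph}
        for _ in range(iters):
            new = {p: (1 - damping) / len(graph) for p in graph}
            for paper, cited in graph.items():
                targets = cited if cited else list(graph)   # dangling: spread evenly
                for q in targets:
                    new[q] += damping * rank[paper] / len(targets)
            rank = new
        return rank

    print(sorted(pagerank(citations).items(), key=lambda kv: -kv[1]))
    # The much-cited paper "C" comes out on top.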
According to SSRN's FAQ, it's produced by an independent privately held corporation. I assume that means they're a for-profit company. I don't know how they make their money, other than that they will sell you a printed hard copy of a paper, presumably print-on-demand.
The author neglected to mention that peer reviewers work for free, and that the editorial boards are also made up of scholars who work for next to nothing. (edit: see reply below, this was in the article and I missed it)
It used to be that it fell to the publishers to typeset the articles, but with the advent of TeX they don't have to do that either. (in my field anyway)
Speaking as an academic, these companies do nothing for us. The sooner we agree on an alternative model which doesn't go through them, the better.
In the 1830s, Ireland was mapped in great detail (for tax purposes) at roughly 1:6,500. These maps would still be very helpful for OpenStreetMap, and are obviously out of copyright. A few Irish universities have them (e.g. the Trinity Map Library, a copyright deposit library for Ireland & the UK); however, they charge something like €50,000 for a copy of the digital scans of the full set. Other libraries are similar.
Universities really are not pro-sharing.
It would be great if somebody could provide an insider's account of why the academic publishing industry maintained those margins from 2000 to 2010. Did nobody propose legislation to stop them? Were there no criminal investigations? Who are these people connected with politically? What sorts of causes do they contribute to? Just how is the status quo maintained? The guy made a point I was already predisposed to agree with, then kind of went on a rant about how bad it all was. Hey, I'm with you. I'm just not sure my useful knowledge of the issue has increased any.
But a less obvious - and personally very painful - consequence of greedy publishers is the inability to do serious academic research outside of academia/industry. I have the ability (math PhD) and the will (I have published a few papers post grad school, though it's hard to find time), but I have all but given up due to lacking access to books and articles behind these paywalls. Yes, you can find a decent chunk of articles online -- but very often there are one or two (or more!) key papers you _need_ to read to be at the front of a field, and one of those will be behind a paywall. The worst part is that I never know how truly useful an article will be before reading it, so in the few cases where I've paid, I've found it worthwhile only a small percentage of the time.
In short, this system essentially kills research outside of academia / industry.
Why can a boutique shop sell a $50 dress for $200? Taste. One could simply walk into that boutique, confident that 20 minutes later a cute, fashionable and well-fitting dress would be acquired.
Why can top universities charge so much for tuition? Every year, %s University generates a curated list of individuals, and many hiring processes (not to mention ad-hoc interpersonal filtering processes) emphasize individuals in that list. Like boutique shopping, this is an expensive strategy that often excludes superior talent, but is fast.
Is it worth $200,000 to have one's name on that list? Apparently.
Is it worth application fees and an ironclad publishing agreement to have one's paper published in Nature? Apparently.
This is the only problem standing in the way of open access publishing. While the arXiv doesn't offer peer review and so doesn't negate the need for journals, the ecosystem would quickly adapt to open peer review. Unfortunately the implied reputation of being published in certain journals is still something that's too ingrained in academia. It's getting better slowly but it's going to take at least a generation to go away at the current rate.
To make matters worse, academics are among the most tradition-bound creatures in the universe, especially where there is no clear criterion for truth (which is the case in pretty much all of the social sciences and humanities). The only thing they have to calibrate against is consensus, and consensus favors the institutions already in place.
The unanswered question is "Why is the market failing?"
A couple unmentioned ideas:
- Until recently, tuition hikes went unchallenged.
- Faculty have a vested interest in maintaining the system. (If my publishing in Journal X marks my competence, what happens if it goes away?)
- An alternate system for rating a very hard-to-measure topic would be needed. Counting publications and references in scarce journals is imperfect, but nothing else has beaten it.
I don't have an answer but perhaps a couple bright entrepreneurs could figure out a better equilibrium, and find a way to cross the chasm to get there. Geoffrey Moore would say pick one vertical or academic discipline.
The system could keep volunteer-based peer review, and establish a (perhaps private) forum-like interaction for the authors to improve their article.
Google Scholar has solved many of my article-search problems and often gives me a direct link to the PDF of the article (sometimes just a preprint). However, the problem remains for the libraries, which might well be the largest contributors to publishers, and which may find it hard to cut a subscription and tell their users to use Google Scholar instead.
Cornell has about 18 libraries and is slowly implementing a "Fahrenheit 451" plan to eliminate them. First they eliminated the Physical Sciences Library, next the Engineering Library, and they'll eliminate most of the others, one at a time, until there's nothing left but a remote storage unit, lots of computers, and a few pretty library buildings for show. Since it's happening slowly and only affecting one community at a time, they'll avoid a general uproar.
If I blame anything, I blame the institution of tenure, which can be seen more clearly as a cause of moral decay than ever.
Workers and capitalists alike will fight to the death to protect the interests of groups they are a part of because shifts in the rules can cause their personal destruction. A man with tenure knows he can't be ruined, so he's got no reason to ever take a stand.
http://www.hhmi.org/news/20110627.html
http://www.eurekalert.org/pub_releases/2011-06/gsoa-gln06211...
which states that all NIH-funded research must be placed in this database ("PubMed Central") within 12 months of publication. I do not know how widely it is obeyed, but I know that the several labs I've been in regularly deposited their papers there.
Why would a government want to remove the middle-man when the middle-man is making enough money to lobby the government to protect them? -_-'
- SNI: doesn't work with the Cedar stack or < IE8
- Hostname: strips headers
- IP: costs 100 dollars p/m
Nice job Heroku.
Personally, the only terrifying thing about this is that when Heroku releases something, it's been in the works for a good long while, and they usually have a stream of incredible releases waiting right behind it. The mind boggles at what they're going to be doing next.
But we'll be first in line for it :)
It's not clear from the pricing page, but I'm assuming it's per month, in which case it's reasonable, especially given the value-added services on offer, in particular the ability to fork the database and the automated creation of read slaves.
It looks like a solid offering. It's probably not right for companies that are subject to HIPAA or PCI compliance requirements, and there's no information on the use of compiled extensions in the db, which may limit its utility if you want to use PostGIS or other specialist datatypes.
I looked for any more documentation about that, but the only official word I have from them in the past is the support ticket I filed last year stating that they don't support "additions" like user defined functions or the various contrib modules.
I don't use postgres - I hope heroku expands this kind of service to more databases (although I don't think it's likely in the near future).
The smallest database is also pretty expensive. I wouldn't mind a cheaper plan with fewer resources.
Only being able to create one database of each size is just weird, especially if forking or following. Can anybody confirm that this is a limitation? (I only read about this from one of the other comments here)
* What do I do if I need to tune/configure Postgres for my workload?
* How is the performance of transferring all of the queries and responses across the internet?
Keep in mind I've never used postgres before so these may be moot points.
I guess I'm probably one of like three users of PL/python, especially since it's untrusted. Worth a check, though!
I'm sorry, but daily backups are not only for high availability, but also for point in time recovery.
What if $dev drops the user table by mistake? Do they provide backup for that?
And I'm confused by "one database only". Does that mean CREATE DATABASE databases? If so, why?