Cambridge, MA – Moments ago, Aaron Swartz, former executive director and founder of Demand Progress, was indicted by the US government. As best as we can tell, he is being charged with allegedly downloading too many scholarly journal articles from the Web. The government contends that downloading said articles is actually felony computer hacking and should be punished with time in prison.
"This makes no sense," said Demand Progress Executive Director David Segal; "it's like trying to put someone in jail for allegedly checking too many books out of the library."
"It's even more strange because the alleged victim has settled any claims against Aaron, explained they've suffered no loss or damage, and asked the government not to prosecute," Segal added.
James Jacobs, the Government Documents Librarian at Stanford University, also denounced the arrest: "Aaron's prosecution undermines academic inquiry and democratic principles," Jacobs said. "It's incredible that the government would try to lock someone up for allegedly looking up articles at a library."
Demand Progress is collecting statements of support for Aaron on its website at …URL…
"Aaron's career has focused on serving the public interest by promoting ethics, open government, and democratic politics," Segal said. "We hope to soon see him cleared of these bizarre charges."
Demand Progress is a 500,000-member online activism group that advocates for civil liberties, civil rights, and other progressive causes.
Aaron Swartz is a former executive director and founder of Demand Progress, a nonprofit political action group with more than 500,000 members.
He is the author of numerous articles on a variety of topics, especially the corrupting influence of big money on institutions including nonprofits, the media, politics, and public opinion. In conjunction with Shireen Barday, he downloaded and analyzed 441,170 law review articles to determine the source of their funding; the results were published in the Stanford Law Review. From 2010-11, he researched these topics as a Fellow at the Harvard Ethics Center Lab on Institutional Corruption.
He has also assisted many other researchers in collecting and analyzing large data sets with theinfo.org. His landmark analysis of Wikipedia, Who Writes Wikipedia?, has been widely cited. He helped develop standards and tutorials for Linked Open Data while serving on the W3C's RDF Core Working Group and helped popularize them as Metadata Advisor to the nonprofit Creative Commons and coauthor of the RSS 1.0 specification.
In 2008, he created the nonprofit site watchdog.net, making it easier for people to find and access government data. He also served on the board of Change Congress, a good government nonprofit.
In 2007, he led the development of the nonprofit Open Library, an ambitious project to collect information about every book ever published. He also cofounded the online news site Reddit, where he released as free software the web framework he developed, web.py.
Press inquiries can be directed to firstname.lastname@example.org or 571-336-2637
In this case, the indictment alleges that the documents were stolen from JSTOR, which does not even own them! In the vast majority of cases JSTOR scanned documents whose copyright is owned by someone else, and acquired or was donated a non-exclusive license to distribute copies via its service. In many cases the documents are even public domain. The indictment continues the theft metaphor by discussing the effort and expense JSTOR incurred in scanning the documents, and the alleged attempt to render this less valuable by redistributing "its" documents, analogizing this to the loss someone suffers in a theft.
But effort expended to build a private repository consisting of copies of things you don't own doesn't give you ownership of the result, any more than Google Books doing the same has given them ownership of the documents that they've scanned. If you scraped Google and "stole" their scans, you would be violating Google's Terms of Service, and Google might indeed feel subjectively like you've taken something of value (their exclusive access to this repository of scans), but I think it would be a stretch to say that you've "stolen" "their" documents.
Defending his actions would require a very strong, multi-pronged version of the argument "if it's physically / technologically possible, it must be ok." Can MIT legally limit guest access to its network? Can JSTOR limit access to its content? Well, technically, their software didn't limit it, right? He just changed his IP address and they let him right back on, gave him permission. And then he had to change his MAC address. And then physically move to a different building.
But it doesn't matter anyway, because legal restrictions are legal restrictions. It's impossible to enforce every legal restriction in software. Put another way, we don't have to read JSTOR's server code to figure out if there's a violation of policy here -- the policy is written out as a legal document.
In the hacker world, there's a tendency to think that if something's possible, even easy, then it shouldn't be considered "breaking in" or "stealing." If my Gmail password is "password," then of course you're going to read my email! I had it coming. In the real world, though, this is still a crime.
JSTOR Statement: Misuse Incident and Criminal Case
The United States Department of Justice announced today the criminal indictment of an individual, Aaron Swartz, on charges related to computer fraud and abuse stemming from his misuse of the JSTOR database. We have been subpoenaed by the United States Attorney's Office in this case and are fully cooperating. While we cannot comment on this case, we would like to share background information about the incident and about our mission and work with the academic community and the public.
Last fall and winter, JSTOR experienced a significant misuse of our database. A substantial portion of our publisher partners' content was downloaded in an unauthorized fashion using the network at the Massachusetts Institute of Technology, one of our participating institutions. The content taken was systematically downloaded using an approach designed to avoid detection by our monitoring systems.
The downloaded content included over 4 million articles, book reviews, and other content from our publisher partners' academic journals and other publications; it did not include any personally identifying information about JSTOR users.
We stopped this downloading activity, and the individual responsible, Mr. Swartz, was identified. We secured from Mr. Swartz the content that was taken, and received confirmation that the content was not and would not be used, copied, transferred, or distributed.
The criminal investigation and today's indictment of Mr. Swartz has been directed by the United States Attorney's Office.
Our Mission and Work
Our mission at JSTOR is supporting scholarly work and access to knowledge around the world. Faculty, teachers, and students at more than 7,000 institutions in 153 countries rely upon us for affordable and in some cases free access to content on JSTOR. Since our founding in 1995, we have digitized the complete back runs of nearly 1,400 academic journals from over 800 publishers. Our ultimate objective is to provide affordable access to scholarly content to anyone who needs it.
It is important to note that we support and encourage the legitimate use of large sets of content from JSTOR for research purposes. We regularly provide scholars with access to content for this purpose. Our Data for Research site (http://dfr.jstor.org) was established expressly to support text mining and other projects, and our Advanced Technologies Group is an eager collaborator with researchers in the academic community.
Even as we work to increase access, usage, and the impact of scholarship, we must also be responsible stewards of this content. We monitor usage to guard against unauthorized use of the material in JSTOR, which is how we became aware of this particular incident.
Paragraph 35 & 36: which "protected computer" on MIT's network did he access? Certainly they're not trying to claim his laptop was a protected computer? Are they talking about the DHCP server or whatever registration frontend MIT has for the DHCP assignments? I have trouble with the concept that a violation of a computer use agreement (when there are no operative security barriers in place) constitutes a violation of the computer fraud and abuse act. Then again, I've always thought that act was vague and therefore overbroad.
Obviously what he did was bad in some sense (at least from the perspective of JSTOR and MIT), but even if it should be a crime rather than a civil dispute or internal disciplinary action at MIT, I don't like the fact that just about any misbehavior on the internet becomes a federal case because the probability of no interstate resources being used is very low.
Finally, I take issue with the notion that someone who is accessing a service through a public interface is criminally responsible for downtime if too high an access rate causes service degradation or an outage. The claims that JSTOR's servers were overloaded and (one?) even went down at some point are clearly there to set up a later claim of damages. Haven't they heard of rate limiting (in this case, since it was a rogue laptop stashed in a data closet, rate limiting by IP)? That wouldn't work against a concerted denial of service attack, but this was no denial of service attack. JSTOR seems to have been relying on manual intervention to stop article leeching that could lead to a (partial) outage. That's naive, and not a good idea.
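The kind of automated guard the comment has in mind is easy to sketch. Below is a toy per-IP request counter in bash; the limit, the IPs, and the function name are all hypothetical, and a real deployment would sit in the web server or firewall rather than a script, but it shows how little machinery "rate limiting by IP" actually requires:

```shell
#!/usr/bin/env bash
# Toy per-IP rate limiter (bash 4+ for associative arrays).
# LIMIT and the addresses are placeholders for illustration.
declare -A count
LIMIT=5

allow_request() {
  local ip=$1
  # Increment this IP's counter, defaulting to 0 on first sight.
  count[$ip]=$(( ${count[$ip]:-0} + 1 ))
  if (( count[$ip] > LIMIT )); then
    echo "deny $ip"
  else
    echo "allow $ip"
  fi
}

# One client hammering the service trips the limit;
# an occasional client from another address does not.
for i in $(seq 1 6); do allow_request 10.0.0.1; done
allow_request 10.0.0.2
```

In practice you would also reset counters on a time window (the "per second" part of rate limiting), which this sketch omits.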
"As Swartz entered the wiring closet, he held his bicycle helmet like a mask to shield his face, looking through ventilation holes in the helmet."
I know he won't get 35 years, but it's nevertheless outrageous that it could happen.
A place where all academic research that has been funded in part by public funds is published, journals be damned. Hopefully with deep pockets to fight off the lawsuits."
Nowhere do they say he did not do it however.
There has got to be more to this story, because I just can't for the life of me believe that he would download the documents to "free" them on the internet (as is alleged).
Oh, the irony.
1. Wire fraud maxes out at 20 years outside of a presidentially-declared emergency. No fine cap, it seems. http://uscode.house.gov/download/pls/18C63.txt
2. Computer fraud under 1030(a)(4) caps out at 5 years with no prior offense, no fine cap. http://uscode.house.gov/download/pls/18C47.txt
3. 1030(a)(2), (c)(2)(B)(iii) looks to be another cap of 5 years. Ibid.
4. 1030(a)(5)(B), (c)(4)(A)(i)(I),(VI) looks like another cap of 5 years. Ibid.
IANAL, just trying my best to read the code itself.
It is alleged that he signed up for guest accounts on their network with different laptops, changed his MAC address and re-registered if the IP he was using was blocked (by JSTOR) or cut off of the network (by MIT), and finally connected a laptop in a basement networking closet.
I guess you could say that is 'hacking' in the unauthorized access sense, but not in any meaningful sense. It isn't breaking and entering if someone repeatedly trespasses somewhere (say, banned from a store) even if they change their clothes to avoid detection.
"Our ultimate long-term objective is to make JSTOR available to everyone who wants access to it, while doing so in a way that ensures sustainability of the service."
Cynically, it seems like the bit about "ensures sustainability" can be translated as "we will aggressively prosecute in order to protect our bloated salaries."
This is not the first time he has done something like this if memory serves me. In late 2008 Mr. Swartz and Carl Malamud went to select libraries, ones with free PACER access, and proceeded to download ~700 GB of information that was behind a paywall. After which they made all of it available on Mr. Malamud's website.
Reviewing the Indictment report, "JSTOR did not permit users... to download all of the articles from any particular issue of a journal." Further, "JSTOR notified its users of these rules, and users accepted these rules when they chose to obtain and use JSTOR's content."
The report goes on to claim that Aaron took action to "avoid MIT's and JSTOR's efforts to prevent this massive copying". MIT and JSTOR allowed users to access their network, with no system in place to ensure that a user was a student (by design, as MIT admits) or that they were using their real name (or a single MAC address). A researcher accessing JSTOR is really less of a concern than other potential types of access, so perhaps this is not a good system. The report suggests Aaron took action to "elude detection and identification", but courts have held that anonymous speech and action are valid parts of society. They take issue with his using a Mailinator address, but such an email address is just as valid as any other and simply allows others to read one's mail.
The report whines that the "rapid and massive downloads and download requests impaired computers used by JSTOR to service client research institutions". This inconveniencing of other users could have been avoided and the blame for how JSTOR allocates resources lies with the architects of JSTOR.
MIT acted to ban the IP ranges that they believe were in violation of their rules. Users were to use the network to support MIT's research, or at least not obstruct it. However, very likely Aaron was conducting research. Any hindrance to other users may have been the responsibility of MIT's infrastructure team. They further request users "maintain the system's security and conform to applicable laws, including copyright laws", seemingly suggesting Aaron was in violation of copyright. Very importantly, MIT should remember that when it comes to copyright, "Reproduction for purposes such as criticism, comment, news reporting, teaching, scholarship, or research, is not an infringement of copyright." The last point of MIT's rules is that users "conform with rules imposed by any networks to which users connected through MIT's system", which makes little practical sense and is certainly selectively enforced. If a JSTOR web server now counts as a network, so does my personal web server. On all HTML files on my webserver I link to a ReasonableAgreement-style notification that no user may browse such files between 8am and 11pm EST. Any MIT student, faculty member, or guest who connects during those hours is in violation and should be kicked by MIT, for if a rule is to be fair it should be consistently enforced. This third rule is simply a CYA clause, and its selective enforcement is arrogant.
Why is the Obama administration pursuing an investigation into Wire Fraud and Computer Fraud?
Also: nice going, Aaron! Drag research access into the 21st century, kicking and screaming!
Does anyone think it's odd that an Acer laptop could write these files to disk faster than JSTOR could serve them?
Demand Progress is an organization Aaron co-founded. They've done some great watchdog work on things like PROTECT IP, the Patriot Act, the Internet Blacklist Bill etc.
It is mind-boggling that supposedly smart people still haven't gotten their heads out of their asses, in a world frighteningly short on the distribution of knowledge that could be used to solve the wicked problems crippling it.
We really need a global, openly accessible knowledge network and a platform where all who are eligible can contribute and collaborate on research, at least in areas that impact human society at large - medicines, natural resources, etc. It is hard otherwise to see how things like cancer and energy shortages can be tackled.
Aside from the allegations about breaking into various physical hardware infrastructure at MIT, wouldn't that be like being charged with downloading too many Jonathan Coulton albums?
How do they know this? Has he said something to that effect?
--edited for formatting.
All the documents have been returned?!
Liberating those documents from JSTOR would have been quite a gift to society.
He may very well die in prison.
Or perhaps he will be forced to publicly recant and merely be forbidden from using computers. I hope that in the latter case he will have the good sense to emigrate.
One day, his tormentors will be harshly punished. Unless, of course, "the future is a boot stamping on a human face – forever."
"Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th."
Why not just say "JS is the common language of the web" without the horrible analogy.
This won't be fully true until everyone is on the same page with ECMAScript5.
This problem is arising frequently for me with Circles - I'm not exactly sure what the impact will be beyond blanket posts to Friends and Public.
conveniently, i have a meeting with the plus folks so i will tell them :)
I mean, come on, you just alienate a ton of users without implementing iPod or iPad compatibility...
I can't really give any impressions on the app without these things working, but viewing photos and photo comments feels nice and responsive.
I was able to find it there.
Update: I'm reading that it's only available on the iPhone currently. Alas, my iPod Touch and iPad are not the iPhone. If this is the case, then that's too bad.
(That's what it says when I try to install it on my Wifi iPad and it did not show up in the App Store on the iPad either)
If you can build even a primitive version of this deployed well, it would be the Google-killer. Have it answer everything from 'how do you make a for loop in php' to 'what cinemas near 32807 are playing Cars 2'? I don't want pages of links, I want concise understandable objective correct answers.
1. Does age matter?
2. How important it is to be in Silicon Valley?
3. Are biz devs really important?
4. Are cofounders really important?
If this takes off at all, there is no way that a lone programmer can provide adequate security for a project like this. You will get hacked, there's just so many potential attack vectors.
Wonder if there's a way to distribute a pot that doesn't require trusting a central server?
Your request was blocked by BlockScript based on the policies of the strikesapphire.com website.
BlockScript is security software which protects websites and empowers webmasters to stop unwanted traffic. BlockScript detects and blocks requests from all types of proxy servers and anonymity networks (such as web-based proxies, open proxies, Tor, VPN servers, etc.), hosting networks, undesirable robots and spiders, and even entire countries. For more information, see: www.blockscript.com
One needed to make >= $4000/year in deflated 1893 dollars to even be eligible to need to pay any taxes. The goal was always to tax corporations more heavily than people.
Even the poorest Americans end up spending more than 2.4 percent (effective annual rate) in taxes when their employers "withhold" earnings.
So when corporations go out of their way and spend a lot of money to accomplish tax avoidance, that seems kinda wrong.
Github, and virtually every other thing that costs more than $20 a month, targets primarily a B2B market. It might be popular with some local poor 20-somethings, but honestly, you're just an infection vector to get your day job on board.
The pricing is designed to extract maximum value out of business customers. If they have 125 simultaneous projects, they officially have More Money Than God. "The price of a residential Internet connection" is not a pricing anchor to them. (Should they need one, they're probably going to be persuaded by "We have 500 man-years of labor in our projects, one man-month costs us $15k, lemme break out Excel for a minute, oh it seems all my options cost pigeon poop.")
I strongly, strongly encourage you to listen to the Mixergy video about Wufoo or talk to anyone who runs a SaaS business if you do not understand where most of the money is likely getting made. That topmost plan which costs $$$$$ prints money, primarily from people who don't need all that it offers and couldn't care less because it costs less than pigeon poop on their scales.
If you don't use Github for your projects because $100 is a lot of money for you that's perfectly fine for Github because it does not make them meaningfully worse off.
Ask yourself what kind of markup we'd have to charge on storage space and still be able to grow our business when most of the repos we host are less than 1 MB.
We charge what we do because it makes money. Money that allows us to continue hiring really talented people that are all focused on building an even better service.
Doing things like including private repos with our free plan would eat into our margins and only satisfy the people that are likely to never convert to a paid plan. Frankly, I think being able to use all of the tools we provide for the price of a pint of Guinness every month is a damn good deal.
Let's look at the standard plans for smaller teams - it maxes at $22/month for 20 private repos and 10 collaborators. Not bad.
On the business side the max is 125 repos for $200/month.
Even in the midwest a full time dev costs say at least $4,000 a month. Assuming you have a team of 10-20 devs, that is what $40,000 - $80,000 a month.
So, at the high end to keep your team of 10-20 devs happy it costs you an extra $200 a month on top of the $40k+ you are spending in salary and so forth. Drop in the bucket.
And if you're an indie dev and you can't afford $22/month for awesome code hosting for all your projects, you are the kind of cheapskate who might as well look elsewhere. Also, there are a TON of options out there like bitbucket, assembla, and so on if you want "cheaper" hosting.
Seriously, you could put out a crappy android app that makes you $100 a month in a weekend and that pays for your github hosting.
It's not that people have a problem paying $22 for 20 repos, it's that the 21st repo costs $23 per month!
Github's pricing structure has friction in this area. Without a controlled experiment it's impossible to determine whether this pricing model is best for Github or not.
Imagine if when you bought toothpaste there were two options, a small travel-size tube for $1 or a crate full of 500 full size tubes for $250. Or imagine if a restaurant served ice cream at $0.25 for a spoonful and then your next option was a full gallon.
The friction occurs b/c people don't like wasting money, and the pricing model Github has chosen feels like unused repos are costing money but not being put to use.
In other words, there is a nonlinear relationship between money spent and usefulness gained per dollar, which makes it difficult for people to maximize utility over. This is friction and it probably has mixed results. I think the most important thing to note is that we don't know whether it helps or hurts Github's business to do things this way. Assertions that it does one vs the other are only speculation.
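The nonlinearity is easy to put numbers on using two tier figures quoted elsewhere in this thread ($22/month for 20 private repos, $200/month for 125). These are the thread's figures, not an authoritative price list, but the arithmetic shows why the jump between tiers feels lumpy:

```shell
# Per-repo monthly cost at two plan tiers mentioned in the thread.
# Figures are from the discussion, not an official price list;
# computed in integer cents for clarity.
small_price=22;  small_repos=20
big_price=200;   big_repos=125

echo "small tier: $(( small_price * 100 / small_repos )) cents/repo"   # 110
echo "big tier:   $(( big_price * 100 / big_repos )) cents/repo"       # 160
```

Notably the per-repo price goes *up* at the larger tier, which is consistent with the earlier point that the top plans are priced to extract value from business customers rather than to track marginal storage cost.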
See http://news.ycombinator.com/item?id=2674417 for the discussion of Codeplane on HN.
I'd still like a good way to back up my private solo-project repositories off-site using git, but I suppose DropBox works pretty well for that?
For example, my company has two full time devs and a few contractors for small projects. We have accumulated over 30 projects and that number will continue to grow. It's not uncommon for an older project to be re-opened after a period of inactivity for new features or fixes. It's not cost effective to shell out $100/month. We've moved to Springloops which allows 10 active repos and unlimited archived for $15/month.
Ironically it's nicely illustrated by the employee/owner of a web agency complaining in the comments. Obviously it worked and Github managed to extract more value from a larger customer.
I can have all the private repos I want by creating repositories on my computer. Git is decentralized. Putting it in a central location is centralized. :)
But seriously, I can have as many private repositories as I want - all I need is a server with SSH support.
What I want is the user interface for adding comments and collaboration on my private repos that I get for public repos. If I find that valuable to me, I'll pay it. If it's a "toy" project that I'll never touch, a local repo and a backup of my computer is all I need - I don't need others to have that code.
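The "server with SSH support" setup the comments above describe is just a bare repository you can reach. A minimal sketch, with local `/tmp` paths standing in for an SSH remote (which would look like `user@host:path/to/repo.git`); all names here are placeholders:

```shell
# A bare repository is all a private Git host is. The /tmp paths
# stand in for a server reachable over SSH.
git init --bare /tmp/private-host/myproject.git

# Make a local working repo with one commit and push it to the "host".
git init /tmp/worktree
cd /tmp/worktree
git -c user.name=demo -c user.email=demo@example.org \
    commit --allow-empty -m "first private commit"
git remote add origin /tmp/private-host/myproject.git
git push -u origin HEAD
```

What this setup does not give you, of course, is exactly what the comment says it wants: the web UI for comments and collaboration.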
GitHub is the Library of Alexandria, not a safety deposit box.
There's nothing stopping a customer from cramming several projects into a single Git repository. You could theoretically take advantage of GitHub's "unlimited" storage for cheap this way. The problem is, you need separate repositories if you want to manage access for different collaborators.
Folders aren't expensive, but access management can be. Github understands this, which is why their Business plans, which are differentiated by having finer-grained access control features, are more expensive.
I personally prefer doing public development, but this has been cited by a number of colleagues as a reason not to host at github.
I've personally witnessed individuals with email Inboxes with over 50,000 items in them -- total size 30GB. No use of folders, no meaningful search capability.
Imagine if someone thought a blogging software was like a diary in days gone by. "You mean everyone can see what I write in my diary?! How terrible!"
What if Western Digital used Dropbox's pricing plan?
Not as sexy as GitHub but it has Trac (or Agilo Trac) and a choice of GIT/SVN. Unlimited projects, but it has disk space limits and user limits that differentiate the levels (similar to dropbox.) Cost structure is here:
There is no public visibility on projectlocker.com, thus it is best for teams that don't want their stuff public (which is actually most companies).
Disclaimer: I have used PL as a paying customer for 4 years at the Equity level (<30 users, <30GB of repos) and am very happy with it. I haven't noticed it go down in all that time.
Likewise, it's funny to think that you could encrypt your git repositories and use github public hosting for private projects. I wonder if someone already did, but I guess github wouldn't care (if you're doing this, you wouldn't be paying for the service anyway).
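One hedged way to do what that comment describes is to host only an encrypted snapshot, which works on any host, public or not. The paths and passphrase below are placeholders, and a real setup would use proper key management (tools like git-remote-gcrypt exist for transparent encrypted remotes):

```shell
# Sketch: keep only an encrypted snapshot of a repo on the host.
# Paths and the passphrase ("hunter2") are placeholders.
git init /tmp/secret-project
cd /tmp/secret-project
git -c user.name=demo -c user.email=demo@example.org \
    commit --allow-empty -m "private work"

# Pack the full history into a single bundle file, then encrypt it.
git bundle create /tmp/project.bundle HEAD --all
openssl enc -aes-256-cbc -pbkdf2 -pass pass:hunter2 \
    -in /tmp/project.bundle -out /tmp/project.bundle.enc

# Anyone holding the passphrase can decrypt and clone the history back.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:hunter2 \
    -in /tmp/project.bundle.enc -out /tmp/restored.bundle
git clone /tmp/restored.bundle /tmp/restored
```

As the comment guesses, the host sees only an opaque blob, so none of the web UI features work on it; you are paying (or not paying) purely for storage.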
Surely developers aren't that desperate for a nice UI for their git repos?! I assume that there are a bunch of web based repo browsing tools you could install for free if you were that hell bent on looking at your code in a web browser
Also, putting my grumpy hat on, what's with all the cheapskate whining? "Give me more, I want it FREE!"... bleh.
There are similarities to 1998: enough of a boom that's unlike safe historical value-anchors to make the skeptical scared, but also enough novelty to suggest continuing upside as the opportunities are explored.
The recent memory of the dot-com boom, and the general malaise in the rest of economy, changes the expression quite a bit. Are we hiding the exuberance that would otherwise signal excess, and thus don't realize how late in the cycle it already is? Or are we proceeding at a measured pace, keeping perceptions closer to sustainable valuations, meaning we're still early in a now longer, slower cycle? I wish I knew for certain.
Things never fell back to the 1994 level. In fact, it looks like they didn't fall too far below where they're at now.
I was a lot more worried about a tech bubble -before- I saw those charts.
They're very special in that they have a lasting technical edge, or you could say they've gotten lucky. I think major companies that could compete with them think the technology they're using is so hard to work with that it's a permanently boutique business, but they have always had a cautious eye towards scaling the business, and I don't see any reason why they can't accomplish it someday.
(edit: seems the site is back now)
(edit2: maybe not)
As the graphs in this article show, the tech industry has recovered at a fairly constant rate since 2000. There was a drop in funding in 2008, corresponding to the drop in the rest of the US economy. The tech industry managed to bounce back from that incredibly quickly, resulting in what everyone is now saying is a bubble.
However, looking at the entire situation, not just the tech industry, we're in wildly different economic times now than we were in 2000. The US economy as a whole is still fairly depressed, while the tech sector is sort of an anomaly that's doing extremely well.
It's certainly a possibility that this is a tech bubble that'll burst and drop the tech industry back in line with the rest of the economy. However, to look at history as a guide once more, it's much more likely that the rest of the economy will continue to recover and enter a boom cycle (which shockingly also happens roughly once every 8-10 years) and catch up to the tech industry. At which point a lot more than just tech companies and investors will have a lot of money to spend. Until, of course, something else triggers the next major economic downturn sometime between 2016 and 2020.
I don't find the argument as laid out particularly convincing, though it's nice to have data to look at.
I would say no. And that is why, while I suspect what we're in is nowhere near as big as the .COM bubble was, the road ahead is also not smooth. But then again, ups and downs are also nothing new or worth worrying much about.
The very early stages of .com 2.0 produced some of my favorite startups. And the most optimistic part of me is curious about what great things will rise from the ashes of this/next bubble.
There are a lot of lists like this.
So, they don't write tests at justin.tv, and they don't do automatic deployment? Sounds like a great place to work at...
I do remember someone from the staff coming into the overcrowded channels and expressing astonishment that the chat was actually standing up and surviving the load.
Besides that, some phrases they translated into my language are just preposterous; usually it's better not to translate a site at all than to do it that badly.
I did read all the comments where they've pointed out their many improvements; I hope it will get better and better, since a platform like that has great potential.
Overall, their product is good enough to make me pay for a stream or two. But when a user who is used to free video finally pays for yours and you can't deliver - ouch.
Downvotes, pfft . What a joke.
$ type echo
echo is a shell builtin
V6: http://www.bsdlover.cn/study/UnixTree/V6/usr/source/s1/cat.s...
V7: http://www.bsdlover.cn/study/UnixTree/V7/usr/src/cmd/cat.c.h...
Plan 9: http://plan9.bell-labs.com/sources/plan9/sys/src/cmd/cat.c
BSD: http://www.koders.com/c/fidF501905968D8BE7BBDD355C3C8DB62804...
GNU: http://git.savannah.gnu.org/cgit/coreutils.git/plain/src/cat...
(alluding to a quote from him that I can't source, "One of my most productive days was throwing away 1000 lines of code.")
The "-n" special case opened the floodgates for many more options. And what if I actually wanted to print "-n"? There's no way to do it.
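The trap is easy to demonstrate with bash's builtin echo (behavior varies by shell, which is part of the problem). printf sidesteps it, though that only reinforces the point about echo itself:

```shell
# With bash's builtin echo, a lone "-n" argument is swallowed as
# the suppress-newline flag, so echo cannot print it.
echo -n              # prints nothing at all

# printf treats everything after the format string as plain data,
# which is the usual escape hatch.
printf '%s\n' -n     # prints the two characters: -n
```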
It's close to the FreeBSD implementation.
Shell is my favorite programming language these days. For systems programming, no other scripting language gets me terser or more accurate programs. Once they work, my shell scripts are essentially side effect free.
That said, I use it for true pipelines. An MVC framework sounds insane. But I already learned some new Bash tricks from reading the code so keep up the madness!
Original: http://home.tudelft.nl/en/current/latest-news/article/detail... which, unlike the physorg blogspam, includes links to sources with more information.
For anyone who doesn't think this is a big problem, picture taking DOUBLE the population of the USA, moving them to an area ~1/3 the size and having them defecate in the open with no sanitation. This (from what I've read) is pretty much the situation in India, where more people have mobile phones than access to proper toilets.
My grandfather's brother lived on a farm all his life; there was never any kind of toilet on the farm. While there was always access to clean water from a well (which may not be the case in the poorer parts of the world), running water was installed inside the "house" only in the early sixties. "House" is in quotes because it looked more like "the part of the stable to be used by humans".
We used to go there often during the summer. We did our business behind the house, like everyone else. We loved everything there.
And what about health? My grandfather's brother died past 80; when he died his wife moved to the "city" where she lived to see her 94th birthday.
They would've had a hard time believing people would ever want to gasify their poo with "plasma created by microwaves".
In fact, so do I.
What baffles me is that the people bright enough to invent it didn't come up with a similarly brilliant way to deploy it.
Is this really a problem that needs a new solution?
If I recall, you can't pee in them due to ammonia, but there are separate ways to handle fluid wastes.
Perhaps the article is just trying to disguise the use of incinerators.
We are talking mostly about places without proper sanitation; how are they supposed to have (and sustain) facilities for generating plasma?
Left to itself human waste degrades and the residue can be used for fertilizer or maybe even fuel. A cesspit or septic tank provides for safe storage until it is ready for disposal, preferably in a way that aids the local population.
(it's the part about farmers; not sure of the exact timestamp, sorry, it's been a while)
EPIC way to say "for god's sake keep shitting"
Not sure how this works, but it sounds like it would only worsen the situation.
I hope those who needed a reminder about the capabilities of girls and women see this news.
However, (and I realize this will be controversial) imo this result doesn't contradict the general notion that girls (in mainstream American culture) are discouraged (by societal and cultural pressures) from engaging in science and engineering. Two of the three girls are Indian-American and these girls presumably don't face the same pressures that most other American girls face.
For that matter, I think that women in countries like India, China, Russia etc. are much more represented in science and engineering (in their countries) than native-born women in the US.
Am I the only one who feels like, although men may 'dominate' the hard sciences, one of the reasons (among many others which I won't discuss) why girls may succeed at stages like this is because of the encouragement from a "girl power!" (perhaps, underdog?) mentality?
If boys had won and gone with "guy power!", they would simply be accused of being sexist or even misogynist, rather than seen as fair-minded or intrinsically motivated.
I somehow find this quote to be rather unfortunate. I know he meant it in a good way, but it makes it seem as though the general public needs to be reminded that women aren't less intelligent than men. Perhaps for many that reminder may be required, but I guess the fact that we need such a reminder saddens me. I would have preferred something like that not be mentioned. Or perhaps I would prefer that it need not be pointed out at all.
On a related note, here's a NYT blog post that I saw recently that highlights the science fair winners' gender and just goes up in flames: http://bits.blogs.nytimes.com/2011/07/13/girl-power-wins-at-... It ends by suggesting that Google hired Marissa Mayer and Susan Wojcicki because they would help recruit more women, which made me cringe, even though I know that the author probably didn't intend to make exactly that point.
Original study: https://sites.google.com/site/ampkandcisplatinresistance/
Doesn't seem to have been published yet, but presumably it will be.
It still remains to be seen how doing more of something (or being more successful at something) in high school translates to being successful at that endeavor later in life. Time will tell!
Glad to see I was mistaken :)
Edit: I see, the graphs are continuous.
It's really difficult to conclusively prove anything until you're looking back at the event. Until then there will always be two sides to the argument.
We are not in a tech bubble that resembles the dot-com days. There is a ton of money at the seed and early stage for untested ideas (Color) and many VCs and angels will lose money. But more entrepreneurs will get funded and that instinct to gamble is why America has such a terrific tech sector.
Also, I can't speak for anyone else, but New York simply doesn't appeal to me as a wannabe hacker. I like San Francisco and the surrounding area. I like the culture and the people. New York really doesn't have that same draw for me. That said, for those from the East Coast who would otherwise go to MIT or CMU, a top tier engineering school in New York might well be a viable alternative.
However, I think the idea that New York needs to be the capital of technology over Silicon Valley is like New York trying to become number one in Chicago-style pizza; it's just silly.
Technology is going to be a driving force in our economy because technology simply means producing useful things we don't yet have. To make it about some kind of dominance over another state within the same country is unnecessary, just grow your own legacy.
This isn't necessarily a bad thing. But the picture today suggests that the technology sector doesn't tend to create a lot of jobs, even when it creates extremely useful innovations or novel avenues of entertainment.
NoScript would, but (imho) it's too intrusive.
Would Adblock Plus have blocked it? What about Ghostery, Disconnect, et al.?