I also noticed recently that, in a somewhat shell-like way, you can press the up arrow to edit your previous message at will (but perhaps this is common knowledge).
guy> I love you darling!
girl> GO 2 HELL!!
One of his arguments is that these languages are often only mentioned in conference proceedings.
How you get to be a PhD student in computer science without realising that conference proceedings are the leading distribution mechanism for knowledge in the CS research world is a mystery.
I may only be a humble honours student, but the central importance of conferences over journals has been drummed into me over and over by my professors.
The whole deletionism fiasco at Wikipedia is ultimately a software and UI failure. Misguided people who in most cases could never write a good article (or even improve an existing one) themselves are running amok because the system is reinforcing the belief that their only talent, destroying information, is also a valid form of contribution. It is no statistical accident that rampant wiki deletionism is even more intense in "strict" countries such as Germany.
At the same time it is important to note that a lot of articles have serious shortcomings and are in need of improvement. While deleting them is in my opinion unforgivable as long as they contain useful information, I believe Wikipedia could profit from a more modern approach to article rating and validation. If substandard articles were allowed to continue existing albeit with low ratings and missing validation tags, Wikipedia as a process could focus more on improvement as opposed to gleeful pruning. If they concentrated on more constructive measures and included better ways of gathering user feedback for quality control, they could also provide former deletionist users with a UI option that simply prevents them from ever having to see an article that is below a certain quality threshold. Everybody would win.
As it stands today, Wikipedia increasingly fails at its stated mission of being a repository for the world's knowledge. Sadly, I don't believe it is possible to change Wikipedia in any way, ever. Someday, someone will have to come along and fork it.
> All that donation money, and they still can't afford enough hard drive space to avoid deletionism.
The guy allegedly doing the flagging has responded on his user page: http://en.wikipedia.org/wiki/User:Christopher_Monsanto
Edit: The quoted comment was in jest, and too many missed this, so I'll reinforce that by adding 'and not serious'.
The Notability guidelines often bother me, really, as they are a somewhat silly set of 'rules' in many ways, and not everything fits into a nice and tidy system. For example, Christopher M seems to feel that the requirement is that all languages must be cited in well-published and well-cited academic papers, and that there is no other way around it. That's just silly. There could be new and growing languages that are of importance, or older ones that were important at the time but weren't written up in papers and aren't being actively used. Do they each have a purpose, and are they important to the people researching things via Wikipedia? Yes. They are.
I feel that there is more to be lost by most deletionist activity than there is to be gained. The risk evaluation here almost always (except in cases of spam and self edits, which are frequent) should lean on the side of having more information available, not less.
The project codename is 'Infinithree' ('∞³'), and I'm discussing it pre-launch at http://infinithree.org and (Twitter/Identica) @infinithree.
I played with this language a few years back and thought it had great promise (when C# was much less capable). I have read the exact Wikipedia page you deleted, and it got me to write some code in Nemerle.
* Btw, this might get some publicity for Nemerle (and the other languages).
The narrator of Foucault's Pendulum, when he decides to become a freelance researcher, says that his main principle will be that all information is equal: no piece is more precious than any other.
This is a far less useful way of doing things, but it flies almost completely under the deletionist radar. There is little cultural dance pertaining to the concept of notability for mentioning something in a list, and no bureaucratic pseudo-procedure for a deletionist to wield against such practice.
Otherwise, Mr. Monsanto has every right to push his agenda on Wikipedia insofar as it is within the bounds of legal play on the site. Attacking his character gets nobody anywhere, and probably adds credence to whatever he's doing. If you're really concerned about deletions of your favorite PL articles, sit on them. If a request for removal/deletion (I don't know the wiki-jargon) pops up, just dump all over it. Even better, improve the articles. He can't get something deleted that's not mediocre. Agents like Mr. Monsanto will actually improve the quality of your average article one way or the other. I'm impressed that somebody would bother reading so many articles and post meta-data about them....especially on a topic that so few people engage in.
It's curious that Mr. Monsanto targets pages that don't meet his criterion of having been cited in a 'top-tier' publication. There are so many articles on Wikipedia that have no ties to anything real. Is it really fair to hold PL topics to academic-level standards? What if somebody considers PL an art, or something other than semantics and formalisms? This does happen, and people who create new languages from languages that aren't considered much in the PL community might actually fall into these categories.
I think Mr. Monsanto would do well to spell out his criteria for what isn't desirable in precise and formal terms.
>Raj Reddy (dr. is so unnecessary)
>Randy Pausch (dr is unnecessary)
>Benjamin C. Pierce (Don't need dr.)
and so on
If someone thinks the language is not notable, there is a discussion page attached to the main article where such things can be expressed. The obscurity of the language can also be communicated in the article itself. While lots of us can be pretty sure Nemerle will have no lasting impact in the field, they can be wrong.
Not just anyone can invent a programming language; it's not comparable to your pet rock band. Chris, you have clearly displayed that you are not capable of handling this subject satisfactorily, and you've displayed arrogance in response to people's distress.
Simply put - marking the articles for deletion was rash, and in the larger sense unjustified.
I don't understand what the cost is. Why not make a list of "notable" programming languages so that people who want to browse around can skip the less influential or newer ones like Nemerle? But to delete hundreds of languages (and if you apply these rules consistently, you need to delete hundreds of languages; you've missed lots of them) is a travesty.
Now Google and Wikipedia are failing at the same time. Bad.
The easy reaction would be to focus on the flamers, harden your heart and drive ahead. The wise man, here, stops and thinks for a bit.
BTW, why the hostility and the mob mentality? I thought he articulated his arguments clearly and quite well, without malice.
I fear that this issue has become less about fixing broken Wikipedia policies that encourage people like Chris to delete articles, and more about "teaching him a lesson".
The number of ad hominem attacks in the original story is much higher than anything I'd expect from HN. And, considering HN is self-policed in a lot of ways, I think the issue needs to be pointed out.
[only partially tongue-in-cheek here...]
If anyone thought what I was doing was wrong, they could have just sent me a friendly message and I would have politely discussed the issue.
Amen to that. Others have said it many times before and it is still true: Wikipedia's notability guidelines would benefit from being fixed. It's rather annoying to note that what this guy was doing was within the letter of the law and yet seemed so wrong. There is no way that the likes of Factor or Clean should be deleted from Wikipedia, so the question becomes how the guidelines can be changed so that they don't allow it to happen.
That's how these things work Chris.
"The Net interprets censorship as damage and routes around it." -John Gilmore 1993
The problem with Wikipedia is that it does not have a target audience defined clearly enough to answer these questions easily. And as with many sites, it's not useful except for the specific use case it was designed for. For example, could a student learn any calculus from the calculus page?
We should stop trying to force Wikipedia to be the ideal resource for everything, as it's clearly impossible. There are better media for that.
Basically, it's a fun little project for me to try out a whole bunch of new stuff, including the File API, the drag-and-drop API, OpenCV and Canvas. It works best in Firefox, and otherwise in Chrome (though performance on Mac is abysmally slow).
Let me know what you think, and please set a laserified photo as your Facebook profile picture :)
Update: also, definitely make an iPhone app for this!
I must say the same as I did on another project here today - it needs a link back to the result. Then it'll spread virally (I will share for sure :D)
I'd put the "Login with facebook to post pictures to Facebook!" box under or next to the picture.
Keeping it hidden and making it appear after the laser has been placed might also be a good idea.
Hey, I didn't notice the Facebook Connect thing. You should center it above the pic.
Also, you could load the profile pic by default to save the user a step.
It would be cool to let me pick from a list of my friends and give 'em laserey.es.
...and throw an ad on there, dude!!
Once Django is Py3K-ready, all other projects will follow suit.
And there's a version that shows rails3 compatibility amongst other things here:
Do people actually think that retail price is a function of production/R&D cost? It's not, and never has been. Dropbox charges $20 for something that consumes marginally more resources and incurs identical R&D costs compared to the $10 product. Chip manufacturers do this all the time. Discounts for electronics and groceries are fully artificial.
A much more realistic model is price as whatever the market can bear. I sincerely doubt that there is a moral obligation to set price points in any other way.
[This may be relevant: http://www.joelonsoftware.com/articles/CamelsandRubberDuckie...]
Rather than getting angry at the company for segmenting its market, why not enjoy the fact that you can buy their top-of-the-line product for a fraction of the price? Rather than raise a stink and force them to do something about it, why not stay relatively quiet about it and let those of us in the know profit from it?
I'm still upset that Bausch & Lomb got raked over the coals so publicly for packaging the exact same lenses as monthly, weekly, and daily wear at different prices. I would have been perfectly happy to wear my "daily" lenses for a month each, thus saving several hundred dollars a year. Instead, a witch hunt was raised and they were forced to actually develop a flimsy contact lens to sell cheaply.
It's a good thing, and now you know about it. Try not to ruin it for the rest of us.
People are talking about market behavior and what not, but I don't feel like Sennheiser is behaving in good faith here. They are intentionally relying on opaque information (this is obviously news to almost everyone), and making price the only way consumers have to reliably differentiate between the products (you can't even test drive the products to tell). Also, with respect to the "luxury" part of the discussion, they are competing against themselves, the branding and "Sennheiser" name you get with the 595 or 555 is the same.
I've been considering getting 800's, I'm a huge Sennheiser fan. I think this has put me off though (even knowing that everyone else probably does the same thing).
I toured professionally as a drummer with a tech metal band and played with a click track every night. I used Ultimate Ears (high-quality earbuds) with ProTools through a rack amp. I had guitar tracks going behind the click track (metronome) for reference and let's put it this way... the reference guitar tracks had to be louder than the actual live amplifiers/PA and the click track had to be even louder than that. On a scale of 1 to 10, relative to the loudness of the vast majority of electronic music players (computers/mp3 players/phones/etc.)... I'd say my click track was at least 17. It hurts my ears just thinking about it now... constant TICK TICK TICK TICK TICK for 30 (sometimes 45) minutes every night for weeks at a time! I don't wish that upon my worst enemy.
If it was another style of music there's no doubt I could have had it turned down to a fraction of what it was... but it was tech metal so it was always loud and heavy and all over the place. Most of the time I'm playing upwards of 300 bpm and the tempo and time signatures were always changing so the click track was necessary. I remember I tried turning it down a few times and it didn't work out so well haha... Bye bye hearing! I miss you.
Buy a $20 pair of Koss KTXPRO1 http://amazon.com/dp/B00007056H
Read the reviews. They sound almost as good as $200 headphones.
I own a couple of Sennheiser, and I prefer the Koss unless I need closed cans for some reason.
HD 555 MSRP in the States is $170, and you can buy it for $85 on amazon.com. Right across the border, the MSRP is 200 CAD (202.5 USD) and you could get it for 180 CAD on amazon.ca. In other words, the discounted price in Canada is higher than the nominal price in the States, which in turn is almost twice what consumers actually pay for these headphones.
 http://www.amazon.com/Sennheiser-HD555-Professional-Headphon... http://www.amazon.ca/Sennheiser-Open-Hi-Fi-Stereo-Headphone/...
Would love to find a link; it's stumped all my search attempts (including at Google Groups) ever since.
If you want a good over-the-ear headphone, the Sony MDR-V6 is quite nice. They sound decent, are built like a tank, and have been in continuous production since the mid-'80s. They're $90 on Amazon.
If you want in-ear, get Etymotic ER-4's. They're the only headphone I've ever found that can compete with my NHT 2.5i speakers. They should be available around $200.
I'm interested in seeing what Sennheiser's justification for this is. It seems odd to piss off buyers of high-end products, because they're the ones that do the research. You can't just go to Target and buy some HD5*5 cans; you have to order them from an audiophile-y place (or Amazon).
(The processor manufacturers make their price structure clear. They make the good ones, part of it is broken, so they turn off the broken part and sell it to you as a lower-end model. And if there is no demand for the fully-working ones, and they don't have enough broken ones to meet the demand for them, they just cripple the good ones. But like unscrewing the headphone and taking out foam, you can just change the multiplier and enjoy the increased performance. So why complain unless you already bought the high-end product?)
- the headband is leather
- cans have a Sennheiser logo on them
- cans have a chrome ring around them
- comes with a nifty headphone stand / mount
disclosure: I own some 555's, and will be trying this mod at the office on Monday.
For the record, the practice does not offend me. This has been done for ages. R&D costs for creating the HD595's were no doubt great and to recoup some money they created a cheaper, crippled version. With headphones you aren't paying for the parts but for the engineering.
However, it does turn your headphones into the most annoying things ever for people in the same room as you, since they too can hear them, quite loudly and clearly.
As for Sennheiser's strategy of crippling the product line to address a different market segment: it reminded me of a Steve Blank post. http://steveblank.com/2009/04/16/supermac-war-story-7-buildi...
Once stuff hits guitar center and best buy, you're probably going to be getting ripped off somewhere. You can buy better quality Mogami mic cable from a pro-audio supplier than you can from Guitar Center for half the cost. The audiophile world (along with the "guitar aficionado") is a very strange place.
In any case, my friends and I have long held the belief that "If you can't open it, you don't really own it". If my toaster breaks, you can be sure I'll open it up and try to fix it before I think of buying a new one.
Though before anyone thinks of flaming, I'm ok with my iPad. I can make an exception for this type of tech!
It looks like he didn't explain why they wanted to create a "challenger".
imo having three dominant players is better than having two dominant players. However, it would have been great to get Elop's thoughts on what benefits Nokia would get from creating a "challenger" and why a "duopoly" was not good for Nokia.
I find this very telling. As far as I can tell, Google made the first public announcement of the Open Handset Alliance in November 2007 and HTC released the first handset in October 2008.
Nokia and Microsoft, who should both have more experience and resources specific to the mobile industry than HTC and Google did in 2007, are "hopeful" to even meet this timescale, let alone beat it.
In a year, Nokia will have even further to catch up. I'll admit I may be underestimating the difficulty of getting a new device to market, but it seems to indicate an aversion to risk-taking which may set Nokia even further back than they need to be.
Wouldn't these two "value transfers" have occurred with Google? Nokia's operating expenses might have been reduced even further if they'd gone with Android (free) instead of WP7 (which they are paying to use). Wouldn't Nokia have gained access to Google's search and advertising capabilities if they'd gone with Android?
I don't know if Google or Microsoft is better for Nokia, but this article doesn't make it sound like Elop had any convincing reasons for his decision. It sounds like Nokia agreed to give Microsoft money (WP7 fees), but they haven't actually agreed on anything concrete that Microsoft will give Nokia yet?
One of the best articles I've read by Sarah Lacy (I wish she would write more big picture stuff like this) and it's on TechCrunch! It would be a very interesting visualization to build a Silicon Valley family tree from Fairchild Semiconductor to HP to Xerox PARC to present day. Maybe that is one of the reasons Silicon Valley can't be duplicated - and is one of the most unique business locations on the planet - everyone is virtual cousins or 2nd cousins with everyone else. All in the (extended) family.
Did Facebook really have controlled pacing or did the product actually just start out as a niche product and grow? I thought the latter. The difference being did Zuck actually have even a blurry plan of its growth?
These "Mafioso" Startups are playing the right game. Leverage slow growth and focus on maximizing value per consumer, rather than maximizing the number of consumers.
A good point made in the article, that could have been overlooked.
I wish you had considered the route of getting involved instead. There are just some basic, fundamental things that would help fix this. A modification to the notability rules would require a lot less effort than your other proposals. The notability rules work pretty well for kicking out useless chaff like pages about high school garage bands that have never played a show. But they do privilege old-school publishing and broadcasting, so particularly ephemeral creations like new programming languages may fare very poorly against the notability criteria. But this strikes me as fixable.
Programmers can help too. We don't even have the capability, for instance, for people to be emailed when their favorite page is up for deletion. We don't have a lot of means for casual involvement; everything depends on logging into Wikipedia regularly. This is part of why battles in Wikipedia tend to be won by the most, shall we say, persistent.
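A notification feature like that boils down to simple matching logic. Here's a minimal sketch of the core idea (all page titles and user names are made up; a real version would query the MediaWiki API for deletion nominations and actually send mail):

```python
# Sketch: given each user's watchlist and the set of pages currently
# nominated for deletion, work out who needs an alert and for what.

def pages_needing_alert(watchlists, nominated):
    """Map each user to the watched pages currently up for deletion."""
    nominated = set(nominated)
    alerts = {}
    for user, pages in watchlists.items():
        hits = sorted(set(pages) & nominated)  # watched AND nominated
        if hits:
            alerts[user] = hits
    return alerts

watchlists = {
    "alice": ["Nemerle", "Factor (programming language)"],
    "bob": ["Clean (programming language)", "Python"],
}
nominated = ["Nemerle", "Clean (programming language)"]
print(pages_needing_alert(watchlists, nominated))
# {'alice': ['Nemerle'], 'bob': ['Clean (programming language)']}
```

The hard part isn't this logic, of course — it's plumbing it into the deletion-nomination workflow and delivering the email at Wikipedia's scale.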
I work for the Wikimedia Foundation, as a programmer. In general our resources are stretched pretty thin, and there have really only been two years so far of a budget that's even remotely in line with the size and impact of the site (thanks to your contributions). In a lot of ways we're still playing catchup with an explosive period of growth that happened around 2006-2007, using technology that's getting a bit venerable.
But the issue of deletionism and the general community demeanor is a problem that is occupying more and more of our attention. If it matters to you then contact me. I can definitely tell you there are lots and LOTS of ways to help out.
Actually you could even get PAID to fix this problem. Want a job working here? We have lots of open technical positions. http://bit.ly/WikimediaJobs
The one thing I can't guarantee is that it will satisfy your need to rage. The truth is, almost everyone in the Wiki community is acting in good faith. What they need are a) your input as a knowledgeable person about how policies need to change, and b) your technical and design skills, to create systems that avoid these communication breakdowns, and guide volunteers to be more effective.
This is spot on. I had to engage in a battle on Wikipedia to keep the page up for dream hampton. Not everyone knows who she is, but she was the editor for The Source at one time, and ghostwrote Jay-Z's autobiography, among other things.
What I ran into is that Wikipedia basically demands that you get published in these megacorp publications that are basically all run by rich white people, and mostly men. So being written about in black publications, which tend to be more magazines and online publications, and less Library of Congress kind of stuff, doesn't cut it according to Wikipedia's notability "guidelines". If the white editors don't recognize the publication names, they don't "count".
The fact is, if she had been editor of Rolling Stone, I don't think there would've been a problem.
There were other factors too... being an editor and ghostwriter means she's more behind the scenes, and less likely to get outright exposure in the press. But that, too, is a requirement that I think turns Wikipedia into an amplifier of power, rather than a distributor of it.
I'm not sure if there was some outright racism going on too. I mean, she was mentioned in the New York Times and people were still calling for her page removal. At that point things start to get a little murky for me. But the situation was fishy for sure.
Jesus, does Zed Shaw live on another planet? I've never edited a Wikipedia article in my life and even I know that when an article is nominated for deletion, there's a big hullabaloo where everyone votes and argues about it before it's actually deleted.
It's sort of impossible to say that the Esoteric programming languages page should not have a description of every "notnotable" programming language.
I do not think "impossible" means what you think it means.
I'm not trying to defend the notability rules (I think that they are quite often pretty stupid), I'm only saying that it is completely and utterly wrong to claim that Wikipedia's notability rules exist for a technical reason.
Jimbo put your dog on a leash, or next year, when your bambi eyed face gazes into me from my monitor, I will send a turd in a bag instead of a donation.
Does anyone know what address I could/should send my complaint to for maximum impact?
Instead, wikipedia should allow most any article to be added, but have one or more groups that "certify" articles as notable. People who want to avoid all the non-notable clutter can then elect a view of wikipedia that only includes articles certified by the group (or groups) they choose, while non-notable articles can still be viewed by those who choose to view them, and can have the breathing room to in some cases evolve into notable articles.
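The certification idea above is essentially a filter over an otherwise complete article set. A toy sketch, with entirely made-up article and group names, might look like this:

```python
# Sketch of "certified views": every article stays in the wiki, but a
# reader opts into certifying groups and sees only articles at least
# one of those groups has marked notable.

articles = {
    "Nemerle": {"certified_by": {"pl-enthusiasts"}},
    "Python": {"certified_by": {"pl-enthusiasts", "general-editors"}},
    "Joe Schmo's cat": {"certified_by": set()},
}

def visible_articles(articles, chosen_groups):
    """Titles certified by at least one of the reader's chosen groups.
    An empty selection means 'show everything' (the inclusive default)."""
    if not chosen_groups:
        return sorted(articles)
    return sorted(
        title for title, meta in articles.items()
        if meta["certified_by"] & chosen_groups
    )

print(visible_articles(articles, {"general-editors"}))  # strict view
print(visible_articles(articles, set()))                # everything
```

Under this model nothing is ever deleted for mere non-notability; "notability" becomes a per-reader lens rather than a global verdict.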
In the beginning it actually made sense, too, since it would never have taken off if every page was about Joe Schmo's cat.
But now that Wikipedia is probably more famous and more well known than any other "real world" encyclopedia, such "respectability hacks" are unneeded and should be repealed.
"Plan A" is not a new idea, it's exactly what happens to other categories with lots of items that don't deserve their own page.
"Plan B" won't work because part of deletionist policy is to ignore sources published by self-publishing outfits, because "anybody can publish there".
"Plan C" won't solve a goddamn thing. Good luck with "Plan D".
Quit asking for features that already exist.
Immediate problem would of course be advertising, which would not be desired.
It's just that if I donate now, I feel as if I am supportive of their deletionist policies, which I am not.
But that's not really the point here. It's just because it's wikipedia that everyone gets all weird about it. The information is out there, I don't see why it's so important that every esoteric project have a wikipedia.org page. It's not supposed to be a compendium of ALL human knowledge, it's an encyclopedia of generally useful knowledge. There has to be SOME criteria for excluding stuff. There's already a problem with wikipedia containing way too many pages about nerd topics and not enough information about the rest of the world.
wikipedia was a huge threat to traditional information sources, but now something has to come from one of those traditional sources to be considered legitimate info. problem solved.
Plan B does not work. Wikipedia guidelines do not consider self-published sources as 'reliable' and things published by Lulu are considered self-published. Wikipedia does not consider itself a reliable source either. It is also not going to disappear in a puff of logic like God in hhgttg, no matter how witty you think it is.
Plan C has not worked in the years of people deleting things that appeal to far broader audiences than esoteric programming languages. They are still around. How well do you think this will work?
Plan D is something many people have done. In a way, Wikia is a giant example of it. In other cases they go to wikinfo or create their own wiki.
This post is exactly like hundreds of others that someone writes whenever Wikipedia deletes (or even tries to delete) something that they care about. The ground is well-trodden and it brings nothing new or interesting to the table. The internet does not need another blog post where someone spends an hour in isolation writing about it.
On an irrelevant note, MediaWiki does support subpages (the slashes), but they are disabled for articles, since article titles may themselves contain slashes. This has absolutely nothing to do with Wikipedia's notion of notability and the reasons people try to get rid of non-notable content.
What do the near-constant donation drives cover then?
Hey, I have an idea! Let's publish suggestions like this to a blog and purport them to be sound, so they can be picked up by quacks who have an agenda to push in a bunch of completely unrelated situations. (Think vaccine scare pushers, or those looking for self-promotion, who already exist in abundance on Wikipedia.) Yeah! Let's all dispense with stuff like a "nuanced understanding".
Whether you are for or against this instance of deletionism, Shaw's plan B here wouldn't successfully legitimize anything of any subject matter, and that's a good thing.
If I were King of Samsung, this would be cut down to around three models. The model names would be consistent and recognisable, and the names would not be discarded when the phones are updated. This way, when my Samsung Galaxy Pro is due for a replacement, I can go and ask for the latest generation Samsung Galaxy Pro. No need to spend ages trying to figure out which of the vast selection is supposed to be the flagship, or which is the budget model. The laptop situation is even worse. What's scary is how nobody but Apple seems to have figured this out.
For example, Apple doesn't do any manufacturing themselves, and yet they have costs so much lower than the competition that they are able to define entire new markets in the time it takes their competitors to develop cost-competitive supplies.
This has happened three times now: first with the original iPods (Apple pre-signed huge contracts for those tiny hard drives, proved the market, and when competitors wanted in they had to wait for new sources to come online), then the iPod touch (Apple pre-ordered a huge proportion of the world's flash memory supply), and finally, most recently, with the iPad.
Ever wonder why it's taken a year for anyone to build a 9"/10" iPad competitor? It's because no one can get capacitive touch screens in sufficient quantities. Even Samsung (which owns the factory!) had to make do with 7" screens.
Now, finally new factories are beginning to come online, which means that competitors can release their products. The problem for them is that Apple locked in much lower prices (because of their bigger purchasing power), which makes it hard to compete on price. This applies to Samsung as much as to anyone - they can't afford to drop Apple as a customer, but the capital costs of building a new factory means it costs them more to supply themselves (selling in smaller volume) than it does to supply Apple.
Want to be even more scared?
And even though they have the price advantage over the other "premium" vendors, they are fighting a war on two fronts. The ultra cheap vendors can undercut even Samsung by doing things like using super cheap plastic, having batteries that stop working after 6 months, dead pixels, and a software team of like 2 people who just maintain a couple drivers and slap some shitty graphics and free apps on top of stock Android.
So Samsung's playground is a sizeable but bounded slice in the middle: people who don't care about having the BEST phone, and want it to be really cheap, but not cheap cheap. Cheap, but on-brand.
Microsoft had no real competition. No one else was really licensing an OS for commodity hardware. Apple, IBM, Sun et al. were all selling their OS on their own hardware.
Samsung is beset on all sides by competition, and they only hold one of the Aces (supply chain).
a) regularly updated with Android tech as opposed to seemingly lagging the rest of the industry
b) supported worth a hill of beans instead of getting a "uhhhh we don't support that in the US market" when asking about AT&T
c) a good demonstration of Android capabilities in the first place.
Samsung makes decent phones. Nothing spectacular. The GPS in the Galaxy class phones draws endless complaints about bugginess. The builds deployed on AT&T are mediocre. Their Kies desktop software is bloated, slow, and unreliable - it's frequently the case that disconnecting and reconnecting the phone/tablet fails to recognize it (and in turn it doesn't work with a number of US SKUs, thanks to AT&T).
Sorry gang. There's the potential there to be great. But it's quite a leap from there to dominance. Nokia has had plenty of potential and in the past a pretty dominant position too, but I think the shellacking they took the last few days here on HN is entirely deserved and demonstrative of "you have to nail all facets of the user experience."
If you look at it in those terms, there are two manufacturers that are vertically integrated- Apple and Samsung.
Samsung is weak on software, and software is the critical component of this age (more important than flash, RAM or SoCs). Meanwhile, Apple is a design house, and doesn't have manufacturing capability of its own.
So, one question is whether owning your own plants will be critical or not.
Consider this possibility: it is like the Windows market, only instead of a licensed OS it is Apple, and Apple devices, but HTC et al. become contract manufacturers for Apple. The margins may not be any better than competing in the commodity phone market, but the R&D and marketing costs are a lot less, and so they may be more profitable.
I'm sure this goes against their DNA, but unless FoxConn can expand enough to cover the demand (for Apple and for everybody else) I can see the commoditization happening at the manufacturing level, rather than the device level, as it did in the PC market.
Samsung has had a decade to replicate the iPod, or to build a video game system. I don't know if they will be able to build the expertise or find the mojo that Apple has on software.
I don't think Apple is going to start building plants nor will they build expertise in lithography, etc.
But I think that while software is easy to commoditize (e.g., Android), a quality software experience is much harder. Meanwhile, it is cheaper to be a contract manufacturer at a 5% margin than it is to try to compete with your own products.
So, Apple could be the new Microsoft, and Samsung could be the new Apple-- if software is what matters. The reverse could occur if lithography expertise is what ends up mattering.
I would like to see a company like that enter the mobile market. Someone who makes a product that speaks for itself.
Applications are C/C++; the UI is GTK/EFL. It has multitasking and Debian package management.
Now let us wonder about the non-Goog customers of Canonical, with Unity, the need to ditch X11, ditch Gnome3… fresh release of GTK3.
I (sadly) still use a Samsung Instinct and the OS is of their own design. If you've ever used Samsung's Instinct you'd know it's absolutely terrible and slow. I can't remember the last time I typed something - and I don't even type fast - and it wasn't at least a couple of seconds behind my fingers... pretty annoying when the touch screen won't calibrate and you make at least one typo every other word!
The only upside to the phone is the relative ease of "unlocking" it for use as a 4G modem.
Sorry, just don't see that happening.
Oh, and HTC's phones are so much better quality than the Samsung phones.
Could be that manufacturing capacity is slack, or they need a technology demonstration for design firms that otherwise aren't moving quickly enough ("Hey, we have all this stuff, let's make something cool out of it!").
That'd explain why they were one of the first Android tablets out there, especially with Android not really being tablet-ready. That kind of risk could break a firm that had to outsource manufacturing. But Samsung already had all the hardware pieces; they just had to drop in the software.
From the link: "Apple Inc. (AAPL 356.85, +2.31, +0.65%) is expected to purchase components used for its handheld devices from Samsung Electronics Co. worth about $7.8 billion this year, the Korea Economic Daily reported Monday, citing industry sources.
The paper also said that Samsung will supply Apple with liquid crystal displays, mobile application processors and NAND flash memory chips used for the U.S. company's iPhones and iPads.
If the contract pushes through, Apple will become Samsung's largest customer, the paper added."
I tried signing up to be a developer about a year ago when they first announced it. I had an under-powered feature phone (Samsung Behold I), and the fees and other associated hurdles discouraged me from even signing up...
After reading the Wikipedia page on it: http://en.wikipedia.org/wiki/Bada_%28operating_system%29#Cri... I'm glad I saw the light and bought an iPhone.
Why should I bother to read about your views on Samsung when:
1. you show neither data nor other facts on which to base your opinion
2. you don't disclose anything about yourself?
For all I know you could be a Samsung employee trying to increase the value of his stock benefits.
Except using named functions will also make your stack traces useful. Because function names tell you more than line numbers.
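A quick sketch of why named functions help (illustrative, in Python, where anonymous functions show up in tracebacks only as `<lambda>`; the function names here are mine):

```python
import sys
import traceback

def fail_named():
    raise ValueError("boom")

fail_anon = lambda: 1 / 0  # raises ZeroDivisionError

def frame_names(fn):
    """Call fn and return the function names that appear in its traceback."""
    try:
        fn()
    except Exception:
        frames = traceback.extract_tb(sys.exc_info()[2])
        return [f.name for f in frames]

print(frame_names(fail_named))  # ['frame_names', 'fail_named'] -- the name points at the bug
print(frame_names(fail_anon))   # ['frame_names', '<lambda>']   -- but which lambda was it?
```

The named version tells you where to look even before you check line numbers; the anonymous one makes every lambda in the file a suspect.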
Step 2) Make it marginally more expressive
Step 3) ???
Step 4) Profit
I'd like to point out that English has no distinct second-person plural pronoun, and "you guys" is basically a genderless colloquial equivalent of the Southern "you all"/"y'all", or perhaps "you's" in the Midwest.
Arguably the linguistic origin IS sexist; the point is it's part of an English dialect and a much broader debate. What was perhaps a Midwesternism has been misinterpreted as a CompSci-ism.
The size difference is basically the number of people who dropped out. A few may have dropped out of college altogether, but many migrated to another major, because they too doubted they were in the right major until they felt they had to take action. (Or in some cases had it forced on them by failing grades.) This is not a unique experience.
Presumably this is posted here because of the gender issue raised in the post, but given how non-unique this experience is I don't see that the gender angle adds anything. Scratch a few sentences out and any number of juniors could post this. This is not "woman doubt", it's just doubt, and the doubt does not admit of "woman solutions", it's just the same "finish the degree" solution everyone else has. I take the time to say this because I actually think adding the gender idea into this is a little cognitively dangerous; incorrect identification of the problem leads to incorrect identification of the solutions. (As every engineer comes to learn instinctively after a few years under their belt.) Those who are certain they are in the right major are the unusual ones, regardless of gender and from what I saw in college, pretty much regardless of major.
> "Who knows what lexical analysis is? No one? What, don't you guys do this constantly in your spare time? All right, I'll show you …"
If I may, I believe that the professor was connecting lexical analysis with what humans do on a second-by-second basis -- that is, parsing and interpreting speech from other human beings. Your brain is lexing all of the time, and I believe that's what the professor might have meant -- that was the first thing I thought, anyway. I'd drop a class like a bad habit if the professor quipped about me knowing something before he taught me (I'm not paying for self-study, pal).
If you're worried that the programming world implies less balance, consider this: the big advantage of programming is that you can get a job that coincides with your passion. That means, meetings and other bs aside, a significant chunk of your work is what you'd consider leisure time devoted to one of your hobbies. It means you actually have more time to pursue non-programming hobbies.
That said, you should consider programming a bit outside of work and class. It's very easy to lose sight of the general industry trend when you're focused on your specific job. It can be as little as an hour a few evenings a week and a few hours on the weekends. It doesn't need to be anything "cool"; it should be something you get a kick out of building that you don't get a chance to do at work. It's perfectly fine to re-invent the wheel, learn a language that isn't used in industry, or write software to facilitate a non-programming hobby. For example, I love classics of literature, so I once built a "beautifier" for Project Gutenberg works that would convert them to LaTeX and typeset them.
The fact that you love programming should be enough of a reason to continue doing it. Especially if you're skilled in areas outside of programming, you won't have any issue staying employed. Since you have less ego and arrogance, you'll be able to learn more from others, opening fields that are often closed to people who are convinced they can't be taught anything about programming in a university setting.
If you really are an impostor, that's likely a much more rare and valuable skill than being a programmer!
The author does not want to be a competitive computer scientist, and is happy being a casual one -- skilled enough to make a 40-hour-per-week living with it, to be sure, but not obsessed enough to advance the field itself. She made a great choice going to a liberal arts school.
When she compares herself to students at engineering schools, or references a study of students at CMU, home to some of the top computer science students in the world, she is doing herself a terrible disservice. She sounds a bit like a casual runner upset by the fact that Usain Bolt exists.
Elite computer scientists, like elite athletes, live in a different world. If you want to join that world, the rules are pretty gender-neutral -- work 80 hour weeks, write great software, publish papers, dream in code (or math, really). If you don't want to do that, you aren't a lesser person. Just don't compare yourself to those who do make that choice.
The image of the ultimate hacker assumes a specialist, 100% focused on their expertise. That could lead to a great coder, but a company will also need someone with a decent technical background who can also relate to the end-user, for instance, or string more than one sentence together when a customer calls. That would be more of a generalist.
It's ok not to want to be a domain expert.
So I would recommend taking it easy. It makes sense to do what you're interested in, and if much of that is not nightly programming, then so be it.
Why does this very basic fact elude so many?
"When they talk about how they live to program and never leave the lab"
THEY ARE BORING.
I LOVE programming, however I also LOVE playing with my 2-year-old, playing poker with the fellas, practicing origami, watching movies, working on my (admittedly horrible) art skills.
I know exactly where you are coming from when you say that you are worried about your programming skills not being 'up to snuff' but trust me, it is a small price to pay for having a life that is fun and enjoyable. It's these people skills and life skills that will make you more-rounded, and I'll be frank when I say that the more-rounded you are, the more likely you are to have fun, and the more you have fun the more it will show. And the more it shows you enjoy life, the more people/bosses/hiring managers will want to hire you to work around them.
It isn't always the technical skills, most of the time it's the soft skills that make a difference. So practice your knitting, and definitely practice the Japanese. And next time you are asked about what you have as a hobby, be honest. It will impress.
Besides the obvious non-IT-related downsides (it made me noticeably less social), it also has some IT-related downsides. Even on the purely technical career path (programmer->senior programmer->tech leader->architect->???), the more you advance the more you need skills and knowledge outside the IT field, even if it's only to hold a lunch conversation with those weird non-technical people who decide about IT spending :)
My point is that even if you consider this only in the context of your career in IT, spending your time outside programming might be quite a good investment.
If you have difficulty staring at yourself in the mirror and saying "I love what I do every day", then you should seriously reconsider what you're doing.
Been there, done that.
> We saw an opportunity to create a site similar to threewords.me
Nice work. You definitely need to work on the design a lot; at the moment, it looks a bit gaudy. The "My favorite thing about you is" hint text is a good lead-in (I didn't think of that). You NEED for someone to be able to create their own profile immediately after they post something. Currently the user flow is: new visitor -> post comment -> "okay, what now?" -> close page. Get your viral coefficient >0 and ask them to sign up for a profile right after they submit something.
More of my signups on threewords.me came through the new visitor -> interaction -> signup viral flow (viral coefficient >0) rather than the go to homepage -> get convinced to sign up for the site -> get account flow.
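For what it's worth, the viral coefficient mentioned here is usually computed as invites generated per user times the conversion rate on those invites. A minimal sketch with made-up numbers:

```python
def viral_coefficient(invites_per_user, conversion_rate):
    """k = average invites each user generates times the fraction who sign up."""
    return invites_per_user * conversion_rate

# Made-up numbers: each signup shows the page to 4 friends, 30% of whom sign up.
k = viral_coefficient(4, 0.3)
print(k)  # 1.2 -- above 1.0, each cohort of users recruits a bigger one
```

Any k above zero means the interaction flow is feeding signups back in; above 1.0 the loop is self-sustaining.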
Good luck on the site, and the homework :p. I'm about to get back to mine. Cheers.
It seems to me like you're focusing too much on the tech part of things (building it) to the expense of "business" part (spreading the word). I'm a technical guy too and have similar problem with my projects - I enjoy building them, but I'm poor at actually reaching the potential users.
So my advice (which I'm trying to follow myself, too :) is for the next 30 days, don't build any more projects, instead focus on marketing the ones you've already built.
In my house, I do most of the cooking, the dishes and the laundry. (The latter is because, being one of these work-at-home laptop-bound types, I'm at home a lot.) My girlfriend tends to vacuum and tidy more. But none of this is because we've established an internal market in our home and relationship. Between the two of us, we do what's right for each other, and it just works out.
This applies just as well to startup co-founders as it does to marriages. You want to find someone whose best interests you want to act in, and who will act in yours. Life's too short to be negotiating about who's going to wash the dishes, or who's going to pick up the mail. When you're emotionally invested in a person, an organization, a project or a cause, you'll do the right thing for it. And that's the kind of person you want to be in any kind of relationship with.
They should combine them. They could keep pretending they have separate accounts and just write checks for what they owe, just have it come out of a single savings account.
Get FISA Right's action alert: http://bit.ly/feb13aa
Sometimes even just reading on HN about people who are succeeding (or persisting) with their side projects or about successful startup founders or failed startup founders is enough to release the tension (momentarily) - to jump from a certain 9 to 5 reality to very real world of a real start-up.
The ability to leave your job at 5pm is very much a benefit, one that 99% of startups will never offer.
That being said, you will never become wealthy without ownership of something that produces income, and that will almost never happen at a BigCo. My mom had multiple patents, for which she got a nice lucite paperweight.
 My definition of wealthy == How long you can maintain your current, preferred lifestyle without working. This means building a semi-passive/semi-active income from your own products. Binging on consulting income isn't the same. Yes, you can adjust your preferred lifestyle to meet median cash flows.
 Example from the "Corporate & IP Recognition Company" (LOL)
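The "how long can you maintain your lifestyle without working" definition above lends itself to a simple runway calculation (a sketch; the function and numbers are hypothetical):

```python
def months_of_runway(savings, monthly_passive_income, monthly_spend):
    """Months the current lifestyle lasts without active work."""
    net_burn = monthly_spend - monthly_passive_income
    if net_burn <= 0:
        # Passive income covers the lifestyle: "wealthy" by this definition.
        return float("inf")
    return savings / net_burn

print(months_of_runway(120_000, 1_000, 4_000))  # 40.0 -- just over 3 years
print(months_of_runway(50_000, 5_000, 4_000))   # inf  -- income exceeds spend
```

Adjusting your preferred lifestyle (lowering monthly_spend) stretches the runway, which is exactly the trade-off the comment describes.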
And it'll end up being 'production' code because "we already have it and they already paid for it!". And the crappier it is, the longer it will remain in production and the harder everyone will fight against rebuilding it to be not crappy.
That said, as an engineer being a startup "employee" is a sucker's game. You'll never get enough equity or intangibles to compensate for the risk/effort/opportunity cost.
Large companies change all the time. Departments are merged, strategies are changed and projects are stopped constantly. I think this point isn't valid.
It was a reasonable debunking of a deplorable TC article, but I was really interested in the difference in mindset.
It also seems to me that this post is criticizing two sentences out of the entire article (http://techcrunch.com/2011/02/13/engineers-startups/) without properly understanding them in the rest of the context. All this article is saying is that due to the market warming up, now is a good time for engineers to work at a start-up.
That said, I agree with the point that dbasch is trying to make even though I don't think that they needed to be presented as a 'rebuttal'.
One possible explanation: 1) Statistically, most engineers work for big companies. 2) By human nature, people tend to stick to their current jobs, most likely a big company job.
Here is a related thread at Quora: http://www.quora.com/If-I-want-to-be-an-entrepreneur-later-s...
Any logical being would play with the idea of financial freedom. And it is unlikely in a "regular job."
Microsoft should buy this company:
They claim to turn any surface into a touch screen. Combine that with Kinect's technology, and Microsoft has a revolutionary UX that could beat Apple & Google in the mobile, local, and web space.
Membership is £165/$264 a month. However, this is the top rate; you can pay £25 a month for 'mates' membership, which means you aren't guaranteed a desk, but you can go there and find a place to sit wherever the hell you want, 2 days a week. It's an odd pricing structure, but it must work for them because from what I've heard it's always busy.
Never been there myself. We just have our own little office.
They host some sort of hacker drop-in-centre called "Build Brighton" down there at night time apparently, which I've been meaning to check out. Apparently they made a MIDI exercise bike once...
I wonder how many people (aside from me of course) would be happy to know about a spot on Nusa Lembongan with guaranteed fast internet, reliable power, a nice slice of white sand and a really nice reef break just outside the lagoon.
Would anybody here take the effort to weasel a working holiday to such a place? Any SV startups that would pick up the whole shop and set up on the beach for a month or so?
It's been raining here in the North of England for six months straight. If I get enough love for the idea here, I might just have to book a flight and start scouting locations.
Until I know that it's safer than my bomb shelter, I am not leaving.
I've been thinking of renting out an office just so I have somewhere to go to and work in a daily routine.
Working at home all the time is not as grand as some people might think it would be.
- http://treehaus.ca in Kitchener, I think they've been around a while;
- http://threefortynine.com, a new space starting up in Guelph
Unfortunately, the decompiler output doesn't convey much as it stands, unless you like sorting through pages and pages of
  local199 = local191;
  local203 = local191 + 0x6f02418d;
  local3 = proc2(0x10021238, param1, param2, param9, param5);  /* Warning: also results in local190 */
  local208 = local3;
  local209 = local190;
  local211 = local203;
It does seem to suggest, at least, that this dump didn't have the actual source.
This github project is pretty much useless for those who want to learn about Stuxnet. Better to load the binary into IDA Freeware instead.
Stuxnet does appear to be an unusually large project (base classes, ungainly modular structure) for malware. This reinforces what I said earlier about its lack of stealth for the payload.
It does not appear to be sophisticated in any way except for its payload, which some evidence seems to claim was carefully constructed (e.g., with a PLC testbed). The "embarrassing" fact I was referring to in the above post is that its lack of stealth revealed its payload to the world, and no competent intelligence agency has that goal if the purpose of the worm itself is to do some damage.
Perhaps the worm is a way to draw the heat off the real deployment method. Or it is industrial sabotage gone awry. There is still not enough evidence to come to any conclusions on it, except this is not what an eleet cyberweapon would look like if you were to find one.
There is nothing new to see here. A quick Google search for "stuxnet.zip" reveals other samples, undamaged by some PR whoring idiot running it through IDA.
I can't recall a single dictator in the modern history of the world who fell without US interference. The importance of people's engagement and activism can hardly be overstated, and for sure the revolution is in their hands, but when I came across lots of comments like these http://i.imgur.com/7u8xk.png I was really disappointed at how easily people tend to forget history.
Lesson: huge props to the Egyptian people. But also give credit to the US administration. They didn't have to do what they did. Also, the follow-in issue being that probably the biggest reason why we had such strong leverage over Egypt was the billions in military aid we give them annually. To a government run by a dictator. Leverage is good, but backing dictators is bad in the long run, despite short term benefits.
Think back a hundred years to a time before the internet, long distance telephones, common air travel and so on. To people from a hundred years ago we'd be the most outlandish science fiction story ever told.
I consider the same thing whilst watching science fiction films - in reality they wouldn't be amazed that they can reach Mars in a half hour. They'd instead be complaining about the in-flight entertainment or the turbulence caused by navigating asteroid fields.
AI has the same problem. People can now (naively) communicate with each other even though they share no common language. Texts and documents can be re-written in real time to be legible in another language. Credit card fraud and spam emails are handled transparently by systems trained on hundreds or thousands of hours of cumulative knowledge. No-one sees these things. No-one respects these things.
Human kind judges itself from the position it currently stands, not from where it began. For this reason alone incremental improvements will never really be considered milestones of achievement.
The episode, along with four others, are available here: http://waxy.org/2008/06/the_machine_that_changed_the_world_t...
When we ask whether machines can think, I believe it is a question of volition and self-direction. Does the machine have an open-ended goal and is self aware? Can it alter its own programming to change its own methods, its own knowledge, and its own goals? If the answer to these two is "yes," then I believe we will consider the machine to be thinking.
What are the implications? If machines gain those capacities, as well as the ability to compute emotions, then what separates them from us? If you agree that humanity is fundamentally a function of our minds, then, if machines can compute in fundamentally equivalent ways, they become "human." What evidence is there that this is forever impossible?
Anyways, for me it was:
* not doing customer development at the very beginning
* concentrating on technology instead of customers
* not taking into account market forces
* not taking into account my economic environment (Eastern Europe != California), reading too many SV blogs
* not starting the company when I was younger (started at 27, should have started at 25)
I think not concentrating on customers is a common mistake, one that could be avoided if, for example, the book "The Four Steps to the Epiphany" came up more often and people read it before spending 6 months writing code =)
I actually think reading lists like this is not that helpful; at least it wasn't for me. A technical person doesn't even know what customer development is. What's technology? It's different in different contexts. What are the market forces in your segment? You probably don't even know when you're starting out.
I would recommend talking to a friend who has started a business and experienced the realities. I was on the advice-giving end of such a conversation a couple of days ago, and it was shocking to see my mistakes (mostly wishful thinking and ignoring economic realities) repeated in my good friend's line of thinking. I spent a whole night (8 hours) telling him my insights; it was pretty shocking for him, and several times he walked out and then came back, but I was only telling him what I had learned over the last 2 years. In the end it was a good deal for him.
* Not starting early and having to pull all-nighters as deadlines approached.
* Not communicating with clients on a daily basis (this also helps to keep you from procrastinating).
* Not starting on your crazy side ideas/weekend projects as soon as possible. The longer they sit on your to-do list, the more they'll bug you but also the less likely they'll get done. If you at least get them to a prototype stage, you won't feel like you missed out even if you decide not to finish them.
I was doing stuff I found interesting even though it wasn't what my customers were asking for, and in doing so I've left a lot of money on the table.
This is being corrected now, my users have been screaming for me to add advertising for a while (strange, huh) and I'd been against it because I didn't think it was important and because I thought I had more pressing things to do (adding new features).
But in the area I'm in (highly loyal niche community) most of the users are involved in cottage industries of their own and want the advertising to not only help ensure the sustainability of the community but also to give their products exposure and a springboard. Likewise users of the site have been asking for adverts so that they have a kind of slow beacon showing what new stuff has emerged that they might've missed in the ebb and flow of fast moving conversation.
So the lesson I've learned is to quit holding an anti-advert bias and to listen to the users more. They really want adverts and they really want to give me money, and I should really let them do the latter ASAP.
Another one would be to make larger pivots faster.
I started the company to own the pet project and to use that to fund what I feel are larger, more complex things. However, the pet project has £3k revenue per month with almost no effort (maybe a couple of hours per week); clearly it's the one with traction and I should be behind it 100% instead of mostly ignoring it. There's so much I could still do with the pet project and haven't been doing because I've been researching more complex things... This is dumb of me in the short term, as I could be building the pet project into the main entity and sustaining a proper income from it.
Both of these lessons lead to the same thing: quit being blind about your realities. Stubbornly doing only the things you want to do, without listening to your users and customers, may be stopping you from actually doing what you can do.
2. Not addressing a universal consumer market.
3. Extrapolating too far and pivoting unnecessarily.
4. Not focusing on the bare minimum of features.
5. Not keeping things simple.
6. Not having a big "Fordian" vision (read Ford's autobiography).
7. Not getting prospective customers to cover development cost.
8. Focusing too much on "talking about culture" rather than culture itself which is values plus action.
9. Not pushing employees enough.
10. Not insisting on timeliness (being "before time" not "on time") or outcomes enough.
12. Not working a fixed 8 hours a day (max 40 hours per week).
13. Not exercising consistently.
Needless to say in spite of these, the outcome so far has been good, but if I were to advise someone, I would mention the above.
* Make one thing very good, ignore trying to do hundreds of features at launch.
-> This leads to a situation where you see customers asking questions in a forum and you (as a developer) are not allowed to answer because this is what the investor's support team is supposed to do. Thus, answering will be slow and of low quality. This builds a bad reputation. I will never do that again.
Note I'm not talking about a "marketing launch", but a launch where your product gets in front of early adopters.
* have at least 10 customers willing to buy my service when it's done. During the time of development they would commit to spend time with me validating my product hypothesis.
* prepare spec for a _REAL_ mvp.
* clearly _describe_ problem that I'm trying to solve. (now I know that it takes multiple iterations to achieve this - most importantly discussing the problem with your future customers)
Not that I think that's what this is, it's just an unfortunate turn of phrase.
A video that everyone should take the time to watch is Conrad Wolfram's TED talk on teaching students with computers (http://www.ted.com/talks/conrad_wolfram_teaching_kids_real_m...). While we may use computers now to enhance the learning methods, we haven't taken advantage of the content/knowledge computers have and can supply for us. The majority of what is taught in grades K-12 in science and math could be done by a computer in almost no time with increased accuracy. Why wouldn't we take advantage of that?
The gist of Wolfram's talk is that we are teaching students mechanics (which computers can do more efficiently) when we should be teaching them the higher orders of thinking and problem solving.
Contrary to popular belief, concepts can be understood without learning the nitty-gritty mechanics. One could solve a quadratic word problem without working the quadratic formula by hand, and still understand the problem just as well. Think about it: do you need to learn how the engine of a car works before you can drive it?
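To make that concrete, here is a sketch of the "machine does the mechanics" idea: the student sets up the word problem, and the computer grinds through the quadratic formula (the solver and the thrown-ball numbers are my own illustration, not from the talk):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0; the machine handles the mechanics."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

# Word problem: a ball is thrown up at 20 m/s from a height of 1.5 m.
# When does it land? Height is h(t) = -4.9*t^2 + 20*t + 1.5, so solve h(t) = 0.
landing = max(solve_quadratic(-4.9, 20, 1.5))
print(round(landing, 2))  # seconds until the ball hits the ground
```

The modeling step (writing down h(t) and knowing which root is physically meaningful) is the understanding; the symbol-pushing is what the computer takes over.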
While all of this is not entirely relevant to the article, I humbly believe that the future of education should be and will be (if everything goes right) orders of magnitude higher in efficiency, as students will spend more time learning the right material. I'm not sure how much of a role the Khan Academy will play in that...
Talented educators (non-traditional and traditional) and technology are going to keep coming together in new and interesting ways. There will be lots of failures, but Khan Academy will not be the only success. The article doesn't mention things like MIT's effort in this field that predate Khan's efforts etc.
There are no guarantees, of course.
Maybe, then, it's time to pull back on the hype machine for Khan Academy. I've used it before, and it's wonderful as a student aid, but it basically boils down to free instructional videos. It's a great help, but it's hardly the revolution that the headline makes it out to be.
Education's past is a series of textbooks (plus supplemental info and exercises), providing a clear path to learning. Khan Academy, such as it now stands, is a video analog of textbook chapters as discrete units all in an unorganized pile. Until more is done in the neglected areas, Khan Academy will remain an extremely valuable resource, but not a curriculum. Even if/when these missing bits are done this will only be the future of delivery of fairly traditional materials.
The real future of education is a computerized personal tutor that provides individual assessment, guidance, alternate explanations where comprehension lacks, encouragement to pursue natural ability and enthusiasm, etc. That's pretty ambitious, but not at all inconceivable. We're close enough to being able to achieve it that we should hold up this ideal goal so we know the right direction as we build the pieces.
It was really inevitable that someone with his abilities would change the way education is delivered by using tools like YouTube. Of course, his teaching style may not be for everyone, but the mere fact that the Khan Academy exists, and is free, gives me great hope for humanity.