A couple of other examples: there was recently a rule to limit SMS spam by capping each cell phone at a maximum of 100 received texts per day; there is still a rule that you can't enter the country twice within a certain number of days without getting prior permission; IIT students were arbitrarily limited in the number of hours they could spend online because some administrator thought they should get out more; etc.
I really think 3D secure is a good move. All it requires is entering your internet banking password at the time of making the transaction. Is this really so bad for usability?
For a startup like ours, with web-based software like Basecamp, there is no way to charge for subscription services. In fact, no payment gateway exists that would let us take credit card payments from Indian customers.
Thanks to PayPal, we can serve international customers much more easily. I have a lot of anger toward the people who run India's large services as if it's an entitlement, without caring about new entrants.
I don't like two-factor authentication, especially via mobile/SMS. With just a password, I can still complete my transactions when I am abroad or travelling.
- Vehicles registered in one state can't be used for too long in another state
- Banks have insane policies
- Online electronic tax filing requires that you complete the process on paper as well: to finish the electronic filing, you have to send the forms in by normal snail mail too. And you can't get acknowledgments.
- Universities don't recognize each other between states
If you are stupid enough to fail to use and understand the new system, then you won't be able to purchase it.
And thus they will end up avoiding huge credit card debts like poor Americans.
Also, hardly anyone buys from Amazon, and only pathetic people buy from Apple's iTunes.
That is one bold statement. I don't know if I'd bet the company's success on a claim like that.
First, there is a question of development and time to market. By the time Nokia launches MeeGo and handsets, Android, iOS, and others (WP7, RIM, Palm) will be further entrenched in the market (e.g., market share, brand, hardware partners). Second, there is the fundamental issue that succeeding with MeeGo hinges not only on the OS but also on a thriving application market. Because of the application markets, there are strong network effects with mobile platforms. These network effects make it very difficult for a new platform to break into the space.
To complement investments in MeeGo and WP7 for the smartphone market and Symbian for the feature phone market, Nokia should immediately invest in an Android strategy as a fail-safe. I understand this conflicts with Nokia's historical strategy of controlling both software and hardware, but it's quite possible that Android will eventually emerge as the winner-take-all platform in smartphones, aside from Apple/iOS and several niche platforms. If this were to happen, Nokia's singular bet on MeeGo (or WP7) may destabilize the entire company.
In short, I propose that Nokia pursue a four-pronged strategy, pushing forward with MeeGo, WP7, Symbian, and Android -- Symbian for feature phones, which still account for roughly 80% of the worldwide mobile phone market, and MeeGo, WP7, and Android for the smartphone market. As uncertainty is reduced over time regarding 1) the potential of each of the smartphone platforms and 2) the pace at which geographic markets are shifting away from feature phones to smartphones, Nokia can appropriately adjust its investments. By making investments in each area, and adjusting the relative amounts over time, Nokia can better ensure its survival and prosperity despite the quickly evolving mobile phone market.
As these projects tend to go, things didn't go well. Due to various issues in Nokia's closed-source software layer, there were a number of bugs we (Orange & Kodiak) couldn't fix. We decided to fly to Tampere (one of Nokia's R&D locations) to fix the problem.
Tampere is a lovely place to eat reindeer. However, not once did I meet an engineer who could get shit done. Not once: Nokia never paired us with a serious developer who could even attempt to fix issues in their code. They surrounded us with product-manager wankers and threw in a 22-year-old engineer who wasn't able to make much progress debugging the problem. The Kodiak engineer was ready to attack the problem with a dev board and a JTAG, but no one would let us in the lab. What a clusterfck. Apparently a good bit of the S60 development wasn't even done in Tampere (or Finland); I think it might have been done in Japan. I think these sorts of issues are what the author of the article alluded to regarding distributed development teams.
Perhaps I only saw a small slice of the Nokia culture. But it was *really* bad.
I'm glad the guys behind nokiaplanb.com are passionate about fixing Nokia. Much as I think the M$ alliance is a waste of time, I admire Elop's bold actions. I can't see continued development of MeeGo as useful. What I have seen to date has been unimpressive and late. Additionally -why not just use Android as the base OS and innovate on top of it?
a) These 9 are young, and likely Finns, who are generally proud people and who are guarded against outsiders (like Elop)
b) They are software developers. Most of their Plan B focuses heavily on Meego and keeping development and R&D in-house. With Meego 'out' (or close to it) and WP7 in, software development resources at Nokia are likely to be slashed, and so of course they'd want to contest the decision.
The biggest problem I have with their plan though, is this:
> Return the company to a strategy that seeks high growth and high profit margins through innovation and overwhelmingly superior products with unrivaled user experience.
Return? Nokia & Symbian might sell a ton of phones in the global market, but they certainly haven't had high margins nor growth over the last few years. They can't 'return' to the way things were, because that strategy is no longer viable in today's market. To carry on as if Apple & Google aren't kicking your ass is a sure-fire way to lose everything.
No affiliation, but I think it's funny.
(For those who don't get it, Nokia was originally a rubber goods manufacturer)
It's staggering to think of how many resources Nokia has, and how little and how slowly they've innovated. Nokia has been falling behind for a long time. One thing I always found disheartening was their desire to compete against themselves and ignore others, as illustrated by releasing an older version of Symbian for their business phones while using the new Symbian for their media phones. It seemed there was no place to get 'the best' Nokia. It was always a choice, but one without an easily identifiable consumer flagship. Just N's and E's and everything in between.
If these guys want to make Meego the dominant smartphone platform, they're going to have to do it with something besides Nokia.
It's true that they managed to kill it through sheer incompetence, including alienating a lot of early adopters by discontinuing product support for the internet tablets. But there are obviously brilliant people at Nokia, just like there were brilliant people at Apple before Steve Jobs.
Now, if Apple had ditched MacOS when Steve returned instead of using NeXTSTEP, had gone with Windows instead, and shipped a WP7 phone instead of a phone with OS X, where would they be today? They _might_ have had the iTunes ecosystem, if Microsoft had allowed it. Their "differentiating features" would be at the mercy of Microsoft and its strategic plans.
I guess people are right that you need an app ecosystem to compete in the smartphone market today. But the iPhone sold like hotcakes for more than a year before it had apps. If Nokia made a phone that users really wanted, I think the app ecosystem would have followed. Instead, they're using their huge market presence to give Microsoft's platform the same boost.
Have some former Nokia lead engineers and managers, start working on that plan.
Have some other managers and engineers work on the Android plan.
Have some others compete with Nokia to make better WP7 phones.
Gets rid of 100 layers of management, etc.
But as an avid spectator of the evolving mobile platform "war", this sort of coup d'état would be amazing to witness from the sidelines, no matter the outcome. Therefore, and for no greater reason, I really hope this goes through.
- Who are you?
- What is the rationale behind your plan?
- How does your proposed course of action deal with Nokia's current issues?
all of which were left unanswered.
I'm considering putting together some fun and easy mobile games for some mobile device in 2H 2011. I can select iPhone, Android, or, I guess, WP7.
I'm looking for a platform that has these features:
* Nearly free to register and start developing
* Provides an app store & DRM mechanism
* Doesn't eat too much of the profits
* Ideally, lets me program very fast; think Ruby on Rails or a similar framework
Okay, so that doesn't exist as far as I know. If Nokia can reboot to provide the above - then they can probably provide a fourth option.
Nothing I've read so far indicates that they are going to go that route.
From the bottom of the AGM questions page:
Who has the right to participate in the AGM 2011 and what is the last day to buy shares if I want the right to attend and vote in the AGM?

Each shareholder, who is registered on April 19, 2011 in the Register of Shareholders of the Company, has the right to participate in the Annual General Meeting. A shareholder, whose shares are registered on his/her Finnish book-entry account, is registered in the Register of Shareholders of the Company. A shareholder, who wishes to participate in the Annual General Meeting, may register for the Meeting by giving a prior notice of participation no later than on April 27, 2011 at 4:00 p.m. (Finnish time) by which time the registration needs to arrive in the Company. - http://www.nokia.com/agm/2011/in-english/questions-and-answe...
Their plan is not in the list of proposals, though. How does voting work? One vote per share, or one per shareholder? Can I buy one share (under which stock name, on which exchange?) and support them? If not, and only big shareholders count, why the Twitter popularity campaign?
What specific goal(s) do they have (how many people, doing what)?
Once they can be seen as having the same big name apps as the other two, I think MeeGo has much more of a chance of being competitive, rather than being a 'third world country' of a platform.
While I admire the passion that fueled this letter, their goal to "offer overwhelmingly superior experiences" seems foolishly optimistic. How will Nokia differentiate from the plethora of Android derivatives, iOS, WP7, Web OS, and Blackberry?
So to hear these 9 disgruntled folks say they're going to stick with a bad plan and make it happen sounds like lunacy to me.
Glad you're not going to any shareholder meetings of companies I own shares in...
"You can't build a reputation on what you are going to do." ~ Henry Ford
I was watching some MeeGo videos on YouTube. It does not look that impressive, and booting the phone usually takes a full minute. What's up with that? It's an early build, I guess, but as software ages it generally gets more bloated, meaning even slower.
In the mean time, use this extension to clean up your own search results and tell us which sites you don't want to see in Google.
2. Click Install, close page
3. Open each of the links below in a new tab, click block on the first result
Edits: fixed formatting, added suggestions
This method is fine. The actual data sent to Google when you block a domain does not contain the search query (or the referrer).
This is what gets sent when you block a domain:
2. Any plans for a Firefox extension? I'm willing to install Chrome just for running Google searches, but would rather add it to my main browser.
e: After a month or so, I would absolutely love to see the top 10 or so blocked domains. It's OK if you can't do this, but it would be interesting/amusing.
Make this a Google Labs feature for Google search itself, in the personalization options.
Not only did you just make your search engine 50 times more valuable to me, but you've just ensured I'll be spending almost all of my browsing time inside Chrome.
Or just a simple bulk insert.
I mean this is kind of like if a kid pissed all over the floor in wal-mart, and when you notified an employee about it they gave you a mop to clean it up yourself.
It should be relatively easy to listen in on the background page while the extension is running and write a script to extract the list of blocked sites or update it with a master list so you don't have to block dozens or hundreds of sites manually.
Not that I think everyone should blindly block everything everyone else does on HN; I personally loathe Experts Exchange, but I do find an answer I needed from them now and then.
I was more curious than anything.
Update: As "dsl" posted above, it does look like the extension makes a call out to a Google endpoint to record the block as well, but I don't believe that call actually filters the data for you; that's still done client-side. So it's probably best not to call the endpoint directly or update the blocked-sites list directly, but to actually use the extension as intended.
So: will this eventually be a search settings option once it is less beta or permanently an extension thing?
I'm cackling maniacally while I block expertsexchange, Mahalo, and several other sites. I'm so happy right now.
This little bit of successful nerd prognostication cheers me up more than perhaps it should, but oh well.
However, this is a feature that Google actually had. Why did you remove it? I accept that SearchWiki was not particularly a success, but the remove option was very nice.
Anyway, thanks for listening. I'll be waiting for the server-side option.
If that happens, what's to stop this being used by companies to influence the results to get rid of competitors?
Why can't Google offer something like this rather than only allowing it via a Chrome extension?
But why is this extension not marked as having a verified author? https://chrome.google.com/webstore/detail/nolijncfnkgaikbjbd...
The "nolijncfnkgaikbjbdaogikpmpbdcdef" makes it look suspicious as well.
(Disclaimer: I work at Blekko)
An option to hide the icon from the toolbar would be nice.
In particular, I love that you can arbitrarily define (or redefine) patterns for the match construct using macros.
It leads to some damn elegant code:
The other day I tried a simple benchmark (nothing elaborate - just fib) and found it to be significantly faster than Python. Unfortunately I don't have the numbers right now.
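For reference, the Python half of that kind of naive benchmark is trivial to reproduce; a sketch (the choice of `fib(25)` and the harness are mine, not the original commenter's):

```python
import time

def fib(n):
    # Deliberately naive doubly-recursive fib: a pure CPU micro-benchmark,
    # not something you'd ship.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

start = time.perf_counter()
result = fib(25)
elapsed = time.perf_counter() - start
print("fib(25) = %d in %.3f s" % (result, elapsed))
```

The Racket side would be the analogous `(define (fib n) ...)` wrapped in `time`; comparing the two on the same machine is the whole experiment.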
Does anyone have any experience to share regarding the use of Racket in a production app?
- A completely new GUI and drawing layer: replacing 200,000 lines of C++, Xt, Win32, and Carbon with about 30,000 lines of Racket built on Gtk, Win32, Cocoa, Cairo, and Pango. Nice.
- Web server changed semantics
- Scribble documents can now hold any image
- Module dependency tools
For the first 3 years it was Apache + prefork + mod_php + WordPress (the default setup on RedHat-based or Debian-based systems for the longest time).
Any time I got a story on Slashdot or Digg, the site would die for at least half a day... god, I hated it.
I went from a 1GB RAM VPS to a 12GB dedicated machine in 3.5 years trying to get it to stop crashing whenever I would get a flood of traffic and was never able to. I pushed back on the idea of being a Linux sys admin for so long because I didn't want that hassle, but alas, I had to bother with it.
Finally, at about the year-4 mark, I decided no REAL site on the planet was running with this configuration, since it didn't seem to matter what hardware you threw at it (yes, I tweaked the Apache setup/mod list tirelessly to scale with the improved hardware). I finally started digging into how real human beings set up Apache and came across the argument for using the worker MPM as opposed to the default prefork MPM.
Made sense to me; fewer 30+ MB processes running around answering questions.
After that change, it helped a little... I have no hard numbers on hand, but it felt like a small improvement.
I kept digging and soon ran across the hacky-but-now-officially-supported method of using Apache plus a family of PHP VMs, pre-launched and called via FastCGI, to execute the .php pages from my WordPress site. The computer-science part of my brain loved this idea... and the Java-trained side of me suddenly realized that before this, with prefork and mod_php, every time someone connected I was spinning up a new Apache process and a new PHP VM, every single time (please correct this if wrong... this is how I understood it).
With FastCGI I could have a family of, say, 20-some PHP VMs living in harmony and responding as Apache constantly asked them questions.
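The difference can be sketched in miniature: prefork+mod_php pays a startup cost per request, while a FastCGI-style pool pays it once and reuses warm workers. A toy Python illustration (the "startup cost" is simulated with a sleep; none of this is Apache's actual machinery):

```python
import time
from concurrent.futures import ThreadPoolExecutor

STARTUP_COST = 0.01  # pretend cost of spawning a fresh interpreter

def handle(request):
    return "response to %s" % request

def prefork_style(requests):
    # New "interpreter" per request: pay the startup cost every time.
    out = []
    for r in requests:
        time.sleep(STARTUP_COST)
        out.append(handle(r))
    return out

def fastcgi_style(requests, workers=4):
    # Pool launched once; every request reuses an already-warm worker.
    time.sleep(STARTUP_COST)  # one-time pool launch
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle, requests))

reqs = ["req-%d" % i for i in range(20)]
t0 = time.perf_counter(); a = prefork_style(reqs); t_pre = time.perf_counter() - t0
t0 = time.perf_counter(); b = fastcgi_style(reqs); t_fcgi = time.perf_counter() - t0
print("prefork-style %.2fs vs pool-style %.2fs" % (t_pre, t_fcgi))
```

Same responses, but the per-request startup tax only shows up in the prefork-style loop.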
After rolling that change out at about year 4, I noticed a big improvement; maybe about 50%.
At my next Slashdotting the server got REALLY slow, but hung in there; no crashes. I thought it was odd that all that hardware still couldn't serve things snappily... it seemed like every other day I was clicking a link off the Hacker News or Reddit front page to some dude's personal blog that responded very quickly, and I was positive these people weren't spending $300/mo on dedicated hardware like I was to run their blog.
So I kept digging.
As you guys probably know, when you start searching for what sucks about Apache, two things come up more than anything else: "use nginx" or "use lighttpd". I had read that early versions of lighttpd had some memory-leak issues (I think long since fixed) and had a handful of Ruby friends who loved nginx... so I decided to stay up all night one night and port the site over.
25mins later I was done.
Yeah, so that was a lot easier than I expected. The only painful part was using some heavy-handed redirect logic to convert my WP-SuperCache rules over to nginx (the author wasn't supporting nginx at the time, but I think he does now).
I would point out that the server load with nginx running with NO CACHING (WP-SuperCache disabled; every request executes PHP and performs a MySQL query) was something like 1/4 of what my Apache/MPM/FastCGI/PHP setup with WP-SuperCache enabled had been using.
Once I got WP-SuperCache up and running on nginx, the difference was stupid-big. The nginx/WP-SuperCache setup was using 1/8th or 1/10th of the system resources that the Apache setup had been using.
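For anyone porting the same setup, the commonly circulated nginx rule for WP-SuperCache boils down to trying the pre-generated cache file before falling through to PHP. A sketch (the cache path and the PHP-FPM socket are assumptions about your layout; adjust both):

```nginx
location / {
    # Serve the pre-generated supercache page if it exists,
    # otherwise fall through to WordPress via PHP-FPM.
    try_files /wp-content/cache/supercache/$http_host/$uri/index.html
              $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;  # your PHP-FPM socket here
}
```

This is why the load drops so dramatically: a cache hit never touches PHP or MySQL at all.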
Soon thereafter I got Slashdotted, with a peak of roughly 400 concurrent users, and the server load barely crawled beyond 1 (screenshot): http://www.thebuzzmedia.com/wp-content/uploads/2010/12/slash...
Overall, I couldn't be happier with nginx. There are probably people who live in oxygen-rich test chambers inside military bunkers, bred to tweak Apache, who can optimize it to comparable performance, but that wasn't me. Out of the box, nginx has been fantastic thus far.
And that is my little story related to this subject... for what it's worth.
For traditional LAMP-style document-producing engines, cache invalidation strategies rely either on TTL (leading, as he says, to stale data) or on polling the source data (leading to an unavoidable performance hit, amortised over the improved speed of the cache).
Leaving aside TTLs, the key issue is that cache invalidation is driven by GET and not POST requests. I wrote a thesis proposal where part of the concept was to drive all cache invalidation from POSTs. New comment added to a story? A regeneration is queued up. New post on front page? A regeneration is queued up.
First, you reduce staleness by regenerating only when new data is added, and you improve performance by not needing to poll the source data for currency every time you touch the cache. In an ideal situation you could come close to raw static-HTTP serving speed.
You also allow some degree of dynamic response to load: under a high rate of POSTs you can batch up regeneration events to prioritise the GETs.
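The idea in miniature (a toy in-memory version; the names are mine, not from the thesis proposal): GETs only ever read the cache, and POSTs enqueue a regeneration instead of the cache polling for freshness.

```python
from collections import deque

comments = {"/story/1": []}   # the "source data"
cache = {}                    # path -> rendered page
regen_queue = deque()         # invalidation events queued by POSTs

def render(path):
    return "story page with %d comment(s)" % len(comments[path])

def handle_get(path):
    # GETs never poll the source data for freshness; they trust the cache.
    if path not in cache:
        cache[path] = render(path)
    return cache[path]

def handle_post(path, comment):
    # POSTs mutate the source and queue a regeneration event.
    comments[path].append(comment)
    regen_queue.append(path)

def drain_queue():
    # A background worker drains this; under heavy POST load it could
    # batch and de-duplicate events to keep prioritising GETs.
    while regen_queue:
        path = regen_queue.popleft()
        cache[path] = render(path)

print(handle_get("/story/1"))   # rendered once, then served from cache
handle_post("/story/1", "nice post!")
drain_queue()
print(handle_get("/story/1"))   # fresh page; the GET did no polling
```

The key property is that the cost of freshness is paid on the (rare) write path, not on every read.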
However, I won't be pursuing that project -- I've been accepted for one I was more interested in.
I'm now working on my first major webapp, so I've been learning how to be a sysadmin, pretty much from scratch (almost no previous experience with running webapps, lots of experience with other things though).
Anyway, I've been trying to find resources to help learn, but it's been extremely difficult. These are the questions that are still bugging me, hopefully someone here can point me to some good reading about them. Note that my stack is Python/Django over Apache (mod_wsgi) right now. Also note that my site has a "ping" system, where each client connected continuously pings the server every few seconds to see if any new info arrived, making me have to handle a larger amount of requests per second, I believe.
1. What kind of load do I need to handle? Is 12,000 requests per second terrible/good/great performance? How do I go about figuring out how many people are online at once for most sites? How do I even estimate it?
2. How can I test the performance of my application? I've learned that Apache Bench is used a lot, but are there better tools?
3. What are the best tools to help me monitor and understand the load on my server?
4. How do I go about understanding the bottlenecks in my application? Right now, my Apache process is taking most of the cpu. What does that imply about where I should optimize?
Sorry to braindump, but I've been looking for answers to these questions online and haven't found any clear help.
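On question 2: `ab -n 1000 -c 50 http://yoursite/` is the usual Apache Bench starting point (`-n` total requests, `-c` concurrency). To get a feel for what raw requests/sec even means, here's a minimal self-contained Python harness against a throwaway local server; it's purely illustrative, not a real load test (one sequential client, trivial handler):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"pong")

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 asks the OS for any free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/ping" % server.server_address[1]

N = 200
start = time.perf_counter()
for _ in range(N):
    body = urllib.request.urlopen(url).read()
elapsed = time.perf_counter() - start
rps = N / elapsed
print("%.0f requests/sec (one sequential client, trivial handler)" % rps)
server.shutdown()
```

Real tools like ab add concurrency, latency percentiles, and error counts, which is where the interesting answers to question 1 live.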
For those that need hassle-free backwards htaccess compatibility, try litespeed.
But there are a few nice alternatives to Apache these days.
But when it comes to nagging reminders about what your spouse still has to do after a long day working for the man (take out the recycling, walk the dog, write a thank-you letter, defrost the chicken, fix the stereo), keep a lid on it. Economists talk about "information processing costs," or the costs incurred from processing, absorbing and filtering information. When information processing costs get too high, we tend to become paralyzed.
Make an effort with your relationship.
Like everything good in life it takes work to have a good relationship. If you make an effort to have a good relationship, applying thought and energy day to day, you will grow a healthy satisfying relationship.
They even optimized which kid helps them. I worked better with my mom (I was all about getting a list of chores and powering through them when I had time, and so is she), so I ended up working with her on the weekly chores. Shutting up and getting stuff done really does make one happy.
The problem with all this venting is that the advice is given to both sexes: it just discusses using comparative advantage to split up tasks, not nagging each other, and slipping between the sheets as much as possible. If you realise that the advice could equally apply to a gay couple, you can see there actually isn't any gender bias in the article at all.
This sounds a lot like the advice to women from the Tom Leykis show: "Stay slim, long hair, sex anytime, shut up!"
lmao on the (with each other) emphasis :)
Edit: Working link, courtesy mhb: http://blogs.wsj.com/ideas-market/2011/02/14/the-secret-to-a...
Things work very differently in the rest of the world.
I'd never equated being overweight with behaving irresponsibly. If everyone thought this way, would we all be thinner? If my spouse can't nag me to lose weight (see #1), is there anything she could do to encourage it, or is it all on me?
ETA: With the growing number of men who refuse to pick up every check, plan every date/event, act as sole provider for the family, purchase gifts regularly for their other half, etc. for fear of marrying a woman who is too focused on money... I'm really beginning to wonder what women are getting out of this marriage deal anymore.
Because it required interesting technology and was challenging and fun to implement.
Because it solved problems (granted, all of the easiest and least important problems).
A lot of products are brought to market through the same flawed process. Look at the Segway for a perfect case in point.
The fact that this exists is terrible news for Nokia.
Great movie that I just had to quote. I really hope to visit the chapel someday, and this site only reinforces my desire.
Dang, Wikipedia knows it all:
The seed of the silphium plant, used in ancient times as an herbal contraceptive, has been suggested as the source of the heart symbol.
Oh, also http://www.wolframalpha.com/input/?i=%28x^2%2By^2-1%29^3-x^2...
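If that Wolfram Alpha link is the curve I think it is (the equation is truncated above, so this is a guess), it's the classic heart sextic (x² + y² − 1)³ − x²y³ = 0, which is easy to sanity-check:

```python
def heart(x, y):
    # Implicit heart curve: points with heart(x, y) == 0 lie on it.
    return (x**2 + y**2 - 1)**3 - x**2 * y**3

# A few easy points that satisfy the equation exactly:
on_curve = [(1, 0), (-1, 0), (0, 1), (0, -1)]
values = [heart(x, y) for x, y in on_curve]
print(values)
```

Plotting the full zero set (e.g. with a contour plot) is what produces the heart shape.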
The thing is, it seems there are examples of open source companies or developers folding when presented with the "whiff" of a patent suit. See: http://forums.fedoraforum.org/archive/index.php/t-234073.htm...
Further, as I recall, it has been speculated that Nokia's Apple suit involved a demand by Nokia that Apple cross license its UI patents. http://www.businessinsider.com/did-nokia-sue-apple-to-access...
The thing is, if Nokia had previously been "coding around" Apple's UI, it's not surprising that their UI sucked.
It's easy to imagine that the more timid an organization, the more willing it is to be pushed around by an overbroad claim.
Altogether, it would be a good thing to provoke Apple into actually suing someone for violating "their" UI controls: threats people back away from put a bully in a more powerful position than threats carried through.
To avoid a trial the prosecutor first offered me 50% off my two tickets which I politely declined citing that I was certain I would win based on certain facts which I laid out.
30 minutes later while I was waiting for my trial to begin he came and offered to drop one ticket and give me 50% off on the other. I again politely declined stating that I was certain I would win based on the facts I had laid out earlier.
Then right before the trial was about to begin the prosecutor came in and dropped both tickets. I know many other people who got a ticket under the exact same circumstances and just paid it.
Point being whether a dispute is about millions of dollars of software patents or 160 dollars in parking tickets it pays to know the facts well and stand your ground convincingly and unemotionally.
It's a load of bunk. Jobs is charismatic, but he found early on that due to his charisma, reporters liked to tell tall tales about him. Always looking for the "human interest" side of things, or something to spice up their reporting they'd exaggerate. So he stopped giving interviews, figuring that would give them less to work with, and in doing so he overestimated their integrity. Instead they quickly figured out he wouldn't give them the additional attention of debunking them, so they just started spreading whatever rumor or gossip or fabrications sounded good.
Jonathan Schwartz is not a reporter, but he can say whatever he wants, knowing that Jobs is not going to waste time disproving it. Doing so only brings more attention to the faux controversy.
And of course, Apple haters, who really don't need much prompting anyway, will simply take it as the gospel truth.
I'm sure Apple's right and attempting to point out this is just silly, because those who believe will believe anyway because they want to.
Look at Jobs's Stanford commencement speech. That's the real guy. Always has been.
Ironic given the "egregious" Kodak suit he writes about.
Also, from what I can tell, every query is O(log N) in the size of the completion set, instead of linear in the length of the query/suggestion (as with a trie). Seems like this might have trouble scaling to large suggestion sets.
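A binary search over a sorted phrase list is the simplest way to see that O(log N) shape (a generic sketch, not the linked project's actual implementation; strictly it's O(|prefix| · log N), since each comparison inspects up to the whole prefix):

```python
import bisect

def complete(sorted_phrases, prefix, limit=10):
    # One binary search finds the first candidate: O(log N) comparisons.
    start = bisect.bisect_left(sorted_phrases, prefix)
    matches = []
    for phrase in sorted_phrases[start:start + limit]:
        if not phrase.startswith(prefix):
            break
        matches.append(phrase)
    return matches

phrases = sorted(["apple", "application", "apply", "banana", "band", "bandana"])
print(complete(phrases, "app"))          # ['apple', 'application', 'apply']
print(complete(phrases, "ban", limit=2))
```

A trie trades the log N factor for O(|prefix|) pointer chasing, at the cost of a much heavier in-memory structure.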
I might have to use this in my next service.
Or do you have numbers on the mean length of a phrase you handle currently, the number of such phrases and how much memory it takes?
One question I have about Watson that I don't recall being mentioned in any videos or articles so far - what sort of interface does Watson receive the questions over? Is Watson performing speech recognition or getting the text of the question via some sort of interface?
Game over, Watson.
(Question for native speakers: when watching the practice round, are you generally able to keep up and answer the questions? The speed with which the game was moving made it nearly impossible for me to follow or enjoy the game. I would like to know what the experience is like for native speakers.)
Has a nice little video too.
Or conversely, maybe Google should buy them?
As an AI researcher I'm excited to watch this week. Even if it's not the most elegant artificial Jeopardy player imaginable, it raises the public profile of a lot of AI & ML topics and might encourage and inspire other groups to tackle ambitious projects.
I'm not arguing that this isn't an impressive accomplishment, but that the statistical-learning stream of research is likely a conceptual local optimum that yields the best results in the near term but is probably unrelated to the way we ultimately achieve a creative, general AI.
Programmers who are interested in learning and growing will always be attracted to new languages and technologies. What's interesting about Scala is that it's not solely academic. You can learn and do useful things with it at the same time. Later, when you've learned more, you can go back and improve your old code for added beauty and performance. That's a fun process, I think.
What grieves me as I read that article is that it seems like he had a perfectly good way of doing it simply and easily, but then decided to go for a much more complex and risky solution (with what _he_ describes as poor tooling) ... for what? Simply to increase the difficulty level?
Take the analogy of a backyard pool. Instead of just running up to the pool and doing a bomb (or even a belly flop), he has to climb up on the roof of the neighbour's rickety garage, where he is going to attempt a triple-twist half-pike helicopter/superman maneuver in order to score higher with the judges. Problem is, he's got a real risk of either missing the pool entirely or cracking his head open on the concrete.
I'm not saying we should never use new languages or techniques or tools. What annoys me is that given the choice of doing something simple, or doing something complex, he chose the complex way, and then piled on the risk, with a side order of complexity.
From the article:
"Shortly after working this out and drawing my architecture diagram (a pretty insane-looking tangle of boxes and arrows on a sheet of paper)"
Shouldn't that have been a pretty big red flag? Sure he found some superstar programmer to pull it off for him, but I can't help but think that a little bit of darwinian natural selection would have been in order here.
The annoying thing is that you see this all the time in the enterprise. Time and time again, someone with "architect" in their title makes an appallingly horrible mess of the design, leaving the poor bastards at the coal face to sort it out and try to make the abominable crime against nature, reason, and common sense actually work.
EDIT: Using "esoteric" languages can also be a good long-term hiring strategy. It's pretty easy to get a place in people's minds as the "go to" company to work with a language. Google did that with Python, Twitter is doing it with Scala, Basho are doing it with Erlang.
As a mere mortal, I gravitate to simpler things.
Alas. If only we could live on technical stimulation alone :(
"Scala is not ready yet, but when it is, it is gonna take over Java as the next big language".
Years have passed and this has failed to materialize. Maybe it will never be 'ready enough' to take over? What do you think?
(the last paradox thread)
Also, it shows an enormous ego. Mr. Lippert thinks he is as smart as Feynman, but he is not; he just makes Feynman sound like a pedantic asshole.
Mind you, I agree with his point that those questions are silly, but again, he can make this point without bringing in Feynman's corpse and using it as a sockpuppet.
If a bridge can only support 2 people, maybe it is better not to cross it at all. If you do have to cross it, maybe you can trust your tamed tiger with the goat. Pirates aren't rational agents that use silly rules for sharing the treasure. An egg that doesn't take a scratch when falling from the 13th floor belongs to wonderland. Few women would kill their cheating husband right away. Fewer still would rely on the perfect rationality of others, and of the mayor, to do so.
And so on. Brain teasers are fun, but many people (not just Richard Feynman) don't accept their weird assumptions right away, and instead assume a real-life setting.
Since when has the thermal performance of a light bulb been undocumented and unreliable? I would think that designers of light fixtures and shades rely heavily on the documented thermal performance of light bulbs, and rate their products for compatibility with a range of bulbs accordingly.
Can I assume that the lights and the switches are correctly wired according to the National Electric Code of the United States? That is, that the switches interrupt the hots, not the neutrals, that the switches are standard-duty switches rated to interrupt 15 amps of 120 volt alternating current, and so on?
I loved this implication on a Microsoft website.
The worst thing is it doesn't seem to be that bad a platformer either!
The game itself is a nice classic platformer.
Daniel easily could've packed it up when he lost his cofounder - Shai had to go back to Israel a week or two before Demo Day. Instead, he built a prototype of what would become Greplin (in a week), pitched it to investors at Demo Day, secured funding from Sequoia, and is riding a rocketship.
When PG and YC say that they invest first and foremost in founders, this is a perfect example of what they're talking about. Every startup hits a rough patch, and some even look like they have no hope at all, but the best founders persevere against crazy odds and will themselves to succeed.
The outcome could be either really good or a privacy nightmare.
The unstated premise of the piece seems to be that, as technology matures, control over it becomes consolidated in fewer hands, each of whom has a proprietary interest to limit its ultimate use in order to serve its short-term profit interests. Thus, service providers charge high roaming fees and structure their charges to serve their interests at the expense of consumers while apps that are offered come increasingly within walled gardens where proprietary overlords dictate all terms.
I believe this is all true but disagree as to its alleged threat to our future as tech consumers. Human innovation in a free enterprise system tends to take on a force of its own that overwhelms individual corporate interests. If this were not the case, then our current overlords would be Barnes & Noble (for books), Tower Records (for music), Blockbuster Video (for video rentals), Western Union (for instant messaging), IBM (for enterprise computing), and so on. These were yesterday's corporate giants and they each declined in their influence specifically because they failed to anticipate key trends in technology and thus became laggards instead of leaders.
Telco service providers, of course, have incredible power and will use that power to further their particular narrow interests if at all possible. This has resulted in a variety of frustrating user experiences. Even here, however, technology will tend to outrun their long-term ability to dictate terms. Those who are old enough will readily remember the outrageous expense, while traveling, of "long distance" charges, of the scarcity of reliable phone systems in many countries, and of the difficulty often incurred while wanting to make a call of trying to find an available phone booth while on the road. We are worlds removed from that old environment today owing to amazing advances in technology, and we as consumers are far better off than before in spite of the hassles, inconveniences, and expenses we experience with our service providers.
I resist walled gardens and proprietary traps in the marketplace as much as the next guy but these too do not threaten my choices in the long run. I may or may not like what Apple does but Apple will not control the future any more than the companies listed above. Corporate control in this sense is powerful but ephemeral - absent government restrictions that give it quasi-monopoly status, it lasts only as long as a company serves important needs of consumers. Once that slips, so too does the corporate dominance (over the long term). I may be wrong about this but, given what I have observed over my lifetime, I take a much more relaxed view of it than does the author of this piece.
"One reason you should not use web applications to do your computing is that you lose control," he said. "It's just as bad as using a proprietary program. Do your own computing on your own computer with your copy of a freedom-respecting program. If you use a proprietary program or somebody else's web server, you're defenceless. You're putty in the hands of whoever developed that software."
He dismisses GNU/Linux right off the bat, even though the very reason GNU was born is the same problem he is only now starting to admit (given where he worked, I think he saw this... but now that he no longer seems to profit from it enough to even get his BlackBerry roaming paid for by The Company Overlord -- welcome to the reality of the majority).
Just visit http://www.debian.org/ and download http://www.libreoffice.org/ and do something about it, instead of musing and whining (I am awaiting a functionality he would need that is missing).
> But what I see developing seems driven by greed and profit...
Yes, it's called "capitalism".
> For me, the future would bring forth solutions to our needs and wants...
> design that provides value in a sustainable and responsible manner...
"Sustainable" is an extremely loaded term. Basically, you get it when the market demands it. If you see it as important, it's not a failure of the producers: it's a failure of the consumers to demand or the governments to require it.
The rest of the post talks about nascent issues of balkanization, which is the natural path of progression. Consider the examples of railroads in the US and the road system in the UK. In both cases, they were initially private endeavours that were likewise driven by "greed and profit". Arguably in both cases the high pricing stifled innovation and in both cases, the systems were eventually nationalized.
In the UK's example, nationalization fostered trade (the roads were all toll roads previously). In the case of the US railroads, nationalization can arguably be seen as a disaster, so the results here are mixed.
Ultimately though the story is one of commoditization. We are still in the pioneering days of these technologies and as time goes on they will get cheaper and ubiquitous to the point where people through the instrument of governments will start to see such services as basic rights, much as is becoming the case with Internet access, which many countries are starting to see as the "fourth utility".
"Greed and profit" propelled us from a hunter-gatherer and agrarian existence to manned spaceflight and the global Internet. Don't be quick to dismiss or disparage "greed and profit", as there has been no greater catalyst for human advancement.
As for his talk of "open standards", I refer you to Dave McClure's "Open is for Losers". We may find open desirable philosophically, but it is not the natural product of a market--at least not an immature market. Open standards are the byproduct of commoditization.
As for symmetric Internet connections:
> Why would it harm companies to provide equal access?
The predominant form of broadband access in many countries is ADSL. The "A" stands for asymmetric. ADSL2+ is up to 24M down and 1M up (2M with Annex M). Gain more upload speed and you lose download speed. So this isn't an artificial restriction: it's giving consumers what they most likely want. You can buy SDSL links; they have lower (relative) download speeds and generally higher cost, mostly because they're a business product as a general rule.
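The down/up tradeoff described above can be shown with a toy model. This is only an illustration of the fixed-pie allocation idea, not the real DMT subcarrier math; the 25 Mbps total and the share values are illustrative numbers chosen to roughly match the ADSL2+ figures quoted.

```python
# Toy illustration: an ADSL line divides a fixed pool of line capacity
# between downstream and upstream, so widening one direction narrows the
# other (roughly what Annex M does to double upload at downstream's expense).

def split_capacity(total_mbps: float, upstream_share: float) -> tuple[float, float]:
    """Split total line capacity into (downstream, upstream) Mbps."""
    if not 0.0 <= upstream_share <= 1.0:
        raise ValueError("upstream_share must be between 0 and 1")
    up = total_mbps * upstream_share
    down = total_mbps - up
    return down, up

# Standard ADSL2+-like split: heavily downstream.
down, up = split_capacity(25.0, 0.04)      # ~24 down / 1 up
# Annex-M-like split: trade downstream for more upstream.
down_m, up_m = split_capacity(25.0, 0.08)  # ~23 down / 2 up
```

An SDSL link in this model is just `upstream_share = 0.5`: equal in both directions, but with a much lower download figure than the asymmetric split gives.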
> I fear the Internet is doomed to fail, to be replaced by tightly controlled gardens of exclusivity.
I don't. I see such walled gardens as merely transitional. The "greed and profit" that drives them pulls us all forward and makes a bucketload of money for a few in the process. For a time.
> Today it is too easy for unknown entities to penetrate into private homes and businesses, stealing identities and corporate secrets.
Not really. If you run a Windows machine directly on the Internet (i.e., not via a router that does NAT, etc.) then you kinda get what you deserve.
The fact is, the closed devices the OP bemoans are actually much safer for such things and that's almost by definition because as soon as you get a complicated mess like Windows, faults are inevitable.
TL;DR: the sky isn't falling.
Mobile internet access while traveling abroad is expensive and unreliable because it's new. Yes, there are commercial forces that try to maintain high prices as long as possible, but short-term profits never hold up forever. Fifty years ago, every phone bill in the US was payable to AT&T, and long distance calls could cost dollars (not cents) per minute. Thirty years ago, we gained multiple long distance carriers and prices fell quickly. Twenty years ago, we cut the cord, but cell phones charged by the minute and came with draconian contracts. Today it's common to lease a service with a near-infinite supply of minutes that lets you call anywhere in the country for a flat rate. It's incredibly cheap compared to long distance of twenty years ago. All this, despite the evil resistance of profiteering corporations.
Sure, Microsoft laughed at Apple and Sun when they approached to develop a common standard platform. Times change, and high profits are constantly under attack. Microsoft completely missed the internet boat -- not a single protocol in use on the internet today comes out of Microsoft. Google and many other corporations offer free products that compete favorably with Microsoft's expensive products of yore. It must have sucked to compete with Microsoft in the 90's, but here we are, and where is Microsoft? Sinking billions into R&D in the race against Google.
Over and over, new technologies lead to incredible profits for a few, controlling corporations, but inevitably these situations aren't permanent. Our internet faces censorship threats from governments and exclusivity threats from corporations, but these controls, like others before, won't last.
So what is Mr. Norman going on about? Short-term profiteering? That's what covers my paycheck, and makes all this possible. Privatization of the internet? Maybe temporarily, but if that doesn't truly benefit the public then it simply can't endure against technological advancement.
An obvious way to avoid becoming beholden to service providers is to retain some independence from our devices. It's not like people didn't travel before the invention of the iPhone.
Aye, there's the rub. Corporate welfare programs are the root cause of everything he's describing, and of a lot of other problems with today's economic system. Restrictions on competition, created by the government, have been a major obstacle to everyone's welfare since at least the 18th century (Adam Smith rails against them at length in Wealth of Nations).
Agree that it is a problem. Quite brave, by the way, that he speaks out about this as a (former) Apple VP...
And that VPS will probably have far fewer limitations on the content you are distributing through it compared to a residential connection, partly because of the higher competition. And if your VPS provider decides to shut your revolutionary website down, it's trivial to move to another provider, compared to switching your home connection. Additionally, a VPS is easier to anonymize if you need or want to. Your home connection will always point to your home, and to you.
While I agree with his post on the whole, nearly all of those smartphone apps are available offline. His point is more valid when it comes to content.
That said, I recommend the author bring a paper book along whenever he travels.
To the extent that connectivity and access are increasingly becoming essential to a modern way of life, commerce, education, etc. and considering the clear direction towards consolidation, oligopolization etc., ISP/cable/wireless providers increasingly look like natural monopolies. Somewhere on the spectrum of a public national highway system, public utilities, or a regulated broadcast and telecom systems, probably more towards the latter.
There are lots of good reasons for being concerned, and historical lessons abound.
I can see that you are.
Remember that people throw stones only at trees that have mangoes.
In general, it's important to distinguish cash balance and flows from investments vs operations.
Banks often report their vast cash balance under operations, as holding and lending cash is their core business, not something they do on the side.
Accountants like to project this aura that they're perfectly predictable categorisers of financial truth, but it's bunk. When you can change a company's reported cash balance by billions of dollars through a clever argument, you're well and truly into the land of politics.
What blows me away is that Wal-Mart has 2 million+ employees! A piece of information that would be interesting as a comparison is the number of government employees in different countries...
One of his arguments is that these languages are often only mentioned in conference proceedings.
How you get to be a PhD student in computer science without realising that conference proceedings are the leading distribution mechanism for knowledge in the CS research world is a mystery.
I may only be a humble honours student, but the central importance of conferences over journals has been drummed into me over and over by my professors.
The whole deletionism fiasco at Wikipedia is ultimately a software and UI failure. Misguided people who in most cases could never write a good article (or even improve an existing one) themselves are running amok because the system is reinforcing the belief that their only talent, destroying information, is also a valid form of contribution. It is no statistical accident that rampant wiki deletionism is even more intense in ..."strict" countries such as Germany.
At the same time it is important to note that a lot of articles have serious shortcomings and are in need of improvement. While deleting them is in my opinion unforgivable as long as they contain useful information, I believe Wikipedia could profit from a more modern approach to article rating and validation. If substandard articles were allowed to continue existing albeit with low ratings and missing validation tags, Wikipedia as a process could focus more on improvement as opposed to gleeful pruning. If they concentrated on more constructive measures and included better ways of gathering user feedback for quality control, they could also provide former deletionist users with a UI option that simply prevents them from ever having to see an article that is below a certain quality threshold. Everybody would win.
As it stands today, Wikipedia increasingly fails at its stated mission of being a repository for the world's knowledge. Sadly, I don't believe it is possible to change Wikipedia in any way, ever. Someday, someone will have to come along and fork it.
> All that donation money, and they still can't afford enough hard drive space to avoid deletionism.
The guy allegedly doing the flagging has responded on his user page: http://en.wikipedia.org/wiki/User:Christopher_Monsanto
Edit: The quoted comment was in jest, and too many missed this, so I'll reinforce that by adding 'and not serious'.
The latter is something my theory friends complain about. According to two of them who have tried, attempting to expand or correct any of the fringe topics in algorithms and graph theory is futile because of the instant-reverters who will simply revert any change they make.
Of course, what's most disturbing to me about this is... dear gods, man, you're at Princeton! If you don't understand what the contributions of Alice ML are to the field, walk down the hall and talk to Andrew Appel! Or David Walker, if Andrew is too hard to track down. I would hope that by this point this student has learned that there is a lack of fidelity in the search engines for anything published in the 90s and earlier, as the scanned PS converted to PDF is neither as well-indexed nor as comprehensively available (e.g. Springer-Verlag work from that time is frequently not indexed in scholar/citeseer due to a lack of non-subscription links, particularly if published by someone who is no longer in academia).
Fortunately, most of the work in PL was done in the lifetime of people still working. If you're too busy to do a thorough search of relevant work, you can sit down and talk with the people who were there when concurrency was first being introduced and formally modeled to understand Alice's place and contributions (or lack thereof, if that's the conclusion you come to).
You guys win. I will stop nominating pages for deletion.
I wasn't doing this to troll or to slam any language community. I was just trying to help -- I read the WP guidelines for inclusion, and whenever I came across a language that didn't seem to meet said criteria, I nominated it for AfD. I think, with respect to Wikipedia's established notability guidelines, my arguments for deletion were airtight, which is probably why the articles were eventually deleted. I'm not sure my actions warranted the kind of internet-hatred I received as a result. If anyone thought what I was doing was wrong, they could have just sent me a friendly message and I would have politely discussed the issue. Few took this route, and I am sorry that due to time constraints and an overwhelming amount of invective I could not reply sensibly to everyone.
Since the internet seems to care more about keeping these articles than I care about deleting them, I'll stop. I personally think a lot of the articles should have been deleted. I think that ALL articles I nominated for deletion fail to meet Wikipedia's general notability guideline. Here's a challenge, then, for the internet: instead of spamming my Wikipedia talk page (which I don't really care about), why don't you work on fixing WP's notability guideline for programming languages? Otherwise, some other naive editor will eventually try to delete them. Perhaps they won't have as much experience dealing with trolls and flamebait as I have had, and will become very hurt and confused. Nobody wants that :(
This was fun. Now back to real work, I guess...
The notability guidelines often bother me, really, as they are a somewhat silly set of 'rules' in many ways, and not everything fits into a nice and tidy system. For example, Christopher M's understanding of the requirements seems to be that all languages must be cited in well-published and well-cited academic papers and there is no other way around it. That's just silly. There could be new and growing languages that are of importance, or older ones that were important at the time but never had papers written about them and aren't being actively used. Do they each have a purpose, and are they important to the people researching things via Wikipedia? Yes. They are.
I feel that there is more to be lost by most deletionist activity than there is to be gained. The risk evaluation here almost always (except in cases of spam and self edits, which are frequent) should lean on the side of having more information available, not less.
The project codename is 'Infinithree' ('∞³'), and I'm discussing it pre-launch at http://infinithree.org and (Twitter/Identica) @infinithree.
I played with this language a few years back and thought it had great promise (when C# was much less capable). I have read the exact Wikipedia page you deleted, and it got me to write some code in Nemerle.
* Btw, this might get some publicity for Nemerle (and the other languages).
The narrator of Foucault's Pendulum, when he decides to become a freelance researcher, says that his main principle will be that all information is equal, no piece more precious than another.
Otherwise, Mr. Monsanto has every right to push his agenda on Wikipedia insofar as it is within the bounds of legal play on the site. Attacking his character gets nobody anywhere, and probably adds credence to whatever he's doing. If you're really concerned about deletions of your favorite PL articles, sit on them. If a request for removal/deletion (I don't know the wiki-jargon) pops up, just dump all over it. Even better, improve the articles. He can't get something deleted that's not mediocre. Agents like Mr. Monsanto will actually improve the quality of your average article one way or the other. I'm impressed that somebody would bother reading so many articles and post meta-data about them....especially on a topic that so few people engage in.
It's curious that only pages failing Mr. Monsanto's criterion of having been cited in a 'top-tier' publication get singled out. There are so many articles on Wikipedia that have no ties to anything real. Is it really fair to hold PL topics to academic-level standards? What if somebody considers PL an art, or something other than semantics and formalisms? This does happen, and people who create new languages from languages that aren't considered much in the PL community might actually fall into these categories.
I think Mr. Monsanto would do well to spell out his criteria for what isn't desirable in precise and formal terms.
This is a far less useful way of doing things, but it flies almost completely under the deletionist radar. There is little cultural dance pertaining to the concept of notability for mentioning something in a list, and no bureaucratic pseudo-procedure for a deletionist to wield against such practice.
I don't understand what the cost is. Why don't you make a list of "notable" programming languages, so that people who want to browse around can skip the less influential or newer ones like Nemerle? But to delete hundreds of languages (and if you apply these rules consistently, you need to delete hundreds of languages; you've missed lots of them) is a travesty.
>Raj Reddy (dr. is so unnecessary)
>Randy Pausch (dr is unnecessary)
>Benjamin C. Pierce (Don't need dr.)
and so on
You can see this by going to, e.g., the Japanese article (http://ja.wikipedia.org/wiki/Nemerle) and looking at the language links at bottom of the left sidebar.
So if you are a speaker of one of those languages, you're still in luck :-P
The article had been up for less than a month when someone requested speedy deletion, despite the article having ample evidence of the subject's notability. Deletionists are out of control on Wikipedia, and need to be stopped. I've thought about writing articles and thought "no, why bother, some deletionist will just delete it," and I'm sure many others have been similarly dissuaded.
To this end I'm building an inclusionist fork of Wikipedia. The main difference it will have is there will be no notability guidelines, only verifiability ones.
Not just anyone can invent a programming language; it's not comparable to your pet rock band. Chris, you clearly displayed that you are not capable of handling this subject satisfactorily, and you've displayed arrogance in response to people's distress.
Simply put - marking the articles for deletion was rash, and in the larger sense unjustified.
Now Google and Wikipedia are failing at the same time. Bad.
The easy reaction would be to focus on the flamers, harden your heart and drive ahead. The wise man, here, stops and thinks for a bit.
Edit: Nemerle appears to have been frozen and deleted.
Alice has gone down 3 or 4 times, but it's now up for the last 10 minutes.
They're down again, looks like semi-permanently.
One criterion that can be used for Speedy Deletion is:
No indication of importance (individuals, animals, organizations, web content)
It's a very subjective measure, yet it encourages over-zealous Wikipedians to expunge content.
The spam problem is very real for any user generated web site. I think it would be more ideal if Wikipedia didn't delete anything - but rather marked pages as being of low quality, or not meeting their standards, and perhaps removing those pages from their search index.
Here's what I wrote about this problem in 2007:
If someone thinks the language is not notable, there is a discussion page attached to the main article where such things can be expressed. The obscurity of the language can also be communicated in the article itself. While lots of us can be pretty sure Nemerle will have no lasting impact in the field, they can be wrong.
Seriously, person who did this? I thought wasting time browsing news sites like HN and reddit was bad enough but this... this proves that the internet is a very serious business indeed.
1) The languages exist, are supported, and are used by many users
2) There are other bad articles on wikipedia
Both of these are, unfortunately, terrible arguments.
In response to the first argument:
Wikipedia's rules state that for an article to exist, it must be proven notable by certain types of accepted references. That does not include tutorials, blog posts, software's official website, or questions on support websites/forums. These rules are unfortunate, and have been sources of much arguing, but they still stand.
We, as programmers, get upset when information that is useful to us is removed. The rules exist for a reason, though; one place where they are often enforced is the addition of video game articles. There are hundreds of thousands of video games with significant user bases. Wikipedia has made it a point that it does not intend to be a catalog of software that exists, and for that reason video game articles are deleted often. In order for software to legitimately qualify for an article, it must be significantly, demonstrably important. Existence and popularity is not enough.
In response to the second type of argument: existence of violations does not justify other violations. If you don't think the blue slime from Dragon Warrior deserves its own Wikipedia page, mark it for deletion and argue your point, but don't reference it as a reason why your bad article with weak references should remain.
Wikipedia has a LOT of articles that are against its rules. We have become used to these, and depend on them, so we get upset when the rules are enforced. Have a look at the actual rules and I'll bet you can identify plenty of articles you have read that are in violation:
BTW, Why the hostility? and the mob mentality. I thought he articulated his arguments clearly and quite well without malice.
I'm also offended that the value of a project seems to be based on how well someone can market it. If your project hasn't made a name for itself, then it's worthless, right? Personally, I'm content to hack away on things that no one has heard of because I enjoy what I'm doing. If someone else happens to find it useful, that's awesome. However, deleting things from the Mecca of knowledge-seekers in an attempt to purify it in this manner is nothing short of crapping on the ideals that Wikipedia was built on.
Doesn't seem to be any evidence it ever existed on the web, but it looked awesome, and as I remember had some cool interaction design.
On a slightly different note, I did NaNoWriMo this year with a friend. I would have loved a way to quickly give/get live feedback with my writing partner. If there was a community of other writers who also wanted to take part, that would have been incredible.
I suppose that I could have used Google Docs, but something geared towards writers would have been amazing.
Why does Neovella need access to my Facebook chat? Sure, there might be a legitimate reason, but they certainly haven't explained it to me!
"2. Copyright. The content, organization, graphics, design, and other matters related to and created in Neovella are protected under applicable copyrights and other proprietary laws, including but not limited to intellectual property laws. The copying, reproduction, use, modification or publication by you of any such matters or any part of Neovella is strictly prohibited, without our express prior written permission."
However, I question whether or not making it realtime is the right way to go. I could be wrong, but it seems that stories like these would work best on a more laid-back timeline. I was about to start writing a story, and then I saw there was a "duration".
I set it to "no limit", and noticed that I can't write anything without someone else. Why? It's Valentine's Day, and most of the other folks I know who'd be interested are all busy. So I'm stuck unable to use your site. I should be able to do SOMETHING by myself.
I love the idea, though.
After a minute of poking around: when you "hide chat" or "hide info", they lose their mouse-over which would probably be "show chat".
What prevents someone from signing up in your name and posting something that would certainly prevent you from getting hired in the future?
Also, what if someone "internet famous" or even worse "real life famous" wants to contribute something?
There has to be a better way to control trolls.
If Quora is going to demand ID, they need to ask for ID from EVERYONE before an account is active, to prevent fake Steve Jobs accounts, etc., not selectively.
Otherwise I recommend you sign up using your dog's first name or your elderly neighbor's name.
So I've left.
Speaking of fakes, Flickr cofounder Caterina Fake has had all kinds of problems due to her (real) last name: http://caterina.net/archive/001011.html At worst, I've been asked if I was "one of them foreigners" by a landlord years back.
All I do to check whether someone appears to be real is to use Rapportive in Gmail and see whether they appear on Facebook, Twitter, LinkedIn, etc.
I don't force users to use real names, I think there's a real benefit in allowing people to use aliases. Rapportive allows me to really quickly grok whether the alias has an underlying real ID in terms of fighting spam and trolls.
It's hardly a foolproof technique, but it answers the question nine times out of ten, which removes any need for me to create obstacles for my users to jump through.
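The idea above can be sketched as aggregating presence signals from several networks into a rough "is this alias backed by a real identity?" score. The lookup functions below are stubs I've made up for illustration; a real implementation (like Rapportive) would query Facebook, Twitter, LinkedIn, and so on.

```python
# Sketch of alias verification by presence aggregation. All lookups here
# are stand-ins: they just check a hard-coded set instead of a real network.
from typing import Callable

def identity_score(email: str, lookups: list[Callable[[str], bool]]) -> float:
    """Fraction of networks on which the address has a matching profile."""
    if not lookups:
        return 0.0
    hits = sum(1 for found in lookups if found(email))
    return hits / len(lookups)

# Stubbed lookups standing in for real network queries.
known = {"alice@example.com"}
fake_twitter = lambda e: e in known
fake_linkedin = lambda e: e in known
fake_facebook = lambda e: False  # no profile found on this network

score = identity_score("alice@example.com", [fake_twitter, fake_linkedin, fake_facebook])
# 2 of 3 networks match: probably a real person behind the alias.
```

The point of scoring rather than requiring one specific network is exactly the one made above: an alias with an underlying real identity lights up somewhere, without forcing users through a single gatekeeper.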
PS: Interesting aside: I really loathe Facebook due to a personal incident in my life that occurred on there. Rapportive were kind enough to understand this and then to write an exception into their codebase such that it never prompts me to connect with Facebook. Talk about customer service.
One of our top managers' name is Herman Herman. 10 years ago or so, he was working next to a guy, whose name was Martin Martin. Of course, Martin Herman was lurking nearby as well.
No doubt, real names would be an effective aspect to quality content, but if the system of verification is flawed, it's meaningless.
But that's an awfully big rathole to crawl into.
See also: http://www.kalzumeus.com/2010/06/17/falsehoods-programmers-b...
Lessons for me: things might be very different on the other side of the globe. Don't arrogantly believe you can judge whether things can be true or not from your limited experience and imagination.
All you have to do is make a new Facebook account with a real-sounding name, and no one would be the wiser.
So what's the point of wasting valuable resources enforcing something this stupid?
Would Quora have allowed _why?
If Quora wants people to use their real names, they should require them to link their facebook account -- and these days, even that is a poor guarantee of real names. Let's face it, people are striking back at the "real names" thing by altering their names on facebook, because you never know what they're going to make public by default next.
Her name is currently set as Sushii to compensate.
It's a pretty sad troll that can't figure out they need to make "real-sounding" names. Basically you're filtering out the twelve-year-olds who will try to write 'fuck' and 'shitcock' everywhere. That can be accomplished more effectively with some simple analysis of post content.
Instead, you're really only making life hard on legitimate users that don't match your dramatically underinformed notion of a "proper" name.
And this article takes other political potshots for no good reason... "birtherist"? I think birtherism is silly too, but what's that potshot doing in the middle of this article?
It gives me a chance to experiment and explore different facets of my life and personality that I wouldn't feel comfortable doing if that exploration were tied to my real name.
Maybe Quora isn't the place for experimenting with identity.
The Internet used to be a place where we were able to freely express ourselves. Increasingly it seems that we can only express ourselves if it's congruent with our IRL selves.
Do people really think it's that bad? I really like Quora for the most part and think it's way more useful than Yahoo! Answers.
You can find him on LinkedIn (along with other similarly named folks): http://www.linkedin.com/in/danieldaniel
I understand their tired reasoning, but don't agree with arbitrarily forcing your users to do anything.
I've messaged them and am awaiting a reply.
Then they can go to advertisers or just blatantly sell the data if it doesn't work out.
Regardless of whether their intentions are good or bad, this data in the wrong hands can cause real nuisance.
These are their accounts to lose, and I don't see why this is even news, or a blog post. So what if you can't come up with a name that sounds real?
For a Q&A site, these guys certainly get a lot of HN attention.
Obviously you can't predict all cultural phenomena (for instance, if you named your child "Ken Ryu" before Street Fighter hit the shelves), and you shouldn't do it just to please ignorant folks or guess where your child will be living in the future ("Kumar??? What is that, like 5 O's and 2 U's?"), but don't intentionally make life difficult for your children just to make yourself laugh or to fulfill some nerd agenda.
If he needs to use it, he can go by a middle name. I go by my middle name, because I share my first name with my father. No TechCrunch drama required.
Yes. I think I could build my own Groupon... in fact hundreds of people have and many are quite profitable.
Could I build it to scale? Could I build a sustainable business that will be there for the long haul or be significant enough to be acquired? Could I find an untapped niche that the group buying biz model could exploit?
The writer doesn't even list the many accomplishments and barriers to entry that Groupon has created (besides capital and what access to capital buys you, i.e., PR, buying competitors, buying Super Bowl ads).
He should have said... Could you compete against Groupon's salesforce? Could you compete with their ever decreasing cost of customer acquisition? Could you compete with their high gross margins while you are forced to cut your margins to compete?
Basically, it is a probabilistic bit-array index for quickly telling whether a given element is in a set. It is used in BigTable, HBase, etc.
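To make that concrete, here is a minimal Bloom filter sketch in Python -- an illustration only, not the BigTable/HBase implementation. It simulates k independent hash functions by salting SHA-256, which is a common trick but an assumption on my part:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: m bits, k salted hash functions.

    A lookup can return a false positive, but never a false negative.
    """
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _indexes(self, item):
        # Derive k bit positions from k salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def might_contain(self, item):
        # False means definitely absent; True means probably present.
        return all(self.bits[idx] for idx in self._indexes(item))
```

The key property: `might_contain` never lies about absence, only (rarely) about presence, and the whole set costs m bits no matter how large the elements are.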
The Jaccard similarity (or Jaccard index) mentioned in the article is used for finding similar items in a big set. E.g. Google News uses it for aggregating articles on the same subject.
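The Jaccard index itself is a one-liner over sets: the size of the intersection divided by the size of the union. A small Python sketch (the example sets are made up):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A ∩ B| / |A ∪ B|, in [0, 1]."""
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)

# Two articles sharing half their word shingles score 0.5:
doc1 = {"obama", "signs", "health", "bill"}
doc2 = {"obama", "signs", "energy", "law"}
similarity = jaccard(doc1, doc2)
```

For news aggregation, the sets would typically be word shingles of each article, and articles above some similarity threshold get grouped.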
It looks as if a major component of the class is the mathematical distillation of complex problems, trading exact matching for acceptable probabilistic bounds. When you can do these math tricks, you can cut an O(n^2)-or-worse problem down to O(n log n) or better, which is the real win.
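One concrete instance of that trade-off is MinHash: instead of comparing every pair of full sets, you compress each set into a short fixed-size signature, and the fraction of matching signature slots estimates the Jaccard similarity. A hedged sketch, again salting SHA-256 to simulate independent hash functions (the salt scheme and signature length are my assumptions):

```python
import hashlib

def minhash_signature(items, num_hashes=64):
    """For each of num_hashes salted hash functions, keep the minimum
    hash value observed over the set's items."""
    sig = []
    for i in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha256(f"{i}:{x}".encode()).digest()[:8], "big")
            for x in items
        ))
    return sig

def estimate_jaccard(sig_a, sig_b):
    # P(min-hash values agree) equals the Jaccard similarity,
    # so the match fraction is an unbiased estimate of it.
    matches = sum(1 for a, b in zip(sig_a, sig_b) if a == b)
    return matches / len(sig_a)
```

Comparing two 64-slot signatures is constant-time regardless of how big the underlying sets are -- that is where the asymptotic savings come from.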
I don't know what you have planned for the next article in the series, but I would recommend adding code snippets in a language of your choice. I would have liked to see your example implemented in python or ruby to make it more concrete.
Which depends on a Ruby implementation of MurmurHash2:
Anyone have any idea what the 23 is for?
# 23 can be any unsigned 32-bit integer (i.e. from 0 to 2**32 - 1)
hash_number = MurmurHash.murmur_hash("somestring", 23)
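The 23 is a seed: it selects which member of a hash family you get, so the same string hashes differently under different seeds. That is exactly what you need when an algorithm (like a Bloom filter or MinHash) wants several independent-looking hash functions. To show the effect without assuming a MurmurHash binding is installed, here is a toy seeded hash in Python -- an FNV-1a variant, not MurmurHash2:

```python
def fnv1a_32(data: bytes, seed: int = 0x811C9DC5) -> int:
    """Toy seeded 32-bit hash (FNV-1a variant).

    The seed replaces FNV-1a's standard offset basis, so each seed
    yields a different hash function over the same input -- the same
    role the 23 plays in murmur_hash("somestring", 23).
    """
    h = seed & 0xFFFFFFFF
    for byte in data:
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF  # multiply by the FNV prime
    return h

# Same input, different seeds -> different hash values:
a = fnv1a_32(b"somestring", 23)
b = fnv1a_32(b"somestring", 24)
```

Since XOR and multiplication by an odd constant are both invertible mod 2**32, distinct seeds are guaranteed to produce distinct hashes for the same input here.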
Neural network data mining can also sometimes survive missing data or extrapolate from unknown instances.
It is clear that the author does not understand what a hashing function is. He says, "Your programming language of choice's API will almost certainly have a few acceptable ones within arm's reach", but that's not true. Firstly, what is a programming language's "API"? Secondly, most languages that I'm aware of have no built-in hash functions; they usually come from libraries (I know, I'm getting pedantic here). And thirdly, most hash functions that you'd find in a library don't return a floating-point value that you can operate on with a "min" function (or even an int value). They typically return a digest of 160, 128, or 256 bits (depending on the function). So you'll need to map the hash returned by your hash function to a value that you can compare; for example, by grabbing the last 4 bytes and treating them as an unsigned integer.