They struck a deal to erase the $270M owed, as long as Amazon starts collecting sales tax on 7/1, creates 2,000 jobs, and invests $200M in Texas. Amazon was likely already planning more infrastructure in Texas, but threatened to reverse course and pull out of the state entirely.
The way I see it, Amazon bluffed the state of Texas to the tune of 270 million dollars and all I might get out of it is next day delivery?
I'll take what I can get, I guess.
This is such an overblown argument. Sure, Amazon is ~7% (where I live) cheaper than traditional stores due to sales tax. But that $1000 Wal-Mart laptop has a $900 sticker price on Amazon, and most non-electronic items are 20-40% cheaper than in stores.
If laws change and I have to start paying sales tax on Amazon, it won't change a thing about my buying habits.
Edit: They also have an inventory many times larger than any brick and mortar store. Whenever I go shopping, I have to choose between the least crappy option Wal-Mart decides to stock. On Amazon, I get exactly the one I want.
The future: high inflation in food, energy, fuel, and consumables, hyper-deflation in everything else except to the extent that it depends on or consumes the former.
A while ago I sold a few bumper stickers online and ended up using cafepress. I could have made a better profit by printing the stickers in bulk and mailing them to people, but I'd have to spend $100+ on paperwork just for the privilege of paying sales tax just in case I sell any to New Yorkers.
If small internet businesses had to pay taxes to the 40+ states that have sales tax, plus to all the other jurisdictions (cities, counties, who knows what) in the U.S., it would be almost impossible to sell stuff and comply with the law.
For AMZN the overhead is nothing.
Item 1 spent 13 hours on a UPS truck driving around my city and was delivered about 7 p.m., two days after ordering.
Item 2 was delivered by Amazon Fresh at 9 a.m., 14 hours after purchase.
For the same price of shipping, which service would you rather have?
EDIT To Add: The delivery guy for the 14 hour item works for Amazon - the whole experience was produced by Amazon without needing a third party. UPS is another company that will be in trouble if Amazon can make this scale.
[Edit: To clarify, the showroom and warehouse would be in the same complex but separated. Shoppers + heavy merchandise + fast moving robots is a recipe for disaster.]
Then they can satisfy both the "I want to see it before I buy it" crowd as well as the "I know what I want - just give me the best price" crowd.
Mark my words. Amazon has the Costcos, Walmarts, and Best Buys of the world squarely in their crosshairs.
*Of course there will always be specialty categories that are too niche to fit this model, thus many specialized retailers will still exist.
Or is this just a case of the more efficient company (Amazon) beating out less efficient companies (Best Buy, Barnes & Noble, etc.)?
Therein lies the problem. At some point there will have to be revenues to justify the company's valuation. I suppose their goal is to initially obliterate all competition in entirety and then have everyone purchase from Amazon. I'm doubtful this will work. Of late, there has been a trend towards experience stores - with manufacturers creating their own stores instead of distributing to retailers. Many luxury brands do this and even some non-luxe ones, such as Samsonite, have been getting into the game. There's some value added here, and it's something Amazon won't be able to directly compete with.
That will even include your postal mail, if the postal service ever wakes up. There's no need to physically deliver mail every day if people could see what mail they've received remotely and pick it up, or request delivery if they can't leave home.
I've got news for you- they didn't make any money on your order. Amazon is profitable but has razor-thin margins on the whole. They take losses on several markets, including Prime, gambling that the entire seamless, low-cost user experience will cause enough people to spend enough money to push them over the edge of profitability.
I for one have serious doubts about whether what the author describes (based to a large extent on pure speculation) would be a profitable business model. It's one thing to keep huge amounts of inventory in centralized locations and ship items out across the country over the course of days. It's quite another to stock all that inventory redundantly throughout every state. So if I want to buy a specific pair of shoes that I can't find at Foot Locker, is Amazon going to stock 50 pairs on the off-chance that somebody down the street is going to buy them? Probably not.
In fact, the USPS runs a similar business model. One would then logically think (if this article is any indicator) that UPS and FedEx would go belly-up. I mean after all, if I can distribute mail to somebody's house for $0.50 in a day, how could UPS and FedEx compete with that? The reality is just the opposite. UPS and FedEx enjoy such enormous margins when compared to the USPS, that they want no part of the markets in which the USPS operates and it's too late for the USPS to compete with UPS and FedEx on parcels and important packages. I don't agree with the author's prediction at all.
Until then, I like to see which pants fit best, which knife is most comfortable in my hand, which tablet is most responsive. Yes, if I already know what I want, I'll go to Amazon. But if I'm not entirely sure and I want to compare things, I'll go to a local store.
Was this some kind of test where Amazon ran delivery from a local warehouse?
Amazon is to shopping what McDonalds is to food. We all know what happens when you have McDonalds every day.
I love the audacity of "If you can't beat them, find some other way to beat them."
This one goes in the scaremonger bag.
If I go to the website of a brick and mortar store, chances are I want to know what they carry in-store, because I need something specific and am planning to visit a store.
Ideally I'd be able to see what they have in stock at the local stores, so I can avoid a needless trip.
Instead, many retailers have larded up their online stores with online-only products. Some don't even give you an easy way to exclude those and see only in-store items.
This is stupid, and just makes me more likely to not bother visiting the store at all, opting to just order from Amazon.
So from the outside, one cornerstone of a strategy like this would be inventory management at least at the regional, if not the warehouse, level. It's doable, no question, but difficult. An advantage Amazon has here is a huge history of point-of-sale data for most of the important regions it operates in, whether or not there actually is an Amazon warehouse there. And if they don't have (enough) point-of-sale data, well, then it's too early to offer same-day service and all that.
All amazon has to do now is to keep operations up to the task... :-)
When you're the only game in town, well.
So I don't buy the dire predictions.
Then, can we have evacuated tubes that connect our homes to all of the retail outlets, or homes to homes maybe also, and then they could have like 5 minute delivery, or however long it takes the picking robot to get it and then for it to travel at hypersonic speed through the tube?
Maybe we should all live within a few hundred yards of the warehouses. Then upgrade the picking robots and streamline the warehouses and tubes so they can deliver items in less than a minute.
Then we can all have robots that pull the items out of the tube, open the packaging and hand it to us on our couches/beds.
I would totally buy a giant plastic jug of cheeseballs right now if it could arrive in less than a minute and be delivered to my hands.
That's not something you could expect in every country Amazon operates in (France's Chronopost delivery is really hostile, for example, and there are surely plenty of others). I'd put up with it for things I expect in weeks anyway, but it would be horrible for goods that are supposed to arrive the same day.
I buy stuff regularly from Amazon, and even with shipping included their prices beat what my country's local stores are offering. E.g., a book from Amazon costs ~30% less than in the shops here in Singapore.
For instance, bars will include the taxes in the drink price, because if they showed you how much tax is in a glass of beer you'd be very surprised; it's something like 40%.
It's a mental game though. Do you get people "through the door" with an item at $19.99 and hope they don't close the tab when they see the $1.00 tax, or do you just advertise it at $20.99 and hope that people are attracted to you because of name, etc., even though joesonlineshop.com has it for $19.99?
The way that happens is the same way supermarkets work. In very competitive areas, cut rate prices, coupons and promos like doubling, etc. are offered very liberally. To make up for that profit loss, they pump up the prices and cut out promotions in areas where there is little to no competition.
In areas with a walmart and target and now amazon local delivery, prices are going to be crazy good.
Everyone else will suffer as they make up for the lost profit.
Do they really think that they'll get more affiliate conversions by being the honest guys? Seriously, what is the point of this?
What I gained from the article wasn't any explanation of why Vim is great; indeed, there was little included in this respect, aside from moving "entire blocks of code with the flick of a finger." Even the cool feeling of proficiency you get from knowing a tool well, and the fun of getting to show off a skill to a "how did he do that!?" audience was secondary.
My favorite part was the implicit reminder to relax about my tools. The most effective use of hyperbole is not the emphasis of a particular point, but a caution not to take ourselves too seriously, and place our self-satisfaction in proper perspective. The most powerful instances border on satire, and in that respect, this piece was perfect.
Addendum: I use vim and pentadactyl. They are extremely awesome, fantastic, and exhilarating.
I just don't get it. Ever since newer editors got block editing or multiple insertion cursors, and RegEx find & replace across multiple files, and searching filenames to open... I feel like I've already got everything I need!
What am I missing out on? I don't feel like my text editor holds back my productivity. Using something like Sublime, I never think, man, if only Sublime did x, it would save me five minutes, twenty times a week!
I've never had anyone explain to me what specific kind of code editing is so much more productive in vim than in any other editor. Can someone give me a real-world, commonly occurring example?
Or is it not about productivity? Is it an interface thing? People like the way it feels to use? The article explains the "feeling" I always hear about, how vim is so much better, but for the millionth time, fails to tell me why, in a way a non-vim-user can understand.
* This is how you log in.
* This is ls.
* This is cd.
* This is man.
* This is apropos.
* Learn vi.
* Have fun.
What came from that 5 minute intro to Unix has been applied orders of magnitude more often than anything from 5 years of an EE degree.
Editing text is a solved problem: vim or emacs. Pick one and get back to work.
Yes, it's a fantastic editor. Yes, it's probably going to make you more productive. Yes, it's every bit as capable as any other IDE or editor out there.
No, it will not make you a better person. No, mastery of vim does not make you fart sunshine. No, it won't make you an enlightened Buddha.
Of course, if you like it by all means continue using it. But if you think it has anything to do with your talent as an engineer, you are wrong. And if you see other engineers not using it and think it implies anything about their talent, you are wrong.
Side note: if anyone wrote in such a breathless tone about Apple, they would be skewered as a fanboy and a drinker of the Kool-Aid.
It inspires the same kind of nostalgia.
Since then, I've only used vim for editing config files on a remote unix system.
Either way the funny thing is when someone asks me "how do you do THAT in vi?" I often find myself typing in the air to replay the muscle memory that executes those instructions.
For better or for worse I've become muscle programmed to execute vi commands in flashes of thought that really never reach my normal cognitive functions. I love to watch people stunned as they see me do something I've taken for granted for years or figured everyone else knew how to do.
All that said, all tools are not for all situations and all people. I do at times find myself jumping into eclipse (Java I'm looking at you!!!) because of the many additional features available in a good IDE.
I know you can run the vim extensions in eclipse but I've never bothered. I figure if I'm going to mouse my way through an IDE I'm not sure what having vi commands will do to make my life easier. :-)
So I designed my own system. One key press drops the piece instantly into the correct spot, with the correct rotation. It's making my brain hurt practicing because it requires so much thinking, but once I get it into muscle memory, things are going to get really interesting. My 40 line best using the arrow keys is 45 seconds and with my new scheme it is 2:21. But I'm dropping time quickly and I'm sure very soon I'll beat my record.
A long time ago I narrowly escaped being like that exactly, but with TECO :-)
I've learned not to fret about people's choice of editor. For instance, if someone was happy in Notepad, as long as it didn't affect me, I learned not to care. (I might grit my teeth in frustration, watching them flail for minutes at something I could do in seconds with my choice of editor, but that's my problem.)
I understand that this post was tongue in cheek. I also understand that not all text editors are equal; if you see someone writing their first hello-world program in Notepad, yes, you should tell them about much better alternatives. But this post confuses the ability to program with the ability to use Vim.
Being really good at using Vim will definitely let you say you're part of the club of people who are really good at using Vim. It just might make you more efficient at manipulating code. But let's not perpetuate the elitism that is pervasive enough in hacking, it's a text editor. It doesn't make you more intelligent, better at problem solving, or writing more efficient code.
All text editors have crazy shortcuts that do magic, but only vim has the ironically named NORMAL mode with its verbs, movements and registers...
I treated it like a new game to learn. Sometimes I will write a macro because it is more fun, even though it might not save me raw time.
I even tweeted it. With TwitVim, naturally. http://vim.sourceforge.net/scripts/script.php?script_id=2204
But to somehow claim that one editor or another is going to make you a better programmer and make your 10 pages of crap code become 4 elegant lines is pretty nonsensical.
No matter what editor you use; if you're not good at problem solving, you're not going to make a good programmer.
And its related site for questions, Ask MetaFilter: http://ask.metafilter.com/. From browsing it regularly I know [good, well-supported] answers to a thousand interesting and useful questions I'd never have even thought to ask.
It really is something to read the papers that define the way we think, and it's a nice alternative to short blog posts, (pop) articles, and (pop) books.
Here are some papers to get you started:
* 'I've Got Nothing to Hide' and Other Misunderstandings of Privacy: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=0998565
* Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization: http://papers.ssrn.com/sol3/Papers.cfm?abstract_id=1450006
* A Future-Adaptable Password Scheme: http://static.usenix.org/events/usenix99/provos.html
Maybe this is cheating, maybe it isn't, but I definitely recommend it.
If you want to get political, some left-of-centre-leaning writers you can't go wrong with:
1. Matt Taibbi on Wall Street, Rolling Stone (http://www.rollingstone.com/politics/blogs/taibblog)
2. Frank Rich in American politics, now at New York Magazine (https://twitter.com/frankrichny)
3. Glenn Greenwald on civil liberties and foreign policy, in moderation (http://www.salon.com/writer/glenn_greenwald/)
4. Juan Cole on the Middle East, in moderation (http://www.juancole.com/)
5. Lawyers, Guns, and Money (http://www.lawyersgunsmoneyblog.com/)
6. The New Yorker's long-form articles. (I find their blog posts to be really poor, by any standards.) Also, ditch the Malcolm Gladwell articles.
A lot of the main sequences of material (http://wiki.lesswrong.com/wiki/Sequences) there were written by a Hacker News member, Eliezer Yudkowsky. But it's also a great living community (like here), with new stuff constantly being added.
Is it just me or is that list a bit too shallow?
It's definitely worth checking out, it's one of my favorite web sites.
Edward Tufte Forum http://www.edwardtufte.com/bboard/q-and-a?topic_id=1
http://c2.com/cgi/wiki if you're into seeing how the Elders thought about computing
Question: Engineering Management: Why are software development task estimations regularly off by a factor of 2-3?
Exploring, reading, writing, upvoting, and commenting on great questions and answers from all sorts of subjects has helped me learn a lot. The people there are great too, with many lending expert knowledge you wouldn't see elsewhere.
> The central theme of You Are Not So Smart is that you are unaware of how unaware you are. There is a branch of psychology and an old-but-growing body of research with findings that suggest you have little idea why you act or think the way you do. Despite this, you continue to create narratives to explain your own feelings, thoughts, and behaviors, and these narratives, no matter how inaccurate, become the story of your life.
I'm pretty sure everyone on HN knows what TED is, but just in case someone doesn't, here are some of my favourite talks - they show the breadth of the topics covered:
- http://www.ted.com/talks/derek_sivers_weird_or_just_differen... - 2 minutes about how things we think work one way may be completely different somewhere else
- http://www.ted.com/talks/hans_rosling_and_the_magic_washing_... - how technology really improves lives
- http://www.ted.com/talks/lang/en/eythor_bender_demos_human_e... - exoskeletons, with a woman in a wheelchair standing and walking live on stage
- http://www.ted.com/talks/lang/en/sam_richards_a_radical_expe... - a talk about empathy
And of course, the obligatory one,
- http://www.ted.com/talks/lang/en/ken_robinson_says_schools_k... - how school kills creativity
I want to call it "intellegacy", for "intellectual legacy"
I really enjoy reading insightful essays and comments on the internet, and thought it'd be cool to have a website with all kinds of intellectuals with their own pages on it where I could read their essays, see a list of their books, and have conversations with them.
I envision scientists, artists, politicians all interacting on the site. For example, Neil de Grasse Tyson could post his blogs or essays there, and fans of his could get a summary of all his work, books, and what he's currently working on or reading.
Besides the goal of intellectuals having their own space to publish their insights, I also want the public to be able to read and learn on a clearly organized website, by taking their time. Maybe this isn't good for pageviews, but on other websites the content refreshes so quickly that a lot of insights are lost in the shuffle.
The grand vision is to be a huge library of insights, clearly organized and that can be read by anyone who wishes to learn and follow the thought leaders in our world.
We could "best-of" the best debates and discussions and future generations could read everything.
Just to give you an example, here is a book recommendation post
Warning: There is some weird flash thing in the very beginning but just click to continue.
Vitamin Cr : http://vitamincr.com
Brain pickings: http://www.brainpickings.org/
Synaptic Stimuli: http://synapticstimuli.com/
Timoni's blog: http://blog.timoni.org/
For most HN readers, I'd suggest visiting a site like http://www.vdare.com
I've found I also get good mental stimulation by picking free online courses at random. For example, Yale University has a brilliant set of lectures on Milton, about whom I previously knew very little.
My "intellectual stimulation" consists of coming up with or doing things. Sure, self-reflection helps, but more often than not, they're inspired by something I see in a "specialized" community.
Slightly off-topic, but asking "why" helps with self-reflection and creativity.
The five points listed there are well known and well documented. Those are not crazy unfounded theories.
Yet most people I meet do not believe any of them. Why is that? Cognitive dissonance?
I also love the whole StackExchange network, especially skeptics (http://skeptics.stackexchange.com). A great way to learn new stuff from common myth.
Science subreddits with a cognitive barrier to entry, like /r/neuro, are a great source of news specific to their scientific communities. Geographic subreddits such as /r/[yourmetro] are also a great way to keep in touch with the general vibe and events of your city.
I like object-relational mapping as a theory (i.e., I have an object of type Author which has 1 or more Books I can loop over), but I hate ActiveRecord implementations. Eventually, they just end up implementing almost all of SQL, but in some arcane bullshit syntax or sequence of method calls that you have to spend a bunch of time learning.
I also seriously doubt that anyone has ever written a production system of any reasonable complexity and been able to use the exact same ORM code with absolutely any backend (if you have an example, please correct me on this). This barely even works with something like PDO in PHP, which is a bare-bones abstraction across multiple SQL backends.
When it comes down to it, the benefits of ActiveRecord are all but dead by about the third day of development. The data mapper pattern adopted by SQLAlchemy (et al.) takes all of the shitness of ActiveRecord and adds mind-bending complexity to it.
SQL is easy to learn and very expressive. Why try and abstract it?
I spent years working with an ActiveRecord ORM I wrote myself in my feckless youth and thought that it was the answer to the world's problems. I didn't really understand why it was so terrible until I did a large project in Django and had to use someone else's ORM.
When I really analysed it, there were only three things that I really wanted out of an ORM:
1) Make the task of writing complex join statements a bit less tedious
2) Make the task of writing a sub-set of very basic where clauses slightly less tedious
3) Obviate the need for me to detect primary key changes when iterating over a joined result set to detect changes in an object (for example, looping over a list of Authors and their Books)
To that end, I wrote this:
It's written in PHP because I like and use PHP, but it's a very simple pattern that I would like to see elaborated upon and taken to other languages. I think it provides just the bare minimum of functionality to give some real productivity gains without creating a steep learning curve, a performance trade-off, or any barrier to just writing out SQL statements if that's the fastest way to solve the problem at hand.
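Point 3 from the list above can be sketched in a few lines of Python: watch for the primary key changing as you iterate the joined rows. The column names (`author_id`, `author_name`, `book_title`) are hypothetical, and this is just an illustration of the pattern, not the library mentioned:

```python
def group_by_author(rows):
    """Collapse a joined author/book result set into nested objects.

    `rows` is an ordered list of dicts, as a DB driver might return for
    SELECT ... FROM authors LEFT JOIN books ... ORDER BY author_id.
    """
    authors = []
    current = None
    for row in rows:
        # A change in author_id marks the boundary of a new parent object.
        if current is None or current["id"] != row["author_id"]:
            current = {"id": row["author_id"],
                       "name": row["author_name"],
                       "books": []}
            authors.append(current)
        if row["book_title"] is not None:  # LEFT JOIN can yield NULL books
            current["books"].append(row["book_title"])
    return authors

rows = [
    {"author_id": 1, "author_name": "Le Guin", "book_title": "The Dispossessed"},
    {"author_id": 1, "author_name": "Le Guin", "book_title": "The Lathe of Heaven"},
    {"author_id": 2, "author_name": "Borges", "book_title": "Ficciones"},
]
print(group_by_author(rows))
```

The point is that the grouping logic is small enough to own outright, without pulling in a full ORM to get it.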
The whole post was excellent, but all the useful points will now be overshadowed by the armchair quarterbacking about security by people who mostly don't understand that ALL security is a compromise, and it is as important to understand and make deliberate decisions about your security as it is to try to make a secure system in the first place.
I think a lot of services (even banks!) have serious security problems and seem to be able to weather a small PR storm. So figure it out if it really is important to you (are you worth hacking? do you actually care if you're hacked? is it worth the engineering or product cost?) before you go and lock down everything.
Just because you can "afford" to be hacked, doesn't mean you shouldn't take all the steps necessary to proactively protect your data. In the end, security is not about you, it is about your users. This is exactly the type of attitude that leads to all the massive breaches we have been seeing recently. Sure your company is "hurt" with bad PR, but really your users are the ones who are the real victims. You should consider their risk (especially with something as sensitive as people's files!) before you consider your own company's well being.
Why not take the extra half a second to make those random strings meaningful and hidden behind a DEBUG log level?
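The suggestion can be sketched with Python's standard `logging` module; the snapshot name and checksum below are made-up placeholders:

```python
import logging

# Configure once at startup: DEBUG detail stays hidden unless you opt in.
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("backup")
log.setLevel(logging.INFO)

# Instead of emitting an opaque random token, log a meaningful message
# at DEBUG level; it only appears when verbosity is turned up.
log.debug("restore check: snapshot %s verified against checksum %s",
          "2024-01-01", "abc123")
log.info("backup completed")
```

Run with the logger set to `DEBUG` and the detail appears; at `INFO` the noise disappears.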
One point it misses, though, is to test your backup strategy often. When you scale fast, things break very often, and it's good to be in the practice of restoring from backups every now and then.
I've never seen a shorter description of real-world software development. That's it in a nutshell!
"pick lightweight things that are known to work and see a lot of use outside your company, or else be prepared to become the 'primary contributor' to the project."
* on my machine xargs -I implies -L1, so you can drop that
* use gnuplot -p or the graphic will disappear immediately after rendering
Does anyone know what memory corruption bugs they are referring to?
A username and password represent a pair. Neither one has meaning in terms of authentication without the other.
Take the example where I have forgotten my username (JohnGB), but try with what I think it is (Say JohnB), and enter the correct password for my actual username. The system would then tell me that my username is fine, but that my password isn't. From then on, I would be trying to reset the password for a different user as the system has already told me that my username was correct.
Please, for the sake of sane UX, don't do this!
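A minimal sketch of the safer behavior: validate the username and password as a unit and return one generic failure, never revealing which half was wrong. The user store here is a hypothetical stand-in (a real system would store salted password hashes, not plaintext):

```python
# Hypothetical in-memory user store; real systems store salted hashes.
USERS = {"JohnGB": "correct horse battery staple"}

def login(username: str, password: str) -> str:
    # Check the pair as a unit: never reveal which half was wrong.
    if USERS.get(username) == password:
        return "ok"
    return "invalid username or password"

# Right password, wrong username: same generic failure, so the attacker
# (or the confused user) learns nothing about which part was incorrect.
print(login("JohnB", "correct horse battery staple"))
```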
MySQL has a huge network of support and we were pretty sure if we had a problem, Google, Yahoo, or Facebook would have to deal with it and patch it before we did. :)
I'd say that's terrifying.
Another thought: doesn't this make it possible to frame someone by writing random data to their hard drive?
Edit: Link to ent http://www.fourmilab.ch/random/
You could prove the file is encrypted if it is indeed encrypted and you have the passphrase and the program to decrypt it, but outside of that, it's simply not possible to say with any level of confidence that the bits are really encrypted.
BTW, I wrote TCHunt in 2007, a program that attempts to seek out encrypted TrueCrypt volumes and I have a FAQ that covers much of this. Here's the link for anyone interested in reading more about it: http://16s.us/TCHunt/
And, there is usually much more to it than randomish bits in a file on a disk. The government agents usually have other evidence that suggests the person in question is doing illegal things and may have cause to use encryption. Finding actual encrypted data is normally just icing on the cake to them.
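The "randomish bits" idea above can be illustrated with a Shannon-entropy check, roughly the kind of figure tools like ent report (a sketch, not TCHunt's actual method):

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; ~8.0 means the data looks random."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

random_blob = os.urandom(1 << 16)        # stand-in for an encrypted volume
text_blob = b"plain ascii text " * 4096  # highly non-random plaintext

print(round(shannon_entropy(random_blob), 2))  # close to 8.0
print(round(shannon_entropy(text_blob), 2))    # far below 8.0
```

Both well-encrypted data and genuine noise score near 8.0, which is exactly why entropy alone can't prove the bits are encrypted.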
"key", in relation to any electronic data, means any key, code, password, algorithm or other data the use of which (with or without other keys)—
(a) allows access to the electronic data, or
(b) facilitates the putting of the data into an intelligible form;
-- and --
"protected information" means any electronic data which, without the key to the data—
(a) cannot, or cannot readily, be accessed, or
(b) cannot, or cannot readily, be put into an intelligible form;
At first, I thought the argument in this article was nonsense. However, whilst I'd hope common sense would prevail, the definitions above seem broad enough that a policeman could make one's life difficult for a while.
I know the answer to this is 'easier said than done'. Certainly hardware and OS vendors can't be trusted with this task. Maybe FOSS installers could educate users and optionally create the file? How can we make this happen? I want to wear a t-shirt that says 'random numbers save lives.'
"For the purposes of this section a person shall be taken to have shown that he was not in possession of a key to protected information at a particular time if—
(a) sufficient evidence of that fact is adduced to raise an issue with respect to it; and
(b) the contrary is not proved beyond a reasonable doubt."
In other words, if there's evidence for there to be 'an issue' about whether you actually do have a key (or whether e.g. it's just random noise), it's up to the prosecution to prove beyond reasonable doubt that it is actually data, and you do have the key.
So the flowchart is:
- If the police can prove they have reasonable grounds to believe that something is encrypted data that you have the key to, then
- That raises an evidential presumption that you do have it, which you can rebut by
- adducing evidence that just has to raise an issue about whether you have a key (inc. whether it's encrypted data at all), in which case the police have to
- Prove beyond reasonable doubt that it is encrypted, and you do have the key.
Meanwhile, a criminal could easily just store everything on an encrypted microSD card, then eat it if anything goes wrong - the oldest trick in the book still works in the digital age :-D...
Volume one contains hardcore porn, volume two contains bank job plans. Neither can be proved to exist with their keys.
When asked, hand over the porn keys. Plausible deniability.
Of course, if you have access to the files, you could just XOR the noise with some innocuous documents, and send the result to the police saying it's a one-time-pad.
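The XOR trick is easy to demonstrate: given any "noise", you can fabricate a pad that "decrypts" it to any innocuous plaintext of the same length. A sketch (the message is made up):

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

noise = os.urandom(32)                        # the suspicious "encrypted" file
innocent = b"Dear Mum, the weather is nice."  # claimed plaintext, padded below
pad = xor_bytes(noise, innocent.ljust(32))    # fabricated "one-time pad"

# Handing over `pad` makes the noise "decrypt" to the innocent message.
assert xor_bytes(noise, pad) == innocent.ljust(32)
```

This is the flip side of one-time-pad security: since any ciphertext decrypts to any same-length plaintext under some pad, the "decryption" proves nothing.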
- The passwords on your bitcoin wallet give you the authority to spend your money.
- Your encrypted signature requires your private key so others know your message came from you.
So, this law gives the government the ability to impersonate you and consume/use your assets in an unrecoverable way.
While the government might not have the authority to impersonate you or spend your money, they do have the authority to acquire the means to do so. And then all it takes is one dishonest person working for the government to use that information maliciously.
What would happen if there is encrypted data on your system but you didn't set the key yourself? For example DRM systems usually work by encrypting data and trying their best to make sure you never acquire the key.
On the whole, though, the article is scary and slightly unsettling. On the upside, I don't live in the UK. But if we were travelling through the UK with our encrypted hard drives, would we be targeted by this law?
Prevention is better than ranting after it's set in stone.
Eventually the preposterous laws drive those with mobility to simply leave. Follow that to its logical conclusion: the UK will make it difficult to impossible to leave with your assets intact. Loss of privacy is just a precursor to loss of private property altogether.
Now the question is: compression can be viewed as encryption. How does that pan out if you use a non-standard form of compression that doesn't require a key, since the compression formula is the key in itself?
The law as I understand it says that if you've got data (and the law is focused primarily on targeting terrorism, child porn, etc.) that you've encrypted but refuse to hand over the encryption keys to, and the police then convince a judge that there is valuable evidence in the encrypted data, and you still refuse, then you could ultimately go to prison.
Is this really any different to a digital search warrant?
Sure, this law, like many others, could be abused. But I don't see it as anything to get too wound up about.
P.S. What kind of person has a 32GB file of satellite noise to generate random numbers with?!
Turns out he was a bookkeeper and had just purchased an Apple IIe and wanted to use it for his clients. I knew nothing about accounting, he knew nothing about computers, so it seemed like a good match :)
Four weeks of spending free afternoons at his shop, and it was ready to go. He was happy and I had 200 bucks in my pocket. Life was good.
Almost 20 years later, I get a call from him. He says the program isn't working so well and he wants to upgrade. I'm like WTF? Does anybody in the universe still even have a working Apple II anymore? Why would he keep using something like that for 20 years?
He told me that as computers modernized, it became a bit of a status symbol to have an older-looking system spewing out reams of reports. His customers, who were mostly small construction companies and such, got the feeling of stability and security from something that was unchanged.
It is a very strange feeling to get a call about code you wrote a long, long time ago. If I'd had any sense, I would have realized from the experience that programming is normally an extremely tiny part of actually making a business work. But it took me many more years to figure that one out.
"What the hell! It's been down for a day already! When is it coming back?"
I was pretty bemused, so I checked the server logs and found that it had been hurtling along at what looked like 60,000 requests per day (I later found out it was closer to 200,000).
Needless to say I jumped in, fixed it, and got talking to the users who relied on the project for their businesses or personal projects - it's been a blast.
edit - it's since been completely rewritten, so I can't say the code went unchanged.
Thank you, alarm clock! Thank you, spoon! Thank you, hammer!
In any case, thank you for the post, and thank you even more for keeping this kind of service running for so long :)
I didn't add Google Analytics until 2008 but here are the stats since then:
Visits: 304,619
Unique Visitors: 246,368
Pageviews: 361,097
Pages / Visit: 1.19
Avg. Visit Duration: 00:00:41
Bounce Rate: 84.44%
% New Visits: 80.87%
And if it's written in Perl looking at the source code again might break it so I wouldn't do that.
I don't think a new feature has been added in about 5 years and it barely seems maintained, there have been a few day-longish episodes of downtime but other than that it works great.
It has all the features I need and am used to, and I don't have to worry about the product constantly changing. In a way it's almost better to use a neglected product.
It looks as if he's updated his forum software about as regularly as the site's main code because it's full of spam (http://forum.invoicejournal.com/). Might be worth fixing that as the consistent number of sign-ups and invoices suggests there is a community of users there to be fostered.
That gives me mixed feelings…
I'm sure there's a white paper or talk on the life developers give their software...
Such a great story, makes me want to go fulfill those umpteen other ideas I've got hanging around to see what sticks. (I keep all these ridiculous domain names as kind of a skunkworks project task list.)
I'd be very interested in taking this over and looking after it properly. Please shoot me an email and maybe we can talk about it.
It isn't clear though: is this a paid service?
Oddly, I think that's a great definition of "systems software."
But after a few outages that I had no clue about until days had passed, I did learn to sign up for Pingdom, so I can at least reboot apache within an hour of the site going down.
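A self-hosted version of that check-and-reboot loop can be sketched with the standard library. Everything here is a placeholder assumption: the URL, the restart command, and the cron cadence are examples, not the commenter's actual setup.

```python
import subprocess
import urllib.request
import urllib.error

def site_is_up(url: str, timeout: float = 10.0) -> bool:
    """True if the URL answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def check_and_restart(url: str, restart_cmd: list[str]) -> bool:
    """Restart the service if the site is down; return True if we did."""
    if site_is_up(url):
        return False
    subprocess.run(restart_cmd, check=False)
    return True

# e.g. run from cron every 5 minutes (hypothetical host and command):
# check_and_restart("http://example.com/", ["sudo", "service", "apache2", "restart"])
```

This obviously only catches "apache needs a kick"; an external monitor like Pingdom still wins when the whole box is down.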
If you haven't committed to a decision with it, give me an email (it's in my HN profile).
Is this likely to become more of an issue as people move to Web based applications for things they used Office and file folders for before?
I left the company 1.5 years ago and got a call from a supervisor asking me how to start it up again after a power outage and UPS failure.
The company was in a bind with some contractors who were developing a web app for them, and I had recently written my own personal HTML site. So, I bought "Teach Yourself ASP in 24 Hours", read it over the weekend, and put together a little prototype, to see if I could actually connect to the database and report the information they wanted. I ended up writing that website for them over the next two weeks. Once I deployed it (no IIS web admins at the company), I went back to my regular Clipper work.
I wouldn't be surprised if it hasn't been touched in years.
Part of me wants to be happy that it's lasted so long. But I'm also a bit disappointed that all the other 'cool' stuff I've worked on since then probably hasn't had one-tenth the use of that site.
I'd argue it's likely successful just for the mere fact that he _hasn't_ touched it in four years. Had he actually tried to monetize the project, he probably would have just gotten in the way and screwed something up. This gives evidence to the "simple is more" rule.
It's actually not a bad template. To the point.
Please apply this technique to resolve a historically important question about the assassination of President John F. Kennedy by identifying the license plate number of the car immediately behind the bus:
Better resolution photos should be available since the ones I've seen printed in books are better than these. And there are plenty of photos of Texas license plates from 1963 with which to seed your algorithm.
The reason this is interesting is that Roger Craig, a Deputy Sheriff in Dallas who witnessed the assassination, said he saw Lee Harvey Oswald run from the Texas School Book Depository and get into a Nash Rambler station wagon driven by another man.
These photos show a Nash Rambler station wagon that was indeed passing the Depository just after the assassination. If motor vehicle records still exist for 1963, then its owner can be identified.
The key point is that the existence of a person picking up Oswald after the assassination would strongly indicate the possibility that he was not acting alone.
I'll readily grant that this isn't likely to settle the issue, but I think it still would be amazing if the super-resolution approach was able to generate new information about a topic that no one expects to ever see new information about.
Their explanation for it:
"The lines of letters have been recovered quite well due to the existence of cross-scale patch recurrence in those image areas. However, the small digits on the left margin of the image could not be recovered, since their patch recurrence occurs only within the same (input) scale. Thus their resulting unified SR constraints reduce to the "classical" SR constraints (imposed on multiple patches within the input image). The resulting resolution of the digits is better than the bicubic interpolation, but suffers from the inherent limits of classical SR [3, 14]."
So it's guessing based on the larger characters. Neat.
Here's their last line:
Here's the actual:
(from what I can read.)
( http://1.bp.blogspot.com/-VgvutrSWaFk/T4VI-2tDH2I/AAAAAAAAAX... )
This could probably help ocr.
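The cross-scale patch recurrence idea quoted above can be sketched in a toy form: for each patch in the input, find a similar patch in a downscaled copy of the same image, then borrow that match's higher-resolution "parent" region as detail for the upscaled output. This is a rough illustration only, not the paper's algorithm (which uses sub-pixel alignment and proper SR constraints); the function names and the naive brute-force search are mine.

```python
import numpy as np

def box_downscale(img, factor=2):
    """Downscale a grayscale image by averaging factor x factor blocks."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale_by_patch_recurrence(img, patch=3, factor=2):
    """Toy single-image SR: each input patch is matched against patches of a
    coarser copy of the same image; the match's parent region in the input
    (factor x larger) supplies plausible high-frequency detail."""
    small = box_downscale(img, factor)
    out = np.kron(img, np.ones((factor, factor)))  # nearest-neighbour base
    counts = np.ones_like(out)
    sh, sw = small.shape
    candidates = [(y, x, small[y:y + patch, x:x + patch])
                  for y in range(sh - patch + 1) for x in range(sw - patch + 1)]
    H, W = img.shape
    for y in range(H - patch + 1):
        for x in range(W - patch + 1):
            p = img[y:y + patch, x:x + patch]
            # brute-force nearest patch at the coarser scale (O(n^2), toy only)
            by, bx, _ = min(candidates, key=lambda c: np.sum((c[2] - p) ** 2))
            parent = img[by * factor:(by + patch) * factor,
                         bx * factor:(bx + patch) * factor]
            if parent.shape == (patch * factor, patch * factor):
                out[y * factor:(y + patch) * factor,
                    x * factor:(x + patch) * factor] += parent
                counts[y * factor:(y + patch) * factor,
                       x * factor:(x + patch) * factor] += 1
    return out / counts
```

The quoted limitation falls straight out of this sketch: if a structure (like the tiny digits) has no similar patch at a coarser scale, the search finds nothing useful to borrow and the result degrades toward plain interpolation.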
Other than this, I think Genuine Fractals and BenVista PhotoZoom give similar results.
Implementation in Python: http://mentat.za.net/supreme/
(In CS terms, this is akin to comparing your algorithm to something using a bubble sort, and ignoring the invention of n log n sorting algorithms)
Sadly, this is usually first and last time we see the technology in question. They do not seem to produce any impact that could increase our quality of life. They just sit on some dusty shelves somewhere.
One example is font and font size choices - because the system fonts and font rendering styles differ between platforms, it becomes very hard to tell what looks broken or 'not quite right' on the platform you're not used to. It's not uncommon to see sites launch with font choices that look rubbish on ClearType, but if you're not used to ClearType, it's hard to tell whether the rubbish is your fault or not.
Apple's excellent execution and Windows' (no-longer-deserved) poor reputation also mean you frequently hear excuses for this behavior like "Windows users won't care because they don't care about design" or "The Apple way is better, so we should do it that way on Windows too". Both of these are infuriating and lead to terribly designed products.
"that is some nice software you have there, would be a shame if users thought it was dangerous"
"pay a little money to one of these approved companies and that warning will go away"
If MS was serious about this only being for security they could issue the certificates for free and prove me wrong.
On the other hand, why is it that about 20% of users click past BOTH of these EXTREMELY scary warnings? Don't they read them at all?
Errr, um, sort of... well... Mafia protection racket, yes?
Put it this way: what is the first thing that springs to mind when someone is scaring off your customers, demanding, sorry, politely implying a payment to stop?
Yes, yes, yes, I know. Security, user safety, lots of lovely logical arguments for it, I'm sure there are plenty. But strip it back to basics and, well, there it is. I presume since MS is a big huge "evil" business which probably funds some political rodent, it's all cosy and legal.
It's more complicated, right?
1. Dev checks out his site using IE
2. Dev realizes that IE users were getting scary warnings about his software
3. Dev has to pay up money to a third company to make the scary warnings go away.
Seems like a bad state of affairs to me.
It's a nice money maker for them, getting all those yearly certificates, some charging several hundred dollars per year.
On the other hand, you have my grandma, my aunt, random old folks for whom a red message means panic and an instant, super-urgent call to me.
So yeah, far more laymen are using IE.
As for the cert: when you know about it, you simply explain this on the page.
Once you get past the learning curve of knowing that the end trip is a mash of your thoughts and the original's writings, it becomes easier to see how to think on BT.
Kudos on the launch, and here's hoping for a bright future.
Seems a bit overdesigned/overengineered though right? Do you really need the adlib typeahead? Or did you realize that people needed more constraints than just being able to type whatever they wanted into a box?
Seems like you could have an entire conversation between a few famous celebrities (alive or deceased) and then take that conversation page and make it atomic and shareable... and I feel like that would get traction.
Just my 2 cents.
I've had a lot of fun tweeting as the old-timey characters for one of my brands (https://www.blamestella.com/news) but it's definitely felt like hard work at times. Huge kudos for really breaking it down and simplifying it.
I don't like the text input system.
I have quite a few ideas, and I like these kinds of projects that liberate creativity (through constraints!).
Judge Posner admits he is no expert on what the fixes should be and his tentative suggestions for fixing the system are, in my view, decidedly mixed on their merits (e.g., specialized adjudications before the USPTO - remember when it was suggested that a specialized appeals court would improve the patent system and the result was a court that has been so maximalist in its approach to patents that it has in itself become a significant part of the problem).
So where to begin?
Legally, it has to go back to fundamentals and, for me, this has to go back to the scope of patentable subject matter and whether this should be defined to include software at all.
The Patent (and Copyright) Clause of the Constitution (Article I, sec. 8, cl. 8) provides that the Congress shall have the power "to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries." Note that, in defining this as one of the enumerated powers of the federal legislative branch, the Constitution does not mandate that the legislature provide for patent protection of any sort. It merely permits the exercise of such a power within constitutionally prescribed limits. Thus, any legitimate exercise of patent authority in the U.S. must come from Congress and must respect the constitutional bounds that any grant of patents be for "limited times" and be done in such a way as "to promote the progress of science and useful arts." Legally, then, any patent system in the U.S., if adopted at all, must be authorized and defined by Congress with a view to promoting the progress of science and, implicitly, must employ "limited times" consistent with what it takes to promote scientific progress.
The first issue, then, is whether patents are needed at all to promote the progress of science. In the U.S., in spite of philosophical arguments to the contrary by Jefferson (http://news.ycombinator.com/item?id=1171754), this has never been seriously in dispute. The industrial revolution was already well in progress in 1789, when the Constitution was adopted, and the federal authority, though generally regarded with great wariness at the time, was seen as vital to protect the rights of inventors and to reward them with limited monopoly grants in order to encourage the progress of science. In the first U.S. Patent Act (Act of April 10, 1790, 1 Stat. 109, 110), Congress implemented its constitutional authority to sanction patent monopolies by defining patentable subject matter very broadly, to include "any useful art, manufacture, engine, machine, or device, or any improvement therein." Congress amended the Act in 1793 and then again in 1952, so that today it reads as to the idea of "patentable subject matter" as follows (35 U.S.C. sec. 101): "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."
Thus, patents in the U.S. can be granted for any original invention that fits within the definition of patentable subject matter and that also meets the other conditions of the patent act (i.e., that is useful and non-obvious). Note, though, that the 1952 definition of patentable subject matter significantly expanded the scope of such subject matter in the name of bringing the patent laws up to date with developments in then-modern technology, all in the name of promoting the progress of science. It did so by defining patentable subject matter to include any "new and useful process" as well as any "new and useful improvement" of any original invention. Over time, "process" has come to embrace business methods and also software. And the protection of "useful improvements" made clear that new uses of existing machines or processes could be patented notwithstanding older Supreme Court decisions such as Roberts v. Ryer, 91 U.S. 150, 157 (1875) ("it is no new invention to use an old machine for a new purpose").
To promote the progress of science, then, Congress in 1952 allowed patents to be granted for any inventive process and for any inventive new use for any such process. In my view, this generally made sense for what was essentially the continued playing out of the same sort of industrial revolution that animated the original forms of patent protection granted in 1790. Looking at that language at that time, one could readily make the case that patentable processes and improvements thereon could and did promote the progress of science. Discrete inventions tended to be sharply differentiated and tended to involve significant development effort in time and resources. An inventor could keep a process secret and not patent it but the grant of a limited monopoly gave a decided inducement to disclose it to the world and, hence, to expand the broad pool of scientific know-how available to society.
Then came the digital revolution and, with software, a new or improved process can amount to an almost trivial variation on prior art amidst a seemingly endless stream of improvements developed in ever-rapid succession and with little or no capital investment beyond what developers would be motivated to do for reasons entirely independent of gaining monopoly protection for the fruits of their efforts. Moreover, there is little that is closed about such innovations: a wide knowledge base is in place, known to an international developer community that is basically scratching its collective head asking why it should be restricted legally from using techniques and processes that represent common knowledge in the field.
The main question, then, concerning software patents, is whether the existing framework makes sense as one that promotes the "progress of science" insofar as it grants patent protection to process inventions in this area. Congress needs to seriously ask itself that question. A second question, also tied to constitutional authority and assuming that it is legitimate to grant some form of patent for such inventions, is whether a 20-year period of exclusivity makes sense in an area where innovation occurs at blazing speeds and with not too much capital investment tied specifically to any given discrete invention. Is that necessary to promote the progress of science? That too is a question that Congress needs to consider.
Thus: (1) there is nothing magical about the current definition of patentable subject matter and Congress can adapt this to suit the needs of the time in promoting the progress of science, (2) process patents are in themselves a fairly recent phenomenon (at least in any large numbers) and it is no radical change to curtail them in areas where they make little or no sense in light of the constitutional purpose for why patents even exist in the first place, and (3) legitimate patent reform needs to go far beyond procedural fixes around the edges of the system and needs to focus on the realities of modern technology and whether the patent laws further or impede the progress of science as applied.
The policy debate can and will go all over the board on this but, if it is framed in light of the constitutional foundation for having patents in the first place, it can be shaped in a way that puts the focus on the fundamentals of what needs to be fixed as opposed to lesser issues that do not get to the heart of the problem. The main problem today is the blizzard of vague and often useless patents in the area of software. These are effectively choking all sorts of innovation and are benefiting mainly lawyers, trolls, and others who do not further technological development by what they do. It is a mistake, in my view, then, to swing too broadly in trying to fix things (as by advocating abolition of all patents) or to be so timid about the issues that reform is marginal at best and ineffective in dealing with the current crisis of an over-abundance of essentially worthless patents. Congress embraces the patent system as a whole and shows no hostility to its fundamentals. Reform must be shaped in light of those fundamentals but it must, at the same time, be meaningful to eliminate the main garbage from the current flawed system. Judge Posner has pointed the way generally and proponents of reform ought to follow his lead, with the focus being (in my view) on software.
One of the most compelling arguments to me (against these patents) is that in pharmaceuticals, for example, you are dealing with a handful of patents. Some processes might be patented, maybe even some equipment (easily licensed generally) but basically the patents that go into a process (that may itself be patented) are minimal and can be reasonably well understood by those running such businesses.
Posner pointed out that a smartphone may well contain (and violate) thousands of patents. That right there is a sure sign that something is rotten in the state of the patent system.
The solution here isn't reform, as some suggest (i.e., raising the bar on what's patentable). It's simply to get rid of them. First-to-market and execution are what matter and what should matter. Twenty-year exclusives for vaguely worded patents on things that are more often than not obvious are just a means for big companies to extinguish smaller ones.
Wow, this guy really gets it. This is how markets and competition work. There's no need to give a company a legal monopoly. If anything, that lack of monopoly, will force companies to keep trying to invent new things to keep staying one step ahead of the competitors.
I also love this one:
"forbidding patent trolling by requiring the patentee to produce the patented invention within a specified period, or lose the patent"
These days big tech corporations are filing patents as fast as they can print them on paper. And then 95% of them will probably never be used in products that are shipping in the market.
See chapter 9 for a historical analysis of the pharmaceutical industry in countries without patents. The surprising result is that companies in countries without patent protection were producing new drugs at a rate equivalent to companies in patent-protected countries.
Now I am a full anti-IP advocate, except for certain trademarks and attribution of authorship (so people know which company/author a product came from).
Whenever a judge decides something like this, it usually makes me like them.
In fact, Americans trust their judges more than their politicians and bureaucrats. http://www.gallup.com/poll/143225/trust-legislative-branch-f...
This hierarchical and networked architecture is inevitable, and it is the best way to organize such complex information. However, the stability it requires at the bottom of the pyramid of code means that some building blocks cannot be changed once the pyramid is built. Someone claiming ownership of the shape of a bottom-center block after the pyramid is built, someone having the power to force a bottom block to be removed and replaced with a differently shaped one, no matter how simple and obvious this block is, does not have power over just this block but over the whole structure above it and all the components that depend on it. This means patent holders have a disproportionately large amount of power when they target such a block. From a possibly trivial piece at the bottom, they can control a vastly more sophisticated structure built on top, which they had no part in conceiving. They know that changing it would require tearing down, redesigning, and replacing tons of dependent work, and would probably break compatibility for huge numbers of users of these projects.
To me the obvious solution, and the one missing from this list, is to abolish patents altogether in such industries ( including the tech industry). I wonder if judge Posner would agree, and if so, why not come out and say it? Would this be considered too radical at this point in time?
The optimist in me would love to believe there's a brilliant solution that is way over my head. The realist in me can only see paradoxes and no obvious solution. Maybe I'm just too dumb to solve this problem myself.
I believe the right path is to look back at what the vision behind patents was in the first place (incentives for invention), and think from the ground up about how we can implement this without the modern "necessary" dogmas (such as licensing or IP). Then I can actually think of plenty of solutions. But none of them even remotely resembles what we know today as a patent.
For specific drugs, this may be the case, but when you have pharmaceutical companies doing things like patenting specific gene sequences, causing both other companies and academics to have to get licenses/permission just to perform research on something completely different, that's just ridiculous.
How patents work should be more flexible, and not limited to just whatever industry they're in.
UK perspective: what do people here think of this suggestion, perhaps even as a temporary 'damper' on the patent troll business model? Raising the barrier to litigation would perhaps slow down the rate at which these cases occur. In the UK, we have a special court for trying IP cases, and the barrier to litigation is very high, perhaps too high for some small companies. Of course, the EU does not allow the granting of software patents.
For example: somebody could patent teleportation, define it, and then when somebody else does all the hard work and actually invents a teleporter, cry patent violation and cash in. That to me is completely wrong, yet that is how the patent system currently stands.
I have also noted that a lot of patents with no working prototype or product all seem to have been done in some sci-fi movie or TV series previously, and I find it somewhat surprising that the movie industry hasn't started jumping on this patent bandwagon, as they have more of a working prototype than many of the patents that get approved in this day and age.
Patents have become a hideously bureaucratic market unto themselves, creating a kink in the hose leading to the fountain of progress, but the fountain attained its magisterial beauty partially as a result of the motivation to circumvent roadblocks.
First things first: rescue the culture. Digg engineering has already fled, the peanut gallery in the valley keeps squawking with delight about your failure, and all you have left is some nicely designed pages showing double-digit gains on stale links that are a shade away from looking as if they were spun up by a Russian spam squad. Save yourself: redo the logo, redo the color scheme, don't let legacy drag you down.
Second thing is do some soul searching, figuring out what layer Digg wants to play in. Links aggregation? Community building and content creation? Traffic? Attention? Engagement? That elusive ad sharing model for content creators?
After that, figure out for whom. Reddit's core is dying, and eventually it'll be fully crowded out by the mainstreaming of rage comics, aww pics, and counter-Tumblr pseudo-nerdy programming. Do you target those people, the walking wounded, much like Slashdot?
Or do you go after the youths, the ones addicted to Instagrams, 9gag, imgur, but bored of their Facebook feeds? Then play the waiting game. New is everything old, after all.
The game's so different now from 2005. All the majors have feeds now. All the majors have figured out sharing, commenting, and extracting action on a story item. On top of that you're competing against mobile guys like Flipboard.
This is one of those situations where execution is easier than creating an idea. What is Digg's $1MM idea? Good luck with that because Digg needs to be futuristic but also really lucky twice being at the right place, the right time, with the right idea. Once you're lucky, twice you're good, right?
I bet this kind of flame-out is the one thing that keeps Mark Zuckerberg from sleeping.
With my cynicism hat on nice and tight, it seems a bit late for that. Digg WAS, in its heyday, the best place to find and share online content. But as the management took one iffy decision after another, and as imitators sprang up left, right, and centre, it lost its place in the online world. Sad.
We have plenty of content aggregators now. The kind of people Digg 5.0 (?!) will target are probably more than happy finding their entertainment through Twitter, Facebook, and dare I say it, Reddit.
Having said that, I for one wish the new team all the luck in the world in trying to get the site back to its former glory, and I look forward to trying it out.
Not all companies are going to change the world, nor should they. I see Facebook and Twitter making the same mistakes. Perhaps Facebook is a great place to keep in touch with friends but nothing more. Perhaps Twitter is simply a great replacement for blogging, nothing more. Instead of filling their niches and quietly making a small number of employees very wealthy, they try to become these massive institutions that are reliant on very fickle user bases. It's a house of cards.
Also, is this an example that illustrates why craigslist refuses to redesign their site? A redesign really could spell the end of craigslist, like it did for digg.
FWIW, Pligg (the digg clone CMS) looks like it's getting a twitter bootstrap upgrade. I've been wanting to roll my own "digg/reddit/HN" for ages. Maybe now's the time. http://www.pligg.com/demo/
Fingers crossed, but betaworks seems to want to take Digg in the direction it should have gone during the v2->v3 upgrade.
I hope BusinessWeek does a piece on this.
Another example is TDD. People espouse the benefits, then some study comes along (http://www.neverworkintheory.org/?p=139) saying the benefits are largely illusory and that code reviews are more effective.
Instead of listening to the experts at programming, listen to the experts on programming. Read some studies about the effectiveness of various tools and methods. Try new things. Programming is a craft, and like many crafts it contains significant amounts of dogma passed from teacher to apprentice.
To me, being afraid of a debugger is like being afraid of actually knowing exactly what is going on: being lazy, just reading logs and guessing what might have gone wrong, instead of letting the debugger scream in your face all the idiotic mistakes you have made.
I would argue that using the debugger is being lazy in an intelligent way, instead of spending hours reading endless logs trying to puzzle together logic the debugger can show you directly.
I don't build bridges but I would be very surprised if an architect described his work as "pure science and no craft at all" (how would it be possible, then, to build beautiful / ugly bridges?)
I do a little woodworking and have many tools; friends sometimes look at my shop and ask if I really need all that. Yes, I do. In the course of a project you get to use many different tools. You can get by with one missing, but it takes exponentially longer to work without the exact tool. (Same thing with photography.)
I'm learning to fly, and the most important word regarding human factors is "honesty". The way to fly is not to avoid mistakes, it's to detect them and minimize the consequences; if you feel you can do no wrong you'll eventually kill yourself.
Unless, of course, you are software engineer-ing.
Flight guidance-and-control systems, among many other things, are precisely engineered software systems. In a world of web apps and mobile apps, people tend to forget this kind of software exists.
Sure, working on your web app, writing some JQuery widgets, or coding up some python scripts is a craft.
Is there anyone who uses a debugger for more than inspecting state?
EDIT: I guess lower level languages and more involved applications use debuggers much more extensively.
1. Don't be lazy and just do something that works without taking the time to learn why it works.
2. Don't be lazy and just stop when you have something that works. Go through the code again and see if you can make it better.
3. If you find yourself writing the same thing twice, don't be lazy and carry on: put the code in a single place and call it from where you need it.
Or at least that's how I see it. I do all of the things I shouldn't do, largely because doing things the wrong way is so much easier!
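Point 3 in that list is the classic "don't repeat yourself" move; a minimal before-and-after sketch (the greeting functions are invented for illustration):

```python
# Before: the same normalisation logic copied into two functions.
def greet_user(name):
    cleaned = name.strip().lower().capitalize()
    return f"Hello, {cleaned}!"

def farewell_user(name):
    cleaned = name.strip().lower().capitalize()
    return f"Goodbye, {cleaned}!"

# After: one helper, one place to fix when the rules inevitably change.
def normalise(name):
    return name.strip().lower().capitalize()

def greet(name):
    return f"Hello, {normalise(name)}!"

def farewell(name):
    return f"Goodbye, {normalise(name)}!"
```

The payoff isn't the saved lines; it's that a bug in the shared logic now has exactly one home.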
Edit: Rather than rewritten, I meant "falls under the general category of". The article was great!
>Be promiscuous with languages
>I hate language wars. ... you're arguing about the wrong thing.
It's easy to take this for granted, but it's a concept that is very important to stress to new coders. If you spend too much time focusing on one language, you run the risk of the form becoming the logic. This is a dangerous place, where your work can be better analogized to muscle memory than to logical thought.
At least in a college environment, I think this lack of plasticity causes discomfort with different representations of similar logic - and so flame wars abound.
One thing I would add though is that there are many times when there is time pressure and a kludge works. The right thing to do here is to document that it is a kludge so that if/when it bites you later you have a comment that attracts your attention to it.
"I don't understand why this fixes the problem of X but this seems to work" is a perfectly good comment. It's great to admit in your comments what you don't know. (That's why questions relating to commenting are great interview questions IMO.)
Finally, I think it's important in the process of simplification to periodically revisit and refactor old code to ensure it is consistent with the rest of the project. This should be an ongoing gradual task.
Anyway, great article.
Having recently started mentoring/managing the first really junior engineer on our team (self-taught, <1 year programming experience), boy does this ring true. Luckily I'm of the temperament to find the "advanced beginner" stage of learning more funny than annoying.
I think it's possible to understand as little about your code when using loggers as when using debuggers, so I have a hard time agreeing with him there. I think his general point about having tools and knowing when to use them applies just as much to that as it does to language, so he contradicts himself.
In Smalltalk, you practically live inside the debugger. Also, if you are an ASM programmer, the debugger is indispensable.
I used to do precisely that. Sprinkle code with log messages, recompile and run. When I finally learned how to use gdb, my debugging productivity increased tenfold.
I mean, just the ability to stop your program at any given point gives you an enormous advantage. You can not only examine the local state of your program, but also you can see how the state of systems outside of your program (e.g. database) changes, and all of this without polluting the code with tons of useless debug messages.
Often when I had new ideas during bug hunts, testing my hypothesis without a debugger meant going back to add new logs, then recompiling, then running (and making sure it reached the same state as before!) - lots of wasted time. With a decent debugger it's as easy as typing an expression.
And I don't think debuggers lead to lazy thinking. The process of finding the problem is the same whatever method you use - you analyze the code, have an idea about what could be wrong, change one thing, then see what happens. Debuggers just make it easier.
You seem to be implying that the latter statement doesn't apply to the disciplines of science and engineering. "Skill and experience expressed through tools" is highly important in both watchmaking and bridge building. I would advise anyone who says otherwise to reconsider.
I understand your point, but why create a hugely false dichotomy between a craft discipline and the science and engineering disciplines?
I strongly concur with points 2 and 6.
It is harder to grow software than it is to initially build it. Preconceptions bite you on the ass, data structures don't allow for new features, side effects multiply.
You don't need to learn the layers. In fact, if you're learning all the layers, you're probably an ineffective coder. This is not to say that you shouldn't investigate the layers or have a poke around them. But software's about reuse, and reuse is about reusing other people's work via known interfaces without worrying overmuch about what goes on under the hood.
I'm actually more of a debugger than a profiler, and as much as I'd like to believe that my way is as valid as his, I suspect that he's probably right on this and I'm probably wrong.
I'm excited about sites like Dribbble and GitHub, which make it easy to showcase your work without having to go through the pantomime of a large folio piece. Though it can be a pain for introverts to do, I've never met anyone who regretted overpublishing their stuff.
One thing I observed today involved an old friend and colleague sitting next to me. He wrote a Perl program to analyze a week's worth of tweets in India and drew really cool visualizations from it. The thing is, the script was not really big, and it was full of functional programming.
As I was reading the script with another friend, my friend said, 'Oh! I thought SQL was unpopular these days, or Perl was dead, or no one uses functional programming anymore.' I soon realized: for each of these cries, someone is still sitting and doing awesome stuff with the simplest, most basic tools.
The vast majority of awesome guys don't blog or tweet or harp about every tiny thing they have achieved in cyberspace. But your everyday guy is vastly influenced by ranting, blogging, and tweeting, and takes them to be holy! Anything other than that which produces results is generally a 'black swan' event to him.
So when something unpopular is used to get the job done, it's often difficult for the everyday guy to understand.
There are some people out there who are truly exceptional and might never be heard from. In general, though, the really exceptional people are the ones pushing the envelope of what's possible. Their work _always_ finds the limelight. The author uses the example of a rigid-body physics implementation in a game, and while that's great work and the team should congratulate themselves, it's not that big a deal; BLAS had been around for decades by that point. That does not describe experts doing amazing things. It describes software developers doing a good job and not being cocky about it.
I think the reason this has received such a warm welcome here is that so many of us software developers want to think we are part of that silent majority: that we are doing amazing things too, and if we cared to, we could write articles and become famous. The truth is we are not experts; we are people who have jobs to feed our families, and we may be just as skilled and talented as the vocal minority, but we would never be able to reproduce their success. That's not a bad thing. I have no problem being an unknown developer working on a piece of corporate software and providing a good living for my family. It's OK if others get press time for their contributions. I might take from them and learn, but it doesn't change the fact that at the end of the day they have moved our profession forward and I haven't.
Take home message: read everything on the net with a grain of salt.
As one example, law-professor blogs have really taken off. They're good for building reputation and making a legal argument actually have real-world impact (people read them, whereas the average law review article is little read), and sometimes have even been cited in court decisions! So there is a movement towards law professors doing more popularly oriented, but still legally sound, writing; and towards universities taking this into account.
Science has also long had some research/popular-writing crossover, and there are a number of researchers who blog in areas like physics and climate science. And mathematics has an increasingly strong blogging culture, with Terence Tao setting the nearly impossible to top example. So I think there is good stuff out there if you know where to look. Of course, it's also good to experiment on your own, and also to look elsewhere (e.g., in books).
"Just because looking down your nose at C++ or Perl is the popular opinion doesn't mean that those languages aren't being used by very smart folks to build amazing, finely crafted software."
I'm reminded of the quote, "To every problem there is a solution that is simple, elegant, and wrong." I have fallen into the trap of looking at a long, messy piece of code and thinking "I can do better than this." I would replace 50 lines of code with 5, only to have it fail on some random edge case that the original had been fixed for.
That is why I always remain skeptical of people who are out to "disrupt" an industry, especially when they don't have much experience in that industry.
There's still nothing like C/C++ for getting close to the metal. And until we have an OS written in Ruby or Python, rather than C, it's gonna be like that forever.
There are a few who don't care to participate, thinking they are either above it or below it, but the truth is most people do and everyone should.
1. Knew about PPK, didn't know about JZ
2. Feature detection came about from writing this book! Which generalizes too: writing is a great way to innovate (something pg has said, too).
3. A nice little jab at GRRM, who thoroughly deserves it.
5. Points out that even the best of us can get stuck in a procrastination loop.
The best way to get a project done is to
1. need the money 2. not get paid until you finish
On a selfish note, writing HTML seems to be much easier than producing a properly formatted PDF page. So I enjoy technical writing, but not enough to jump through the hoops of for-paper (or paper-like e-book) publishing.
Very fitting that he's now at Khan Academy.
I've personally bought the jsninja ebook for friends and myself because I thought it was a great read and wanted to contribute to this kind of effort.
Can't wait till my paper copy ships - guess I'd better make sure my address is still accurate :)
We've been live for several months now in the Real World - our userbase (amateur athletes) is primarily nontechnical. About half of our users choose Persona/BrowserID and half choose Facebook. We were initially concerned about the BID login flow (in particular, the immediate email roundtrip) but it hasn't been a problem and the UX has been refined quite a lot over the last month or two.
For a mass-consumer audience, the combined FB/Persona solution is excellent:
* Facebook unquestionably has the slickest auth experience, even eliminating the followup name/sex/bday questions. However, a significant percentage of the world (possibly > 25%) either Hates Facebook or wants to keep their Facebook account isolated. This is unlikely to change in the near future and could even get worse depending on what sleeping dogs Zuckerberg decides to kick next week. We don't have the option of alienating the FB haters and we wouldn't want to anyways.
* The Persona UX is good and rapidly getting better. BigTent integration with gmail, yahoo, hotmail will bring one-click login to those users. A native experience is being built into browser chrome. All this is coming without me having to write code. It may not be as slick as Facebook, but I like where this train is headed.
* Integration is simple compared to writing a username/password system. The API is incredibly easy to work with. Dual-auth with Facebook is a little more complicated, but a complete Persona-based auth system is a question of hours, not days.
* The fact that identity is just an email address makes it easier to integrate with existing login systems. In our system, you can log into the same account with both Facebook and Persona as long as the emails match. No, email is not a perfect identifier, but even nontechnical users understand it immediately and really - what other option is there? "What email address did I use?" is a lot better than "What weird combination of letters and numbers did I use as a login name?"
* Support on the Mozilla dev-identity list has been fantastic.
We're pretty happy. Honestly, I don't ever see myself writing another username/password login system ever again. Persona is less work for a better UX.
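The email-matching scheme described above can be sketched as follows. This is a hypothetical illustration, not the poster's actual code, and it only works safely because both Facebook and Persona hand the site a *verified* email address:

```python
# Minimal sketch of keying accounts by verified email, so a Facebook login
# and a Persona login with the same address land in the same account.
# All names here are illustrative.

accounts = {}  # normalized email -> account record

def normalize(email):
    """Treat 'Jane@Example.com ' and 'jane@example.com' as the same key."""
    return email.strip().lower()

def login(verified_email, provider):
    """Look up or create the account for an already-verified email address."""
    key = normalize(verified_email)
    account = accounts.setdefault(key, {"email": key, "providers": set()})
    account["providers"].add(provider)
    return account

a = login("Jane@Example.com", "persona")
b = login("jane@example.com ", "facebook")
assert a is b  # same account reached via two identity providers
```

The caveat from the comment still applies: email isn't a perfect identifier (addresses change hands, and providers differ on case and alias rules), but it is one users already understand.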
Try these links, which go into much more detail, answering the "why bother" questions.
Note that BrowserID was the old working name for the project.
This seems much more one-sided -- it's good for the user who doesn't use FB or Twitter, but 'meh' for the website. I'm not sure we'll see adoption as fast as we have for OAuth.
There's also some work on integrating existing e-mail providers, so you can get instant identities:
This is probably the web-thing I'm most excited about today, even though the development is kinda slow. I really want to implement it as a sole identity provider on my websites some day.
EDIT: I also began writing a Safari extension that would work like the mockup above, but gave up halfway since it was too convoluted and totally insecure... maybe I should try writing a SIMBL plugin or something like that.
For every company and every account I sign up for, I give a different email address. I'm not about to use a single one for my ID, especially when email addresses are easy to spoof (anyone can send mail as email@example.com) and easy to spam and abuse. I once had an email address that got 3000 spams a day. That's why I never use email as an ID: I want to be able to disable any email address that's giving me trouble.
So, not interested in BrowserID, I think. Or maybe I didn't grok it.
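One common mechanism for the per-service addresses described above is plus-addressing (subaddressing), where `user+tag@domain` still delivers to `user@domain`; whether the suffix is honored depends on the mail provider, and the addresses below are illustrative:

```python
# Sketch of generating a disableable per-service alias via plus-addressing.
# Filtering or blocking mail to the alias is then done on the provider side.

def service_alias(mailbox, service):
    user, _, domain = mailbox.partition("@")
    return f"{user}+{service}@{domain}"

print(service_alias("alice@example.com", "amazon"))
# alice+amazon@example.com
```

Note that plus-addressing doesn't address the spoofing concern (the base mailbox is trivially inferred), which is one reason some people maintain fully independent addresses per service instead.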
Want to use SSH keys for auth? Okay. You can do that.
There are websites that have chosen Persona as an authentication method. We now need to see the following two pieces implemented:
- It needs to be implemented in the browser's GUI, like in this old screenshot. People will be able to see the usability and security benefits of having a standardised way to log in that's built into the browser. Could we at least have a Firefox extension or a nightly build of Firefox that does this?
- At least one email provider needs to be a Persona "ID provider", which would eliminate the need to create a Persona password and to click on a link in an email. My guess is that getting Gmail to support this would be slow work, so why not try persuading smaller email providers like Fastmail to be the first to openly support Persona?
Fortunately, Persona does work even without these two pieces, using fall-back servers and a shim. But most of the benefits of Persona are only valid once the browser and the email provider deliberately support it.
 - https://news.ycombinator.com/item?id=2764824
 - http://i50.tinypic.com/2ptyv80.jpg
I know a password reset function that uses only email involves basically the same level of trust in the email provider, and I'm no fan of email-based password reset, but this feels even worse -- literally abdicating your security entirely into your email provider's hands? Gmail is great because it's free, but I didn't join Gmail with the idea of giving them the keys to my life.
Another thing I don't fully grok yet is the 'issued-by' concept. Does this mean that 'Relying Parties' need to whitelist all the secondaries they are willing to trust? How can that possibly fly?
Finally, in a native implementation, how is the keyring persisted on disk kept safe from malware extracting your private keys? If the browser can decrypt the keyring, so can malware.
Ok I lied, one more thing... Is there a password prompt when you first sit down at the native BrowserID implementation? Or does it just assume that your browser means it's you sitting there?! Then of course the next question is, how do you tell your browser you are walking away (akin to logout), and is it going to expire all sessions that were tied to your identity when that happens? So much to worry about...
• The name "Persona" is odd considering something by the same name already exists in Mozilla-land, a fact they seem to be aware of.
• I hope sites use this instead of forcing Facebook login!
Correct me if I'm wrong, but BrowserID/Persona assumes that for a firstname.lastname@example.org identifier there is a corresponding https://example.org. That's not true for most of the domains I support. In many cases, example.org doesn't even resolve to an IP address, and web servers are only run on subdomains (and not all subdomains have HTTP servers). Does BrowserID/Persona support a DNS mechanism to discover the location of the required HTTPS web server, similar to an MX record?
The opposite problem is where a user has a single email account with multiple valid addresses at multiple domains, such as email@example.com, firstname.lastname@example.org, email@example.com, etc. I work for an organization approaching a million users where this is the case and isn't going to change anytime soon. Once again, you can't assume there is an identity provider running on a web server at all of these domains. Is there a DNS-based method of discovery to solve this problem?
Does BrowserID/Persona allow users to authenticate against the same system they use when accessing email? If so, why does the OpenPhoto example ask for a password? I thought the whole point was that users avoid sharing credentials with sites. That implementation is very confusing and looks like a scary phishing attack.
Finally, it's not very clear what I should expect in the way of connections from other computers, whether I'm running an identity provider or not. This is crucial information to avoid tripping an intrusion detection system (IDS) during unexpected connections, especially from my own users.
Most sites nowadays log you in right after account creation and just wait for email-verification later. Is that even possible with Persona?
Now I'm wondering whether to use Persona or build it myself from scratch.
Many more users are going to have Facebook logins already, and it provides social information that may be useful to your app.
(Hoping to hear answers other than the dev-centric 'I don't like facebook')
Could somebody do a quick summary for me? I would appreciate it very much.
^ adapted from this?
This bug with webfonts was reported a year ago and still occurs on Windows XP with Firefox 13.0.1; it started in Firefox 4.
Honestly, I cannot think of a good reason to delete any article at all, unless it's obviously fraudulent, marketing-oriented, illegal, or obscene according to a widely accepted definition of obscenity. All of these standards can be applied fairly strictly, and with much less vagueness than notability.
- It's not like Wikipedia is short of disk space to store a few million extra text articles.
- The argument that it would be too difficult to maintain lots of extra articles is also weak, because not every article needs to be regularly edited, and more articles on niche topics might actually attract more editors.
- No, we won't end up with a page for every John Doe and his cat. That's just alarmism. Besides, if something like that ever becomes a problem, a better response would be a prohibition on self-promotion or some other clear guideline, rather than a vague requirement of notability.
- If these deletionists are just being OCD and wanting everything to be tidy and clean and under their editorial control, I would say that they need to take a break. In fact, it's possible that people with certain psychological traits self-select for Wikipedia editorship. But the kind of intolerance and self-centered narrow-mindedness that overzealous deletionists exhibit doesn't suit the spirit of a collaborative online project. Keep your OCD to your own home/office and away from public spaces, thank you very much.
Right now, I get the impression that it's too easy to flag something for deletion and too difficult to counter the deletionist argument, especially since the deletionists are so familiar with editorial procedures. This inequality needs to change. The burden of proof should be on people who want to remove information from the Web, not on those who want to keep it. Isn't that the same principle that we fought tooth and nail to uphold against the onslaught of SOPA, ACTA, etc?
The Slate article by Torie Bosch,
a professional journalist who edits a project covering technology and society issues, reports from this year's Wikimania meeting that Wikipedia continues to face criticism from readers who think its group of editors ("Wikipedians") skew too heavily to "geeks" and result in underrepresentation of topics of interest to women. Thus far Wikipedia is still working on plans to encourage more women to become Wikipedians and to edit more regularly.
She finishes up by writing, "I've never been a Wikipedia editor. The community struck me as uninviting, legalistic." I'll be interested in her experiences if she decides to wade in. Unlike most Wikipedians, Torie Bosch has actual professional editing experience, having had to submit manuscripts to editors who chop out her darling words, and having had to chop out words from the manuscripts of other reporters. Most Wikipedians have not had professional editing or research experience of any kind before joining Wikipedia, and what I find most "uninviting" about Wikipedia is not that it is "legalistic" (although it often is legalistic) but that many Wikipedians are completely clueless about what a good source looks like and how bad many of the current articles have been for how long. I'm not sure yet if Wikipedia is pursuing a successful strategy to improve content quality.
After being very involved in Wikipedia editing just as there was a major Arbitration Committee case on topics that I have researched thoroughly for years,
I have reduced my involvement mostly to "wikignome" editing of random mistakes I encounter as I use Wikipedia as a reader. I still have the SOFIXIT mentality,
of cleaning up problems in Wikipedia as I find them, but to fix big problems on Wikipedia caused by point-of-view-pushing propagandists is even more work than editing a publication as an occupation (something I have done), and yet unpaid. So I really wonder how much time Torie Bosch will devote to Wikipedia when she could be doing editorial work in an actual collegial environment at Slate with pay and professional recognition.
The Hacker News comments before this comment have mostly referred to the issue of "deletionism." For example,
Every time a "problem" like this makes the news, the real problem always seems to be overzealous deletionists with their ridiculously strict notability requirement. . . .
Honestly, I cannot think of a good reason to delete any article at all, unless it's obviously fraudulent, marketing-oriented, illegal, or obscene according to a widely accepted definition of obscenity.
I wonder if there is an organized campaign to fix the overzealous deletion problem (by changing the "notability" policy), to boycott as long as it remains and pledge to donate if it is changed to a more objective policy.
Why are any articles deleted, unless they are factually wrong? Censorship. Who is to say what will be important in the future? Censorship. Who is to say that people will want to read? Censorship.
I have noticed a lot of information/articles on Wikipedia get deleted/flagged for deletion at a rather zealous rate, and I have one question: WHY? If they are not superseded or made redundant, then personally I feel they should never be removed.
The one-word reply to comments like these is "Deletionpedia."
I was just browsing random pages of Deletionpedia to see what was posted there before the Deletionpedia project fizzled out (which appears to have been back in 2009). These are by no means the worst examples of material that has been deleted from Wikipedia (I'm not sure if Deletionpedia was ever an exhaustive list of deleted articles, or only a selected sample), but the sheer lack of maintenance of Deletionpedia over the last few years calls baloney on the idea that there are lots of readers happy to read stuff that has been deleted from Wikipedia. As bad as Wikipedia often is, EDITING (modifying and deleting) stuff on it so that Wikipedia more closely resembles an encyclopedia makes some Wikipedia pages much better reads than many of the millions of pages that would turn up in a keyword search on the same topics.
I don't believe that a lot of readers see value in an online "encyclopedia" with a no-deletion or hardly-any-deletion policy because no one has put up the money to fund one, and I'm not aware of anyone here on Hacker News who is donating programming skill to start one. If you really think articles "should never be removed," build a service to host articles written by anyone about anything and see what happens.
The big problem on Wikipedia is not deletionism. It is insertion of promotional articles (some more subtle than others), propaganda articles (likewise), personal or family vanity articles (very numerous), and fan and hobby articles that are not based on any reliable sources and are written in a manner more suitable for MySpace than for any encyclopedia.
A lot of people who attempt to edit Wikipedia never look up the article about what Wikipedia is not,
and attempt to publish their own thoughts, promote their own causes or businesses, social network in an online encyclopedia, self-report the news, or otherwise post material that has nothing to do with maintaining a free online encyclopedia built from reliable sources.
I find such articles extremely useful, so I'm not advocating that they be deleted. But surely the wedding gown worn by a British Monarch who is widely known for her fashion sense is at least as relevant to the world at large.
There's been a lot of talk in recent years about how the "initial work" of adding information to Wikipedia is mostly done, and that from here on out, it's going to be mainly about adding new content as it's created (new events, people, companies, etc.). But it seems possible that myopia on the part of editors could be having inadvertent effects.
It was the last time I tried to edit anything in WP, as I always had this kind of problem.
I read it a lot like anyone else and donate a little money every year, but each time I read about the behind-the-scenes workings, I'm appalled.
There should not be any notion of importance. All knowledge is important. What I find important is as valid as what any one else values as important.
Frankly, and to my shame this is the first time I have given any thought to it, I am disgusted that something which, IMHO, is supposed to be an unbiased information repository actually deletes knowledge. To me, this is the most disturbing case of censorship I have ever thought about. Government censorship is expected -- bad news, sure, but expected. But this is supposed to be above that. How can they bleat on about SOPA etc., then allow a small number of geeks to tell me I can't see an article about some princess's dress? Wikipedia is NOT Geekpedia. And it should not be censoring knowledge.
Quite sad actually. My Wikipedia love bubble just burst. :(
My previous company was a chip and Wi-Fi module startup (ZeroG Wireless). I requested a Wikipedia page for our company around 2009. At that point, we had been around for 4 years and had taken $30m in funding. However, we were never granted a page on Wikipedia.
On the other hand, plenty of internet companies that had been around for much less time since launch are on Wikipedia; for example, Pownce. I am sure many others were granted a Wikipedia entry despite being around for less time and accomplishing less than we did. The only difference is that these were Internet startups and ZeroG was not.
Some people simply need to wake up and stop living in their own bubble. Let's hope that one day they realize that others care about things the community doesn't care about.
While I highly regard Wikipedia as an amazing and quite successful project, and hope there will be more editors who are female (oriented, not necessarily biologically), there is still a lot that's not there, and perhaps never will be, that matters for various cultures and localities around the world. Wikipedia English has a built-in bias (hint: English), and it's not a gendered one.
So the real question for me is: is Wikipedia a source of knowledge/history/facts, or is it biased towards the flavour of the month (FOTM)? If it is the latter, then perhaps they need to re-evaluate their priorities.
As to a solution, maybe all articles flagged for deletion could need approval by one male and one female. Though I will say that some males have a female mindset and some females have a male mindset, and they should be able to express and vote based upon their mindset as opposed to their physical gender.
There is no golden solution, though I do feel the zealous removal of articles should be curtailed and an approach of only removing non-factual/incorrect articles should be taken; with that, there will be fewer issues and fewer references to items that have been removed.
I hope they reach an amicable approach, or I fear they will only spawn a women-only "womenpedia" site, and that would be a sad day and a true wakeup call to the insanity of the directions being taken. Look at business for examples of how isolation impacts things: you will see many women who state they promote women in business, yet you never see a man saying "I promote men in business." One is accepted and the other is sexist. Sadly, they merely add fuel to the issue; instead of addressing it, I personally feel they exacerbate it. Though if you were or felt persecuted, you too would stand up and do something about it, if you were a strong person. Not all people are strong in defending their morals and fairness, which does highlight that women are just as right to feel persecuted. I do wish "supporting fairness in business" were adopted as the standard, as opposed to "supporting women in business"; it is just that, about fairness, and that can and does work both ways on many levels.
Much respect to Mr Wales for spotting this issue and taking it on board, a true sign of a fair person.
I also have no interest in make-up and wedding dresses and the like, but I fully respect that they are facts of the world and have as much right to be there as any Linux distro. I'm not forced to look up those articles, nor am I forced to read about Linux distros on the site, but having that option is something I completely and utterly support; to do otherwise would be unfair, and that is something I feel uncomfortable with. Hopefully this clear and documented bias can be eliminated in any form it takes in life, be it race, sex, orientation, or origins. We are all human, and we strive to be better with every generation we spawn. Humanity is wonderful when it works and abhorrent when it fails. Let's stand up and count everybody.
In other words, the problem is that it both belongs, and doesn't belong. And they need to resolve that paradox, maybe setting a new precedent or revising their official criteria.
I think the "not enough women" thing is just a side issue. And one that has an easy and blatantly obvious solution: if you're a woman and you want to become a Wikipedia contributor or moderator, then go do it. If enough of you do it, then the gender balance will shift notably. If enough of you are not interested, then it won't. There's nothing inherently wrong with either state of affairs, it would be just the way it is. For example, I don't think it's "wrong" that the overwhelming majority (99.8%+) of hair cut folks at Great Clips over the years, in my direct experience, have been women, because that probably just reflects the natural level of interest of men and women in working in that role. I don't feel oppressed or excluded. If I wanted to work there cutting hair, or have a man cut my hair, I'd make it happen, end of story, and if not, or either way, I'd live with it and move on.
There are too many structural problems with Wikipedia - documented over the last few years by various angry bloggers - for me to feel OK with Wikipedia. Some of the content - good. The community & rules - blech. c2 is a better wiki. :-)
However, if Wikipedia has another aim - to make the scope of its content bias-free, then I think it has not thought it thoroughly yet: even structuring information as encyclopedia entries is inherently biased and restrictive (not necessarily bad though). Correlating Wikipedia's contributors' sex to an assumed gender bias in its scope (who gets to decide the articles' 'gender'?), as Wales does, proves how naĂŻve such a project currently is.
more discussion at http://meta.wikimedia.org/wiki/Deletionism and http://meta.wikimedia.org/wiki/Inclusionism
Wikipedia has become a success because of its culture. It should be very careful about changing that based on the demands of the entitled multitudes.
Prior to the Internet you had mass media controlling distribution. The Internet comes along and you have things like Usenet and the proto-Web.
Then comes the first crowdsourcing sites to allow people to find content without employing people to curate that information. Slashdot was certainly early in this trend.
What Digg allowed is a certain band of people to control the information flow. People would get paid to promote submissions as it became clear that a front-page submission generated a lot of pageviews.
But what became apparent with all these community sites (and this includes forums) is they start with an early group who provide value to each other. This group ends up becoming insular. De facto standards form. But even in the Usenet days you had the "September" problem (where new college freshmen would get Internet access and not understand the "rules" and conventions that were in place and would ask questions that had already been answered, etc).
Basically all these social sites get worse over time as the masses flood in.
Digg died because the idea that there is a central source for news was a holdover idea from the old media days. Reddit understood this. Global reddit is basically useless. The subreddits are the only remotely interesting thing about reddit.
People complain about how HN is getting worse. That's probably true and it is true (and will continue to be true) of any such social site in the future.
I've heard the same complaints about Twitter.
Facebook for most people is not a source of news. It doesn't have the same link-sharing mindshare (IMHO) for most people that other mediums have. Ultimately I think the biggest use case for Facebook is still sharing photos. People go to Facebook to find out what their friends are doing. Very few go to find out what's going on in the world (much as Facebook would like that to be the case).
I'm surprised at how some wax lyrical on how amazing reddit is. It's just a minor tweak on a long trend of existing prior art (the subreddits). Personally I think it's a cesspool full of trolls. Proggit (programming.reddit.com) is (IMHO) just awful.
One example of being jerks: http://blog.jgc.org/2006/07/sense-of-humor-failure-at-digg.h...
After that I was unbanned, but not before an employee of Digg defamed me in a blog post by making false claims: http://blog.jgc.org/2006/07/unbanned-from-digg.html
And here's an example of how reddit people weren't jerks when I inadvertently brought the site great slowness: http://blog.jgc.org/2010/09/tale-of-two-cultures.html
This isn't a problem with the sites, it's a problem with the users of these sites. This is democracy in action. It's cable news. People voting don't want to see challenging, thought-provoking content. They want to see things that confirm their biases, or things they can repost to facebook for quick laughs from their friends.
FB, Reddit, HN did not "kill" Digg. The tribe growing up and or moving on to other things ended Digg. A similar tribe is at Reddit now, but a similar tribe used to live at Slashdot. Before that they probably lived on Usenet message boards or wherever.
In college, the comp sci program I was in had a private message board that ended up having a very similar vibe to digg/reddit/slashdot. Tech heavy, at times very heated political and religious debate. Eventually the original group graduated and it's never been quite the same.
It seems that each generation has something like digg, slashdot, reddit, whatever... that is "the thing" for hanging out and sharing/complaining about the news of the day or whatever is interesting. They might look like fads because on the internet they peak and crumble pretty quick, but really it is probably a natural cycle that communities and tribes go through.
Eventually HN and Reddit will become irrelevant to certain groups and the tribes that live there will move on to the next thing, whatever that might be.
I'm guessing the next site like this already exists or is going to be built soon, so any guesses as to what it will be if it's already out there?
But it was pretty obvious much earlier on that Digg had zero respect for its users. In many ways, Digg had a very old school broadcast attitude: the users were merely part of the product, only the advertisers mattered.
I read a bit of the discussion on Reddit, and there were surprisingly (to me) many people who had used Digg before. Then something called "v4" came and Digg became unusable to many people. As I understood the discussion, Digg didn't care. So its users looked for alternatives and moved to Reddit. Digg still didn't care. People got used to Reddit and stayed. Digg still didn't care.
Yesterday I looked at Digg.com for the first time in years. Only a quick look, and I already spotted a number of beginner's mistakes in the front-page design alone.
For example the clickable headlines to the stories: they are the main content. Yet they are very /very/ light and hard to read on the white background. WTF. You want your main content to have /good contrast/ and stand out, and secondary stuff (like "points" or "who submitted" or "vote buttons") to have less contrast so as not to distract from your main content. Yet Digg gets even such basic things wrong. I didn't dare to digg further.
I logged in. Still no content. That was the last time I visited digg.com, and I'm sure I'm not alone.
My guess is that if the acquirers just revert to the pre-redesign version, Digg will come back to life.
As soon as that happened, Digg lost the alpha nerds. The rest was destiny played out over time.
How long before web designers start complaining about reddit the way they complain about craigslist's design?
In any such ecosystem where the inmates are running the asylum, pissing them off is not a good course of action.
The tribe knew that it was the reason why Digg was what Digg was. But arrogance on the part of the upper management at Digg was its downfall; they couldn't come to terms with the basic fact that the people were responsible for Digg's success. So they decided to tweak it, "enhance" it, modify it to "Digg 2.0", and the people revolted.
Lesson: if you are a user-driven site, listen to them. Don't piss them off.
The day that Digg changed their interface was the day they lost a huge portion of their users, including me. I went back once more after that, and then never again.
The Digg v4 idea was good, but poorly executed. They shouldn't have suddenly forced you to follow other people to get your news. They were trying too hard to become a social network like Twitter / Facebook. Instead, I think they should have integrated with Twitter / Facebook to find top news, rather than starting their own social network.
Reddit was right place, right time to pick up the exodus. I don't think it did anything to kill Digg.
What killed it (for me anyway) was that Digg suddenly allowed advertisers to start posting away. Ads popped up everywhere, and every other post came directly from Mashable. Digg was no longer cool; mostly, it was Mashable's alternative site. :)
I switched to Reddit. Reddit didn't like all the Digg users migrating over initially, but attitudes have cooled over time. I can't really see a move that Digg could make at this point that would entice me back.
Part 1: http://ncomment.com/blog/2009/04/08/war-13/
Part 2: http://ncomment.com/blog/2009/12/17/war-23/
Part 3: http://ncomment.com/blog/2012/01/06/war-33/
Even Microsoft is not the same since Bill Gates handed the reins to Ballmer. They're just too big (Office/Windows cash cows) to actually die from that.
Founder disinterest is lethal.
Kevin got caught in a situation where he had to please investors; they were looking at what other people were doing, and he stopped innovating and started copying.
That's what killed Digg. Nothing else.
Reddit was founded in 1997?
Staying late one night to finish an assignment that was due at midnight, you happened to catch a glimpse over one of the quiet uber-programmer's shoulders. Your eyes twinkled from the glow of rows upon rows of monitors in the darkened computer lab as you witnessed in awe the impossible patterns of code and text manipulation that flashed across the screen.
"How did you do that?" you asked, incredulous.
The pithy, monosyllabic answer uttered in response changed your life forever: "Vim."
At first you were frustrated a lot, and far less productive. Your browser history was essentially a full index to the online Vim documentation; your Nano and Pico-using friends thought you were insane; your Emacs using friends begged you to change your mind; you paid actual money for a laminated copy of a Vim cheat sheet for easy reference. Even after weeks of training, you still kept reaching for your mouse out of habit, then stopped with the realization that you'll have to hit the web yet again to learn the proper way to perform some mundane task that you never even had to think about before.
But as time went on, you struggled less and less. You aren't sure when it happened, but Vim stopped being a hindrance. Instead, it became something greater than you had anticipated. It wasn't a mere text editor with keyboard shortcuts anymore; it had become an extension of your body. Nay, an extension of your very essence as a programmer.
Editing source code alone now seemed an insufficient usage of Vim. You installed it on all of your machines at home and used it to write everything from emails to English papers. You installed a portable version along with a fine-tuned personalized .vimrc file onto a flash drive so that you could have Vim with you everywhere you went, keeping you company, comforting you, making you feel like you had a little piece of home in your pocket no matter where you were.
Vim entered every part of your online life. Unhappy with the meager offerings of ViewSourceWith, you quickly graduated to Vimperator, and then again to Pentadactyl. You used to just surf the web. Now you are the web. When you decided to write an iPhone application, the first thing you did was change XCode's default editor to MacVim. When you got a job working with .NET code, you immediately purchased a copy of ViEmu for Visual Studio (not satisfied with the offerings of its free cousin, VsVim).
Late one night, as you slaved away over your keyboard at your cubicle, working diligently to complete a project that was due the next morning, you laughed to yourself because you knew no ordinary programmer could complete the task at hand before the deadline. You recorded macros, you moved entire blocks of code with the flick of a finger, you filled dozens of registers, and you rewrote and refactored entire components without even glancing at your mouse. That's when you noticed the reflection in your monitor. A wide-eyed coworker looking over your shoulder. You paused briefly, to let him know that you were aware of his presence.
"How did you do that?" he asked, his voice filled with awe.
You smile, and prepare to utter the single word that changed your life. The word that, should your colleague choose to pursue it, will lead him down the same rabbit hole to a universe filled with infinite combinations of infinite possibilities to produce a form of hyper-efficiency previously attainable only in his wildest of dreams. He reminds you of yourself, standing in that darkened computer lab all those years ago, and you feel a tinge of excitement for him as you form the word.
Bret mentions Larry Tesler (starting at about 38:10), who made it his personal mission to eliminate modes from software.
This is the problem I've always had with Vim, and I suspect I'm not alone in this. I find the concept of modes jarring, even antiquated. Everyone who has used vi(m) has pasted text while in command mode and done who knows what.
Emacs is better in this regard, but I find Emacs's reliance on consecutive key presses (or a key press followed by a command) to be long-winded.
The advantage of either is they're easy to use over ssh+screen (or tmux) for resuming sessions.
That all being said, give me a functional IDE any day. IDEs understand the language syntax. Vim can do a reasonable job of this. Emacs (with elisp) seems to do a better job (or so it appears; I'm no expert) but IDEs (my personal favourite being IntelliJ) just make everything easier. Things as simple as left-clicking on a method and going to its definition.
For statically typed languages (eg Java/C#), IDEs (quite rightly) rule supreme. Static code analysis, auto-completion, etc are just so much better than text-based editors.
Dynamic languages are more of a mixed bag, and it's certainly the norm for, say, Python and Ruby programmers to use one of these editors.
Still, give me an IDE any day.
One objection seems to be that people don't like using the mouse. I tend to think the speed differences over not using the mouse are largely illusory.
Anyway, I can use vim but I've never felt comfortable in it and I don't think I ever will and modes are the primary reason.
EDIT: I realize there are plugins and workarounds for many of these things but that's kinda the point: I don't want to spend hours/days/weeks/years messing with my config to get it "just right".
Also, instead of archaic commands to, say, find matching opening/closing braces (which tend to involve an imperfect understanding of the language), IntelliJ just highlights matching braces/parentheses, where a variable is used and so on.
I think the best thing about Vim is that it will always keep surprising you. If only we could find a life partner like that, there would be no divorces.
But I guess I just don't get it. It's too obtuse. I don't feel connected to my editing while using it, I feel... connected to Vim. Which, I think, might explain why others feel so connected to Vim too. They get attached to it because it's an investment, and as we know from various psych studies, we get attached to things we invest in.
This isn't a bad thing, just wanted to throw my 2¢ out there. Vim's a cool language for text editing, but it's not the only one.
That's not to say Vim doesn't have an excellent ecosystem and tried-and-true ergonomic benefits; however, I feel that if someone spent the same amount of time learning, say, TextMate (just an example, may not be the best), they would be just as productive.
My startup Circle is written in Clojure, so I was pretty much forced to learn Emacs, and the world is such a nicer place. I really wish I had learned Emacs 5 years ago - with the time I spent mastering and then trying to customize vim, I could have mastered and learned to properly customize emacs.
This article is very incomplete; there are various completion modes. I also don't recommend mapping complete to tab (or even using SuperTab or other completion plugins). Vim has a total of 13 different completion modes (see :help ins-completion), all of which are useful. There's completion on local and global keywords, and completion of whole lines or filenames.
In particular, this article calls Vim 7's omni complete (^X^O) "syntax aware complete", which it is not. It's language-specific completion, which can complete things like members of structs or functions. You need the language-specific plugins to be installed. Vim ships with a decent plugin for C and C-like languages, which works if you have a ctags database generated (:help 'tags').
If you want syntax completion, you can map it to user-defined completion (^X^U) like this: set completefunc=syntaxcomplete#Complete
It's not particularly useful as such, because syntax complete completes keywords like "for" and "while" which we all have in muscle memory.
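For quick reference, here is a commented .vimrc fragment pulling together the modes mentioned above (a sketch; the key chords are the built-in ones from :help ins-completion, and only the last line is actual configuration):

```vim
" Built-in insert-mode completion chords (see :help ins-completion):
"   <C-N> / <C-P>   keyword completion from buffers (forward / backward)
"   <C-X><C-L>      whole-line completion
"   <C-X><C-F>      filename completion
"   <C-X><C-O>      omni completion (language-specific, needs an omnifunc)
" Wire user-defined completion (<C-X><C-U>) to syntax-keyword completion:
set completefunc=syntaxcomplete#Complete
```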
It might be due to my vim configuration but I doubt it (though I am not a vim script expert by any means).
Here is a function I found on the web a long time ago and that seems to prevent that problem:
inoremap <Tab> <C-R>=MyTabOrComplete()<CR>

function MyTabOrComplete()
  let col = col('.')-1
  if !col || getline('.')[col-1] !~ '\k'
    return "\<tab>"
  else
    return "\<C-N>"
  endif
endfunction
Credits/source: http://www.slideshare.net/andreizm/vim-for-php-programmers-p... slide 44
edit: formatting and credits
In HTML5, you can. See http://html5doctor.com/block-level-links-in-html-5/
It'll let you use <Tab> for all the completion types, and you can tell it the fallback order for various things (i.e. try omni, then <C-N>, etc.). It's even slightly context aware, so it can guess a good first completion method for you.
All with <Tab> :)
in intellij, you can ctrl+w to select word, ctrl+w again to select statement, ctrl+w again to select function, etc.
you can also refactor/replace all, it shows whether foo.gif actually exists in img src, ctrl+click jumps to the css, it shows if a variable (in code/js/css) is unused, hints if a line can be simplified, etc.
can vim do that?
if it cannot, what's the compelling reason for people like me to use it? (i'm honestly interested, but wondering if it's worth the effort)
In Vim you have to specify whether the word you're looking for is above or below the cursor (using C-P or C-N). IIRC, TextMate just looks for the closest matching word, no matter where it is.
I wrote a little plugin to do exactly that:
Disclaimer: totally rough and not (yet) very customisable
1a) Make a "brand" with your middle name. Google your first and last name. If you're like most people on earth, you're one of many with your particular combination. So how can you rank higher?
Never fight a battle you don't have to. Pick a middle name, real or imaginary. Google your new full name.
Example: My name is Kevin Barry. The Google result is completely owned by Wikipedia and other impossible to compete against sites.
My full name is Kevin William Lord Barry. I think Lord sounds cool, so I'll make Kevin Lord Barry my "official" online name. It's much easier to rank for and even helps with personal branding.
1b) Consistency! Put your new name on top of your resume for consistency.
2) Edit/Create Your Facebook. Take your new name. If your Facebook looks professional, change your Facebook name to your new name. If not, make sure your Facebook doesn't use your new full name.
3) Edit/Create Your LinkedIn. Take five minutes to create a LinkedIn account with your new name. Put all of your resume information on it neatly. LinkedIn will rank well for your new name, and you can brag as much as you want on it without looking pompous.
4) Make Yourself Look Good on Amazon. Make an account on Amazon, using your new branded name. Pick a couple of books in your industry with good ratings. Read the summaries (read the book, preferably, but I won't judge if you don't). Leave a review of the books that makes you look good: show that you know industry terms, talk about your experience, etc.
Each review you leave will go to your Google front page and make you look smarter. This only works if you know enough about your industry to sound smart, of course. You can also do this for textbooks, or fiction that you like if you want to sound interesting.
5) Make Accounts on Web 2.0 Websites. Take five minutes to make an account on sites that allow descriptive profiles with your full name: Quora, Yahoo Answers, DisQus, Meetup, or anywhere else you want. Feel free to participate in these communities to help even more, although it's not necessary.
6) Strut Your Stuff! Here's where you can have fun and really seem impressive. Go to Weebly.com and make a free website called "yourfullname.weebly.com". Set the page title to "Your Full Name Online" and the page description to "Your Full Name's Online Website". Write a paragraph about yourself on one page, and a page with links to your LinkedIn, Facebook, or anywhere else you want to show people. Go nuts and add anything else you want that might make you seem interesting. Voila!
In the top ten for my firstname lastname I have:
a famous chef, a life coach who is doing branding SEO, someone else who is a programmer, a professional photographer
various high school kids, a registered sex offender
1) Some of you wanted to know how HN compares to other sources for us, in terms of signups, conversions, etc. I'll post it in this thread at the end of the day (would you guys want a follow-up post on this "Value of HN visits"?)
2) I've gotten some great feedback on the actual product from the HN crowd, which is awesome. Feel free to leave any feedback on this thread (you can try it at http://brandyourself.com)
Do they attempt to create a top ranked page about you, and then monitor IP addresses visiting it, and match that to some db that maps company to IP address?
Keep expecting a pivot to small biz though and hope that's in the plans, seems like a much easier group to monetize.
You may have seen "not provided" in Google Analytics for searches that found your site... These are users logged in to Google. And with more people using Google services such as Gmail and G+, more and more results will be hidden. On my corporate tech web site, I'm seeing 40-50% of results from Google as "not provided" nowadays, and it's increasing every month.
> Co-Founder & CEO at BrandYourself.com
> January 2009 - January 2000
How do you start this conversation with a publication? How hard is it to simply cold email Mashable and attempt to get an article?
I'm assuming once the relationship is built it's easy to get follow-up posts, but do you have advice for making that initial connection?
I have two concerns:
1) What happens when two people with the same name sign up? Couldn't that lead to issues where they're both trying to promote their own stuff while burying the other's?
2) Is there any way someone could pose as someone they're not in order to sabotage that person's Google results? I could also see a friend doing that to pull a prank on you.
As I said, I haven't fully explored the site, so if these concerns are addressed on there, apologies. Great article!
If someone from the company is reading: I found out I already have an account (probably from the time it launched), but I can't access it. Login doesn't work and when I try to reset my password it says "e-mail not found".
As a startup founder, however, it's invaluable. I'm a converted fan, and the process even taught this rookie a bunch about link building and SEO.
Great work, keep it up, and loved the blog post.
Since so many people were curious to see the effect HN would have on our signups, I think I'm going to do a follow-up post next week.
Other than that keep up the great work!
Is this a joke?
This one is my favorite: http://macitbetter.com/BetterZip-Quick-Look-Generator/
I didn't realise Quicklook plugins were a thing...
Pro tip: Instead of restarting finder run `qlmanage -r`
PS, just like clicking "More" on HN front page after reading a few articles results in a "cutting edge" error.
But for god's sake, stop posting about how they're useless every time someone starts talking about one. First off, we get it, some people think they're painfully useless. Second off, just because you can't see any direct effect, or just because the effect wasn't exactly what you wanted it to be, doesn't make them useless. They are a good tool for rallying support behind an idea. They are a good tool for spreading awareness. They are a good tool for getting a cause a little bit of visibility. They are a good tool for collecting thoughts in a coherent manner so they may be further discussed.
TLDR: We get it. You don't get petitions. Please figure it out or stop complaining. You're not helping a goddamn thing.
"We hear you, but you're wrong and we aren't going to change a damn thing."
The site is currently undergoing maintenance. We appreciate your patience while we make some improvements.
Please check back soon.
Additional uncaught exception thrown while handling exception.
Original: MongoCursorException: couldn't send command in Mongo->__construct() (line 35 of /mnt/codebase/petition-release-2012-07-11/sites/all/modules/contrib/mongodb/mongodb.module).
Additional: MongoCursorTimeoutException: cursor timed out (timeout: 30000, time left: 0:0, status: 0) in MongoCollection->findOne() (line 22 of /mnt/codebase/petition-release-2012-07-11/sites/all/modules/contrib/mongodb/mongodb_cache/mongodb_cache.inc).
Delusional. It's time to leave America.
What we need is an abstraction layer on top of social networks. No matter what their TOS, they do not own my friends or my conversations with my friends. I have no qualms at all about having some other service handle my friendships and conversations in a way I deem appropriate.
We need to pry Facebook's greasy hands from our throats before it's too late. At one point they were cute. Then they were pleasantly time-wasting. Now they're crossing over the line firmly into evil territory.
Which is why I stopped using Facebook.
I also stopped using Twitter to tweet. I still use it to follow news sources, I just don't actively tweet. I did that after the NYPD won a court case to see all the private messages you send on Twitter.
I also don't comment much at all on blogs, and social sites like this one or Reddit anymore. (I used to be a top 10 contributor over at Reddit. At least that is what some metric said a few years ago when someone listed the ten most popular usernames. That account is deleted now.)
I am slowly pulling out. I have a deep distrust of the current surveillance state in the United States. I remember reading a story about a guy who posted a quote from fight club on his Facebook status and a few hours later in the middle of the night the NYPD was busting in his door and he spent 3 years in legal limbo over it. (Might have been NJ police anyways, red flags)
You start piecing together these things, and you start to realize that your thoughts and ruminations about life, the universe, and the mundane, can be used against you at any moment and can completely strip you of your liberty and freedom, and any happiness you may have had.
I am gonna be completely honest, I am scared to express myself any longer on the Internet in any fashion. I don't trust it any longer. I don't trust the police, I don't trust the FBI, I don't trust the federal government, and I also don't trust, nor have faith, in the justice system in the United States.
I thought the whole thing was ad hoc and confusing. Anyone who saw the comment could easily see that it was a joke. Also, if it wasn't a joke, why is FB calling her and not someone from law enforcement?
Would love it if someone from FB here on HN could comment.
Facebook's mass wiretapping and analysis of its users' private communications seems almost like the post office scanning each and every letter and postcard in the vague hope of finding some keywords related to bombs, terror, and of course "children". I wonder how long it is going to take until Google starts sending automated notifications to my local police station when I google some water bomb tutorials for the summer.
Mashable quotes Facebook as stating "where appropriate and to the extent required by law to ensure the safety of the people who use Facebook"
Can anyone speak to whether or not proactive scanning could possibly be required by law? It seems entirely unlikely, but IANAL.
Generating deliberate false-positive inducing noise in communications deemed to be private between two or more individuals who know one another should be protected as free speech. To argue otherwise would be the equivalent of prosecuting an individual for yelling "Fire" in their own home among friends and stating that such an act is a clear and present danger to the US.
IMHO automated cooperative manufactured reasonable doubt will probably be one of the last bastions of civil liberties in a surveillance society.
Facebook is essentially using the same techniques to monitor private communications as the NSA supposedly does. This means Facebook has the power to report, for example, selected messages but not others. (I'm not saying they do, of course, just that they could be selective or discriminatory that way.)
The fact is that Facebook has taken upon itself a role similar to that of the police, but without any democratic oversight.
This is different from a bar owner overhearing a conversation about a crime and calling the police, because he wasn't specifically monitoring every single word said by every bar patron. But Facebook is casting a wide net by analyzing every conversation that happens.
Questions: should Facebook be permitted to do this? Should we ask for laws preventing companies from "eavesdropping" on their users' communications with the intent of detecting and reporting criminal behavior? Should this be the role of the democratically-elected government instead? Should sites be required to turn user communication over to the government for such analysis?
It's a fascinating area of law/politics with so much room for future development, and gets down to the heart of what values a society has.
Apparently people have sent their friends messages about money, rent, etc. as a joke and, boom, it's a nightmare.
And while this has always been the case ever since letter writing, electronic communication is so much easier to parse, copy, and distribute in bulk.
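To make that concrete, the kind of bulk keyword scan being discussed in this thread takes only a few lines of code (a toy sketch; the watchlist and matching rule are invented for illustration, not Facebook's actual criteria):

```python
# Toy illustration of bulk message scanning: flag any message containing
# a watch-listed phrase. Real systems layer on context and relationship
# signals, but the core parse-and-match step really is this cheap.
WATCHLIST = {"shipment", "meet up", "cash only"}  # hypothetical keywords

def flag_messages(messages):
    """Return the subset of messages containing a watch-listed phrase."""
    flagged = []
    for msg in messages:
        text = msg.lower()
        if any(keyword in text for keyword in WATCHLIST):
            flagged.append(msg)
    return flagged

inbox = [
    "See you at the movies tonight",
    "The shipment arrives Tuesday",   # movie quote, or not?
]
print(flag_messages(inbox))  # ['The shipment arrives Tuesday']
```

Note the false-positive problem the thread raises: the scanner has no way to tell a Lethal Weapon 2 quote from a real "shipment".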
As I understand it, FB is currently only required to respond to appropriately specific subpoenas and warrants. If the cops want more, they should petition for laws to require that and we can all argue about it like responsible citizens. And we could equally demand more protection.
But this thing where sometimes FB voluntarily sends law enforcement bits of information and sometimes they don't based on poorly defined criteria is just creepy. And why does FB even want this responsibility? Isn't the simplest, most obvious model to say no by default?
You know, it would not surprise me one bit if Facebook had staff monitoring this, modding down every post that holds them in true^H^H^H^HBAD light.
This is not surprising in any way.
If you don't like this then don't do Facebook - really that easy, I have found.
Now the fun part is another friend of ours (call him Jeb) was in the habit of opening phone calls with movie quotes, so he calls up Mickey and leads in with a Lethal Weapon 2 line about "shipments", completely unknowing that the DEA was potentially tapping the call.
Because of the way the warrant was written, Mickey was able to wave off the tap on Jeb's call since it only covered calls from Ken. But it could just as easily have led to all sorts of other problems, since between friends the level of discourse can go far afield of what a non-initiated 3rd party might consider normal.
Question is, do they warn you that your private conversation is not private, do they comply with the data protection acts of the various countries, and more importantly, who monitors FB? So many things can be taken out of context and acted upon in good faith to the detriment of innocent parties; this is concerning. But I don't do FB, nor do I have any immediate plans to. That has nothing to do with this incident, though, and more to do with concerns in general about their privacy practices and the policies they act on.
If they are going to leverage the Kickstarter model, then they should at least learn from other KS projects: show me the problem, build a prototype, show the prototype solving the problem, play cool music, and for crying out loud look at me (the camera) when talking.
There's probably a good reason that Kickstarter doesn't accept nebulous web businesses - scaling manufacturing costs real money, but building a software prototype is cheap. If someone had made a video dryly explaining GitHub or DuckDuckGo as a concept prior to developing them, I wouldn't have signed up. But I find them invaluable today.
I may grow in to needing the $yyyy/month plan, but most people don't start off at that end, and the majority of people who can afford something that expensive (because they're an operating concern with a lot of money) probably already have a solution in place (hence their ability to make money).
So basically... it's already built.
> To manifest this grand vision, we are officially launching a Kickstarter-esque campaign. We will only accept money for this financially sustainable, ad-free service if we hit what I believe is critical mass. I am defining minimum critical mass as $500,000, which is roughly equivalent to ~10,000 backers.
Oh that's interesting. The grand vision will not manifest until $500k is raised. But isn't the product already basically finished?
So what happens if they don't raise $500k? Do they kill the product and flush 8 months of hard work (and presumably a bunch of money) down the drain? Doubt it. That doesn't make any sense.
Something is fishy here.
But that seems to be the very problem with it. In trying not to bury his lead, I believe Dalton Caldwell has let it take over the pitch. 95% "why" and 5% "what". 10 minutes in, I was still scratching my head wondering what the service will look like. And the name App.net certainly doesn't do the cause any favours. The first decent explanation finds itself relegated to question 2 of the FAQs.
This is an audacious attempt, and I laud that. But the pitch, in my opinion, needs an overhaul. Diaspora was also about the "why", but they addressed the "what" really early in their Kickstarter pitch.
Kudos for taking on the big dogs, though.
Why muddy the waters like that? The takeaway is that the incentives a company adopts are consequential and will define the culture and the product.
I love the spirit of this, but as described, I wouldn't use the service if it were free. Twitter isn't even in the same ballpark as SourceForge. shrug
With pay-for services, there's a much smaller number of people who are going to (or be able to) pay, and when you start looking at those numbers, it's never the fabled 'hockey stick to heaven' that people dream of.
What if Twitter just sold access to their stream, and a few million orgs (companies, individuals) were paying, oh, say, $20/month? Let's say 2 million of them: why, that's only $480 million per year maximum! Gosh, who would ever want to invest in that? Instead, by going with ads, there's always the promise of some big change that could explode the revenue down the line.
Grandiose vision, but going to war with Twitter today is as foolish as trying to take on Facebook.
You're not going to attract a mainstream audience and gain serious critical mass just because you're an open platform and developer-friendly. In the case of SourceForge/GitHub it was different, because the demographic is entirely developers. Not so with social networks or media companies of any kind: the audience is mainstream, meaning college students, celebrities, high school girls, etc.
"Every battle is won before it is ever fought" (Sun Tzu) -- learn from Diaspora's failure, don't repeat it.
Evan Prodromou has been trying to crack that nut for many years now...
What is a real time feed and service or a social platform in general? Is this basically Facebook sans the UI for a monthly fee?
If GitHub is being used as the example in this line of argument, then why not follow their lead and build something people need and raise money later? Wouldn't you then be even more likely to convince hackers, the primary community you're trying to serve?
Some things fall somewhere in between. Your grandfather's watch is a marvel of mechanical engineering that will keep time adequately for centuries if taken to a watch shop for maintenance every few years. That maintenance will cost more than buying a new Timex would, and the Timex will keep far more accurate time. Timexes are disposable: when the battery dies, it will probably be cheaper to replace the watch than the battery. However, their function is superior for all practical purposes. Many people currently enjoy the aesthetics of mechanical watches. They're enjoying a major surge of collectibility right now, but that wasn't the case twenty years ago and it may well not be the case twenty years from now.
The real trick is to figure out what can last and what should be treated as disposable. e.g. Say you're building a home theater. Amps have changed very little over the last 20 years. Preamps and receivers go obsolete every few years. If you spend $5000 on a Bryston amp it will easily last 20 years (that's how long it'll be under warranty!), but a Bryston preamp will be hopelessly unusable long before that 20 year warranty runs out.
There are also big variations in short-term durability that aren't necessarily correlated with price. e.g. MacBook Airs and the new Retina Pros are gorgeous pieces of engineering, but they're made to be disposable. If you spill coffee on one, it's basically done. You can't remove the battery and take it apart to clean it (proprietary screws) like you can with a much cheaper laptop, and Apple won't lift a finger to help you. This is a case where our perceptions of what is durable and high-quality can actually lead us to buy something that won't last as long!
A $300 meal isn't 10x better - in terms of taste (subjective obviously) or nutrition - as a $30 meal. A BMW 3 Series is a better car than a Ford Focus, but looking only at the practical reasons to own a car, it's not even close to having twice the value.
In a lot of cases, brand names bring no value other than the brand. And while that can have a placebo effect, it's not more value. Medicine comes to mind as a great example. You are paying more for marketing and packaging than you are for R&D. The recent thread on memristors even highlighted that R&D costs are 1/100th of total cost, with marketing taking the lion's share.
Quality items appreciate while cheap items depreciate? No consumer good should be viewed as a financial investment. If something does appreciate in 50 years, it'll be more about luck than anything else.
Using this, I've managed to buy better shoes, toasters, and even a car than I've ever had before, and at less than I would have spent if I hadn't done that research. The car ended up being the cheapest car made by that manufacturer, and I like it better than any other car I've ridden in or driven. (Admittedly, I haven't tried anything over about $50k, but I couldn't afford those anyhow.) I would never have found it if I used price as an indicator of quality, instead of reviews.
The key to reviews is not to pore over the good ones. Look at the bad reviews instead. Find out what people hate about it. Then ask yourself: does that feature matter to me?
My rice maker doesn't make brown rice well at all. Many people complained about that. However, white rice is the only kind I make, and it does a great job on that. So it didn't make sense to spend twice as much money on a better rice maker. I could have spent more money and gotten an objectively better rice maker. But why bother?
So no, I don't agree with paying too much or buying the 'best'. At all.
Many of the intangible pieces that make up the quality of a product or service go out the door when we're getting a deal. They're doing a favor and sometimes when we bargain, people resent us.
As for the relation between cost and quality, I agree with many here that it isn't clear-cut. But the relation does exist. You just need to be discerning, which is why the OP writes:
Because we only buy quality, we are forced to wait until we can afford what we really want. That wait time leads to better decisions, and it forces us to make do with what we have.
Our audience here is mostly programmers. Programmers should all know the golden rule of optimization: profile it first. Don't take it on intuition or faith that "this is what was slowing me down" unless you've measured it. The same principle applies in personal finance. Group together your expenses into some key logical categories -- "groceries", "tech", "rent", "retirement", "health", "nights out", and whatever else makes sense to you. Then add them up, do the maths. Know what you're spending and how it compares, and how the categories make up your monthly budget.
Frugality is generally quite good, but just like code, it's quite possible to overoptimize it into a time-sink.
Also, it's key that you see both the high-cost single-time expenses and the low-cost-but-everyday expenses and you get an idea for how they compare. Depending on how often you visit Starbucks, it might be a very good or very insignificant change to brew your own coffee at home; go on and measure it.
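To make the "profile it first" analogy concrete, here is a minimal sketch in Python of totalling a month of spending by category; the categories and amounts are made up purely for illustration:

```python
from collections import defaultdict

# Hypothetical expense log: (category, amount) records for one month.
expenses = [
    ("groceries", 62.40), ("coffee", 4.50), ("rent", 900.00),
    ("coffee", 4.50), ("groceries", 48.10), ("nights out", 55.00),
    ("coffee", 4.50), ("tech", 129.99),
]

# Sum each category, just like a profiler aggregates time per function.
totals = defaultdict(float)
for category, amount in expenses:
    totals[category] += amount

monthly_total = sum(totals.values())

# Print categories sorted by spend, with each one's share of the month.
for category, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    share = 100 * amount / monthly_total
    print(f"{category:<12} {amount:8.2f}  ({share:.1f}%)")
```

The point of the exercise is the same as with code: the biggest line item is rarely the one intuition points at, so measure before you optimize.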
The single most surprising thing that I discovered about this process was that, even though I feel like I give money to beggars in the street "often" (i.e. whenever I see them and they ask) and give "a lot" (i.e. much more than is normally typical), the amount that I give actually works out to almost nothing on a monthly basis. Like, I'm poor and right now I have no proper employment (they don't pay you to do a Master's degree) and yet I can still afford to be generous to the people who are down on their luck, and it's just nothing when you compare it to the amount I'd save if I stopped drinking cola. I've got no income, but I'd still easily spend €50 on food to throw a party for my friends, so long as it happens only about once a month, twice max.
1) That paying more equates to higher quality. It doesn't. Your £30 t-shirt will fall apart every bit as quickly as your £3 multi-pack from Primark.
2) That you want to have something last a lifetime. Just last weekend, my dad was complaining about having no good jig-saws. Then followed up with "of course, I could buy a good one, but I only use them once every 5 years, so they're knackered when I need them anyway".
3) When you want quality, you can't just borrow it off someone else.
I'm not against paying more for something e.g. my uncle has expensive joinery kit because his day job is a joiner. I simply dislike the mentality of "I must pay top-dollar for everything", before you've evaluated what you actually want.
Higher price != Better Quality (not always anyway)
First, I obviously cannot always afford the high price. When I bought the car I had checked the failure reports and decided it just wasn't worth paying an additional few months salary to get the failure probability down 2-3 percent a year (checked other parameters too, similar conclusions.) And yeah, I waited till a good deal was available to me.
Second, getting a quality item might cost even more of my personal time - finding out which one is actually good and searching for a local vendor isn't exactly free.
Third, maybe I don't even want an expensive item that lasts forever. Suppose I want to get into photography but don't really know if I'll keep this hobby. I'd intentionally google "cheap beginner dslr" and stick with the findings. Chances are that by the time it needs replacement I'll be ready for a more fancy gear or bored with the new hobby. Win-win :)
I'm sure other watches were purchased by my other great-grandfathers; to the best of my knowledge, none of those watches survived to the present day. It might be because they only bought cheap watches. But it might also be that they bought expensive watches and those watches broke or got lost. Looking solely at what survived to today tells you very little about what was purchased yesterday.
And I'm sure my great-grandfather didn't buy the most expensive watch possible. If it had come down to the choice of buying a watch which cost $10 more, or saving that money to give me $200 today, I'd take the money!
The reality is you can't rely on any single one of these to be true. As soon as you're willing to buy something, and especially if you're willing to pay 10, 50 or 100% more than the absolute minimum, you're prime meat for the other half of the world, who make it their business to ensure it's not an equitable exchange.
Another point I'd like to make is that cheap stuff can be totally awesome, especially if you don't need or want it to last a lifetime. Example: IKEA. My girlfriend changed her mind about the furniture she wanted in her apartment several times over the few years she was there, and was able to update the look to be exactly what she wanted for very little money. Compare that to the armoire I paid an ass load of money for that will surely last a lifetime, but doesn't really fit anywhere in my current home.
Lifetime guarantees are pretty much worthless because most people will forget they have one, lose the item or lose the documents related to it.
Top pay doesn't attract the best people. Top work attracts the best people because the best people have most likely realised that money doesn't really matter if you're working on something that doesn't provide meaning for you.
The sweet spot for quality is to figure out the approximate average price of the product you want to buy and then pay a little above that.
I think price is a poor estimator for quality since marketers know that price alone can signal quality, so it stands we need something beyond price.
How do we determine if something is quality when price is unreliable?
I use a lot of earphones, and they tend to break quite fast. Once or twice, I decided to move away from the usual $15 models and pay $60 for better brands, with good reviews and everything. Did they last longer? Of course not. They sounded marginally better, but nothing worth the price.
The thing is, price is an indicator of positioning. It's a marketing tool, and it's generally not correlated to quality. Buying the very cheapest is usually not a good idea, because the manufacturer probably did everything it could to actually beat competitors and propose the lowest price. But other than that, that's pretty much all you can know in advance.
Knowing what items are made of good quality and will last longer (and therefore are worth paying an extra for) is very hard. So, my reasoning on that is that I prefer to spend less money on something, since on average, it will live just as long.
I've found it's much better to avoid middle-priced things, most of the time. Usually, the cheapest option will meet your needs just as well.
But then you need to identify the things in your life that really contribute to your happiness, and then pay what's necessary for full quality in those.
I spend top dollar on fresh meat and vegetables and on my kitchen pans and chef's knives and olive oils for salads. But I have the cheapest vegetable peeler and spatula and paring knives and toaster and olive oil for frying.
For example, I'm buying a set of screwdrivers for a DIY job. I know I'll need screwdrivers for the rest of my life, so I want to buy quality ones. I go to some big DIY store expecting a good range of screwdrivers. What I get is a selection of mickey-mouse screwdrivers (£5 for a set of 20) and some middle-of-the-road-but-overpriced screwdrivers (£20 for a set of 20). There is no "buy these and never buy another screwdriver again" set for £60-100.
You can get these, but it takes considerable effort to track them down.
e.g. My dad has 2 Bosch drills ( http://i.imgur.com/aCkaq.jpg ). Roughly 20 years old. When you hold them in your hand you just know that it'll last another 40 years. As a result I have a really difficult time taking any of the modern drills seriously.
For example, I could go out and buy a $200 pair of sunglasses but depending on where I buy them they can either be poor quality (e.g. fashion brands, sunglass hut, etc) or exceptional (e.g. fishing shops, sailing suppliers, workman-glasses, etc).
You would expect reviews to give you an accurate way to tell, but people often review a product by the way it makes them FEEL rather than about the product itself.
I think the example given, for a golfer, was a crafted/precious golf tee over a trashy electronic putting game. (Ignoring, of course, the fact that the tee might easily get lost.)
I usually eat at the same few restaurants all the time. They're maybe 10% more expensive, usually locally owned, and the food doesn't come out of a frozen pre-made bag before being tossed in the oven. I never tip less than 20%, and I'm not an asshole... at restaurants.
I don't understand this thought process. Why not pay 100% tip then? Is the 20% number somehow magical and affords the ability to be elitist? So if I tip 15%, which is the average suggested tip in Western Canada, I'm somehow an asshole???
I've had such a good relationship at some of my regular restaurants that I once had UPS deliver a package to the bar for me when I was out of town. At other places, the staff starts comping your drinks. My best one was the manager at a restaurant giving me a blanket 20% discount on anything I buy (I tip at least 25% after they put in that discount).
In the long run, you get better service if you pay for it.
Also, the point missing is that you need to take care of your expensive gears. You need to put money into maintenance.
Other than that, it amazes me how people won't spend for quality in situations where it clearly is in their best interest for the long-term...
Value = Benefits / Cost (http://en.wikipedia.org/wiki/Value_%28marketing%29)
Marketing comes into play; if it didn't, items for sale would use cost-based pricing.
A Mac and a PC can both accomplish the same task, but one is more expensive than the other. Being cheaper doesn't necessarily mean it won't last as long.
Tipping is one area that I don't understand about cultures that have them. Generally, I would expect the tip to be included in the price of the food. Why waste my time and have me figure out what amount to tip? Personally, I don't see the perceived value of tipping.
In some way, how much you pay is also determined by how you see yourself, or in some cases want others to see you.
For example, I wouldn't buy a Ferrari or even Porsche 911 (both high quality cars, no doubt, but expensive) when a Nissan GTR R35 performs just as well and costs less.
All three will last a lifetime with a bit of care (I hate how people just use their tech without any maintenance, then wonder why "the POS" broke down!), so why pay more (aside from the design)?
I loved it but had to leave it when I moved -- just too heavy.
(And anyway, the Mac support wasn't good a couple of O/S versions later, but it still worked well with Linux.)
My takeaway from this has more to do with negotiating moving packages (and Samsung support of old customers, grumble) than anything else.
To quote from the article:"People who constantly try to always get that great deal end up spending all their time chasing those deals and never actually get things done. I've seen people do this their entire lives, and it is debilitating."
You can say the same about people who spend all their time trying to constantly get "the best" product. Figure out a few things that are important to you and maximise these - whether on quality or price. For the rest, learn to accept "good enough" and get on with your life doing things that matter for you.
A $5 tip on a $2,000 bill is 0.25%, while a $0.50 tip on a $20 meal is 2.5%.
"Poor people can't afford inexpensive stuff."
This took me embarrassingly long to realise. Instead of buying clothes because they were pretty nice and on sale, buy only what you absolutely love, and pay full price. Instead of having "favourite underwear", get rid of everything that's not your favourite and make sure you only own favourites.
Yeah, it costs more at first. But over time you build up a wardrobe of high quality clothes you love. Quality over quantity, indeed.
The challenge is automation and commodification pushes us towards the lowest common denominator. Look at airlines, being forced into price, not quality.
If someone has the exact quote, I'll be thankful for it.
It's simple and good advice.
I'll have to read it when I get home.
The "more efficient and longer lasting" case materials he brags about are irrelevant in their disposable products made of low quality components that are not user serviceable or upgradable.
It's like bragging that you made your automobile frame out of solid titanium with a carbon fiber shell, while ignoring the fact that you built the engine without any way to change the oil, so you're going to be throwing it out or sending it in for major costly service after a short time. Such planned-obsolescence designs are certainly not environmentally sound, and claims about the longevity and strength of the frame materials, and even about certifications, are just PR to distract and hypnotize the marketplace into believing the opposite of the reality of the situation.
The pattern is typical, more or less. Some criticism appears. Apple is dead silent for a few days. Then Apple has a comprehensive response to the criticism. (An alternative that also happens frequently: Apple never mentions the criticism at all.) That's what used to happen in the past, and that's what nearly happened here.
The difference is that they responded with a different message pretty quickly after the criticism (arguing that EPEAT isn't such a great certification), so it's not true that they stayed completely silent.
When it comes to the message itself, I don't think it's that atypical. Apple rarely responds to criticism, so there are few situations we can use to compare. It's not as big a deal as Antennagate, so they picked a less involved way to respond (basically a press release instead of a press conference), but in every other respect it's pretty similar.
This time there is a clearer mea culpa, but the undertone is still that EPEAT is a bad certification. (During the Antennagate press conference the undertone was that it's not really that big of a deal, and it was a much more obvious undertone.) The tradition of Apple execs writing letters also continues.
I would only say that Steve's letters tended to be more about presenting arguments. That has certainly something to do with the different purposes (explain why DRM/Flash are bad vs. admit that you were wrong and reverse direction) but I still would have preferred if Bob Mansfield had explained more of Apple's reasoning.
How is this possibly true? Nothing Apple makes can be easily opened, upgraded, or have its battery replaced. Their products are made to be obsolete in 2-3 years.
When my iPad screen cracked, they made me buy a new one (at discount) instead of fixing it. I doubt they even fixed the old one I gave to them - just discarded it. How is that environmentally friendly?
I own almost everything Apple makes. But they need to better explain how built-in obsolescence and impossible-to-fix devices equates to environmentally friendly.
* Apple makes thin, light, durable products. Reduce > Recycle.
* Raw materials are a small amount of the embodied energy in electronics. The microchips themselves constitute many times the embodied energy. Again, reduce > recycle.
* As others have pointed out, Apple didn't do this because any of their newly-released products weren't eligible.
Putting it all together, Apple did this to send a message to EPEAT: "Disassembly isn't the end-all be-all of green." Looks like EPEAT caved.
I wonder how they are going to spin this one.
Reads to me like EPEAT is moving (a bit) in Apple's direction with the new future IEEE standards.
Almost all power cords and extension cords in the US contain lead: the lead is added to the plastic part of the cord when the plastic is still "molten" and its purpose is to make the plastic less flammable.
Although I did not do a chemical assay on it or anything, I am pretty sure that the power cord Apple included with my 2011 Mac mini contains no lead. (The cord has a different, more rubbery feel to it that strongly suggests a completely different material, and I might have seen a claim to that effect somewhere on the web.)
But if it's all true, why did they pull the products from EPEAT at all?
A middle finger, the Apple way.