A spec is a long, tedious, human-readable document that explains the behavior of a system in unambiguous terms. Specs are important because they allow us to reason about a language like Markdown without reference to any particular implementation, and they allow people to write implementations (Markdown processors) independently that behave identically. The Markdown Syntax Documentation is not a spec (it's highly ambiguous), nor is any implementation (not human-readable; some behaviors are probably accidental or incidental and difficult to port perfectly). The hard part of writing a spec is codifying the details in English, and secondarily making decisions about what should happen in otherwise ambiguous or undefined cases.
I would love pointers to Markdown processors that are implemented in a more principled way than the original code, for example using standard-looking lexing and parsing passes, but that still handle nested blockquotes and bullet lists together with hard-wrapped paragraphs.
Nobody should be using the original script, and unfortunately many of the other implementations out there are direct transliterations that replicate all of its absurd errors, like the one where, if you mention the MD5 hash of another token in the document, the hash will be replaced with that token, because the script uses MD5 hashes as an inline escaping mechanism! Reddit got hit with an XSS virus that got through their filters because of it: http://blog.reddit.com/2009/09/we-had-some-bugs-and-it-hurt-...
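To make the escaping bug concrete, here is a toy Python reconstruction of that style of hash-based escaping (illustrative only -- the names and structure are mine, not Markdown.pl's): a processor that stashes literal spans under their MD5 digests and swaps them back at the end will also "restore" any digest the user happens to type.

    import hashlib

    def protect(text, spans):
        # Replace each protected span with its MD5 digest so later passes
        # leave it alone. (Toy reconstruction of the escaping style described
        # above, not the actual Markdown.pl code.)
        table = {}
        for span in spans:
            digest = hashlib.md5(span.encode()).hexdigest()
            table[digest] = span
            text = text.replace(span, digest)
        return text, table

    def restore(text, table):
        # Swap digests back. The flaw: this replaces *any* occurrence of a
        # digest, including one the user typed deliberately, re-injecting the
        # raw span past whatever filtering happened in between.
        for digest, span in table.items():
            text = text.replace(digest, span)
        return text

    # An attacker who knows the digest of a protected span can smuggle it in:
    _, table = protect("inline <script>alert(1)</script> here",
                       ["<script>alert(1)</script>"])
    evil = hashlib.md5(b"<script>alert(1)</script>").hexdigest()
    print(restore("attacker innocently typed: " + evil, table))  # raw <script> again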
See the changelog for what started as a PHP transliteration and turned into a rewrite that squashed 125 (!) unacknowledged bugs: http://michelf.com/projects/php-markdown/
The worst part is that he outright refuses to either disclaim or fix his implementation, and so far he's repudiated everyone else's attempts to do so. He's a terrible programmer and a worse maintainer, and he still thinks the documentation on his site is comprehensive and canonical. As much as Jeff Atwood leaps at every chance to play the fool, there's no way his directorship can be anything but an improvement.
In other words, let the user type:
It will save a lot of trouble -- especially when linking to a Wikipedia page whose URL contains parentheses.
Could I soft-wrap in my editor? Sure, but that would mean that the text files sitting on my hard drive now have very long lines in them, making them harder to grep and harder to diff in git (change a single character, and the entire line is now a diff :-().
I hope that doesn't become the default.
Why get all angry at John Gruber? As many have already noted, he created Markdown for himself and released it so that others could use it. AFAIK he didn't put any license/restrictions on it outside of calling himself BDFL. Whatever his skills as a programmer or writer, or his role as Mouthpiece of Apple, the vitriol is unnecessary (but absolutely fascinating to watch). My panties bunch up naturally, no need to allow my feelings regarding Gruber to bunch them further.
Why get his approval? In the same spirit that Gruber created something for himself, you should just create something for yourself. I find it hard to believe that Gruber was the first person that conceived the idea of user-friendly text-markup. The new standard could just be inspired by Markdown and that would be a win-win: a respectful nod towards Gruber as well as the ability to move towards something 'better'.
If you have not taken pandoc for a spin I highly recommend you do so soon. In addition to being a great Markdown dialect, the pandoc tool set is the Swiss Army knife of text formatting. It is amazing how many formats pandoc can read and/or write.
EDIT: I spoke too soon; Fiddlosopher continues to impress. I just checked the open issues, and a little less than a month ago he added "limited org-table support." Based on the rest of pandoc, "limited" probably means something like 85% to 95% :)
I ended up writing my own in Objective-C. It's not very pretty, and it doesn't use a formal grammar (just a lexer + custom grammar code), but it does the trick. I took a few liberties with the spec: throwing in GitHub-flavored code blocks.
And then, for the LaTeX that you can't shim in, just have some escape hatch that sends fragments out to a renderer. If I could only have:
* Math mode
* Citations and Bib files
* Labels and References
EDIT: Having just investigated Pandoc, which many here are talking about, I realize this might be exactly what I've been looking for :)
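For what it's worth, pandoc covers that exact wishlist: its Markdown reader accepts $...$ math and [@key] citations resolved against a .bib file. A minimal sketch of driving it from Python, assuming a pandoc binary on PATH and a hypothetical refs.bib (the --citeproc flag is pandoc 2.11+; older versions used --filter pandoc-citeproc):

    import subprocess

    MD = r"Entropy is $H = -\sum_i p_i \log_2 p_i$, as shown in [@shannon1948]."

    def markdown_to_latex(text: str) -> str:
        # Convert pandoc-flavored Markdown (math, citations) to LaTeX.
        cmd = [
            "pandoc", "--from", "markdown", "--to", "latex",
            "--citeproc", "--bibliography", "refs.bib",  # refs.bib: hypothetical
        ]
        result = subprocess.run(cmd, input=MD, capture_output=True,
                                text=True, check=True)
        return result.stdout

    print(markdown_to_latex(MD))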
"I'm reminded of the guy who decides that there should be one standard because there are n divergent implementations. So he goes and writes his own. Now there are n+1 divergent implementations."
The idea of Markdown is great, but I found the implementation of links less than obvious. (I haven't tried it in 4 years, so there were probably other issues that I've forgotten.)
The problem I inherently always end up having with "parses to HTML" syntax conventions is there are always warts where the syntax is harder to remember than the HTML it is supposed to parse to.
The current behavior of Markdown solves this problem very well. I don't want the newlines I enter for non-wrapping editors to remain in the generated HTML.
I love it because the world needs an easy-for-humans way to format in pure ASCII without any tool. It is much simpler than using even the most well designed GUI. You can even write books with it, and you can focus on content.
But I hate Markdown. I hate it because it is superficially good: a lot of Markdown seems to make sense at first glance, but if you look at it more closely you see that a lot is broken in its design (IMHO the fact that the reference implementation is broken is the least of the issues).
It is surely possible to fix it. However, it's better to have a broken Markdown now than no Markdown at all. The fact that GitHub and Stack Overflow and Reddit are using it makes it absolutely obvious how useful and great the concept is. The actual design, implementation, and specification can be fixed now. So kudos to the original inventor, but it needs a second pass from people who can give it a more coherent shape, with clear behavior, minimal surprise, and parsing in mind.
Edit: I've wondered whether the original Markdown didn't have underline support because <u> was deprecated/removed from HTML. FWIW, <u> is now back in HTML5.
If this gains some traction I'm sure I'll be adding support for it at some point.
Pandoc: a wonderful almost-everything-to-everything text converter http://johnmacfarlane.net/pandoc/
IMHO, pandoc's Markdown support is the mother of all implementations, featuring lots of goodies (tables and footnotes, to name two).
I don't think such a thing is feasible. I also don't think it's feasible for any proposed standard to simply look at the largest users and say "okay, we'll accept the idiosyncratic extensions of all of these differing flavors in an unambiguous way."
So assuming this pushes forward, there are (to my mind) two possible outcomes:
1) A backwards-incompatible standard emerges. No existing project adopts it, but new projects do. It gains legitimacy only once Github, Reddit, et al fade into obscurity.
2) A backwards-compatible standard emerges. Every large existing project adopts it, but the standard is so full of cruft and TIMTOWTDI that in ten years it gets usurped entirely by a challenger that emphasizes simplicity.
Mou + the (built in) Github theme = best Markdown editing experience.
If only a couple sites band together, then I see it more like this:
But I have learned to love Markdown too. I hope that in the future -- the distant future -- someone will create a language that integrates HTML and CSS into a nice Markdown-like language.
> The problem with writing my own Markdown parser in Clojure is that Markdown is not a well-specified language. There is no "official" grammar, just an informal "Here's how it works" description and a really ugly reference implementation in Perl. http://briancarper.net/blog/415/
I absolutely love the simplicity of Markdown, especially with GitHub's addition of code fences/blocks. It's so trivial now to add code and have it automatically highlighted. It's not nearly that simple in other formats (to get auto-highlighting, I guess).
Excited to see what will come of this.
Dodgy HTML, content pasted in from Word (with crazy styling intact), and a general encouragement for users to see text content in terms of styling rather than structure are all things that it will be delightful to see the end of.
There are many questions -- "What is Markdown?", for starters -- that feel unaddressed by the mark. Instead, we get the brute force approach: splitting up the word into smaller word parts, which is what you do with a word if you don't know what it means, or you have to gesture it in Charades.
Rather uninspiring for an idea so beautiful that Jeff and others can get so excited just thinking about it, but what else can you expect from such a mark whose approach is so stubbornly literal? I take that back -- only one word part actually gets to be represented literally... the other only managed to become a letter, in a moment I can only imagine involved the creator muttering "good enough". He must have found this mark uninspiring as well, given that he sought to put a box around it.
At least consider that the down arrow on its own is an overloaded concept, particularly on the web. Without context -- and a mark should not need context -- M↓ could read like a hotkey or command of some kind. This kind of ambiguity is utterly unnecessary -- you're making a mark; it can be whatever you want it to be. Push!
I also see no reason for *text* and _text_ to produce the same output. It just seems like a fault in the original spec to me.
rst just looks more powerful and yet still as readable as markdown.
Aside from that (and implementation bugs) I've been very happy with markdown.
I'm very happy that GitHub has an Org Mode renderer, even if rudimentary - I don't have to rewrite my notes and READMEs to Markdown.
The one change for good I can think of would be removing the ability to embed HTML.
1. hello something
2. foobar

1. hello something
1. foobar
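The point of the two examples above: both produce the same ordered list, because Markdown looks only for a number followed by a period, not the number's value. A quick check with the third-party python-markdown package (pip install markdown), assuming that implementation follows the reference behavior here:

    import markdown  # third-party: pip install markdown

    a = "1. hello something\n2. foobar"
    b = "1. hello something\n1. foobar"

    # Both render to the same <ol><li>...</li></ol>; the numerals are ignored.
    assert markdown.markdown(a) == markdown.markdown(b)
    print(markdown.markdown(b))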
This is an [example link]<http://www.example.com/>
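A minimal sketch of how a converter might handle that proposed angle-bracket form (the syntax is hypothetical, not part of any shipping Markdown): because '<' and '>' cannot appear unescaped inside a URL, the delimiter never collides with parentheses in Wikipedia-style links.

    import re

    LINK = re.compile(r"\[([^\]]+)\]<([^>]+)>")

    def render_links(text: str) -> str:
        # [label]<url> -> <a href="url">label</a>; parens in the URL are fine.
        return LINK.sub(r'<a href="\2">\1</a>', text)

    print(render_links(
        "See [Markdown]<http://en.wikipedia.org/wiki/Markdown_(disambiguation)>"))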
You can play with it here: http://www.markdowncms.com
If there was a standardized Markdown, we would implement that for sure.
 http://www.aaronsw.com/weblog/001189 http://en.wikipedia.org/wiki/Markdown
I'd certainly be interested in switching over to their version, provided some of the noted kinks get worked out.
The Surface one up there is just some guy's blog review. It's not poorly written, but why are we reading randomly selected Surface reviews? There's an entire post right now that is basically a Samsung press release via CNET, describing some (totally unquantified, of course) minor uptick in sales for the latest Android phone. There is literally nothing to talk about there except to proffer essentially baseless flames, praise, or speculation.
I would have no qualms asking the moderators to fix this. I can't understand any metric by which these are useful posts to have on the front page. There is lots of much better stuff sitting on the New page which is being crowded out by noise that I could go read in two hundred other places. "Intellectual curiosity" is not referring to what you have every time a phone comes out which is 20% lighter and 10% longer.
Notionally, this is a forum for creators, but it seems increasingly preoccupied with utterly unproductive posturing over whose tastes are 'better'. It's a troubling trend.
Besides, you're supposed to up-vote comments you don't necessarily agree with so long as they are well argued. That is what a good debate is about.
I think that the reason is the same: when you spend money on one, you buy into a community and an ecosystem. You become a part of a tribe and naturally begin to see the world in an us vs. them paradigm.
It's worth noting that this is an irrational behaviour set, and best avoided if you want to learn anything objective. In typically-emotive arguments like these, you have to make the decision yourself and realise that, whatever you choose, you'll likely justify it to yourself afterwards however you can. Once you start to realise that, you begin to realise how inconsequential "what type of tablet or console you own" is, and the less likely you'll be to fall into that destructive us-vs-them mindset.
* yeah, I know you really like [company] and really don't like [competitor], but please don't say mean things about those who disagree with you, and especially don't say mean things about the staff at those companies without very good reason
* it's election season in the US, which means more than the usual number of offhand derogatory comments about the other side's politicians and voters. Please refrain from this.
* I've seen a few shots taken at other peoples' religions. Principled disagreement is OK, but try to resist name-calling.
* There have even been a couple of recent arguments about nationality that have involved some unnecessary name-calling.
* As a final heads up, remember that even deleted posts may be cached by various external services that grabbed them via the API. It's good to think better of something after the fact and take it down as soon as you can, but it's even better to avoid posting nastiness in the first place.
As a community, let's do a better job of controlling our own posts first and foremost, and let's do a better job of downvoting and flagging when others cross the line.
Hacker News is usually a pretty nice place to hang out, but that comment thread reminded me of the Two Minutes Hate from 1984.
These threads remind me of reading newspaper articles that discuss how uncivil our current political discourse is compared to the far more civil past. And you can read essentially the same article from a 1880's/1950's/2012 newspaper archive.
I regularly see long and technically strong articles sink with less than ten votes and zero discussion, while those lambasting Apple yet again get dozens of votes and comments. Add hair-splitting with strong passive-aggressive undertones, and what's left is vacuous and mildly toxic.
The conflict doesn't arise when switching cost is low or the differences are too minor (e.g. Sony TV vs. Panasonic TV, Verizon vs. AT&T, Unitarianism vs. Baha'i fail to generate rancor on both counts).
The conflict would appear to arise from people struggling with cognitive dissonance. In other words, if an iOS or Android user were supremely confident of the superiority and perfection of their chosen platform, there would be no dissonance and no outward invective.
Just as Freud (correctly, for once) observed that the most passionately homophobic individuals were often in denial of their own urges, the most fervent boosters of a platform are probably plagued with doubts about it.
Those are pretty strong words. All I've seen is a few geeks trading opinions about -- ultimately petty -- consumer electronics issues.
It's all just opinion. Nobody's said "Person X is an ignorant waste of consciousness and they should kill themselves" (which would be uncivil, inappropriate, bilious, and divisive). They just have opinions about products. Products that, in the grand scheme of human achievement, really aren't that important.
You're just causing even more drama with this self-righteous post. It's all, like, your opinion, man; take it easy, let the geeks bicker (relatively politely) about fruit versus miniature eiderdown, and save the outrage for things that are truly worth it.
Edit: I'm not going to upvote this parent meta-post, and neither should you, dear reader, for it itself is the one causing drama, not the majority of posts on HN in the past few days.
Don't get me wrong, I was honored - but it's off topic.
Maybe I can help with the understanding part. Here are some things that I've observed, as a hacker, about humans:
1. (most) people like to form groups and then compete with other groups
2. (most) people enjoy feeling superior to other people
These are things that seem to have been true in any part of the world, throughout all of human history.
So what's our plan here? Are we going to turn Hacker News into the only collection of humans to ever live that defies these rules? Is there some technical solution that will change fundamental aspects of human nature? Maybe getting rid of the voting arrows will remove all of the meanness and tribal thinking on the planet.
I say all this because I don't understand the impetus for your post. Of course it would be nice if everything everyone said made an insightful contribution. But you know that people aren't like that. No amount of blog posting or commenting is going to change how people interact with each other. It seems like your problem isn't with the Hacker News community, but with the nature of human socialization.
This looks like a clear case of selection bias. It's hard to do good as a hacker if you isolate yourself in an ivory tower of Y Combinator hackers and geniuses. Making things does take some understanding of the average person and how they behave. If you truly think that Hacker News is negative when compared with just about anywhere else, then you might be out of touch.
The multiplicity of products that we have to choose between and the lockin we experience once we've made the purchase (we have a contract for the phone and have made significant monetary commitment to the devices in general) mean we have to make a hard decision and then try to feel good about it.
Once we've picked, if we admit that another device is better, then we're saying that we made the wrong choice and that we have to live with a subpar device for another few years. Most of us tend to get defensive about our purchases instead, even when we are trying to be objective.
The truth is that there are trade-offs between all of the devices that are related to our priorities, our personalities, and our social circumstances, all of which make us feel personally invested in a gadget decision. This makes it hard for us to come from an objective place when talking about some of our favorite topics. Many of us are looking for validation more than information (I've definitely been guilty of this).
The trick, then, might not be to try and be more objective, but to take criticism of the products less personally. Headlines are meant to get clicks, not express thoughtful opinions. The intricacies of the tradeoffs are worth considering, but you won't find them in most tech coverage. Save your hate and try to understand why the competition is valuable to others and what your product could learn from.
So can someone create a 3rd party site that displays HN, but removes/hides these off-topic posts? Then everyone would be happy. There are already some similar implementations (like http://ihackernews.com for a mobile version), so it can't be that technically difficult. It would also be great for users to be able to specifically block certain domains (e.g., I could get rid of all Gruber and Marco blog posts from the list of links I personally see).
Edit: this could also be done with a browser extension, but that wouldn't work on mobile devices (I think)
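As a rough illustration of how little code the domain filter would take, here is a sketch against the official HN Firebase API (which, to be fair, postdates this comment); the BLOCKED set is a per-user example matching the blogs mentioned above:

    import json
    import urllib.request
    from urllib.parse import urlparse

    API = "https://hacker-news.firebaseio.com/v0"
    BLOCKED = {"daringfireball.net", "marco.org"}  # example per-user blocklist

    def fetch(path):
        with urllib.request.urlopen(f"{API}/{path}.json") as resp:
            return json.load(resp)

    def filtered_front_page(limit=30):
        # Pull the top story IDs, then drop anything from a blocked domain.
        for story_id in fetch("topstories")[:limit]:
            item = fetch(f"item/{story_id}") or {}
            host = urlparse(item.get("url", "")).netloc
            if host.startswith("www."):
                host = host[4:]
            if host not in BLOCKED:
                yield item.get("title", "(no title)"), item.get("url", "")

    for title, url in filtered_front_page():
        print(title, url)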
Re lack of civility, this is a normal feature of anonymous interaction which stems from lack of accountability - the only way to deal with it is to impose social sanctions on the users responsible. Everyone can do this by refusing to be baited, and calling out others for antisocial, insulting, or extreme comments.
Actions will have more impact than meta discussions.
As hackers, I believe we all subscribe to the old mantra that one should use what is best for the job at hand, and arguments about whether Microsoft Surface or iPad Mini or so on and so forth are the "right way" detract from the quest for knowledge in which all of us participate.
As is evident from the comment threads on the tablet releases, people have strong opinions; suppressing these with calls for 'civility' is nothing more than asking for people to post only comments that you approve of, which seems extremely bourgeois to me. I enjoy seeing the comments where people express strong opinions because I am able to learn, for example, what kind of person is going to like the Surface and who won't. There's signal in the noise, and in a public forum it's not about what you want to read. If I could downvote your thread I would, because I find it extremely distasteful to see someone wanting to read just what they enjoy. It's really no different from me posting an ASK requesting that we focus more on Python or jQuery plugins. Please, less of the high-horse rhetoric.
As for the issue you're talking about, this guy here is obviously a flaming Microsoft-fanboy: http://news.ycombinator.com/item?id=4706624 .. I wanted to call him one, but refrained from doing so, mostly because I thought it would be met with a negative reaction.
But you know, talking to fanboys is really frustrating. Their posts are full of such obvious, annoying bullshit/misdirection that it's just really difficult to ignore, but on the other hand, going through the effort of shutting them up is pointless too.
That's why it's tempting to just call a fanboy a fanboy, instead of wasting a lot of time and effort in a civil discussion with them.
related: best God joke ever:
In many other discussions it seems like one controversial sub-topic ends up dominating as well.
Perhaps downvoting controversial comments isn't always a bad thing? There seems to be a big fear of the downvote button, but in some cases, even if a comment is useful on its own, in the end it sparks massive amounts of arguing back and forth which could be avoided if it were just downvoted instead.
I'd like to see a /startup or similar, moderated by entrepreneurs to set the tone of what posts or comments aren't welcome.
Liking or disliking Apple or Android or Windows... well, sure, people can have preferences. But self-identifying with or rejecting people based on their computing software? Being rude to people because of their technology preferences?
Ask yourself, why do we do that? Does it make sense logically? Not really. But at an emotional level, it feels good to have a group of people who one can feel part of, and a group of people that are outside it that one can disparage as not being one of us. Making moral judgements based on what tech company a person likes? Human tribal groups.
The truth is, we can do better than that.
Performance, cost, usability, etc. are all factored into the system I use; the phone, tablet, etc. are all purchased based on these factors. If you do not like a particular product, just do not buy it, and if for some reason someone asks for your opinion on a product, you can give it without being fanatical about it. It is just a product.
Agree - this should be the default in any comment. It would be interesting to see a "karma" score for those who hold their tongue when they have nothing constructive to say, but obviously, that's pretty much impossible in an online format. In a way, the karma on a forum encourages opinions whether vacuous or not.
> But what makes Nexus 10 unique is that it's the first truly shareable tablet. With Android 4.2, you can add multiple users and switch between them instantly right from the lockscreen. We believe that everyone should have quick and easy access to their own stuff -- email, apps, bookmarks, and more. That way, everyone can have their own home screens, their own music, and even their own high scores.
From the marketing video it looks like Android 4.2 gained a Swype-like keyboard.
It seems that they're no longer using the tablet UI, even on the Nexus 10 (i.e. the status bar is on top and the navigation buttons are in the middle of the screen). That's weird, and I definitely don't like it, but it might not be that big of a problem.
As for the Nexus 10, I hope that it gets enough sales to start pushing developers to make tablet apps for Android, and for Google to make the split between phone/tablet sized apps better in the Play Store.
- screen resolution isn't important
- multi-user accounts are overly complex
- low prices mean the devices are cheap and nasty
On every single spec, the Nexus exceeds the iDevice equivalent. The Nexus 4 has two NFC radios, a higher-res screen, is thinner and lighter, and is less than half the price of the iPhone 5. (edit: That's unfair, I forgot about LTE, though I understand why Google skipped that)
Similar things hold for the Nexus 10.
Am I the only one really surprised?
(This is of course not to mention the numerous new Android 4.2 features that everyone except the Verge has ignored)
Apple has a loyal Mac user base willing to pay an Apple premium, a business that won't just disappear overnight. They had a good head start on iPhones. Combined with the obscured phone prices on plans, Apple can easily get their premium here. The tablet market they pretty much had to themselves.
Now Android really is mature. Great devices at great prices that compete with iOS devices on features and not just on price.
Let's see if they can keep up their margins.
The N4 is bonkers too. Sure, there are other phones with HD screens now, but with a quad Krait? The only other one I know of is the Mi2, and good luck getting one at launch.
The N7 with cellular is really tempting, since with a little hack you could have a tablet+phone hybrid (using a Bluetooth headset). Too bad it still uses the Tegra 3, a SoC that couldn't keep up with the dual Krait.
But overall I think Google just brought a gun to a knife fight...
The ~$450 I paid for a 32GB iPad mini would net me two base Nexus 7s. Alternatively, for $100 less I can get an equivalent Nexus 7...with cell radio. Those jumps add up!
- Google Now will now automatically detect packages you are going to receive and will notify you of their progress. You can now dictate calendar events (also, in 4.1 they added the ability to say "navigate home").
- You can take 360 degree panoramas
- Quick Settings & Multi-user accounts
- You can swipe the lock screen to reveal informational widgets. (quickly check your calendar, etc)
- Swype functionality built into the keyboard. (Even cooler than Swype though because of where it shows the word and the suggestions)
I've also yet to see anyone mention that the Nexus 4 rests in a capacitive, magnetic dock.
I think I'd buy a phone just for this feature.
With regards to the emerging enterprise tablet market, Google is playing serious catch-up. If Microsoft can come late to the game, but demonstrate the tenacity they have in the past, they might pull another "IE over Netscape" on Apple.
Nexus 10 - clearly better than the iPad on price, display (resolution, size, aspect ratio), sound, Wifi, RAM, GPS. Battery life is unknown though, GPU likely to be worse.
Nexus 7 - beats the iPad Mini on price and resolution, matches on most other things.
Edit: Basically the website still has old info about old devices.
edit: oh, they had one planned in NY, cancelled due to the hurricane
One's not included in the box though. I wonder how much they plan to charge for that accessory.
Up until now it was Donut, Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, and Jelly Bean. Did they run out of tasty treats already?
I'm still a bit disappointed though. I've been waiting/hoping for a Nexus hybrid (a la the ASUS Transformer). I'm really sold on that form factor and hope they do something with it on a "Nexus" device. The continued lack of one makes me wonder if they just haven't gotten around to it yet or if they are avoiding it due to a Chrome OS-related strategy tax.
It's not enough to say "Hey, X does that, too." There has to be a qualifier on how well/poorly/etc. it does it. Otherwise, we're meaninglessly comparing feature checklists, which could be done by a small Perl script.
I am excited to see the screen on the Nexus 4.
I'm sitting in Germany, and all my Chrome settings are set to English. Go to the main Nexus blogspot website and it's in English. Great. (despite being blogspot.de)
Click on Nexus 10. It's in German. Understandable, as I'm in Germany. But please respect the browser settings.
Go back and click on Nexus 7, and it's in English.
Go back and click on Nexus 4, and it's in FRENCH!! Why?!
Click on the expanded memory option on the French page and it goes to the English version of the Nexus 4 page.
This infuriates me to no end. This is the single reason why I will not use Google Play. I know that if I buy a book or a movie, the chances of it being in the language I set for the device will be slim. I have no desire to throw money away like that.
Correction - magazines are now available in Canada (woohoo!), they just didn't say so. Now, can we please have Google Music?
edit: I was under the impression they both were Samsung Exynos ARM processors. Turns out the Nexus 4 is using the Snapdragon S4 Pro and the Nexus 10 is using an A15 Samsung Exynos.
When I can get a 7 with a phone, at least 32 gigs (or preferably expandable storage, like my Nexus One, which has had 32 gigs in it for a year or two now), a real 8MP rear camera, a front-facing camera, and micro-USB, then I'll upgrade from the Nexus One to the Nexus 7 -- or if the 10 gets a phone, I might go to that.
The 10 would be a strong option for me now if it had at least 4 gigs of RAM, 64 gigs of storage (or expandable), and a phone, as well as rear- and front-facing cameras.
Fast forward 3 years: tablets are no longer a novelty, and users have figured out that the product is more function-based than status-driven -- unlike phones, where brand plays a part in the purchase, especially because the majority of the usage happens in front of other people.
One sign of this change in consumer behavior was the launch of the iPad 4 within 6 months of the iPad 3. It felt like Apple saw a glitch in the matrix and had to make sudden changes. Q3 earnings confirmed that iPad sales were down.
What now? Well, the door is wide open. People will pick based on personal preferences as there is no clear "objective best": iPads, Kindles and Nexuses are all interchangeable.
From the viewpoint of the customer, this feels awkward. I guess the business people and engineers see it otherwise...
One thing I would add to the topic of 'determination': Are we speaking about determination to make a startup successful or determination to try out as many ideas in our life as possible, learn as much as possible and try to make at least one startup successful in our life?
I mean first we have to analyze what we optimize for:
If we optimize for the success of a given startup then it is obvious that the optimal strategy is to never give up on the startup.
If we optimize for the success of a person over his lifetime, then it is different. In this case we have to examine all kinds of opportunity costs. Could it be a better strategy to very quickly abandon a startup when it seems that people do not want the product, so that we can start many more startups in our lives, to increase the chance of at least one becoming successful?
Founders at Work was written by Jessica Livingston, who is a cofounder of Y Combinator. She's married to Paul Graham. But do not think that she's in there just because of the personal connection. Her book is truly excellent. And in previous articles I've seen Paul say that the #1 thing they want in a founder is determination, and the person they rely on to spot it during the interview is Jessica.
It really is great! I encourage everyone on HN to read or watch it if they have not already. As a not-yet founder, it has a lot of interesting advice that I don't think is documented anywhere as concisely and practically as it is here.
The pizza place is very confused by this, but they send the pizza guy without a pizza, Kyle answers the door, and the pizza guy says, "The site is down."
PG: "You know, that is her deepest wish. If she is watching this, she'll be laughing so much at this point because that's what she would like the most too to be able to spend more time on the new version of Founders at Work. There's a new, she's working on a new edition, with a bunch of new interviews."
Any updates on this?
The effects of those two are very large, the effects of everything else comparatively small per decades of startup and longitudinal entrepreneurial studies.
Nonsense about hustle is exactly that: nonsense. The weight of evidence suggests that, if anything, hustling and creativity have a net negative effect on long term health of a startup.
But there's money to be made keeping up the lie.
Lastly, beware of pseudo-pop-science that opens with only a few people's stories. People manage to succeed as founders all over the world; these stories are not remarkable and tell us nothing.
In general the whole "determination" thing has little to no value in any serious consideration of startup success: it's about on the same level of credibility as diet fads.
Is she ever going to pursue writing a sequel to Founders at Work?
He has been collecting data on start-ups and then looking at survival lengths and outcomes. He wrote a book on the topic
Also, since I just survived a dual-founder breakup (company intact), it was encouraging to know that this was probably a bigger bullet to have dodged. (For those curious, post-breakup I reached out to an old friend with whom I've shared some tenuous situations and we have applied to YC for the next batch)
Edit: I forgot about the pizza comment! When she asked how to contact someone in Lake Tahoe, I audibly said pizza (in my empty apartment). When the solution was pizza, I had a celebratory moment.
Jessica mentions the Codecademy team launched 2 days before Demo Day and managed to signup 200k users. If I remember correctly, they launched on HN through a Show HN thread.. and so on..
What I really want to know is, how many of those initial 200k users stuck around? I was one of them and I have only signed in maybe twice since their launch.
So what does that mean? They leveraged the curious users to get VC interest? Did they really engage me, us, the 200k? Is that a false positive?
I guess if the net result is a positive one today, none of this really matters.
3rd sentence: "There's a talk I've always want to give at the beginning of each batch...". I think this should be either "I've always wanted to..." or "I always want to..." right?
-- This is a great point. Even outside of startups.
CMD+F for "luck" = 0 results.
Luck is a huge factor and sometimes you just need to move on to either something new, or working for a company to fill in the gaps, and trying again soon.
I guarantee you cannot name three books that have done a better job capturing this topic, because they don't exist.
Claiming that Livingston's relationship with Y Combinator/Graham is the reason why the book is so wonderful is like claiming David Pogue's relationship with the NYT is why he's such a popular tech reviewer, or that it's why Manohla Dargis is such an amazing movie reviewer.
It misses the point of both their contribution and their talent, and is, frankly, quite rude.
1) A private frame relay network that one day stopped passing packets over a certain size. Worked around by lowering the MTU at both ends till I was able to convince the frame relay provider that yes, the problem was in their network. This was relatively straightforward to diagnose, but it was still odd being able to ssh into a box, then have the connection hang once I did something that sent a full-size packet (cat a large file, ls -l in a big directory, etc.).
2) A paging gateway program I wrote (email to SMS) that worked fine when testing on my Mac, but couldn't establish connections to a particular Verizon web site when I ran it from a Linux box. Turned out that the Linux TCP stack had ECN enabled and at the time the Verizon website was behind a buggy firewall that blocked any packets with ECN bits set.
3) A Solaris box that could randomly be connected to, but not always. Turned out someone had deleted its own MAC address from its ARP table (yes, you can do this with Solaris) so it wasn't replying to ARP packets for itself. As I recall, it could make outbound connections, and then you could connect to it from that same peer until the peer timed out the ARP entry. Then the peer couldn't reach the Solaris box again.
None of these are nearly as complex as the scenario in this story.
I was troubleshooting with a user of an audio streaming application running over a LAN. The user could stream classical music but not rock music. Seriously. Classical was fine, but when streaming rock, the connection would drop after a few minutes.
The application took chunks of audio, compressed them with a lossless codec, and then sent each chunk in a separate UDP packet to the other end. It tried to use IPv6 whenever possible because it was generally more reliable in the LAN environment, although it would happily use IPv4 if need be.
After a huge amount of boring troubleshooting going back and forth with this guy, I finally figured it out. Somehow, he had set his network interface's MTU to 1200 bytes. IPv6 won't perform automatic IP-level fragmentation for MTUs below 1280 bytes, so larger packets simply could not be sent at all. The streaming application would try to send an audio packet larger than 1200 bytes, get an error, and bail out of the connection.
Why did it only happen with rock music? Turns out to be pretty simple. Lossless codecs are necessarily variable bitrate, and classical music compresses better than rock music. When streaming classical, each chunk of audio consistently compressed to less than 1200 bytes, but rock music produced occasional packets over the threshold.
The user didn't know why his MTU was turned down and didn't need it, so we turned it back up and everything worked just fine.
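A minimal sketch of that failure mode, with zlib standing in for the application's lossless codec (the chunk size and address are made up): on an interface whose MTU is below IPv6's 1280-byte minimum, any datagram larger than the MTU fails at send time instead of being fragmented.

    import socket
    import zlib

    CHUNK = 4096  # bytes of raw audio per packet (illustrative)

    def stream(samples: bytes, addr=("::1", 9999)):
        sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        for i in range(0, len(samples), CHUNK):
            payload = zlib.compress(samples[i:i + CHUNK])  # size varies by content
            try:
                sock.sendto(payload, addr)
            except OSError as exc:
                # With the MTU at 1200, a poorly compressing chunk (rock) exceeds
                # it and the send fails (EMSGSIZE); classical chunks stayed under
                # the limit, so that stream never hit this branch.
                print(f"chunk {i // CHUNK}: {len(payload)} bytes, send failed: {exc}")
                raise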
We had a similar issue at Blekko where a 10G switch we were using would not pass a certain bit pattern in a UDP packet fragment. Just vanished. Annoying as heck, the fix was to add random data to the packet on retries so that at least one datagram made it through intact.
Shortly after bringing up a second T1 into a remote location we discovered that some web pages would show broken JPG images at the remote site.
Some troubleshooting revealed that this only happened when traffic was routed over the new T1. The old T1 worked just fine. Pings, and other IP traffic seemed to work over either line but we kept seeing the broken image icon for some reason when traffic came over the new T1.
We tried several times to confirm with the telco that the T1 was provisioned correctly and that our equipment matched those telco parameters. Still had some mangled bits going over that new T1.
Finally had the telco check the parameters over every span in the new (long-distance) T1 circuit and they eventually found one segment that was configured for AMI instead of B8ZS (if I can remember correctly, certainly it was a misconfigured segment though).
The net result is that certain user-data patterns that didn't include sufficient 0/1 transitions would lead to loss of clock synchronization over that segment and corrupted packets. Those patterns were most likely to occur in JPGs.
Once they corrected the parameters on that segment, everything worked as expected.
Quite a bit of head scratching with that one and lots of frustration as the layer-1 telco culture just couldn't comprehend that layer-2/3 Internet folks could accurately diagnose problems with their layer-1 network.
About 3 visits in a row I went to look at problems (core dumps or errors) that the customer could reproduce at will, only for them to be unable to replicate the problem with me present on site.
I sat at one customer (in sunny Minneapolis) for 2 hours in the morning with the customer getting increasingly baffled as to why he couldn't get it to fail; it had been happily failing for him the previous evening when I was talking to him on the 'phone. We gave up and went for lunch (mmm, Khan's Mongolian Barbeque). A colleague of his called him midway through lunch to tell him that the software was failing again. Excellent I thought, we'll finally get to the bottom of it. Back to their office and ... no replication; it was working fine.
As a joke I said I should leave a clump of my hair taped to the side of the E450 it was running on. The customer took me up on that offer and, as far as I know (definitely for a few years at least), the software ran flawlessly at that customer.
It's the closest I've got to a "'more magic' switch" story of my own.
A little background... I was brought up in the network ranks: I worked as a network/sys admin in high school, ended up working for an ISP as a junior network engineer in college (while attending one of the first Cisco NetAcad baccalaureate programs, which was a combo of network study and Cisco curriculum and certifications), and have gone on to work in every major vertical since then for the past 10+ years: government, finance, healthcare, retail, telecom, etc. I always tell clients and potential employers that having a network background generally gives me somewhat of an edge in the industry I primarily focus on, security, and I still study and take Juniper and Cisco tests and work on labs just to stay current. Most software devs and security folks I've run into (keep in mind there are a lot of really good folks who have a better grasp of networking than many seasoned engineers do) are generally overzealous in the belief that they truly understand IP from a debugging and troubleshooting standpoint.
Case in point: I interviewed for a "Network Architect" position with a very well-known online backup company (think top 4). The interview was the most bizarre I've ever had -- not because it spanned more than 5 interviews, but because every time they posed a complex network problem it was generally solvable within 5 to 10 minutes of pointed questions. The software dev who was interviewing me was baffled by how quickly I came to a reasonable solution that had, in some cases, taken them over a week -- and it was pretty simple, in that 1) I've seen something similar before, and 2) that's what I studied and still have a passion for over the course of 20+ years (since I found the Internet in 1991).
Most of the time when I run across a "magical" problem, it's because someone hasn't looked at it from L1 up. As this article showcases, you generally have two generic stack angles to approach it from: application back down to physical, or the inverse. Having been in network support, by the time you get a problem like this it's often so distorted with crazy outliers that have nothing to do with the problem that your best bet is to start from L1 and go back up through the stack. Reading into the problem the author describes, I think there was some key data that was missed and/or misinterpreted. There surely would have been key indicators in TCP checksum errors, and that was glossed over pretty lightly in the explanation -- it's interesting how those items of interest are often cast aside when digging into something like this. Nobody in this thread has indicated where a bit error test, or even something as simple as iperf or similar, would have been able to more accurately showcase/reproduce the problematic network condition.
But back to the labels remark - I don't believe, as some people have said, that this is a DevOps role largely. I don't mean to cut down on DevOps folks because I think, at some level, if you're a jack-of-all in any org then that's your role, it is what it is. However, this would be a problem most suited towards a professional network engineer - and you don't see much of that need in the startup space until people get into dealing with actual colo / DC type environments, otherwise it's often very simple and not architected with significant depth or specific use cases.
Long story short: network professionals are worth the money for the design, build, and fix of issues that may seem complex to others but can be solved or found in minutes when you know what you're looking at. That being said, I'm impressed that the OP dug into it to the point where he could ask a specific person (who was probably a network engineer/tech of some level) to validate/fix his claim.
I suspected the VM code at the time, but it is very likely that my packets had to go through the same router (geography would support this).
I'm so glad somebody debugged this problem. Also, I'm quite glad that at least this time I'm not the only person with a weird issue (I have a knack for breaking things).
Today, people are relying on SSH for binary transfer more than ever. SFTP and SCP are the new de facto file transfer standards between machines over a secured connection. Source control tools like Git (or even SVN) make heavy use of binary transfers over SSH. The performance benefit to the entire world is immeasurable. Yet unless you explicitly go out of your way to manually compile and install SSH-HPN, you don't get it.
That said, given how slow SSH is on Windows (Git pushes and pulls are dramatically slower than on *nix or OS X), does anyone have a good link to a PuTTY HPN build?
We tracked it down to a switch that was corrupting packets enough that the TCP checksum wasn't sufficient protection, and the packets would simply pass their checksum despite having been altered.
The outcome was that we always use compression, or encryption, as an added layer of protection.
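In the same spirit, a sketch of the kind of cheap end-to-end check that catches what TCP's 16-bit checksum can miss (the framing format here is made up):

    import struct
    import zlib

    HEADER = struct.Struct("!II")  # payload length, CRC32

    def frame(payload: bytes) -> bytes:
        # Prefix each message with its length and CRC32. TCP's 16-bit checksum
        # is weak enough that a corrupting switch can slip altered packets past
        # it; a 32-bit application-level CRC catches those.
        return HEADER.pack(len(payload), zlib.crc32(payload)) + payload

    def unframe(buf: bytes) -> bytes:
        length, crc = HEADER.unpack_from(buf)
        payload = buf[HEADER.size:HEADER.size + length]
        if zlib.crc32(payload) != crc:
            raise ValueError("payload corrupted in transit; request retransmit")
        return payload

    assert unframe(frame(b"hello")) == b"hello"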
The more ambiguous situation is that early Juniper routers would fairly frequently re-order packets. That's nominally allowed, but a lot of protocols didn't like it.
There are way weirder things on satellite or other networks (spoofing acks, etc.).
I've been wondering about something not entirely unrelated that we see sporadically from a small but widespread number of users. We serve deep-zoom images, and the client appears to run normally but sends malformed image tile requests -- e.g. in the URLs, "service" is consistently garbled as "s/rvice" and "dzi" as "d/i". I've seen this from IPs on every continent, and from user agents for most common browsers as well as both iOS and Android. My current theory is that it's some sort of tampering net filter, as a fair number of the IPs have reverse DNS/Whois info suggesting educational institutions, but I have thus far failed to confirm this, particularly since none of the users have contacted us.
I had a similar problem, less hairy, involving a bad bit in a disk drive's cache RAM. Took a day or so to figure out a solid repro.
Stuff like this does happen. Handling bit errors in consumer electronics storage systems is an interesting problem, and one that I'd love to see more attention paid to.
Recent news out of Apple regarding "cutbacks" at retail suggested he was nudging them in the same direction. Given that he got his first stock disbursement last week and was due $58 million over the next few years if he hung around, I'm guessing he was pushed. Great decision from Cook if that was the case.
This is good news.
Even better news is Browett's ouster. The business with his cutting operational corners in retail was a very, very bad omen. If they'd left him in, he might have poisoned a very important well for the company. Hopefully his replacement is closer to Ron Johnson's set of retail and service values.
1. Jony Ive's role is expanded from Industrial Design to Industrial Design AND Human Interface. In other words, Ive is the new Design chief for hardware and software. This is huge.
2. Scott Forstall is out (after an interim advising role to Tim Cook). iOS goes to Craig Federighi who already oversees Mac OS. So, now iOS and Mac OS are overseen by the same person.
3. Eddy Cue's role is expanded (he previously was in charge of iTunes, App Store, iBookStore, iCloud). He now also oversees Siri and Maps.
4. Bob Mansfield will lead a new group called Technologies (wireless and semiconductor).
5. John Browett of retail is out.
Overall, I view this move as extremely positive.
Tim Cook just elevated his most reliable and capable SVPs to assume more leadership role.
Jony Ive, Eddy Cue, Mansfield and Federighi have all proven to be pretty spectacular: Ive with industrial design, Cue with iTunes/App Store/iCloud, Mansfield with hardware and Federighi with Mac OS.
Further, Tim Cook gets rid of his problem SVPs -- namely Browett, who didn't match the culture of Apple, and Scott Forstall, who advanced iOS in huge ways but reportedly had problems getting along with other SVPs and disappointed users with iOS 6 Maps (and also, in my opinion, with poorly designed and implemented Apple apps; App Store reviews for Apple apps have gone down significantly over the last year or two).
Cook will probably give Forstall a good severance package with an agreement that Forstall doesn't go to a mobile OS competitor.
I'm actually more optimistic on Apple with this bold management shakeup. Tim Cook is showing the moves of a bold leader... and it's exactly what Apple needs.
> Jony Ive will provide leadership and direction for Human Interface (HI) across the company
And thus ended the reign of skeuomorphism at Apple. Or, at least, the reign of hyper-realism and hyper-whimsy in UI design. Jobs or Forstall always seemed to favour it, but could you imagine Jony Ive signing off on a Podcasts app where half the screen is a reel-to-reel tape that bounces when you pause?
1) Forstall apparently wants to be CEO, and run the company. That puts him at odds with Cook (the CEO), and Ive (who wants to drive Apple's design decisions).
2) He's divisive. There are claims that neither Jony Ive nor Bob Mansfield would talk to him without Tim Cook mediating. There are also claims that he "managed up" (showed off to the boss) better than he "managed down", and stole credit while deflecting criticism.
3) He was the guy in charge of Siri and Maps.
4) He was probably the one driving the post-Jobs war with Google.
Siri and Maps are Apple's way of fighting Google. Siri competes with Google Search, and Maps competes with Google Maps. There are reasons why Apple wants to spite Google, but the whole strategy could also be Scott Forstall's way of creating his own empire in Apple. Going head to head with Google requires lots of resources, which would all be under Forstall's command.
I don't think it's a good gamble for Apple. Google doesn't really hate Apple. I bet they'll port everything they can to iOS, as long as they can keep pushing their ads. Nexus might see Apple as a competitor, but Nexus isn't worth as much as adwords. As Eric Schmidt said in an interview - "It's their call".
If Apple goes down the path Forstall wants, they'll be going head to head with Google in the things Google is best at. If they stop trying to turn into a data / AI company, they can focus on what they do best - making easy to use devices which sell like hotcakes, and command a fat profit margin.
Android will hurt them, but as long as they focus on their core strengths (hardware, marketing, industrial design, interface design, and integration) they'll continue to do pretty well. They milked the iPod for a decade, despite there being plenty of better value competitors. They can do the same with the iPhone. They can do the same with whatever the next big thing is. I'd say going to war with Google will be at best a waste of time, and most likely a string of humiliating losses.
The latest Podcasts App from iTunes is a skeuomorphic mess. It has a superfluous animation of a reel-to-reel player of course. But it utterly fails at its most basic task: playing a goddamn podcast. But don't take my word for it, it has a 1.5 star rating on iTunes: https://itunes.apple.com/us/app/podcasts/id525463029
Not to mention crashes... Apple used to make jokes about the Windows blue screen of death. Well, that's my new day-to-day experience with iOS apps. I'm constantly restarting crashed apps over and over.
Honestly, this is good news if Forstall really is the driving force behind the deteriorating user experience of many apps.
As both a fan and shareholder of Apple, I'm very pleased to hear Browett is out. The stories that came out a couple months ago about the changes in Apple Retail did not fill me with admiration for his management style.
Also, given how Forstall is described in a Business Week profile, and that Bob Mansfield is not only sticking around, but heading up a new team, I wonder if Mansfield laid out an ultimatum to Tim Cook about 'him or me'.
I think it was evident he lost the power struggle and it killed his enthusiasm. His heart just wasn't in it anymore.
Forstall led development of the fastest-growing, most popular computing platform of the past decade or so with, to be sure, a few notable screw-ups, but mostly incredible innovation and efficiency. While his departure does sound like the result of a power struggle that needed to be resolved, I really believe we're shortchanging his incredible achievements. Forstall's departure is not unequivocally or even clearly a victory for those who are firm believers in iOS and its ecosystem going forward. The only reasonable reaction is that we'll have to wait and see.
If Apple had come out 3 months before iPhone5 and said "Look, we really need to divorce ourselves from Google Maps as we can't be relying on an arch-competitor for such an important service, but please be aware that our new Maps app will have issues for several months as we work out the kinks based on customer feedback", and reinforced that message several times, the issue would have been close to a non-issue - and I think most people would have understood.
Instead they came out and said "new Maps is the greatest thing since sliced bread!" (paraphrasing) which was downright wrong and people rightly felt let down.
This gives a smooth transition to the large teams under him over the next year or so, in exchange for more vesting of his shares.
People saying this is about Maps or Siri or skeuomorphism are focusing on relatively petty issues.
This is about who is going to be the leader of Apple. Tim Cook turned out not to be just the interim CEO, and waiting a year for the transition is respectful all around -- and is enough time for everyone to know what the right direction for their lives is.
Forstall may want to do a startup -- where he'd certainly be a CEO.
That could turn out great or horrible. (I'm not sure whether obviously great hardware designers can also be great UI designers.) I'm optimistic for now. Hopefully that means bye-bye, overt skeuomorphism.
(I do think Apple's UIs have in the past always been above average, sometimes excellent. Their fashion choices, however, have at times been horrible. It would be great if Apple could change the second, not necessarily the first part.)
Could you please start up NeXT again? That would really rock.
I wonder how this will affect iOS. Forstall has been in charge from the beginning (afaik) so we might see some big changes and even better integration with OS X now that Craig Federighi is in charge of both teams.
I wonder if Forstall's departure has anything to do with Mansfield staying on?
Getting Jony Ive to oversee both industrial and software design could lead to something very exciting that provides the innovation the software has been lacking.
I have full confidence in Eddy Cue, Craig, and Bob in their new roles, and hope this means Bob will stay on longer.
As for the direct, no-apology firing of Browett... Sweet! I was actually hoping for that. It was insane to gamble with the Apple stores' reputation and service for a little more margin. Having now seen a Dixons, I have no idea why he was hired.
Does Apple have a Scott Forstall problem? http://tech.fortune.cnn.com/2012/09/29/does-apple-have-a-sco...
Apple Retail Leadership Tells Stores It 'Messed Up' Employee Working Hours, Refutes Layoffs http://www.nasdaq.com/article/apple-retail-leadership-tells-...
I've never been inclined to use it before the recent release of Letterpress, which necessitates its use, and it is a steaming pile of you-know-what.
It's so jarring going from the lovely minimalism of the Letterpress game into parts of Game Center to manage match/friend requests.
It's the sort of experience that's had me worried about Apple, and this is (tentatively) good news, as I gather he was heading up the part of Apple responsible for such products.
John Browett is leaving Apple
I'm pretty much clueless on this topic, but from a quick scan this seems like pure gossip. I've learned nothing at all except for various HN opinions, and I have trouble understanding if and/or what the substance is here.
What am I missing? Put another way: what does anyone on HN actually, uniquely know about this situation? Could someone help simplify this?
Apple has always strived to couple great hardware design with great software. Excited to see them bring the two departments closer again.
edit: What was the downvote for?!
I remember watching him in a keynote showing the piano app on iphone when they announced appstore/iphone. I was blown away by that demo.. those were the days maan..
As someone who regularly interviews prospective engineers at my current gig, I see no problem with expecting candidates to arrive prepared to answer algorithm questions or questions about their strongest programming language. Ditto for someone who wishes to change assignment within their organization, however they arrived there. If you're unwilling to provide proof you're not a bozo, you're probably going to be just awful to work with as well.
However, the blind allocation policy at Google sucks, and it continues to suck. I came in as an expert in field D and therefore according to Google's magic sorting hat I ended up a natural for assignment in field Q. I tried my hand at it for several months, but as someone else has already said, bored employees quit: http://www.randsinrepose.com/archives/2011/07/12/bored_peopl...
In order to avoid that fate, I futilely attempted to get reassigned to something close to field D (really, B, C, E, or F would have been just peachy) and that seemingly got me flagged as trouble internally. Shortly thereafter, I got a higher offer to go somewhere else and left.
However, unlike the author of this post, while Google recruiters regularly stalk my linkedin profile, none of them ever contact me, which is good.
Since I've been out of the Silicon-Valley-centred tech industry, I've become increasingly convinced that it's morally bankrupt and essentially toxic to our society. Companies like Google and Facebook -- in common with most public companies -- have interests that are frequently in conflict with the wellbeing of -- I was going to say their customers or their users, but I'll say "people" in general, since it's wider than that. People who use their systems directly, people who don't -- we're all affected by it, and although some of the outcomes are positive a disturbingly high number of them are negative: the erosion of privacy, of consumer rights, of the public domain and fair use, of meaningful connections between people and a sense of true community, of beauty and care taken in craftsmanship, of our very physical wellbeing. No amount of employee benefits or underfunded Google.org projects can counteract that.
That Thursday came and went, and I found out that due to some internal bar-raising I would not be receiving an offer. I stayed here in the Detroit area, moved up with my current company, married my wife, and settled into being quite happy here.
Five years on I regularly have Google recruiters contacting me both via phone and email, asking if I'm interested in a position, and exclaiming how good the interview feedback was. When I decline to revisit any opportunity which would require me to move across the country, the recruiters are universally flabbergasted.
Sorry Google, the time when I was excited to move across the country has passed. I still want to do interesting things, I'll just do them on my terms now.
Why is this submission at 80 upvotes? What value am I missing that others are seeing?
This means that I support the notion of real names on Google Plus, and I also believe that all speech should be free, but that you should also have the courage to attach your name to it. Yes, I understand that there are reasonable circumstances in which that would not be ideal, but perhaps due to my aforementioned luxury of being a 'normal' white male, I am ignorant of how much they would matter in real life. I am neither queer nor gender-queer, so while I am empathetic to their struggles, I just can't identify with what are possibly very real concerns about losses of anonymity; and as I've met people who are public with their genderqueer status who haven't been assailed or assaulted, I can't help but wonder whether the fear is simply perceived rather than real.
Regardless, aside from that (which again, I empathize with but cannot relate to), the only other thing I took issue with in the article was the categorization of the autonomous vehicle as a 'geek toy'. It isn't, and that marginalizes an entire category of technology that has a very real possibility of changing the world in a very positive way to 'something SV types are wasting money on', which I take issue with.
If you think I'm so great, make me an offer. Don't spam my inbox with "We're Hiring!" e-mails.
I did not understand that then, and I don't understand it now. I don't blame the guy for saying that I guess, and maybe he's right. That said I don't want to work at a megacorp doing software engineering, even if that megacorp is Google. It's just not for me.
Not out of any particular fanboyism, but because I've worked at plenty of places that are not Google, and the normal day-to-day in a place like a large defense contractor is categorically worse than even the worst nightmare scenario I've ever heard about working at Google.
I exchanged a few mails with the recruiter, who offered a SWE position. Then two telephone interviews followed (with the recruiter and extensive interview with some developer), then they arranged an on-site interview. Apparently the 2nd tech phone interview wasn't needed.
For the on-site interview I got some paper-mail, where the position had been changed to SRE.. So I'm thinking to myself.. right, this is going to be interesting.
So I show up on-site, and 5 interviews follow, with a ~1hr lunch break in the middle. The 1st guy who interviewed me was a bit pompous [hey, he had a PhD!], but OK; the 2nd and 3rd guys were extremely arrogant; the 4th guy (my supposed team-leader) seemed to have had a bad day but was otherwise OK; the 5th guy was the ONLY with whom I felt I could build accord and have an engaging conversation. With him, it didn't feel like an "interview"; we were more like two equals talking about an interesting technical topic.
The guy who I had lunch with was... interesting. Suffice it to say that I had the impression that he was on the verge of explicitly telling me NOT to take the position I was interviewed for. (As in, crappy job and crappy place to live in.)
They tried to impress me with how every employee gets two big screens, a laptop of their own choice, how big the systems they work on are, how good the food in the canteen was, the fancy office space, etc. Their attitude was in general as if they were interviewing a teenager whose "wet dream" was to work for Google.
I never found out what kind of project I would be working on. Everybody's attitude during the interview was "you ask, and we'll tell you if it's not confidential". The SRE position was briefly described as "root on google.com".
It turned out that I'd also be required to be periodically on-call (since the position went from SWE to SRE underway), and that the people I'd be working with would be the same people who interviewed me.
So I got an offer, a contract came in paper-post, and I found out that I'd only be getting ~15 workdays of paid vacation per year. Incidentally, in the country I was supposed to move to, the law allowed NNN hours of unpaid overtime per year... Guess whether NNN matched (or maybe even exceeded!) the number of paid vacation days.
I didn't take the offer, and it turned out to be a damn good choice.
I'm rather sure that somebody without other hobbies or desires to have some free time to spend on things other than computers would have had a different subjective experience.
[This post is deliberately vague about some details in order not to reveal too much about the persons involved.]
"Over time, I've come to consider that this situation is irremediable, given our current capitalist system and all its inequalities. To fix it, we're going to need to work on social justice and rethinking how we live and work and relate to each other."
Give me a break. Socialism isn't going to fix anything. Have you seen Europe recently? Capitalism and Democracy are the worst solutions for economies and governments, except for all of the others that have been tried. America and capitalism have given more opportunities to help people grow out of poverty into success than anything else in the world. If you think you are entitled to something, you're wrong. Get off your ass and do something productive and go take that thing you think you're entitled to. If you are sitting on your couch watching tv, the only thing you are entitled to is being overweight.
FYI, this policy has changed: https://plus.google.com/u/0/+BradleyHorowitz/posts/SM5RjubbM...
It's pretty unfair to blast Google for "eroding" the public domain, fair use, and consumer rights. Google has been a champion of all three of those things, unlike some other companies I could name.
Does it really matter if Google+ doesn't support anonymous comments? It's not like there's a shortage of places online to make anonymous remarks.
P.S. I just wrote a blurb about this last week: http://robdotrob.com/post/33737357324/recruiting-for-bigco-p...
Curl has an option, CURLOPT_SSL_VERIFYHOST. When VERIFYHOST=0, Curl does what you'd expect: it effectively doesn't validate SSL certificates.
When VERIFYHOST=2, Curl does what you'd expect: it verifies SSL certificates, ensuring that one of the hosts attested by the certificate matches the host presenting it.
When VERIFYHOST=1, or, in some popular languages, when VERIFYHOST=TRUE, Curl does something very strange. It checks to see if the certificate attests to any hostnames, and then accepts the certificate no matter who presents it.
Developers reasonably assume parameters like "VERIFYHOST" are boolean; either we're verifying or we're not. So they routinely set VERIFYHOST to 1 or "true" (which can promote to 1). Because Curl has this weird in-between setting, which does not express any security policy I can figure out, they're effectively not verifying certificates.
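To make the trap concrete, here's a minimal sketch using Python's pycurl binding -- the URL and CA path are placeholders, and this reflects the libcurl behavior described above, not a recommendation:

    import pycurl

    c = pycurl.Curl()
    c.setopt(pycurl.URL, "https://example.com/")             # placeholder endpoint
    c.setopt(pycurl.CAINFO, "/etc/ssl/certs/ca-bundle.crt")  # assumed CA bundle path

    # Looks like "turn verification on", but True promotes to 1, which only
    # checks that the certificate names *some* host -- it never compares
    # those names against the host you actually connected to.
    c.setopt(pycurl.SSL_VERIFYHOST, True)  # BUG: effectively no host check

    # The correct, non-obvious setting is the magic value 2, which matches
    # the presented hostname against the names in the certificate.
    c.setopt(pycurl.SSL_VERIFYHOST, 2)

    c.perform()
    c.close()

(The second setopt overwrites the first here; in real code you'd set it once, and 2 is the only value that gives the policy people think they're asking for.)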
They cast a really wide net, looking for as many examples as possible where non-browser applications fail to do SSL validation correctly, but then conclude that this will result in a security compromise without fully examining the implications.
For instance, they point out that many SDKs for Amazon FPS don't validate certificates correctly. But I didn't see them mention that the FPS protocol does its own signature-based authentication and that credentials are never transmitted in the clear: it was essentially designed to operate over an insecure transport to begin with.
Likewise, they point out an "unsafe" construction that an Android application that I wrote (TextSecure) uses. But they don't mention that this is for communication with an MMSC, that this is how it has to be (many don't present CA-signed certificates), and that the point of TextSecure is that an OTR-like secure protocol is layered on top of the base transport layer (be it SMS or MMS).
So I think the paper would be a lot stronger if they weren't overstating their position so much.
Many security flaws found in commonly used SSL libraries.
Other than that, it is a great find.
This causes the page to throw an HTTPS warning: "this page loads insecure content" due to the css loaded over HTTP.
"Not the most interesting technically, but perhaps the most devastating (because of the ease of exploitation) bug is the broken certificate validation in the Chase mobile banking app on Android. Even a primitive network attackerâ€"for example, someone in control of a malicious Wi-Fi access pointâ€"can exploit this vulnerability to harvest the login credentials of Chase mobile banking customers."
It validates SSL certificates correctly by default. How about other languages?
I suppose my web browser has an extended list of CAs that my OS X Lion does not know about.
i'm not saying that this would solve all the problems, or that you should develop critical financial software by having people that don't understand much writing tests. but tests are pretty much common culture now; you'd think people would have considered this. and the argument the paper makes is not that the programmers are clueless, but that they are confused by the API, so they should be able to think up some useful tests...
of course, integration testing with sockets is a bit more complicated than unit tests (perhaps something toolkit apis should support is a way to allow testing without sockets?), but it's not super-hard. [edit: hmm. although testing for unreliable dns is going to be more tricky.]
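for what it's worth, the simplest such test doesn't even need a fake server: point the client at a host whose certificate is issued for a different name and assert that the handshake fails. a sketch in Python (wrong.host.badssl.com is my assumption of a suitable test host; any name-mismatched endpoint works):

    import socket
    import ssl

    def rejects_wrong_hostname(host, port=443):
        # A client with sane defaults must refuse a certificate that was
        # issued for a name other than the one we asked for.
        ctx = ssl.create_default_context()  # verifies the chain AND the hostname
        try:
            with socket.create_connection((host, port), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return False  # handshake succeeded: validation is broken
        except ssl.CertificateError:
            return True  # good: the mismatched name was rejected

    # Expect True: this host deliberately presents a certificate for another name.
    print(rejects_wrong_hostname("wrong.host.badssl.com"))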
You can see it here: https://github.com/rails/rails/blob/3-2-stable/activeresourc...
I'm pointing it out as it was not mentioned in the paper.
Edit: It looks like it has been that way since SSL was first implemented in Connection.
At any rate, here is a pull request for PHP which attempts to address the issue:
    # Force Net::HTTP to verify server certificates everywhere.
    require 'always_verify_ssl_certificates'
    AlwaysVerifySSLCertificates.ca_file = "/path/path/path/cacert.pem"

    # Net::HTTP.new takes a hostname, not a URL.
    http = Net::HTTP.new('some.ssl.site', 443)
    http.use_ssl = true
    req = Net::HTTP::Get.new('/')
    response = http.request(req)
It is destined to be flawed as long as insecurity is allowed. Only when every exploit is exploited continuously will people be vigilant.
This code is intended for deployment in potentially dangerous regions, for getting around government censors.
<falls off chair>
More info at the author's blogpost: http://musicmachinery.com/2012/10/28/infinite-gangnam-style/
The Echo Nest stuff, done over the selected works of an artist, could make for some interesting mashups of their work.
Someone should analyze why this song is so catchy.
Infinite Gangnam Style - Frequently Asked Questions
What is this?
- Infinite Gangnam Style is a web app that dynamically generates an ever changing and never ending version of the song 'Gangnam Style' by Psy.
It never stops?
- That's right. It will play forever.
How does it work?
- We use the Echo Nest analyzer to break the song into beats. We play the song beat by beat, but at every beat there's a chance that we will jump to a different part of the song that happens to sound very similar to the current beat. For beat similarity we look at pitch, timbre, loudness, duration and the position of the beat within a bar. (A toy sketch of this idea follows the FAQ below.)
How come this doesn't work in my browser?
- The app requires the Web Audio API, which is currently best supported in Chrome and Safari.
What does Psy think about this?
- I don't know. I hope he doesn't mind that we are using his music and images. We hope you check out his official video and his web site too (but really you probably already have).
Who made this?
- Paul Lamere, at Music Hack Day Reykjavik on October 28, 2012
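To make the "How does it work?" answer above concrete, here's a toy sketch of the beat-jumping loop in Python. This is not the app's actual code (the real thing uses the Echo Nest analyzer and the Web Audio API); the feature vectors and the threshold are made up for illustration:

    import math
    import random

    def distance(a, b):
        # Euclidean distance over per-beat feature vectors
        # (pitch, timbre, loudness, duration, position within the bar).
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def build_jump_table(beats, threshold):
        # For each beat, precompute the other beats that sound similar enough.
        return {i: [j for j, b in enumerate(beats)
                    if j != i and distance(beats[i], b) < threshold]
                for i in range(len(beats))}

    def infinite_playback(beats, threshold=1.0, jump_chance=0.2):
        jumps = build_jump_table(beats, threshold)
        i = 0
        while True:
            yield i                          # "play" beat i here
            if jumps[i] and random.random() < jump_chance:
                i = random.choice(jumps[i])  # hop to a similar-sounding beat
            else:
                i = (i + 1) % len(beats)     # otherwise just play on

Feed it one feature vector per beat and it wanders through the song forever, occasionally hopping between beats that sound alike -- which is the whole trick.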
Sorry, this app needs advanced web audio. Your browser doesn't support it. Try the latest version of Chrome
I also like the helpful visualization below that shows which part of the song it is currently using.
Anyone else experiencing this?
You would have to improve the program a little bit, but this concept being realized with a vast music library?
Sounds quite interesting...
Btw, quick bug report: it doesn't work for me if opened in a non-active tab in Chrome 22.0.1229.94 on Mac OS X 10.8.
Good fun, and now do an automated version where ppl can paste their youtube links. Greetings, lx
Warning: if you watch it, the lyrics will get stuck in your head. http://www.facebook.com/photo.php?v=10101449851143489
He ("he" is correct; I was confused by the given name at first until looking the person up with a Google search) should blame the typical master contract with the teachers in the school district for that. That is a standard contract provision recommended by schoolteacher unions whether a state has a "union shop" or "right-to-work" rules. Usually, school districts cave in and adopt contract provisions like that, because in states where a union shop is not mandatory, and collective bargaining for public employees is not mandatory either, schoolteacher unions are still very influential political interest groups that can swing voter turnout in the typical low-turnout school board election. School boards have a lot more electoral incentive to align with the interests of schoolteacher unions than with the interests of learners. (The interests of learners align with favoring better teachers over worse teachers, rather than with favoring senior teachers over newly hired teachers.)
The crucial voter action influencing the daily lives of teachers at work happens not at the federal level but at the state and local level, where most of the funding for schooling is set (and where the proportion of funding that goes to anything other than staff compensation, by far the largest line item in any school budget, is set), and where work rules, especially priority for promotions or layoffs, are set.
There is considerable evidence that seniority rules lead to higher numbers of teacher layoffs than would be necessary if administrators were allowed to make effectiveness the determining factor in issuing layoff notices, rather than length of service.
A teacher who is doing a good job helping students learn is worth his or her weight in gold, but seniority doesn't match teacher quality sufficiently well to be the sole basis for determining promotions or layoffs in a particular school district. Actively identifying the most able teachers and encouraging the least effective teachers to find other employment, regardless of seniority, could do much to improve the efficiency of the public school system and free up resources to reward the best teachers better than they are rewarded now.
My Google search to verify the teacher's background turned up this post from the teacher's blog, covering some of the same issues, with a different slant for the blog's different audience.
"I give up. They win. I have joined the ranks of parents who have come to realize that we are only empowered to do one thing: take care of our own. I hope that things change, but I don't have the energy, the money, or the time to continue beating my head into a wall. And if the choices have run out for my toddler when he's ready for school, I will do it myself. Maybe I'll do it for others, as well. Who knows."
AFTER EDIT: Thanks for the several interesting comments. Wisty asks how teachers might be identified as effective teachers in the interest of making more effective teachers available to students. The same scholar of education policy I linked to for the general point that effective teachers make a difference has written extensively about identifying those teachers. These links from his website (which link in turn to longer-form formal articles on the issues) are a sample of the research on the subject. Identifying teachers with good "value-added" is not at all easy, and there are immense incentives to cheat while attempting to identify such teachers, but there is also an enormous payoff from doing better than is done now in identifying effective teachers.
Teacher performance in the US has been terrible for many years. Partly this is because of bad management, partly because of low pay, and partly because of teacher unions preventing any action against the worst teachers and insisting on tenure tracks.
The reason testing is good isn't because it is somehow super accurate. It is good because it keeps teachers honest. Without testing, how do you measure teacher performance at all? How can you tell if someone who is capable of teaching well isn't just being lazy or getting distracted?
So, if you aren't a fan of testing as a way to measure and improve teacher effectiveness, please find an alternative that works better. Just not having any metric at all is far worse than the imperfect tests we have.
It's easy to point out problems. It's useful and important that people find flaws and fault. In this case, however, just pointing out the deficiencies in standardized testing doesn't help anyone unless it either leads to a more effective alternative, or improvements in standardized testing. Standardized tests might be very imperfect at measuring teacher performance, but it's far better to use the tool we have than to just throw up our arms and assume that all teachers are equally competent.
The district is forced to come up with a budget months before the state approves their budget, so they don't even know if they are going to get the money they need.
When money is low, they have gone to the community several times trying to pass a property tax to raise funds to keep teachers and maintain their facilities. The community has refused to fund the school each time.
The district is constrained by their contract with the teachers which forces them to keep the most incompetent and highly paid teachers and get rid of the good but non-tenured new teachers (which are the ones that the students want to have).
The way schools are funded is a major source of the problem. For example, they are funded per student with no regard for facilities, transportation, or other fixed costs.
Lastly, there are too many crippling regulations that don't allow for flexibility to meet the various needs of students in varying districts. What works in the large LA County district is just not going to work in rural northern California.
Throwing more regulations, tests, money, etc. at the system is not going to fix it. I really wish that a large group of educators (K-12, post-secondary), administrators, parents, etc. could get together and work out something different, perhaps even radically different.
The problem is, math is the subject which is BEST served by standardized tests. There is really no fuzzy aspect to K-12 math: answers are right or wrong. And there are of course many benefits to standardized testing like teacher and school evaluation, providing structure to the curriculum, etc.
His rant reminds me of another front-page HN article today (http://news.ycombinator.com/item?id=4712230), where the author claims that tough technical interview questions at Google bear no correlation with programming skill. Sure....
Academic freedom, tenure, and seniority (to a lesser extent) have a lot of positives. Getting rid of these should only be done if the reasons are compelling and valid. What is required is not a collection of anecdotes of how tenure protects bad teachers - there are equally many anecdotes showing that tenure protects students and educational integrity - but rather statistics, facts, and well reasoned arguments.
There are large portions of the United States where parents without any training or knowledge on teaching have very strong opinions on what should or should not be said in the classroom. Getting rid of tenure and academic freedom will, in some areas, lead to ignorant people making important educational decisions. Will the physics department stop talking about the Big Bang? Does the geology department stop talking about processes taking millions of years to work? Does the history department only talk about the good parts of Manifest Destiny?
Instead of tenure, maybe 5-year, renewable contracts would work. I don't know. I do think it is in society's best interest if teachers treat society as the client and not the students as the client. Doing the latter leads to dilution of standards. Doing the former without fear of being fired, at least in my case, leads to grading on knowledge and not fluff.
I had to fire our public school (in North Carolina; this was in the first district he moved to in the state) because it took them 3 years to do an IEP. In another example, a family member moved out of state, and it took the school two years to call up and ask if she would still be attending.
I know in some places the teachers are the problem, but the teachers we met were working their hardest. The administration just didn't seem to give a shit.
Teacher responsibility is a great thing, but we also need administrator responsibility.
"I'm tired of watching my students produce amazing things, which show their true understanding of 21st century skills, only to see their looks of disappointment when they don't meet the arbitrary expectations of low-level state and district tests that do not assess their skills."
I hope that this teacher finds happiness teaching in a more productive environment. Charter schools and some universities come to mind.
Her first year out of her probationary year was as a Kindergarten teacher. She had a girl in her class who had telltale signs of EBD (Emotional/Behavioral Disorders), and spent the year trying to convince the girl's mother to seek the appropriate (free) care from the educational system. The mother refused to attend any meeting; my wife eventually drove to her house to find the girl living in a "crack den" (her words). The girl's mother refused to allow her to be tested for EBD, and the girl barely finished the year with passing marks.
Over the summer, the district noticed that my wife wanted to help kids... so, instead of putting her back in Kindergarten the next year, she was reassigned to a juvenile detention center/lockdown facility, where the kids didn't want to be helped. There were instances where they'd pull the kids out of her class, one by one, until it was just her and another student, before coming in to arrest that student for a crime -- or a disgruntled student would show up on my doorstep at midnight with a handgun in his waistband.
Teachers get shit on by society, coworkers, and parents. The good ones are worth their weight in gold. The poor ones need to be replaced with better ones -- the problem is that there's no true way to rank teachers and how they teach that isn't subject to tampering or isn't completely subjective based on inter-school politics.
There's not a good solution to the teaching problem... which is why I'm excited to look at what the technology/startup community comes out with over the next few years. Open Source SISes/Course Management/Educational Networking is something that can make the teacher's life easier, and provide pointers and guidance for parents who want to learn more, or students who want to self-learn/pace themselves faster or slower.
An engaged, informed, active body of parents who will take action to ensure their children receive the best schooling and care available.
Take 100 irate parents to the next North Carolina State Board meeting and have them individually raise, one after the other, motions of no confidence in each member. Then try to elect this woman to the Board.
Will that help?
Yes if you keep up the pressure for the 14 - 18 years it takes your child to go through the system.
There is a website / startup in there...
Since that time, we have become addicted to assembly line, one-size-fits-all, bulk format education in which we put kids through a ton of information and measure them by how much they retain. Not the quality of what they retain, and not their actual skills, but what they've memorized.
By prioritizing memorized facts over learned application, we are losing a lot of our most talented kids. To compound the problem further, this one-size-fits-all approach isn't calibrated to the smart kids, but to the average. In public schools, it's also impossible to send home the disruptive kids.
The result is a system that is so hobbled by contradictions that it is dysfunctional. Dysfunction attracts lazy administrators who like to use test metrics to force teachers to teach to the test, thus making everyone look like a success, even when the graduates aren't good at doing anything.
The recent spate of test-cheating scandals should show us exactly why these tests are in favor among administrators. Instead of a broad open-ended task like "teach these kids to reason," all you have to do is make sure they make a pretty bell curve on the standardized test.
It's called a competent boss.
Every Head knows which ones to get rid of and which ones to keep. Every Head also knows that if they get rid of the bad ones, they will need double the budget to hire new, equally good teachers. Especially if every other Head does this at the same time.
if i could fix just one thing about the educational system, this would be it. it's the laziest option, so it ends up being implemented pretty much everywhere (this is not a us-specific problem; i grew up in dubai, and every time i visit my old school the place looks more like a prison), and all it does is alienate and disinvest students at precisely the time they need to be engaged and nurtured.
It seems as if the claim is that testing doesn't correlate with knowledge, since some kids "test well," or "test poorly," so it's a waste of time.
I've known lots of folks who claim to know a subject well, but test poorly in it. I've never been able to verify it though, because the followup discussions about the concepts involved left me... uncertain at best. I'm not sure undemonstrable knowledge is really any kind of knowledge at all.
But say we granted that an individual student's tests have a wide margin of error. Wouldn't the aggregate of tests for a given classroom or school still provide some information on whether or not a school was well or poorly run? (Assuming you use moving averages or something to soften noise.)
Do standardized tests really have no value, no redeeming benefits?
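As a toy illustration of the aggregation point, assume each score is true ability plus independent noise (all numbers invented). The classroom mean is far less noisy than any single score, since the standard error shrinks roughly as 1/sqrt(n):

    import random
    import statistics

    random.seed(1)
    TRUE_ABILITY = 70.0  # hypothetical "real" level of the classroom
    NOISE_SD = 15.0      # per-student, per-test randomness

    def observed_mean(n_students):
        scores = [random.gauss(TRUE_ABILITY, NOISE_SD) for _ in range(n_students)]
        return statistics.mean(scores)

    # One student can easily be off by ~15 points; a 30-student classroom
    # mean is typically off by only ~15 / sqrt(30), about 2.7 points.
    print(observed_mean(1), observed_mean(30))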
My impression is that things have gotten a lot worse/more political since I went to school. I wonder if any long-time teachers care to comment on the changes over the last 30 years?
This is what people who have enough faith in themselves to find another job do when they are in a terrible situation. I'm tired of hearing teachers complain about life as a teacher, but never quit. That tells me that they don't have faith in the marketability of their skills to take the plunge and get a different job.
If you did attend a high school with standardized tests - do you feel it negatively affected your education? How? Do you have specific examples or courses?
For clarity - I am not claiming that standardized tests do/don't help. I just want to hear from recent HS grads.
Is that how he teaches his students to write as well?
edit: Hi downvoters, I understand parallelism. When it's used the way this teacher has used it, he risks coming across as whiny and juvenile. His point would come across more powerfully if he reframed his grievances in a way that shows their impact on the real victims, the children, as opposed to himself.
"I have a dream" came from a place of hope and opportunity. King was laying out a roadmap for what progress would look like. Similarly, the Declaration of Independence was not a list of self-referential, logorrheic splatter. It contained specific, directed complaints combined with a plan of action.
This man merely said I hate the world in a selfishly-worded cry for attention, threw up his hands and gave up. We do not say the same of King or the Founding Fathers.
"The built-in front-facing camera for Skype is angled so that it'll work great when the kickstand is open, but again, only for Danny DeVito, or maybe for people who want to show off their chests in Skype."
"The Touch Cover is one of the Surface's biggest innovations. I thought I would hate it, but I didn't. It's not like typing on a completely flat surface: each â€śkeyâ€ť is raised slightly, so while there isn't any mechanical feedback, it does feel a bit like a keyboard."
"The Type Cover (the one with real keys) just works. I've got big hands that often struggle on undersized keyboards, but I can type very quickly on the Type Cover."
"He showed me Office, which was almost unusable: it was extremely sluggish, and touch targets were tiny and difficult to hit."
"So quickly, in fact, that I can outrun Microsoft Word on the Surface. I get the feeling that the Surface RT's CPU or Word code just can't keep up with my typing. Here's an example video:"
"The standard gestures don't help, requiring many in-from-the-edge swipes that not only aren't discoverable"
"After waiting over a minute for the machine to boot and launch the mail app, I got a blank gradient screen. User interface 101: if the app needs to be set up on the first launch, offer to do that, please. Folks from Twitter suggested that I swipe out from the right side and click Accounts"
So, can we conclude that these observations might be real (V. 1) problems without resorting to ad-homs regarding the author?
- Love the build. Very solid overall.
- 16:9 means it's one long tablet. Oddly, it's actually fairly usable in portrait; can't say the same for my old 16:10 Transformer (maybe just better balanced?)
- The touch cover is, like most say, surprisingly usable. Desperately needs a way to no-op Caps Lock though.
- Screen res lower than iPad, but still usable. The difference is not nearly as noticeable as between iPad 2/3, but too many factors are in play to make an objective call there.
- Metro takes getting used to, but I like it (even with KB/trackpad).
- It's the first time I've seen proper desktop Gmail and Google Docs usable in a tablet browser.
- Performance is generally decent. Not blazing, but decent.
- Windows RT appears to still contain far more of Windows than we've been led to believe. Even `csc` is installed, but missing a few dlls.
- No SSH client for Metro yet. That's one of the risks you take on a new platform (esp. a non-Unix one), but still aggravates me.
- Snapping is very, very handy; nice solution to bring proper multitasking to a tablet UI.
- When touch-scrolling over on desktop apps (what few remain), the entire window "bounces" at the head/tail of the content. Odd decision.
- No central notification bin (like Android's shade or iOS's Notification Center). Have to rely on scanning Live Tiles if you miss anything.
- The back camera seems to exist only to make the iPad 2's back camera feel better about itself. Has to be the blockiest camera I've ever seen.
- Handwriting recognition is pretty solid. Wacom junkies will be very pleased when so-equipped tablets ship. (Capacitive styli still suck)
- None of the Twitter apps have really thrilled me. Given the circumstances, I'm not that surprised.
- OS-level share support is a smart move; similar to Android's impl but more thorough (sharing pops up a share pane from your selected app in the sidebar, instead of bouncing you out of your current app entirely).
- Printing is mildly unintuitive; you have to open the "Devices" charm and pick your printer. No one is going to guess that's how to print.
- On the bright side, our network printer/scanner was detected and installed immediately, with zero user intervention. Very, very far cry from the WinXP days.
- There's no way to see your precise battery life outside of the desktop (in the classic sys-tray).
- Presumably due to the use of pressure sensors vs. capacitive, the Touch Cover isn't quite as accurate without a solid surface underneath.
- If you're not using the keyboard (watching movies, etc.), flip the cover backwards with the kickstand out and it's nearly as stable as a laptop.
- The intro tells you about the basic edge swipes (right for charms, left for app switcher, top/bottom for menu); not mentioned is that swiping straight from top-center to bottom kills the current app.
- Screenshot is Win+VolDown.
- Wordament can be played while snapped. This is dangerous.
- IE lets you swipe on the outer edge of the page for back/forward, which would be smart if this didn't occasionally clash with the app switcher.
(PS: I typed this entire post on the Touch Cover.)
Yesterday was the big retail launch. I was on a mission to check out what my local stores had and, if they had anything that could do the job for me, buy it. I've always wanted a tablet, but only if it could be as useful as a laptop when paired with a keyboard. The new Windows 8 tablets are supposed to be just that.
Best Buy had one (1) Windows 8 tablet. It was an Asus Vivo Tab running Windows RT... supposedly. I don't want an RT tab, and this store didn't even have a working floor model of the one tablet they were selling. The one they had was stuck on a "failed to automatically repair Windows" screen. It was also glued to the display stand so I couldn't pick it up and get a feel for the hardware.
OfficeMax had zero (0) Windows 8 tablets. Heck, they had no Windows 8 touch screen laptops either. Or price tags. Or product specs. Or anything I could play with, really. There was one employee there setting up a display model of some laptop while complaining to another about how they were supposed to have tags for the computers but had none. Their electronics section was a joke.
Staples had one (1) Windows 8 tablet. It was a Samsung ATIV running Windows 8. Success! I actually spent some time playing with this one. Again, I couldn't really get a feel for the hardware, or specifically the weight, given it's got a pound of security alarms and tethers bolted onto the back chaining it to the display area. Beyond that, the specs just weren't up to snuff -- with 2GB RAM and 64GB storage, I'd just barely be able to run enough software to occasionally use it as a portable development machine. With nothing installed on it, there was only 14GB of free space -- the OS and preinstalled apps were using 50GB of the 64GB out of the box.
So all those trips were a waste of time. There's no Microsoft Store anywhere within 4 hours of me, so those 3 were the full range of retail options here.
I'm basically looking for a Surface Pro (Intel Core processor, 4GB RAM, 128GB storage). It's amazing that despite knowing Microsoft would be building this, nobody else built something comparable, and stores aren't carrying even the few tablets/hybrids they did build.
Right now there are two exceptions to this: Office (preview version - buggy) and a Desktop-mode version of IE. Everything else is 100% Metro. And I don't think you can even install anything yourself on it except via its App Store. Hence its Desktop mode is not really there for the benefit of the consumer. And the Office offering will need to be further ported and refined for RT before everything is worked out. I'm not even sure why they put Office on it.
It's a device made mostly for browsing the internet and running some apps while holding it in your hands. Which is what the bigger market is for.
While this was a good and honest review, I think his use-case is off on this one and he will be better suited waiting for Surface with Windows 8 Pro.
I would also be curious to know what his height is, so I'd know what "for short people" means... From the pics I've been able to find of the author, he's at least 6'2", maybe even 6'5".
If you are as tall as the author, you could probably move the device away a bit, zoom out the image, or perhaps put something underneath its stand to angle it properly.
After using Windows 8 I just see no good reason for anyone to use it on an old PC instead of Windows 7. I only see drawbacks, such as the forced Metro interface, and the inconsistencies in the desktop mode UI, which seem like a patched-up job done 6 months before the release or something, to make it more "Metro".
- A mail app that opens to a completely blank screen with no cues on how to continue.
- An infinite login dialog that doesn't allow you to cancel and back out.
I've read my share of man pages and hand written my Xorg.confs many times in previous lives, I'm no stranger to complex and arcane software setup procedures.
But in 2012, in the world of smartphones and tablets, this is stuff that should just work. The answer to "the mail app is completely blank on launch" shouldn't be "sorry, you failed to read the manual". Ever.
And while I greatly respect Microsoft's attempt at entering this market, someone on their team, at some point, had to look at these issues and say, "okay, this software is ready to ship anyway". That does not bode well.
 The alternative, I suppose, is that no one noticed. Which is even worse.
and I still think it is true... IMO Microsoft made a mistake by leading out with the RT. Leading with the Pro and then offering the RT as a feature-reduced lower cost version would have cut down on the confusion as to what RT really is and lessened the initial impression that the Windows 8 experience is kind of underwhelming.
Huh, in the Anandtech review they thought the kickstand worked well everywhere except airplanes.
Has anyone else tried the SRT? This post alone is enough to scare me away.
> I admit, I fully expected a tablet version of my laptop. I wanted it to do everything my laptop could do, but with the added bonus of the touch screen, so I can play my games that make my phone freeze up while I'm sitting at my kids' dance or karate classes.
If you're technically savvy enough to understand and follow focus of GUI elements, and don't mind a stylus, then there are a number of existing tablets that will fit this bill. In fact, they've been around since ~2000.
But the Surface will not be true competition to Apple. This product fails in too many ways, and I predict that the iPad will remain dominant for at least a few years to come.
It wasn't the first time I'd bought a Microsoft mouse. I bought one back about a decade ago when another Logitech mouse died. It suffered the same fate as the Microsoft Touch Mouse: it was returned to the store and exchanged for another Logitech -- for exactly the same reason.
Neither was acceptable for my workflow. Unsurprisingly, I spend a meaningful amount of time using CAD/BIM software. The touch mouse zoomed in when I adjusted my grip ("drawing" with a mouse largely involves holding it). There was no way to program the gestures. Likewise, the earlier Microsoft mouse had lots of buttons, but no way to program the middle button as a middle button - as an early "many button" mouse, the middle button had some dedicated function and I had about a decade of muscle memory and projects to push out the door.
The author is experiencing the same thing. The new device isn't tailored to his workflow. It probably isn't reasonable to expect it to be. Its competitors aren't; most people don't have a similar workflow; and it's still version one of the software (Word for RT).
This doesn't excuse the device's performance. But it also puts the author's experience in perspective. Right now, he's somewhat of an edge use case.
Even RT could be tolerable with the right apps as a remote machine with that keyboard, similar to what people do with Android+Transformer. I can program on it, work on remote machines.
That having been said, assuming it's somewhat usable on a lap, I'll wait for the Pro too, I have several things that need x86.
It is a weird combo of laptop and tablet.
also vs the ipad 3 here - http://goodereader.com/blog/good-e-reader-videos/microsoft-s...
the pro might be a better investment, but most of the apps crash or are buggy; not really worth being an early adopter with this product.
What I'm disappointed about is that the bottom edge of the glass is not completely flat. This shows poor build quality, and I expected at least Apple quality from this device. I also tried a second one, and it also has non flat glass on the bottom edge. All the demo units had this problem too. Try reflecting a straight line on that part of the screen and you will know what I mean. It does bug me a bit because I expect a high standard for a $600 ARM tablet.
When they go bad, they're catastrophically bad.
Overall the design looked cool, as I am interested in tablet with an attached keyboard with a trackpad. Though I want a tablet/PC type of device that allows me to use it as a tablet or a PC laptop. I guess the Surface is not what I imagined.
As a WIN32 developer for the last ten years I would have to agree that a good bit of software today for Windows lacks performance. Why? IMO a lot has to do with the mindset not only of developers, but also of those who produce the programming languages developers use. I would venture to say that most programmers would admit that the computer they develop on is more advanced than most mass market PCs. They like i5 or i7 CPUs, 8, 16 or more gigs of memory, SSDs, etc. The mass market PC, though, to be affordable, comes far less equipped. This is why my development PC is closer to a mass market PC: I need to feel the performance problems the moment I compile and run. If you write apps which run fast on a slow PC, imagine how they will run on higher end devices.
The camera viewpoint doesn't cover your face when you put the tablet in kickstand mode? Move it a little farther away -- what's wrong with that? Other leading tablets on the market don't even have a kickstand. It's been called a design wonder, along with the keyboard cover. People should appreciate that instead.
All issues noted in this article are exaggerations, except the live sign-in bug while saving an Office doc.
This is Surface Pro territory, and the Pro has not been released yet. The slowness is totally understandable: using the full Win8 desktop environment on Surface RT hardware is like that guy who installed OS X on an iPad.
Now all we need is linux to credibly support it as well, or at least a linux built for mouse use. There are still many, many usability issues in gnome with a high ppi screen.
Also, let's not forget how far mobile GPUs have come in the last few years; it would have been impossible to push that many pixels with anything but the most minimal of 3D use cases.
It's a much more complex problem than Linus suggests; it's not just a matter of OEMs switching panels. Witness how relatively complicated Apple's solution is, which came after years of supposedly "somewhat" supporting HiDPI. Use the HiDPI MacBook Pro at 1920x on the Ivy Bridge GPU and it's still noticeably laggy in some 3D operations.
Next up: IPS LCDs everywhere.
That's fine if you are willing to drop $1000 on a laptop, but I don't expect we'll see $400 laptops @ 2560x1600 for several years -- tablets have the advantage of a free operating system, lower computing requirements, smaller physical screens, and, in the case of Amazon/Google, a willingness to subsidize hardware sales in order to capture downstream content/search revenue.
Roll it out. Stop with this incremental horseshit, for science's sake, and make a LEAP.
"It's ignorant people like you that hold back the rest of the world. Please just disconnect yourself, move to Pennsylvania, and become Amish.
The world does not need another ignorant web developer that makes some fixed-pixel designs. But maybe you'd be a wonder at woodcarving or churning the butter?"
I agree with the "tiny font" bit. As someone with severe myopia, I find my Mac Air a lot more comfortable on the eyes than my 16" widescreen Acer.
In an ideal world I would agree completely, a better DPI is amazing both in terms of font readability AND for watching full screen media (movies, TV shows, games, etc).
But in the real world higher resolution means smaller screen elements. At 1600x900 fonts are readable at 125%; at 1920x1080, even at 125%, fonts and some elements are literally too small to be comfortably read (you get eye-strain after less than an hour).
Now I would turn it up to 150% "text size" but that breaks SO many native Windows applications (e.g. pushing text off the viewable area) and does the same on Linux too (Ubuntu).
Ideally everything should remain the same size no matter what the resolution, and the DPI should just grow upwards. This is how it works on platforms like the iPad.
So, I disagree with him, I don't want higher resolution displays because Windows, Linux, and OS X still suck at handling resolution (and if you use a non-native resolution it hurts the performance since the GPU has to re-scale constantly).
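To put rough numbers on the "elements shrink" problem -- the panel width here is an assumption (a 13.3" 16:9 screen is about 294 mm wide), and the exact figures don't matter, only the trend:

    # Physical size of a fixed 16-pixel UI element on a 294 mm wide panel
    # at a few common horizontal resolutions.
    PANEL_WIDTH_MM = 294.0

    def physical_mm(element_px, horizontal_res_px):
        return element_px * PANEL_WIDTH_MM / horizontal_res_px

    for res in (1366, 1600, 1920, 2560):
        print(res, round(physical_mm(16, res), 2), "mm")
    # 1366 -> 3.44 mm, 1920 -> 2.45 mm, 2560 -> 1.84 mm: the same "16px"
    # text at nearly half the physical size -- hence the need for DPI
    # scaling that doesn't break applications.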
And about every other hacker I know.
Personally I'd like to see a higher refresh rate. Even with triple buffering I don't think that horizontal scrolling is smooth enough. I'd love to have a 120hz laptop screen. The new Windows 8 start screen, and switching between workspaces in Linux would be so much nicer. Still it's not exactly necessary, just a nice to have.
(i had a 15" 1400x1050 display, then a 16" FullHD and now a 13" 1366x768)
Maybe some games? Maybe some designery stuff? Maybe some video creation stuff? (It might be useful for doctors and medical images, but I kind of hope they're using special purpose monitors for that stuff).
And does pushing those extra pixels have a cost in energy use?
>Christ, soon even the cellphones will start laughing at the ridiculously bad laptop displays.
They've been beating the pants off of them for some time now.
Computer displays have stagnated for too long.
It's a trade-off between various things, as usual. What Linus thinks may work for him (I happen to agree), but I can easily see someone wanting an ultraportable with basically VGA resolution and battery lasting entire day of active usage.
I have a 13" Retina MacBook Pro and scaled resolutions look blurry, so I'm stuck with 1280x800.
This is nonsense. You need pixels to a certain amount to have a good looking picture, but the benefit of having way larger resolutions is like a log curve: it stagnates as you go up and up, since you would notice the pixels less and less.
I am surprised to see Linus making this kind of claim, he used to be more practically-focused. Now he sounds like a marketing guy from Apple.
I say, why stop at 2560x1600? This is ridiculously low. Make 10,000 x 7,000 the new standard laptop resolution. Yes, we can. Tomorrow, please. Even if the capacity and the plants to make it do not exist yet.
BS if I ever see it.
I can work just fine on my 2009 MBP at 1280x800 (or whatever it is): the text is perfectly readable, there's no noticeable pixelation at distances past a few inches from the screen, and having everything shrink as a result of increasing the res would make it unusable.
He's probably exaggerating for effect, but it's not even remotely true that laptop resolutions have stagnated. They have steadily increased to the point where we now have retina screens on regular work laptops.
I think if Linus created a blog post saying he'd just set his background color to blue, it would make the top spot here!
It's a shame to see Linus stooping to mock Apple's use of the term retina. They (Apple) name everything, like the Fusion Drive or any number of previous technologies. It's to humanize the tech so the average person walking into the Apple store doesn't have to talk in tech-speak. It's just a marketing term, and every company has them.
The definition of "reasonable resolution" changes over the years, VGA seemed reasonable compared to EGA.
Compare that to cereal crops like wheat or maize or vegetable crops, which require long uninterrupted growing seasons and irrigation.
Why is this important? When a troop of rampaging soldiers cuts through your village and pillages everything in sight, you grab your cows and family and boogey out of there. Essentially, you have a mobile food supply.
In the event of a drought, you have options as well. With wheat or vegetables, no rain == no food. With a dairy animal, you go kill the guy who controls the next pasture and let Old Bessie the cow feast on the grass. (The other key development was the introduction of potatoes, which remain buried under the ground safe from the rampaging army above -- my Irish ancestors subsisted on potatoes hidden from the English taxman and a cow that lived in the house.)
In Europe and the Near East, these things were really important, because there was always pillaging armies marching across the continent. Today, it's unlikely that some Mongol horde is going to loot my supermarket, so I drink milk and eat cheese because they are really tasty.
The success of the lactose tolerance mutation may be partly due to sexual selection. It's been proposed that neoteny is a key feature of human evolution. The ability to drink milk as an adult is a neotenous trait, and it may have been "accidentally" selected for when other beautiful features were sexually selected.
David Rothenberg's book, Survival of the Beautiful, argues that biologists are sometimes "blinded" by natural selection and ignore sexual selection.
 - http://en.wikipedia.org/wiki/Sexual_selection
 - http://en.wikipedia.org/wiki/Neoteny
 - http://blogs.scientificamerican.com/thoughtomics/2012/10/25/...
edit: Growing up, we always had 2% in the house. From college on I drink skim, occasionally (once every few months) I get 1 or 2%, just to up the fat content (I'm a runner, not terribly concerned with weight gain, more or less trying to maintain body mass...)
Here's a tip for others - you can buy lactase pills at a pharmacy and take them just before you eat any meal that contains milk. This gives you the enzymes you need without your body producing them.
And it's really awesome. I only started doing this a year ago, but now I can eat many more cheeses, drink milkshakes, etc., without feeling bad. And it happens surprisingly often - every time you want to eat pizza, pastas, etc.
Seriously, if you're lactose intolerant, give it a try -- it improved my life considerably.
Odd that they don't mention physical displacement: invasion, dispossession, death. The gene would likely have coincided with other developments of civilization, such as weapon technology, greater numbers, greater cooperation, specialised soldiers etc. Maybe there's evidence against it, but odd it's not addressed, with a puzzlingly high "selection differential". Another factor might have been sexual selection, if the new folk were healthier looking etc.
Note they are talking specifically about the West - agriculture and civilization spread throughout the East without this gene.
I'm just glad I'm not lactose intolerant, so thanks to whoever in my billions of ancestors decided to keep at it.
It moderates strong flavors, smooths out acidic drinks, and fluffs up eggs, among thousands of other beneficial food uses.
Other dairy products like butter and cheese are key to an immense palette of flavors and cooking techniques.
Dairy is so delicious that I've even seen people with violent milk allergies put up with the consequences just to scarf down a few bites of custard or ice cream.
The article itself says: "Two hundred thousand years later, around 10,000 B.C., this began to change. A genetic mutation appeared, somewhere near modern-day Turkey, that jammed the lactase-production gene permanently in the 'on' position."
This is NOT a genetic mutation. The gene was already there but not turned on past the toddler years. I searched this entire page of comments and no one knows or points this out?
Really? A plant-based whole foods diet is probably the best cure out there for heart disease and type 2 diabetes. (google Dean Ornish, Neil Barnard, John McDougall)
The author tells a good story but his bashing of agriculture is unsupported.
Likewise, people in Sweden for example have a 100x higher lactose tolerance, because there's less sunlight throughout the year.
Plus, animals can graze on land you can't farm, and they're very portable.
"We became, in the coinage of one paleoanthropologist, â€śmampiresâ€ť who feed on the fluids of other animals."
Does anyone know why some East Asians (such as myself) are lactose tolerant? Is that evidence of interbreeding in the past?
I believe the benefit of drinking milk is obvious. A herd can take calories from grass and drink mud, while the human enjoys a source of clean, caloric, nutrient-rich drink that can go anywhere. Farmers, on the other hand, can just be run over, pillaged, or besieged by enemies.
This would support massive switch to tolerance (simple survival of the fittest).
It also supports the spread, as a bacteria or virus would not have made it out of the "islands" (himalayas, oceans) and so tolerance wouldn't have been an advantage.
Not hating for those who want to feel good about their love for milk, but I don't think today's milk is much more than a treat and baking ingredient.
Because the dairy industry in the US alone gets $4 billion per year in subsidies from taxpayers?
> I still had a job, which made everything near impossible, that I couldn't afford to quit. I worked during the day as a report writer, snuck in emails and business calls for Altsie over my lunch, and worked late into the night to take care of hundreds of necessary details to keep the project going.
> Despite my downward physical spiral, I managed to marry the love of my life
I appreciate that people have lives too but you just can't do two jobs and have a personal life. Sorry. Something has to give. I've read many tales of where having just the startup has put a strain on personal relationships.
I wonder what the situation was with the cofounders. How many were there? Were they full-time? If so, that could be a problem (in that they might end up feeling that they've gone "all in" when you haven't).
> Two years building and eight months running Altsie took its toll.
Two years to launch? I wonder how much quicker it would've been to launch with full-time resources. For something that isn't hugely technically sophisticated (correct me if I'm wrong, but this doesn't sound like that kind of startup) that is (IMHO) too long. People talk about MVPs for a reason. You need to prove your idea and get feedback ASAP.
Whatever the case, eight months doesn't seem long enough to prove anything one way or the other.
I don't mean to be harsh so I apologize if it comes across that way. Lucas, good luck to you. I would suggest that when you wish to try your next venture (assuming you do), you do so when you can dedicate it to yourself full-time.
Lucas says "I put three years of my life into building and running Altsie,..." ... "As we approached launch last May" and "Two years building and eight months running "
What are the expectations on a business where you are looking for people to integrate a new thing (going to a bar to catch an indie movie) into their lifestyle? A week? A month? A year? Five years? If you look at the restaurant business, most seem to require a 3-year 'boot' cycle: the first year nobody knows about them, but perhaps the local food critic tries them. The second year they have some foot traffic and perhaps they get written up in a more widely distributed guide. Then the third year they have people coming who have read about them in the guide or found them on their phone's 'maps' product, and they get to see how successful they are going to be. I can't imagine that any idea which requires people to change their behaviors in the real world could really be tested in less than a year.
The other thing that was sad to read was this bit, "I'd signed up to fight on the front lines. I still had a job, which made everything near impossible, that I couldn't afford to quit. I worked during the day as a report writer, snuck in emails and business calls for Altsie over my lunch, and worked late into the night to take care of hundreds of necessary details to keep the project going."
There is a reason YC and others ask you to quit your job if you're doing a startup. There isn't a lot of excess time. If you have a spouse or partner who can bring in enough income to pay the bills and maybe health care that is one thing, but being both the 'stable income source' and the primary mover of the new venture? Not a good idea as Lucas discovered.
Now the most important thing to do is to capture all of the things you learned into something you can use in the future. What worked? What didn't work? How did you spend your time, and could you have outsourced any of that? What were your costs and how did you evaluate the business? What variables did you guess at? Did you guess high or low? People who have been through the wringer are twice as valuable as people who haven't done it yet, because they have a better idea of what they need to know to make forward progress.
I hope that Lucas' next venture is a lot less stressful on his health/psyche and much more satisfying overall.
It's a bit sad to see. Especially because I believe that many people here know (or should know) how complex these topics are.
In my opinion, a great article. Thanks for sharing it so honestly.
Perhaps one of them has an idea to cut costs, or would like to open source the code, or can line up a buyer for the assets, or ... something.
Telling your stakeholders/investors/cofounders after you've pulled the trigger seems like the exact backwards way to do it.
I have no idea how good a business idea that is (I guess not such a great one), but it sounds like a great idea and I wish something like it could be successful. In my moderately sized UK city it's impossible or very difficult to see a large proportion of new releases on a big screen.
Definitely identify with gaining weight. It's brutal how quickly you can fall out of shape.
After playing basketball 6 times a week since college I barely get out once every three months. I'm 30 now and feel 40.
Aside from the up and down roller coaster ride, the hardest part for me has been balancing a relationship that began at roughly the same time that my co-founder and I went into business together. I have no idea how you could possibly balance anything else (like a real job) outside of a startup and a new relationship for an extended period of time.
There are times my relationship has been a distraction to our business. But well worth the juggling act :)
1. What pain does my idea solve?
2. Does it solve it for a large number of people?
3. Just how painful is it when left unsolved?
Do you know plenty of people who are in pain because they can't find a venue to watch an indie flick? Does not being able to find an indie flick at an appropriate venue eat at their thoughts 24/7? Are they going to go nuts finding a solution if you don't provide one? How much money would solving this problem be worth to them?
Admittedly I know diddly about Altsie, and I'm not one for indie flicks, but let's compare Altsie to Airbnb. Airbnb solves a basic human need: that of housing. How painful is it when you don't have a house? Immensely. How much money are you willing to pay for a roof over your head? Thousands per year. How many people are searching for your solution? A shitload. Now replace housing with "indie flick", and objectively recalculate.
After doing so, you might think three years is a long, loooong time investment, hugely out of proportion to the level of pain Altsie solves, not to mention the price of solving that pain.
I honestly don't think anyone understands what it really feels like to build a company until you do it. Before I started running my first startup, I thought that the hardship and mental anguish other people describe was somewhat like what I already experienced during hard times at other companies. It wasn't. You pour your heart and soul into a startup and push to the side your physical health, hobbies, family and basically everything else. Then after a year or more of doing everything possible to try and succeed, you potentially end up with nothing. Like Lucas says, you don't really end up with nothing, but it sure as hell feels like it at the time.
First, Altsie is a pretty awesome idea! I really like the idea of going to a bar to watch an indie movie, I'm sure producers would love to get their film shown, and bars want extra customers coming in. This is something that definitely could have worked.
Second, the technology behind this product is trivial, a 2 year build is a huge warning sign. I cannot find on the site or in this description anything that should be hard to put together, and the fact that Lucas spent a few years building this in his spare time instead of hiring someone to do it in a (few) week(s) shows a dangerous prioritization of money over time.
Third, it takes a strong presence of mind (or maybe just good communication with your partner) to realize that what you're doing isn't making you happy. Kudos on letting it go.
UPDATE: Must have been a bug. It has now been fixed.
Unit conversions provide in-line converter widgets... it'll gleefully show you pictures of anything safe-search while playing dumb if you search for something "naughty"... web links you select pile up in little tabs that let you slide right back to the original query... it looks good... it makes pleasing sounds that let you know what's happening...
If Siri can stage a question to Wolfram Alpha, the result is great. But if she can't, she just lamely offers a button to (Search the web for ______?) that then kicks you out to Safari. Google voice search makes Siri feel clunky.
The voice recognition is verging on instantaneous. This is amazing work.
Well, I just did the test. Google voice search on a 40 month old iPhone 3GS is more responsive and much more precise than Siri on the latest and best 1 month old iPhone 5.
Apple has so much egg on their face.
I asked both Google and Siri, "How much damage did Hurricane Sandy do?"
Google heard it as "How much damage did Hurricane Sandy too?" and returned with official Hurricane Sandy emergency info and the latest news stories literally as I stopped talking.
Siri took nearly five seconds to register my question as "How much damage did hurricane you do" and responded with hockey league standings for the Hurricanes team.
And the execution of Google's product is more stylish than Apple's... given Google's lead in collecting voice data, never mind their lead in search technology and algorithms... how can Apple hope to even compete in voice search except by forcing Siri on iOS users?
*edit: Here's a screenshot comparison: http://danwin.com/words/wp-content/uploads/2012/10/google-vs...
Now, in the short term, style > substance, for the simple reason that it is easy to repackage something that is difficult to use into something simple.
Making things easy to use is obvious for designers - but not for engineers - because they focus on actually making complex things work instead of making them easy to use.
However, in the long run substance > style for the simple reason that anything that can be repackaged to make it simple will either become a minority player or a commodity item because style and veneer are easily copyable but substance isn't. Substance is a natural monopoly, and monopolies make lots of money.
What you see here is the fruition of substance over style - big data is a monopoly and Google owns it hard.
Google will be the first trillion dollar corporation. And it will do so by making everything else apart from Artificial Intelligence a commodity.
Disclaimer: I'm looking to buy Google stock and I recently exited an Apple short.
Google wants its apps on iOS, as they mostly care about ad revenue (not the few bucks they might make on Nexus, which is just one of many Android brands). Google has always been platform agnostic. Apple wants Google (Android) dead, but simply doesn't have the ability to beat Google at search.
Scott Forstall probably wanted Apple to create a massive data division, so they could go toe to toe with Google on search, and hope that people would still want iOS even if Google was locked out. I'm guessing the other execs were beginning to question this strategy - Google can make a "good enough" mobile OS better than Apple can make a "good enough" search engine and mapping platform. It's far better to let Google own search, and focus on doing what Apple does best.
Me: Will it be cold tomorrow?
Siri: Yes, the low temperature will be 42 degrees.
Me: What about Friday?
Siri: Looking better. The low temperature will be 58 degrees.
(exact wording paraphrased)
Try this series of questions in the new Google search app. The first gives me a wiki.answers.com page as the top result, the second a Woody Allen quote.
Or try it on any conversation bot. Using pronouns to obscure the topic of conversation has always been the best way to reveal the stupid machine underneath. Siri is a little less stupid.
Google voice search cannot do this because it is fundamentally transactional--you ask one question, get one answer. It is just another interface to their web search, albeit one with seemingly great voice recognition.
Siri is not designed primarily as a search engine. It is designed to be a personal assistant and is optimized to accomplish tasks and answers certain questions in the process of doing so.
Google originally announced this app back in August, and said it'd be in the App Store "shortly"... It's pretty obvious why Apple held this back in the approval process since it definitely competes with Siri's functionality.
Edit: I'd go so far to say this is eerily similar to the issues levied during Microsoft's anti-trust case. Google clearly is unable to compete here for no other reason than artificial walls put up by Apple on their devices in software. This is mobile's IE vs Netscape.
With the data that Google has, I can ask it math questions, ask it questions about release dates of movies or video games. And now I can query that data through Google Now (or will be able to as they pipe through from that dataset to exposing it through Google Now).
Funny, even with just some of the features in 4.2, Now became as much of an assistant as Siri, or more. I still can't get over that it will scan my email for packages and give me notifications about them. That to me is the epitome of why I love what Google does. They are good at data.
Even if you're lucky enough to live in the US and be able to use Google Now (voice search like in this iOS app, or Siri), you won't have the same features available. If you, God forbid, change your language away from US English into something like horrible UK English, the features are disabled and it just becomes dictation instead.
I'm sorry about being a little bit annoyed, but I can't understand at all why Google put in place extremely stupid restrictions on features, requiring their users to hex-edit their binaries in order to get access to the available features. It's moronic.
Apple of course does not allow keyboard replacement either, so we're all stuck with the crappy voice recognition in iDevices (I have several, statement of fact, not fanboy-ism)
I believe Google must be doing something right on the voice frontier when they can accomplish things like this. They must have some pretty efficient methods for teaching the system new languages.
Note for non-Americans: The app only speaks results back if your selected voice search app is 'English (US)'.
Apple vs. Google, style vs. big data. While I love Apple's sensibilities, here's more evidence that data will win in the long run.
I use Siri a lot, and it's powerful because it actually feels part of the phone. I can actually do useful things with it.
Google, please release your maps app for iOS now!
Since Google knows so much about me, can I say:
- what movie should I see?
- book tickets
- mark the route to the cinema
to eventually (with a self-driving car):
- take me out to the movies, Google
I know that Siri works this way as well (which I also find irritating). Why won't it obey my settings!
"how far is it to X" doesn't work in Google but works in Siri. "how did (sports team) do" doesn't work in Google but works in Siri."what movies are playing" works but "where is Argo playing" doesn't. And this is just weird because "where is Looper playing" works.
In general I've been very pleased with voice search on Google Now -- just reading the blog post I wasn't too amazed by the examples they gave for iOS, because it sounds identical to what Google Now provides. I assumed that Google would release these features for Android before iOS, but am surprised by the overwhelmingly positive comments others here have. Can anyone do a comparison and shed some light?
I do have to say that Google Now is sometimes rather slow -- the voice recognition is very fast (it types as you talk, in real time) but web search can sometimes take 10+ seconds to load even when already connected to wifi. Other times, it just works.
Yes, the voice recognition is very fast, but then again, most questions only work in English; no chance to get anything useful in German or other languages. That's not really competition for Siri in this department.
"Who made you?", "Tell me a joke?", "Who am I?", "Will you marry me?"
I love their image search though. Take a picture of a painting, and it shows you a lot of info.
Great app technically though. Hopefully this'll push Apple to make Siri more responsive.
Now Apple needs to catch up on Siri.
Whatever blogspot theme Google uses for this blog is just god awful (oh, I see, this is it: http://gmailblog.blogspot.co.uk/p/as-you-may-have-noticed-gm...), and I swear it gets worse every time I view a post there.
When I load the page I see the orange "loading" gears. Then that sliiiiides up to reveal the content. Really? Can they really not innovate here? This is a company that spends massive amounts of resources to get their homepage to load as quickly as possible. Heck, they even penalize companies in their index with slow loading times. And yet they purposefully add loading animations and transitions which add at least a second to page load, and probably more as far as time it takes me to engage in the content.
Also, every time I reload the page I see something different. Sometimes there is text in the black menu bar. Sometimes there is not. Sometimes there is an "extra" screen that slides up after the orange loading gears, sometimes not. Sometimes the sidebar navigation is there, sometimes not. Try refreshing a few times yourself and you'll see.
Also, I love the five-second delay for the document URL update when you navigate via the sidebar.
All of these fancy, look-we're-using-ajax, gee-whiz-its-a-single-page-app features -- just for a simple blog post. Talk about complicating a simple problem!
Again, apologies for the rant but I couldn't even concentrate on what the blog post was saying because I was so distracted by this garbage.
Minutes? I do sometimes wonder where the day goes...
Alternatively, Shift-click on compose (or press capital C with keyboard shortcuts on), and you'll get an actual window that you can move and resize. With the chat mini-windows I frequently wish that I could adjust them, and I often accidentally collapse them by clicking in what looks like a title bar.
Or if you already have a message open, just shift-click on Inbox to open a new window. Or not --- Google in their wisdom has disabled that to prevent confusing some poor soul with a broken shift key, praise be their servers. But C-n for a new window, ma-return, /search, C-w isn't that bad.
I like the idea of minimizing the address area, but don't understand the advantages of the new approach. Is it primarily for tablet compatibility? Or is the concept of windows still considered unteachable? And why are they decreasing support for traditional click modifiers?
My big complaint about email, especially gmail, is that it seems to insist on keeping too many elements from desktop to mobile. My mobile email needs are much more like my SMS needs, yet the UX for mobile Gmail (for example) largely resembles the desktop client.
I know that's tangential, but it's what's on my mind regarding Gmail right now.
There, I said it.
The most annoying thing to me about gmail's interface is center-clicking on something (to reference it, like something in a mailing list) and having the entire window redirected.
This is cool :)
Midway while typing this post, I did a search and boom: http://news.ycombinator.com/item?id=3581613
Peter Norvig gave a Singularity Summit talk on the exciting work they're doing http://fora.tv/2012/10/14/Peter_Norvig_Channeling_the_Flood_... , so why isn't that the top spot?
HN is kowtowing to the pseudo-brogrammer crowd.
Open mutt (preferably under screen so you can 'C-a "' between mail folders), view the message(s) you want to reference (optionally: tag relevant messages, using mutt's filtering tools as necessary, then 'l ~T' to restrict to just the tagged messages, allowing you to rapidly reference a set of messages).
Fire up a new terminal window (I bind this to '<shift><alt>t' in my window manager) and write "mutt -s 'subject line'" to start your message, drop into the address lookup window to designate recipients, and edit away in your editor of choice.
And all of this without the multi-gigabyte overhead of a full browser session + gmail.
Oh yeah: offlineimap means you can work on your GMail account (and/or any other accounts you've configured for mutt) readily.
My real pet peeve: can we start removing "Compose" from email vocabulary?
I've tried every recommendation, and of course hear nothing back from their support or in their forums. Gmail is almost unusable for me now.
Any recommendations or contacts would be appreciated. It seems the only way to get help from Google is to know someone.
I've used Gmail basically ever since its early stages (I've even paid for extra storage for years now), and the degradation of performance is the one thing that makes me think of leaving the service for something snappier.
(The main ones for me are the compose window never actually loading, or new messages in a thread not displaying (both requiring a refresh). Meanwhile, the chat hover has changed layout twice. Seriously?)
I always want to reply to the most recent message, but usually someone else responds while I am composing. So I have to view their message and reply to that instead of to the original message. Does this happen to anyone else, or am I taking crazy pills?
It feels to me, though, that Google is still playing serious catch-up to its main competitors (Yahoo & Microsoft) in the email arena, who both have great web-mail solutions that don't get the accolades they deserve.
Now all I need is full folder support and I'm happy!
All of the pain points that this supposedly solves have already been solved with tabs. Middle click compose and all of those problems are solved with the added bonus of having an entire screen to write your email in.
Google is being taken over by pointy haired managers and marketing. RIP.
Secondly, are we stupid for using these annoying shift and control shortcuts? Google is not reinventing the wheel. Google is not used only by elite computer programmers. I don't even use emacs because I am a VIM user. GMail is used by over a billion users, and most of them don't even know some shortcuts or nice ways to make their tasks better.
I don't know all the secrets you guys are pointing out, so are you going to call me stupid? This is not reinventing the wheel. It's just making the app more usable.
If anyone starts spamming me with "you can already do this with X, Y, Z ways": the point is that it's not known by everyone...
And the quality of the actual video isn't even HD?
PS: Already a down vote. Be man or woman enough to state your case.
My favorite speaker probably was Joel Spolsky (and his slow, organic growth vs land-grab talk).
I love how Joel used Fog Creek to fund StackExchange's development and now Trello, which both seem to be land-grab businesses. It's almost like Fog Creek is its own startup incubator now. Maybe a new model of funding/startups?
This recording will never be available? I would like to watch his talk...
NOOO! This talk was fabulous!
I felt a recurring theme was "don't give up"... so I'll really try to remember that lesson when I hit future roadblocks.
I enjoyed attending and meeting some of you in person. Definitely looking forward to next year's edition!
I had to strain to hear what the speakers were saying.
Was the volume OK for those on the main level?
I find it hard to watch talks where only the slides got recorded, or others where only the speaker gets recorded.
For the latter, I'll normally download the slides and use them to follow along with the talk.
I wish I had something more substantive to say here, but the problem is that we give Facebook an extraordinarily huge power in our personal lives. It's not just some random web service.
I was impressed with the account recovery process ("you entered an old password -- do you want to recover your account?"), but I felt like they were completely optimized for recovery versus preventing the intrusion in the first place (ala Google's two-factor auth).
Anyway, in this case they obviously took the wrong approach with the blogger and I hope it blows up in their faces. (Microsoft and everyone else used to not be nice to security researchers, Facebook will no doubt learn that cooperation is a better strategy too).
By using an app you are giving them access to a whole bunch of your personal information. I always assumed that many were scraping data from my profile. This is why I have never used Facebook for authentication.
When I read the original post I figured Facebook would want the data so they could narrow down who the probable culprit is. I would have thought finding a common app among a million users probably wouldn't be too difficult.
That said, the nature of this conversation is ridiculous.
On the other hand, they are trying to solve this issue secretly, with no disclosure. And we don't yet know if they are taking any privacy measures to prevent this kind of data leak.
By reacting like that, I think Facebook can be considered as guilty as charged.
I dropped out of a CS program after first year. I was the classic case of a student who had always been told he was brilliant, so I never worked very hard. In high school, I coasted along simply on a fantastic memory, often 'studying' for the final exams that determine graduation the night before. I never learned how to learn.
Going to college was like being thrown into a bath of cold water. I had never been particularly conscientious, so being in an environment where I was now responsible for my learning was new to me. I skipped lectures, forgot homework that was due, turned in coursework late; the usual suspects. On raw talent though, I qualified for 2nd year, only failing Pre-Calculus. (I skipped the classes and tried to learn math from 1st principles. Ugh...)
I got a summer job at a small telecom startup. By the time 2nd year rolled around, my student loan was denied, so I dropped out. I'd always hated school, so I didn't care. I never applied for a leave of absence, nothing. I just didn't show up in September. That was 2006.
I was 20 then. I'm 26 now. I've had a lot of time (6 years!) to reflect on why I did so poorly despite being talented (not being conceited; my lecturers in 1st year said as much). There are quite a few reasons; but the major one is that I didn't know how to learn. So if something didn't immediately click, I'd give up in frustration, and decry the teacher as an idiot who couldn't teach (oftentimes true; but irrelevant). I didn't know there was another way.
Being around HN and places like LessWrong, which expose you to so many thought leaders, brought about some interesting side effects, which culminated earlier this year. Upon reading an article on LW entitled "Humans are not automatically strategic", which was a reply to a Sebastian Marshall article, "A failure to evaluate return on time fallacy", I had an epiphany that being systematic about things was the route to accomplishing great things. "Rationalists should win", the LW meme goes, and it's correct. I came to realize that for every goal, there exists an efficient path to achieve it. My task was to find that path, and execute ruthlessly upon it.
Since then I've made leaps and bounds in my personal development. I still slack off sometimes, but I won't fall into my old perfectionist way of thinking that I'm a failure. It's better to be 80% there than 0%.
I made the decision a few weeks ago to get my CS degree, albeit at a different, larger university. Since then, I've been devouring articles like this one. I recently bought two of Cal's books and sometimes want to slap myself when I realize that if I had had this knowledge, and the discipline to implement it, 6 years ago, my life would be so much better. But c'est la vie. These articles on meta-learning are priceless.
So if you're in school now, or are going soon, pay attention to articles like these. Here are a few gems I've dug up recently: http://news.ycombinator.com/item?id=3427762
Thanks to knowledge like this from Cal Newport and others, I'm going back to college full-time as someone with an above-average cognitive toolset and a myriad of experiences that will suit me. I'm much more sociable, have a great eye for design having moonlighted as a freelancer some years back, and will now know how to engage my lecturers on an adult level, rather than as the kid I was 6 years ago. I'm going for a 4.3 GPA. I'm tempted to say wish me luck, but with tools like these, I'll make my own luck.
This rationalist will win.
PS If y'all have more articles like this, let me know. If you wanna chat privately, email's in profile.
EDIT: formatting; clarity
I have ever seen. The shtick is getting old. Gee-whiz posts about a dilettante ramping up to a beginner's knowledge of a subject with little time and effort have nothing to do with the really challenging learning tasks in this world.
I'll be impressed when I see a headline like "Middle East diplomatic issues resolved by undergraduate who completed one course in international relations" or something like that. Show me someone who has solved a genuinely hard problem before proclaiming a new breakthrough in learning. For a refreshing change of pace from the usual blog post on quick-and-dirty learning, see Peter Norvig's "Teach Yourself Programming in Ten Years"
or Terence Tao's "Does one have to be a genius to do maths?"
for descriptions of the process of real learning of genuinely challenging subjects.
Based on that test, I think the title is link-bait as it isn't "mastering linear algebra" but "passing an introductory algebra course."
As an aside, I've never heard it called the "Feynman Techniques." However, one of my favorite things in the world is the so-called "Feynman's Algorithm": (1) Write down the problem. (2) Think very hard. (3) Write down the answer. I just find it hilarious, but I digress.
There are two points of his with which I agree 100%.
Firstly, the process of writing a short summary paragraph of what you just read after reading a chapter or big section of a technical book. There is actually a fantastic book -- maybe one of my favorites of all time -- called, somewhat strangely, How to Read a Book. It's all about very active reading over passive, almost to the point of having a "conversation" with the text you're reading.
Ever since reading that book, I've gotten into the habit of writing a summary of each thing that I read. It really forces you to confront whether or not you "got" the point of what the book is saying. I usually find that there are quite a few bits that I either missed, or didn't quite understand, at which point I go through and search for the pieces I'm missing.
Secondly, looking at all of the low-level pieces to understand the whole. This is something Salman Khan, of the Khan Academy, talks about in (I believe it was) his TED presentation. Quite often, I find that there is some early concept that I glossed over which is slowing my understanding of the current material significantly. For me, doing this means being 'honest' with myself about the state of my current understanding -- which was kind of hard at first when I took this new approach to learning. So much of my 'ego' seems to be unfortunately wrapped up in 'what I know,' and thus I convince myself incorrectly that I do understand something, even when I don't, just because it's something that I "should" already know. Admitting to myself that I didn't understand, for instance, some basic math concept that I should have learned in high school was somewhat difficult -- as odd as that may sound. I suppose I have a fragile ego! But sometimes, getting a good grasp on my modern coursework meant stopping what I was doing and going back a couple of levels to start at the beginning.
The question of "What do I need to know in order to understand this" is, I find, an extraordinarily powerful one.
I absolutely believe what he writes, because he's quite precise about his experiment and how he did it, and this really works for a couple of reasons:
* This guy isn't 20 anymore. He has actually explored and learned and trained "productivity and focus", which he blogs and writes books about - so he doesn't start like an 18-year-old straight from school, perhaps inexperienced in this level of focus and discipline.
* He was pragmatic in his goals - very much so. He didn't write "becoming the world's foremost expert in linear algebra" but "passing an exam". And so he did. He also didn't write "passing everything with a top grade" but "just pass, if better - wonderful".
* He actually did his math on "hours to put in" - a semester doesn't take a full 6 months; you usually don't attend lectures/lab every day, 3 hours a day, but 1-2 times a week, 2 (university) hours plus preparation. If you carefully add this up, you actually get a surprisingly low count of actual course/lesson hours.
* Taking a course in a focused manner is actually quite efficient and helps you (at least it does for me) follow the material without interruptions. You also can repeat as often as you like (he mentions a fast-forward and replay button in his TEDx talk) - which, btw, is part of the success of e.g. Khan Academy material.
* He also put some effort and training into the right way of learning and _that_ pays off massively in terms of speed.
Also, one of the points he is actually making is part of what most of you criticize: going through the list of MIT requirements is something different compared to "becoming an expert in X" - don't mix that up.
Would be more compelling if he was not selling books. Nothing wrong with making a profit but I'm just saying...
For maths-heavy subjects, I'm not really inclined to believe that traditional exams are the best way to assess a student's knowledge and understanding of the material (especially with regard to rote memorisation). Exams in such subjects haven't changed fundamentally in many many decades, even though we now have lots and lots of new things we could do with them.
For instance: do more with computers - like getting the students to solve real-world, many-tentacled, hairy problems by numerical methods, rather than giving them some carefully pruned equation that just happens to have nice analytical solutions. Or introduce more computer-assisted mathematical modelling (e.g. use classical mechanics, to start with). Or on the pure front, teach students to write or at least understand some interesting automated theorem prover.
Stuff like that.
I suspect that traditional exams have survived simply because they serve their purpose: a percentage of exam-takers fail the exam (which allows the exam-setters to claim that their standards of assessment are rigorous), and a fair percentage will pass the exam, some with flying colours. Whether or not the actual learning goal was achieved has not been determined, since the exam is deemed to be the only instrument that can measure that.
When I got into university I found every course very easy, didn't attend any lectures, got all my workshops to run on the same day to reduce my face time and maxed out my free time to do whatever I wanted (work/friends/extra/etc). I'm a STEM major at a top 30 world ranked engineering school with good grades.
I've often asked if I could max out my classes and finish a degree within a year and a half - but I've never been allowed to skip more than a few subjects (tests/bugging the heads of departments).
University shouldn't be time capped or subject load restricted - people should be allowed to do as many as they wish - or you'll find more and more moving towards MOOCs instead.
Not something I would ever want to repeat, and these were first-year-level courses. Basically I was doing a correspondence 3-year degree while working full time. I got heavily involved in my work and decided that I wouldn't continue studying. Then, with about 4 weeks to go to the 2-week final exams period, I thought, what the heck, let's give it a shot...
Amazing what focus and hard work can achieve!
It's true that I didn't attend a lot of classes (since they all overlapped anyways), and I had 2-3 exams virtually every week. The only issue I see is that there is only so much you can do online. I also did the same thing with Chemistry and Biology, which had lots of laboratory classes, and I don't see how one could gain the practical experience of putting knowledge to work in those fields without a wet lab class. EECS however is amenable to this (for the most part - likely hard for an optics laboratory), and most of my EECS labs were really done in Athena clusters instead of a distinct laboratory.
This is a useful technique, giving motivation and focus. Though imperfect: it can't detect incorrect understandings that seem consistent. But to be fair, that's a tricky case.
He is, however, a master of self-marketing:
"To find out more about this, join Scott's newsletter and you'll get a free copy of his rapid learning ebook (and a set of detailed case studies of how other learners have used these techniques)."
> That works out to around 1 course every 1.5 weeks
WTF? What kind of university mandates that you take only one course at any given time? It's not just linkbait; it starts from a wrong assumption. When you take many related courses simultaneously, you see the pieces meshing together, and that helps learning. That's different from taking them in a serial manner.
I would suggest just posting once a day, and using the Promoted Posts for the occasional big news that you want to make sure everyone reads.
Facebook pages isn't a panacea for brands or publishers -- not by a long shot. That panacea is one of those Frighteningly Ambitious Startup Ideas.
Uh huh. "Mom & Pop business" seems to be the new "won't somebody please think of the children" line designed to extinguish all rational thought. I'm getting a little tired of it.
(I'll save my rant on why I think most Mom & Pop businesses should be out of business for another day. I have to say I'm amused when I see a restaurant in my neighborhood put up a bunch of signs that say "absolutely no laptop use" and then go out of business a month later. Idealism is a bitch.)
You can STILL see posts of your favorite bands by going to their pages, which is how you used to have to find updates: by checking for them. The Newsfeed is new, and it's not a right.
Actually? Quite a few. I despise Comcast. I despise the big-4 cell phone companies (Verizon, AT&T, T-Mobile, Sprint). I despise the oil companies (BP, Chevron, Texaco, et al.). Notice a pattern? Despite my (and presumably many others') despising these companies, they are all enormously profitable. I think Facebook has got to the state where they at least think they have a monopoly on their users' social graphs and are willing to raise access prices sky-high. I'm not surprised it happened. I'm surprised it took this long.
I'd be angry if I'd given Facebook money under the old system only for them to change the value of what I got from them. The basic takeaway is that the rules that were in place where I might be willing to pay $2 for a like - a person who likes your page sees your post - had to be changed because there wasn't that much user attention in existence. Now it's been inflated to be worth about a tenth as many views, which is what you were buying, only Facebook called it a "Like" and it somehow means something completely different now.
I guess the moral of the story is don't invest in anything whose value can be arbitrarily changed by someone else.
You built a business inside someone's shopping mall, they started charging rent, so you complain. And at $4 CPK for promoted posts, you'll find FB advertising to be slightly cheaper.
CPK aka CPM aka cost per 1000 views. Calculated from: to reach 100% of our 50k+ Facebook fans they'd charge us $200 per post. Edit: $200 / 50 = $4, thanks Ryan.
What really frustrates me is that I'm missing entirely non-commercial messages from my actual friends. I've missed posts from my girlfriend, for god's sake; it's ridiculous.
I understand that they need to make money, but the entire reason I and others are on facebook is to connect with our friends. Facebook needs to allow us to do that and then augment our experience with monied options, not imply that most of your friends will never see your posts unless you open up.
Don't make me go back to email. It's still there, waiting, full of delicious SMTP guaranteed delivery.
Another in a long, long list of customers whose plans fall apart when a free or one-price-for-life service realizes it cannot continue with business as usual. Today's pro tip: Do not build your livelihood around a third-party's free service. Eventually that service will either 1.) shut down, 2.) kick you out of their ecosystem, or 3.) start charging you.
I'm not sure what is more surprising: that people continue to build businesses with these Achilles heels or that they seem shocked when the third-party changes the game.
Facebook: Oh, definitely. Just have a look at your NewsFeed and see what they're doing.
User: Wait, I've got 2000 friends. Why am I only getting a NewsFeed post twice an hour?
Facebook: Because we decided that's the information that you're most likely to want.
User: But what if I want to know what everyone's doing at any specific moment?
Twitter: Can I be of assistance?
User: Oh, hello Twitter.
He wants control of his fans, his like-ees. Not his (Facebook) "friends". Most of us know that is not a bug but a feature.
Now, the fact that Facebook makes it hard to share one's email address with one's own real Facebook friends is annoying and something to complain about. But trying to leverage that to complain about not being able to push your feed is problematic. This is exactly what we use Facebook for: an experience where you aren't bombarded with everyone's BS.
This is why I don't like many pages, and it's why FB needs clear and easy to use controls for what does or doesn't show up on my wall.
I only have 300,000 likes too. ;-) Basically, the trick is engagement. Give the audience what they want, when they want it. Timing matters; pictures matter. Do it right, and you don't need to pay anything.
P.S. Making money from advertisements, pfft, how ancient and boring! (shameless plug) use Teespring instead.
That perspective actually gives me increased hope for Tent (https://tent.io), the decentralized social networking protocol that could one day be a Facebook alternative. When Tent was announced here on HN, a common criticism was that if you're popular, and you host your Tent server yourself, you end up paying a lot for the bandwidth cost of sending each post to thousands or millions of followers. Whereas the perception is that on a centralized social network you can send a post to millions of followers for free.
For now, that's still the case on Twitter, but on Facebook, apparently not. If you really want significant reach, you pay to publish even to people who already (by liking) signed up to follow you. So the situations aren't actually that different. I guess there really is no free lunch.
"See what we can do for you? See the traffic we can drive and link to you? Want more? Choose your level of traffic, choose your price."
The article makes the assumption that 3rd-party businesses that have been suckling at the teat of the social graph are the value to the facebook users. They're not. The users, the actual people are - businesses are just there to help pay for the whole thing, and follow the personal users. I say this as a business owner who uses facebook heavily, and occasionally pays them for the right to get a little bit back out of them.
I've yet to see a single person in my timeline say "I'd stop coming to facebook if all of these businesses didn't have pages here."
Facebook has a level of PR software as service which is free. They have another which is premium. If a company wants to spam their "fans," they have to pay.
If a business wants to have a high level of control over communications with its fans, customers, likers, or whatever they are called, there's no free lunch. Either pay a third party (e.g. Facebook) or invest the hard work.
Using your blog or whatever to make specious (I assume) arguments about what someone else should/should not be doing with their business is your prerogative. Just don't expect people to actually listen to what you're saying while you beat them over the head with ads for trucks and cooking shows.
Again, I didn't read the whole thing, or even half before I bailed. But am I wrong in assuming this site uses the popular activity of Facebashing(tm) as a ploy to shove ads at unsuspecting visitors?
His conclusion? Not Facebook
I run a nonprofit alumni association here in Boston and I use FB as a way to update alumni of changes in events so that we can limit the numbers of emails we send. We were using Facebook as sort of an information platform and don't profit or make any money in any way.
I am very careful not to post too much, even entering into specific agreements with the national alumni association so that they do not post ads on our page for their merchandise, etc.
What am I supposed to do now? Should I pay out of my pocket to reach users who definitely want to be reached already?
Facebook provides a great service, and they should be compensated, but I will now have to look at other options to potentially reach our group.
---And the flip side of this is that I would like to see posts from everyone I am friends with that I haven't explicitly blocked from my feed, going through all those names to re-add them seems like an amazing amount of trouble for me.
---The OP is hard to sympathize with, but he/she has a good point.
As a user, if my friends post something I want to see it. If my daughter's karate school or my favorite band posts something, I want to see it. If they're spammy, I'll unsubscribe. I would like to make this decision for myself, not have it made for me. If it has to be made for me, I would prefer it be made based on some approximation of relevance and quality, not because someone paid $5 to spam me with it.
As an advertiser, Facebook has consistently promoted ads as a way to build a following via the 'like' button. So I pay Facebook to gain exposure to build a following of 10,000 fans and now I have to pay again if I want to reach them all?? Classic bait and switch. I wonder how many past advertisers would have paid to build up their 'likes' if they had been told very clearly up front "Just because someone likes your page does not mean they will see your posts in their news feed".
The ad-supported model is terrible for social networks and needs to go. If you can afford a computer, smartphone, etc., then you can pay $5-20/year for an account.
Free limited accounts for people <18 years old, which have limited access to adult content? (Just an idea, but it may work both to hook future customers and to protect kids.)
What follows is speculation, but it's easy to imagine that out of a total fanbase, only a certain percentage "catch" your post while it's fresh, before it's buried behind newer stuff coming in from the ever-increasing number of pages people like. While it may have been the case that back in the day the response one got from posting something on a facebook page was much better than it is now, it's also true that facebook was never as popular as it is today and that users' newsfeeds were never as busy as they are now. And as people subscribe to multiple publishers and their attention gets diluted, you can't expect their engagement with all of these pages to remain at pre-growth levels (or grow).
There's another twist to this. Too many posts from pages trumping activity from friends may alienate users. How do you balance these two types of information? Someone's going to get less airtime, and since (I assume) the bulk of posts comes from pages, they get silenced based on whether or not you interacted with them recently and whatever other criteria facebook can come up with. Same for friends you don't care much for.
Whether or not facebook can be more transparent with regards to how it determines which posts to show and which to hide is another issue. Does the average Joe care? Will he mess things up if given controls that are too advanced? Note that Facebook doesn't censor information, it merely filters what you see by default. You can still go to individual pages or profiles and see their full activity.
There also seems to be a backlash against any commercial endeavour facebook may have. "Facebook is selling your information!" - is it? where can I buy this information? is it really selling in the sense that most people would understand? No. But that's the term that is being used. "Facebook is making people pay for airtime!" well, kinda. Personally I think that should be "Facebook is making people pay for ADDITIONAL airtime" for all the reasons stated above. Maybe they got into this mess due to poor communication but I don't buy the "broken on purpose" argument. That's against facebook's interest in the long term.
I don't mean to defend facebook, just bring into discussion the potential complexities behind developments which people tend to imply are malicious.
x) Disallow users from merely being a fan of the page, instead replacing that with "like"
x) Now make it so businesses can post to their page and the post shows up in the newsfeed for those who like the page. Previously only friend updates were shown. So liking a page has the side effect of getting spam by the company.
x) Facebook has now successfully facilitated spam, which is necessary for
x) Their new spam-prevention algorithm, leading to the end goal:
x) Now that Facebook has facilitated spam and we accept limited posts, the antispam filter can be circumvented by paying Facebook.
Voila, Facebook is now the post office, and spammers pay the post office to bulk spam you. Imagine if you went to local businesses and said, "Hi, I like you guys", that resulted in spam to your snailmail mailbox. You said, "Cut that out, that's wrong." So they fixed the problem they created, but now that the businesses are hooked, they can charge them for the ability to send out spam.
Facebook could easily make it so users are in charge of their filter, but this is counter to how Facebook wants to make money, so the UI is horrid for this and no one does this in practice. Imagine a UI where users rank friends in order of importance, with an easy UI, and the most important friends of mine are the ones who I am most likely to see. Oh wait, I have just described g+. Facebook will never have such an intuitive interface ("close friend" is horrid), where the burden of filtering is put on the user. Facebook wants to control that filter.
Eventually it will get to the point where you don't even need to like a page, you will get spam from the highest bidder, decided by auction. One of the main purposes of 'like' was to get users accepting communication from companies, once that was done, then they went in to monetize the link, before that it was just friend to friend chit chat, which doesn't pay the bills.
But who knows what special sauce is in FB algos. If I were them I would certainly distinguish between companies, news/blogging, musician/art and image macro posters. Those all have very different usages and annoyance levels.
Probably the interaction rate is factored in, but that also gets spread thinner and thinner. Obviously God and George Takei are winning the game, so the game isn't unwinnable.
FB's job is to keep the average user (who won't put much effort into sanitizing their wall, whatever they clicked on in the past) happy while getting enough money out of their userbase as a whole to stay in business and keep the stockholders happy. It's not their job to keep the promoters who use FB as a tool happy.
That's about my consumption. On the other end, I have a friend, an artist with 5,000+ friends. He told me that the engagement on his posts dropped drastically, from like 200-300 'likes' per photo to something like 20 earlier this year, and as such he's considering not bothering to use the site any longer. Apparently Facebook thinks those people aren't interested in his content? Or they want him to start paying. That isn't going to happen.
I will say, if your posts show up so frequently in my stream, I will unlike your page. Facebook is definitely saving you from a lot of unlikes. Facebook is not Twitter - it's baby pictures from your friends.
I trust Facebook to control what to display to me MORE than I trust advertisers to post only things I would be interested in. That they can pay money ($200?) to get it there, that filters it too. They'll only pay for interesting stuff presumably. So thank the Lord Facebook pages don't get to control my stream directly.
The story here is now that Facebook is willing to be paid by brands to degrade the news feed experience for their users :)
IIRC there was previously a "see only important messages from this person" choice and it was better.
Facebook is a company; it's not a democracy asking its users what it should do. They can destroy their business if they want to, and your responsibility as a customer is to go and look somewhere else to signify that their new rules do not work for you anymore.
1. Advertisers
2. Real users
3. Social media marketing scum
If they think most users would prefer not to see 10 posts per day after accidentally clicking a like button, then they're probably going to do that.
Do we as business and individuals really want to pay to promote our content AND be sold to advertisers AND build their network at the same time?
This isn't Facebook scamming you - it's simply that 100% of your fans don't check 100% of your posts 100% of the time.
A few years ago Facebook had a feature where you could weight your friends from 1-10 and that would affect your feed. Now you can just limit by "only important updates" and such. It's not really clear what that even means.
For example, if my Crossfit box posts a new WOD every day, I would greatly prefer to have that in my news feed rather than having to go search out the fan page again. I could have just gone to their actual web site.
It would be very nice if you could use the search box to search on your news feed posts. If I could quickly do a search for the Crossfit box and get to the daily post.... awesomesauce!
$75 for a 17-30K user reach is $0.0044 per user or less.
I actually think that's a good deal if you're announcing a new product or important product update.
Can't really do anything here other than sigh and shake head.
Reasonable? No way.
Best part is, the only way to change this is to shut it off for each individual friend - not exactly convenient.
Let's ignore the discussion about dynamic range and bit depth etc., and assume that the volume control on your operating system controls the DAC rather than doing the stupid thing of digital volume reduction. The fundamental issue is signal to noise ratio on the analog line. If you turn the volume too far down on the computer and turn the volume up on your speakers, the sound on the analog line is too low with regard to the electrical noise and will be hissy. If you turn the volume up too much on the computer and turn the volume down on your speakers, then the signal will be so loud as to produce distortion either in the DAC or on the line itself. You're looking for a middle ground: as loud an output from the computer that you can produce without causing distortion in your loudest music parts. Once you've got that set, change the volume on the speakers to compensate.
(A) Pretend that everything except the DAC was noiseless: The noise would be due to the nonlinearities and quantization in the DAC.
(B) Pretend that the DAC was perfect: The noise would be dominated by the noise-equivalent input-power introduced by the resistance present in the components (including the transistors used for amps).
In short: (A) is a function of how wide a range of bit codes you use. The smaller the range, the larger the noise component relative to the signal.
OTOH: (B) is a function of temperature: All of the noise power before the final dial to your amp is passed through as is the signal, so the ratio stays constant. There is also a constant noise power introduced after that final amp, but I would guess it is negligible compared to the amplified noise power.
So tl;dr = For a decent sound card, maximize the software volume and then use the analog dial.
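To make point (A) concrete, here's a minimal simulation sketch (in Go; the 16-bit quantizer, 48 kHz rate, and 440 Hz test tone are illustrative assumptions, not anyone's actual driver code). It estimates the signal-to-quantization-noise ratio of a full-scale sine versus the same sine attenuated by 40 dB in software before it hits the DAC:

    package main

    import (
        "fmt"
        "math"
    )

    // snr estimates the signal-to-quantization-noise ratio (in dB) of a
    // 440 Hz sine at the given digital gain, quantized to 16 bits.
    func snr(gain float64) float64 {
        const n = 48000 // one second of samples at 48 kHz
        var sig, noise float64
        for i := 0; i < n; i++ {
            x := gain * math.Sin(2*math.Pi*440*float64(i)/48000)
            q := math.Round(x*32767) / 32767 // round to the nearest 16-bit code
            sig += x * x
            noise += (q - x) * (q - x)
        }
        return 10 * math.Log10(sig / noise)
    }

    func main() {
        fmt.Printf("full scale:            %.1f dB SNR\n", snr(1.0))  // ~98 dB
        fmt.Printf("-40 dB digital volume: %.1f dB SNR\n", snr(0.01)) // ~58 dB
    }

Under this toy model, every 6 dB of digital attenuation costs roughly one bit of effective resolution, which is the whole argument for turning down the analog dial instead of the software slider.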
Assuming this is true, the correct option would be to maximize any application volumes (e.g. YouTube), to maximize master volume to a level just below where the sound clips (distorts) at the amplifier input, and to reduce the amplifier's pre-gain (if it has any) so the master volume control has a reasonable range.
This method will minimize the three (not just one) culprits of poor computer audio quality: quantization at the application layer, electronic interference over the physical connection, and clipping at the pre-amp.
On the PC, though, I rarely set my system volume to anything other than 100%.
Max your software (usually this is 80% to prevent clipping and distortion), then attenuate speakers to 50% (analog boost is much worse than digital as it raises the noise floor).
Source: Mixing at studios for last 10 years
This is really only true when the audio system represents samples as integers rather than floats (as CoreAudio does).
You can see the objective differences between 16-bit and 24-bit output in NwAvGuy's measurements of the 2011 MacBook Air's DAC: http://nwavguy.blogspot.com/2011/12/apple-macbook-air-5g.htm...
One of my 'weird unverified theories of life' is that turning the volume on portable device down (laptop/phone/mp3 player) and the volume on the speakers up saves the battery of the device itself. (For example when you're in a car.)
For example if you built the YouTube player, what makes you think you need a volume control?
As far as I can tell, I rarely if ever have this problem with the same hardware in Linux with PulseAudio (though I can intentionally cause it using alsamixer by pushing "Master" to 100%) and didn't have this problem in the past on Windows with Creative Labs soundblaster cards.
For the case where an analog potentiometer immediately follows the DAC, of course, there's no practical difference.
Not all machines work this way, though. One way to check is to hook up an external amp and headphones, turn the computer's volume way down, and turn the amp up to listening levels. If the quality is crap, then it's probably just decreasing the bit depth. Or you can do a teardown on the sound pathway.
(Oh, if it isn't clear by this point, keep all your apps turned all the way up for best quality. Only turn them down on an individual, as-needed basis. All-software stuff has to decrease bit depth to decrease volume on a per-app basis.)
Volume should always be controlled as close to the source as possible. Anything else is simply inefficient and a waste of processing power.
There is no reduction of bit depth. Total hoo-eee.
Now, there is nothing strictly wrong with ignoring research like that, either. It's just annoying how they revel in ignoring all recent progress in the field.
Time for honesty: What bullshit.
- Go codebases by non-experts are peppered with magical incantations (sleeps, etc.) to avoid the dreaded "all goroutines are asleep" deadlock. Of course "they are doing it wrong", but that is precisely the point.
- A concurrent Go program will likely behave differently given 2 bits (just 2 lousy bits) of difference in the object binary: runtime.GOMAXPROCS(1) vs runtime.GOMAXPROCS(2). Imagine someone touching those 2 bits in a "large codebase". It is practically impossible to do the same thing in a large Java codebase and fundamentally change the program's runtime behavior. (Happens all the time in Go; see the sketch after this list.)
- It is very difficult to reason about a goroutine's behavior in a "large codebase" without a global view and a mental model of the dynamic system, e.g. which goroutine is doing what, and who is blocking and who is not. Pretty much defeats the entire point of "simple" concurrency, to say nothing of "scaling". Programming in Go's variant of cooperative multithreading is actually more demanding than preemptive multithreading. Cute little concurrency pet tricks aside, Go concurrent programming actually requires expert-level experience. "You are doing it wrong". Of course. Point.
- There is nothing, absolutely nothing, that you can do in Go that you cannot do via libraries in Java. Sure, the cute syntactic go func() needs to be replaced with method calls to the excellent java.util.concurrent constructs, but the benefits -- high performance, explicit no-magic code -- outweigh the cute factor in this "programmer's" book.
- On the other hand, there are plenty of things you can do in Java that are simply impossible to do in Go.
- Once we factor in the possibility of bytecode engineering, Java is simply in another, higher league as far as language capabilities are concerned. (Most people who rag on Java are clearly dilettante Java programmers.)
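For what it's worth, the GOMAXPROCS sensitivity described above is easy to poke at with a toy like the following (my own sketch, not from the parent or the deck; exact interleavings are scheduler-dependent, so output will vary from run to run):

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        runtime.GOMAXPROCS(1) // flip this to 2 and compare the interleaving

        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func(id int) {
                defer wg.Done()
                for step := 0; step < 3; step++ {
                    // With one processor the prints tend to cluster per
                    // goroutine; with two they interleave unpredictably.
                    fmt.Printf("goroutine %d, step %d\n", id, step)
                }
            }(i)
        }
        wg.Wait()
    }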
If Go actually manages to be as effective as Java for concurrent programming at some point in the future (when they fix the somewhat broken runtime) then the Go authors are permitted to crow about it. Until that day, go fix_the_runtime() and defer bs().
One thing that programming in Go has made me realize is just how awesomely Sun/Gosling, et al. hit that "practical programming" sweet spot. No wonder the modern enterprise runs on Java and JVM.
It just works. (But it is "boring" because it's not bling anymore. Oh well, kids will be kids.)
We get some nice concurrency primitives, garbage collection, cleaner syntax, something between structs and objects that fits the right feeling, automatic bounds checking, cute array syntax, and a big-ass, well-defined standard library. Oh, and this concept of interfaces that is so well executed it's not even funny.
Except. I feel like they are forcing the fanboy mindset. At one point in this slide deck, there is the following bullet: "The designs are nothing like hierarchical, subtype-inherited methods. Much looser, organic, decoupled, independent."
I didn't see the talk. But that is the most vapid, meaningless description I've ever seen of a feature of a programming language. Rob might as well have said, "it's hipster better," which would have conveyed exactly as much meaning.
So here's my question - and I hope there are real answers - can someone point me to >3 real, big systems that are built using Go? I'll accept Google internal systems on faith.
I just don't get this. If you statically link in small functions from a big library, you only get the little bit you need anyway. Are they saying you avoid compiling the "big library" over and over? But if it is already compiled, that should not be necessary. And the chances are you are going to be importing lots of "little code" from the "big library" anyway. Unless they are saying the implementation of net's itoa is somehow simplified and not just a straight code copy... otherwise I don't understand this approach.
"Dependency hygiene trumps code reuse.Example:The (low-level) net package has own itoa to avoid dependency on the big formatted I/O package."
...now, if this kind of attitude stays in core-dev land, I don't really care about it. But when I consider Go as an alternative for a large project, I'll start worrying if people adopt the "it's OK to reinvent some wheels" philosophy when they start building higher-level frameworks in Go. I mean, how hard can it be to split the "big formatted I/O package" into a "basic formatted I/O" package and an "advanced formatted I/O" package that requires the first one, and have the "net" package require only "basic formatted I/O"? (Or maybe even make "basic formatted I/O" part of the language built-ins, or something -- I don't know Go, so I don't know the Go terms for this.)
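For scale, the kind of helper being duplicated here is genuinely tiny. This is a sketch of a minimal decimal itoa of the sort a low-level package might keep locally -- illustrative only, not the actual net source:

    package main

    // itoa converts a non-negative int to its decimal string without
    // pulling in a big formatting package.
    func itoa(n int) string {
        if n == 0 {
            return "0"
        }
        var buf [20]byte // plenty of digits for a 64-bit integer
        i := len(buf)
        for n > 0 {
            i--
            buf[i] = byte('0' + n%10)
            n /= 10
        }
        return string(buf[i:])
    }

    func main() {
        println(itoa(8080)) // prints 8080 -- no fmt import needed
    }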
I've seen a lot of Go talks from various Googlers, and I have to say that this was the best-motivated, most humble, and most honest of them that I have seen. Rob knew he was speaking to an extremely PL-oriented audience, and structured his talk accordingly, and the result was fantastic. Go comes from a very different standpoint than almost all academic PL work, and in that respect, for those of us in academia, it's an interesting breath of fresh air and a reminder of the uniquely fine line between industry and academia in computer science.
I clicked on the link in the second slide: http://golang.org
Even Dart looks great: http://dartlang.org/
I feel a little guilty being negative about this, but presentation does matter, and Google ought to be able to afford it.
(This is, of course, horribly broken in C++ which likes to inline everything.)
> What makes large-scale development hard with C++ or Java (at least):
None of the points apply to Java.
Can someone tell me why GC was the obvious choice, as opposed to, say, automatic reference counting?
Once I discovered exceptions back in the 90s, life got a lot easier.
Of course Java ruined exceptions with the invention of the CheckedException, maybe this tainted the Go designers' thinking?
Update: I was trying to use the mouse to switch between slides; I later figured out that it only works with the keyboard :)
In comparison tests, longer copy almost always wins. You keep offering more and more reasons to buy, and you keep converting more and more readers.
This relates to one of the basic observations about selling: people don't like to change their minds. They won't spontaneously go from "no" to "yes". But if you offer a new piece of information, they can change their mind without admitting they were "wrong" before. Every new piece of information, or new story, is another opportunity for them to get to "yes".
Obviously, the copy also needs to be good.
But in re-checking the site I didn't see any claim that these are somehow trends of 2012; in fact, they say, "Let's take a moment to look around some trends we witnessed in last couple years."
The prototype should never be better than the final version, because what we try to deliver is an awesome design that works great as a static version; but when it comes to the dynamic view, developers keep changing things, and that changes everything, so you need to adjust accordingly.
Sometimes people think that single-page apps are better. That's true in some cases, but not in all of them.
Trello is the best example: it could be a single-page app, but they didn't do that. pjax is what you can really use for dynamic design, but when it comes to a micro-blog or blog, plain Ajax will do just fine. You should really try the pjax technique for mega apps, though.
I work with Django, so that's what I suggest for others; using pjax with it is awesome.
Edit: Just correcting to help, not mocking.
Here it is: http://dev.82.io/carousel/
There seems to be a bug with the carousel, though. Clicking "next" doesn't work, and the images randomly start skipping really fast, etc., at least for me (Chrome/OS X).
From looking at the code, all I see is a bunch of boilerplate CSS that seems to deliberately work against the nature of CSS (protip: the C stands for cascading), and is very brittle and tied to specific classes and markup rather than using selectors to be general and reusable. Is that really all it is, just a "I'm too lazy to design my site, so I'll just use twitter's design"? Perhaps it is just the word "framework" throwing me off since it doesn't appear to be a framework in any way? I know this is going to sound needlessly critical to some people, but I am expressing genuine confusion here, I really don't understand what I am supposed to use this for, or how it would help me in any way.
Anyone found a solution that works well under Ubuntu? (I use it with WordPress and Drupal; Rails would be nice but there I run for Sass/SCSS).
I've been in one of the schools when they have the after school club and it's amazing how much the kids get really quickly. They're making their own games without needing any help from the assistants, the drag and drop element of Scratch makes it a lot easier than getting syntax wrong and getting frustrated. Some of the kids love it so much that they're disappointed when it's half term and they can't do it that week. One kid now wants to be "a programmer or stuntman" when he grows up.
They're in around 300 schools in the UK now and have roughly 15 children per club, so that's an extra 4000+ children in the UK learning to code each week.
Disclaimer: I help out at Code Club and develop their site
She showed the children a Python program with a while loop, and said they "got it". I've tried explaining iteration to a (bright) seven-year-old using indented text and they found it hard to comprehend, but the equivalent in a graphical Lego programming environment was obvious to them.
Kids today (both male and female) grow up with so much technology around them. My bet is that this will drastically influence the number of women entering technology focused career paths in the coming years.
What struck me the most was the sheer number of questions I got. 4 or 5 hands in the air the whole time when I was answering questions. A lot of smart questions and comments. Very intense and high energy. Contrast that to giving a talk to adults - usually there a lot fewer questions.
Overall it was a great experience, and I recommend it if you have the opportunity.
One of the things I tried to impress upon the kids is to look at where the jobs are, and what they pay. I don't think that's emphasized nearly as much as it ought to be. For instance, prior to making the switch to full-time development last year, I was working as an editor at a newspaper. I loved it, and I was good at it ... but the newspaper industry was (and still is) in the tank, and there was very little job security. And, of course, there's an oversupply of people with journalism degrees, so the wages aren't much to write home about.
I told them I wasn't trying to talk them out of pursuing a highly competitive, not-so-highly-paying career. But I think students should know, going in, what they're getting themselves into.
I have no real affection for Scratch, but I feel that he was making the argument that children should learn to program in an environment that models (at least to a point) the one in which a developer develops, at least with regard to language preference.
I feel that at this point, the language choice (barring ease of use, etc.) is beside the point. Whether you use Scratch, Python, or Haskell, if it piques the interest of a child, then nothing will stand in the way if that student wants to go on and learn every programming language available.
Think of the first language you ever learned, and what you are programming in now. For me, my first language (a kind of kiddie BASIC) gave me what I needed: a concept of execution flow, how to make things come up on the screen, basic 2D programming; and it made it very easy to make some GUI-based stuff.
My point is: don't hate any language (even if it is a "fake" language like Scratch) if it builds the initial building blocks in a child's (or adult's) head.
Ha! My inner child feels somewhat vindicated.
The school where I visit is really average, some rich kids some poor kids, all kinds of backgrounds. The format of this career day is that each class period somebody will come and talk to the class that is somewhat related to the subject - so I usually end up speaking to a math or computer class. In a class of 25, there are probably one or two kids who already know some limited programming (or have made a website). Almost everybody that age is online (all Facebook, a handful of Twitter) and plays console video games. Probably about half have cell phones.
When they ask me questions, it's usually about how to steal their friends' Facebook passwords, conceal their browsing history, or build their own video game. I do spend some time talking about privacy, reminding them that their behavior online can stay around forever and that they should be careful who they are talking to online.
The terms don't vary much by district; they vary by age. Kids younger than these use the term "number" to mean positive, decimal, integral numerals. That's all they know.
Kids at this age are introduced to some new distinctions: fraction vs. whole, negative vs. positive, and decimal fraction vs. common fraction. At that point, they will use the term "whole number" to mean not some type of fraction and "decimal" to mean a number that uses this nifty, new fractional notation that has digits on the right side of the decimal point.
A few more years pass, and they no longer see "fractions and decimals" but just "numbers." At that point, they switch over to referring to integers and real numbers (with no emphasis on exactly how a fraction is represented), and if they begin working with binary numbers, they'll use the same term, "decimal", to make the distinction of base, not type of fraction notation.
The term "float" is not a mathematical term. Many older math professors don't know it. It is a tech term for a form of storage and display of approximations of real numbers.
These terms are not regionalisms; they represent the distinctions being made by the students at their stage of development.
Wait another 2-3 years, and you will be their new hire. :)
Seriously, I wouldn't think of them as 'juniors' or 'new hires'; that will be only a very short, temporary state. Think of them as your future colleagues, competitors, hacker friends, fellow taxpayers.
Great article though!
I'd recommend it for any technical parent - what you do is definitely cooler than being a lawyer. =P
Just like trying to give the computer the same input over and over again. I find this hilarious; kids are the best.
How is this person a "step above" a nutritionist?
I'm not versed in the arcane arts of EE -- could anyone give me a basic definition of voltage sag? And would there be any reason to build it as a feature in a charger?
Does anyone know what the deal is?
It'd be nice to see an even broader test.
On a non-technical note, Apple chargers should lose simply because of how goddamned large they are. They basically eat two spots on a power strip and are easily knocked out of wall sockets because of their weight.
Unbranded, common on eBay: http://mm0hai.net/blog/2012/08/01/Message-to-an-ebay-seller....
They were both awful. I would be interested to know about a known good one.
I might consider buying a copy of Windows 8 Pro at that price and then waiting until it hits SP1 to install it.
I might even spin up a VM to try it out.
I like that the $39 upgrade applies to anyone with Windows XP, Windows Vista or Windows 7. I think they're realizing that a lot of people don't upgrade OS because they don't want to upgrade their hardware.
(like my old Win XP laptop that I use as a VNC terminal to other machines).
The only reason I wouldn't want to jump in with both feet is that I have a general dislike for the Xbox dashboard, and I suspect that Metro would be very similar to it.
- You can use this tool to check that you have a genuine version of Windows: http://go.microsoft.com/fwlink/?linkid=52012
- Windows OEM licenses are transferable only if the transfer includes the hardware
- Windows retail licenses are transferable
Here's a direct link to a PDF for Windows 7 Home Basic in English
- Windows Anytime Upgrades are pretty much considered to be OEM
17. TRANSFER TO ANOTHER COMPUTER. (retail)
a. Software Other than Windows Anytime Upgrade. You may transfer the software and install it on another computer for your use. That computer becomes the licensed computer. You may not do so to share this license between computers.
b. Windows Anytime Upgrade Software. You may transfer the software and install it on another computer, but only if the license terms of the software you upgraded from allows you to do so. That computer becomes the licensed computer. You may not do so to share this license between computers.
18. TRANSFER TO A THIRD PARTY. (retail)
a. Software Other Than Windows Anytime Upgrade. The first user of the software may make a one time transfer of the software and this agreement, by transferring the original media, the certificate of authenticity, the product key and the proof of purchase directly to a third party. The first user must remove the software before transferring it separately from the computer. The first user may not retain any copies of the software.
b. Windows Anytime Upgrade Software. You may transfer the software directly to a third party only with the licensed computer. You may not keep any copies of the software or any earlier edition.
c. Other Requirements. Before any permitted transfer, the other
Short version - outside of Metro it's basically Win7SP3 and it works great. Metro is every bit the usability disaster that people have claimed when not running on a touch screen.
The good news is that you really don't have to interface much with Metro at all. It replaces the start menu, but it does so in a manner that works with how I'm used to dealing with the start menu already. That is, I already just hit the Win key and then start typing until the thing I want pops up, and that behavior has carried over.
So, yeah Metro is awful for all the reasons everybody has already laid out. Despite that, Win 8 has been a solid performer and I won't be loading Win7 back on this system.
My primary home system will continue to run Win7 until I am comfortable that my production applications will all run successfully (and by that, I mean "games").
The Surface, not so much (7.0): http://www.theverge.com/2012/10/23/3540550/microsoft-surface...
For instance, going to the traditional desktop is as easy as clicking the "Desktop" tile. And opening a new tab in Metro IE was a bit confusing, but after figuring out that a two-finger press on the touchpad brings up the tab list and URL bar, it has become easier.
I also like the new native mail client and calendar apps.
For the record, I am running Windows 8 on a 2011 MacBook Air via Boot Camp and it runs perfectly. Guild Wars 2 also gets about 10 fps more than it does on the Mac client, for what it's worth, which makes it actually playable on an Air :)
Following this tangent a bit more: I feel like if the drivers were updated to support three-finger left and right gestures to swipe between the different screens, I wouldn't revisit OS X for a while.
Windows 8 is a fun operating system.
I'm surprised more people haven't picked up on this rather bold move.
The elephant in the room for me is the horizontal scrolling. I'm sitting there spinning the mouse wheel vertically, and what's on the screen is moving horizontally. That's a total disconnect.
Why this emphasis on horizontal scrolling? I don't see how the horizontally scrolling items are in any way easier to use than a vertically scrolling set of items. Seems like difference for difference's sake.
The price is much lower than for previous versions of Windows, which makes me suspect that we should start expecting new releases of Windows much more frequently, similar to how Apple does it.
With the radical changes going on in Windows 8 it wouldn't surprise me to see a tweaked and improved Windows 9 in less than 2 years.
The computer booted up to a home screen with icons for all of your programs, and you had to click "exit to desktop" to get into Windows.
Looks like they're setting the font explicitly to 'Segoe UI' and nothing else in many spots. Telerik, a .NET CMS provider, does a similar thing.
However I can't say I am any more productive than I was with Windows 2000.
At work the IT dept will hopefully skip this version altogether, or take a few years before "approving" it.
> Back when Firefox 2 was released (six years ago this week!), the Internet Explorer team started a friendly tradition of sending Mozilla a cake as congratulations. This continued for Firefox 3 and Firefox 4.
They should have doubled the padding to be safe.
> Just 30 minutes later, Michael Bolan tweeted that the cake was gone.
I don't know how many people there are in that office, but I hope it's sufficiently few that no-one got Miltoned :).