USA: 316.8
Canada: 35.1
Southern North America: 176.6
Northern part of South America: ?
Southern part of South America: ?
...South America: 387.4
UK: 62.2
Scandinavia: 25.7
Western Europe (excl. UK): 278.7
Eastern Europe: 176.7
Southern Europe / Mediterranean Europe: 153.5
Russia: 142.2
Northern Africa: ?
Southern Africa: ?
...Africa: 1032.5
Middle East: 370.9
India: 1210.2
China: 1353.8
South East Asia: 610.0
Australia and New Zealand: 35.6
Japan: 126.6
South Korea: 50.2
Other: ?
>Northern part of South America
>Southern part of South America
Looks needlessly fragmented and imprecise. What is 'Southern part of South America'? The Cono Sur? Then Brazilians from Minas Gerais and Porto Alegre are in different groups? Was that the intention? Why 'Southern North America' and not Central America? Where does the Caribbean belong? Central Asia (say, Kazakhstan)? Is Spain Western Europe or Southern Europe? Is the Czech Republic Eastern Europe or Western Europe (since you don't offer 'Central Europe' as an option, and some Czechs may be unhappy identifying as Eastern Europeans)? Is Estonia in Eastern Europe? It damn well is in Eastern Europe, but some Estonians identify Estonia as Scandinavian.
Next time you guys should use some well-established scheme such as this one http://en.wikipedia.org/wiki/United_Nations_geoscheme and just link to it in the post so no one is confused.
* San Francisco Bay Area
* Seattle/Redmond area
* Other West Coast
* Boston/Cambridge
* New York
* Other East Coast
* Chicago
* Ann Arbor
* Austin
* Other non-coastal
99% of the time I have no regrets about moving here from the United States. That other 1% of the time when you are sick or require the services of the police, it's not the best place to be. My insurance plan covers an airlift to Thailand in the event of a serious emergency.
Edit: Uh oh, New Zealand isn't covered.
During the day I'm usually in Asia for school; some nights and most weekends I'm in Eastern Europe.
Voted for Eastern Europe and Other (since this part of Asia is not in the list).
I think a better term would be "Sub-Saharan Africa": https://en.wikipedia.org/wiki/Sub-Saharan_Africa
Also, missing: "South Asia (excluding India)", which would cover an area with around 390 million people.
Edit: Actually, for the purposes of this poll the Czech Republic is "Eastern Europe" - just don't tell that to a Czech person ;)
Australian in Costa Rica.
It should say:
Central America
South America
I'm from Chile by the way.
Hello to everyone else from Helsinki, Finland - we should organize a Hacker News meetup!
Any workaround for it?
edit: thanks for adding it to the list!
The review article by Frank L. Schmidt and John E. Hunter, "The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings," Psychological Bulletin, Vol. 124, No. 2, 262-274, sums up, current to 1998, a meta-analysis of much of the huge peer-reviewed literature in industrial and organizational psychology devoted to business hiring procedures. There are many kinds of hiring criteria, such as in-person interviews, telephone interviews, resume reviews for job experience, checks for academic credentials, personality tests, and so on. There is much published research on how job applicants perform after they are hired, across a wide variety of occupations.
EXECUTIVE SUMMARY: If you are hiring for any kind of job in the United States, with its legal rules about hiring, prefer a work-sample test as your hiring procedure. If you are hiring in most other parts of the world, use a work-sample test in combination with a general mental ability test.
The overall summary of the industrial psychology research in reliable secondary sources is that two kinds of job screening procedures work reasonably well. One is a general mental ability (GMA) test (an IQ-like test, such as the Wonderlic personnel screening test). Another is a work-sample test, where the applicant does an actual task or group of tasks like what the applicant will do on the job if hired. (But the calculated validity of each of the two best kinds of procedures, standing alone, is only 0.54 for work sample tests and 0.51 for general mental ability tests.) Each of these kinds of tests has about the same validity in screening applicants for jobs, with the general mental ability test better predicting success for applicants who will be trained into a new job. Neither is perfect (both miss some good performers on the job, and select some bad performers on the job), but both are better than any other single-factor hiring procedure that has been tested in rigorous research, across a wide variety of occupations. So if you are hiring for your company, it's a good idea to think about how to build a work-sample test into all of your hiring processes.
Because of a Supreme Court decision in the United States (the decision does not apply in other countries, which have different statutes about employment), it is legally risky to give job applicants general mental ability tests such as a straight-up IQ test (as was commonplace in my parents' generation) as a routine part of hiring procedures. The Griggs v. Duke Power, 401 U.S. 424 (1971) case interpreted a federal statute about employment discrimination and held that a general intelligence test used in hiring that could have a "disparate impact" on applicants of some protected classes must "bear a demonstrable relationship to successful performance of the jobs for which it was used." In other words, a company that wants to use a test like the Wonderlic, or like the SAT, or like the current WAIS or Stanford-Binet IQ tests, in a hiring procedure had best conduct a specific validation study of the test related to performance on the job in question. Some companies do the validation study, and use IQ-like tests in hiring. Other companies use IQ-like tests in hiring and hope that no one sues (which is not what I would advise any company). Note that a brain-teaser-type test used in a hiring procedure could be challenged as illegal if it can be shown to have disparate impact on some job applicants. A company defending a brain-teaser test for hiring would have to defend it by showing it is supported by a validation study demonstrating that the test is related to successful performance on the job. Such validation studies can be quite expensive. (Companies outside the United States are regulated by different laws. One other big difference between the United States and other countries is the relative ease with which workers may be fired in the United States, allowing companies to correct hiring mistakes by terminating the employment of the workers they hired mistakenly. The more legal protections a worker has from being fired, the more reluctant companies will be about hiring in the first place.)
The social background to the legal environment in the United States is explained in various books about hiring procedures, and some of the social background appears to be changing in the most recent few decades, with the prospect for further changes.
Previous discussion on HN pointed out that the Schmidt & Hunter (1998) article showed that multi-factor procedures work better than single-factor procedures, a summary echoed in the current professional literature, for example in "Reasons for being selective when choosing personnel selection procedures" (2010) by Cornelius J. König, Ute-Christine Klehe, Matthias Berchtold, and Martin Kleinmann:
"Choosing personnel selection procedures could be so simple: Grab your copy of Schmidt and Hunter (1998) and read their Table 1 (again). This should remind you to use a general mental ability (GMA) test in combination with an integrity test, a structured interview, a work sample test, and/or a conscientiousness measure."
But the 2010 article notes, looking at actual practice of companies around the world, "However, this idea does not seem to capture what is actually happening in organizations, as practitioners worldwide often use procedures with low predictive validity and regularly ignore procedures that are more valid (e.g., Di Milia, 2004; Lievens & De Paepe, 2004; Ryan, McFarland, Baron, & Page, 1999; Scholarios & Lockyer, 1999; Schuler, Hell, Trapmann, Schaar, & Boramir, 2007; Taylor, Keelty, & McDonnell, 2002). For example, the highly valid work sample tests are hardly used in the US, and the potentially rather useless procedure of graphology (Dean, 1992; Neter & Ben-Shakhar, 1989) is applied somewhere between occasionally and often in France (Ryan et al., 1999). In Germany, the use of GMA tests is reported to be low and to be decreasing (i.e., only 30% of the companies surveyed by Schuler et al., 2007, now use them)."
Before the interview, I ask them to write some code to access an HTTP endpoint that contains exchange rate data (USD, EUR, GBP, JPY etc.) in XML and to parse and load said data into a relational database. Then to build a very simple HTML form based front-end that lets you input a currency and convert it into another currency.
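(For a sense of scale, here is a minimal sketch of the back-end half of such a submission; the feed URL and XML shape are hypothetical stand-ins, SQLite stands in for "a relational database", and the real task also asks for the HTML front-end:)

```
import sqlite3
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical feed; the real task supplies its own endpoint and schema.
FEED_URL = "https://example.com/rates.xml"

def load_rates(db_path="rates.db"):
    # Fetch and parse the XML feed, assumed to look like:
    # <rates><rate currency="EUR">0.92</rate>...</rates>
    with urllib.request.urlopen(FEED_URL) as resp:
        root = ET.fromstring(resp.read())
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS exchange_rate (
                        currency TEXT PRIMARY KEY,
                        usd_rate REAL NOT NULL)""")
    for node in root.iter("rate"):
        conn.execute("INSERT OR REPLACE INTO exchange_rate VALUES (?, ?)",
                     (node.get("currency"), float(node.text)))
    conn.commit()
    conn.close()

def convert(amount, src, dst, db_path="rates.db"):
    # Convert via USD as the common base currency.
    conn = sqlite3.connect(db_path)
    rates = dict(conn.execute("SELECT currency, usd_rate FROM exchange_rate"))
    rates["USD"] = 1.0  # base currency
    conn.close()
    return amount / rates[src] * rates[dst]
```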
I ask them to send me either a link to a repository (Git, SVN etc.) or a zipball/tarball. If the job specifies a particular language, then I obviously expect it to be in that language. If not, so long as it isn't in something crazy like Brainfuck, they have free rein.
If the code works and is basically sane, that goes a long way toward getting them shortlisted.
During the interview, I'll pull up the code they sent on a projector and ask them to self-review it. If they can figure out things that need improving in their code, that weighs heavily in their favour. Usually these are things like comments/documentation, tests, or improving the structure or reusability. If it's really good, I'll throw a hypothetical idea for refactoring at them and see how they think.
The reason this works is that, despite Hacker News/Paul Graham dogma to the contrary, "smartness" isn't the only thing that matters in programmers. It's actually fairly low down the list. When hiring programmers, I want people who are actually able to do the daily practical job of writing code, modest and self-critical enough to spot their own mistakes, and socially capable of actually communicating their decisions and mistakes to the people they work with.
I interviewed a guy who was intellectually very smart and understood a lot about CS theory. I asked him why the PHP code he sent me didn't have any comments. "I don't believe in comments because they slow the PHP interpreter down." Sorry, he can be smarter than Einstein but I ain't letting him near production code.
One example I use is getting the candidate to write CRUD, list, and search controller actions for a simple category data structure. Given a basic category data model (e.g. Name, Parent), the candidate starts with the CRUD actions.
CRUD actions aren't meant to be difficult to solve; they serve as a basic screener to verify the candidate has working knowledge of the basics. The only edge case I look for the candidate to ask about is whether orphaning child nodes is allowed (i.e., updating a parent node, or deleting a node with children).
List action(s) start getting more interesting, since recursion comes into play. A basic implementation of an action that can load the tree given an arbitrary category as a starting point is expected. If the candidate has some prior experience, a discussion of what performance concerns they may have with loading the category tree is a follow-up question. The tree-loading algorithm is then expected to be revised to handle an optional max-depth parameter. An edge case I look to be considered is how to signify in the action response that a category has one or more child nodes that weren't loaded due to the depth restriction.
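A sketch of the kind of list action I'm hoping to see, with an in-memory stand-in for the category table (the real exercise runs against a database) and the depth-restriction flag just described:

```
# Toy stand-in for the category table: id -> (name, parent_id).
CATEGORIES = {
    1: ("Electronics", None),
    2: ("Phones", 1),
    3: ("Laptops", 1),
    4: ("Smartphones", 2),
}

def children_of(parent_id):
    return [cid for cid, (_, pid) in CATEGORIES.items() if pid == parent_id]

def load_tree(category_id, max_depth=None):
    # Recursively load the subtree rooted at category_id. When max_depth
    # cuts the recursion short, has_unloaded_children signals that children
    # exist but were not loaded.
    name, _ = CATEGORIES[category_id]
    node = {"id": category_id, "name": name, "children": [],
            "has_unloaded_children": False}
    child_ids = children_of(category_id)
    if max_depth is not None and max_depth <= 0:
        node["has_unloaded_children"] = bool(child_ids)
        return node
    deeper = None if max_depth is None else max_depth - 1
    node["children"] = [load_tree(c, deeper) for c in child_ids]
    return node

# load_tree(1, max_depth=1) returns Phones and Laptops, flagging Phones
# as having an unloaded child (Smartphones).
```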
The search action implementation has a degree of difficulty scaled to the candidate's experience level. All candidates have to write an action that returns a collection of categories matching a search string. Those with previous experience are asked about a paging solution. Senior-level candidates are asked to return matching categories in a format that indicates all ancestors (for instance, a "Category 1 -> Category 1.1 -> Category 1.1.1" result for the search string "1.1.1").
For an added degree of difficulty, candidates can be asked to recommend data model tweaks and algorithms supporting tree versioning requirements necessary to allow for loading the category tree's state at a given point in time.
The candidate's performance on this exercise seems to give some insight into their level of experience and their ability to implement algorithms for a common real-world example, without having to ask trivia or logic problems.
1) I think a lot of start-ups want to hire "smart" people because they expect the new person to eventually wear many hats: Objective-C, Java, Android, CSS, server-side concurrency, monitoring. And we've all seen the Hunter and Schmidt reference that tokenadult usually posts when talk about interviewing comes around, and it does seem that a general mental ability test (like an IQ test) combined with a work sample predicts the future performance of an employee. Except that one can't just straight up give an IQ test to job applicants (there is a court case about that). So we are left with a work sample (which many forget to give, as is the point of the author). But instead many focus on the GMA and create proxies for it -- cute little puzzles about blenders, round manhole covers, and other such silly things.
2) Those interviewing don't know the technical stuff and are afraid you'd out-bullshit them. "How does an Ajax request work?" Well, if the interviewers themselves don't quite know the details, they might not be able to evaluate the answer properly. They could have it written down, but some technical questions have many different levels of depth that a candidate might descend to, so a quick answer that looks wrong on paper may really mean the candidate is more advanced. Puzzles seem generic and "easier" to handle.
This problem was addressed nicely in this functional pearl by Jeremy Gibbons, et al.: http://www.cs.ox.ac.uk/jeremy.gibbons/publications/rationals... . As interesting as the result is, however, it's a pretty well-made point that research-level ideas from the programming languages community are not really software engineering interview material in the vast majority of cases.
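For the curious, the pearl's punchline is tiny. A sketch of Newman's constant-space successor formula for the Calkin-Wilf enumeration discussed there, which visits every positive rational exactly once:

```
from fractions import Fraction

def rationals():
    # Yield every positive rational exactly once, in Calkin-Wilf order.
    x = Fraction(1)
    while True:
        yield x
        # Newman's successor formula: x' = 1 / (2*floor(x) - x + 1)
        x = 1 / (2 * (x.numerator // x.denominator) - x + 1)

# First terms: 1, 1/2, 2, 1/3, 3/2, 2/3, 3, 1/4, ...
```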
This is yet another example of "rockstar developer"-itis, wherein startups are given to believe that they need the best of the best when in fact they do not. This particular example is entirely egregious because they asked her about something that requires enumerating the rationals when what they really wanted was an iOS code monkey. Then they fired her, based on their own shoddy interview.
If you really want to know if someone has the capacity to pull their weight as an engineer, ask them about what they've built. Even if they are fresh out of college, the best engineers will have projects they can talk about and explain. Ask how they approached/solved specific problems. Ask what they're most proud of building. Ask what was most frustrating.
Those are the kind of questions that will provide insight into a person's problem solving capabilities and offer a decent picture of what they're capable of doing.
Certainly, asking only math questions is stupid as well, people should know at least a little about the stuff they're supposed to work with, but teaching an actual language to a smart person eager to learn is a breeze compared to teaching problem solving to someone who memorized the reference manual.
This would test a programmer's ability to learn a new language.
Interviewer: "How can we optimize the character replacement in a string such that we use no extra memory?" Me: "We do this and that and this. But, should we consider what situations we would need this optimization?" Interviewer: "What? Why?"
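(For what it's worth, the "this and that" is short. A sketch assuming the simplest case - single-byte, same-length replacement on a mutable buffer - which is the only case where "no extra memory" is straightforward:)

```
def replace_in_place(buf, old, new):
    # Overwrite every occurrence of `old` with `new`: O(n) time,
    # O(1) extra memory, since the buffer is mutated directly.
    for i in range(len(buf)):
        if buf[i] == old:
            buf[i] = new

s = bytearray(b"mississippi")
replace_in_place(s, ord("s"), ord("x"))
assert s == b"mixxixxippi"
```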
I can now use this as a filter as I interview organizations. Optimizing algorithms by creating your own core data structure classes (instead of using the built-in ones) is great in certain circumstances, but an absolute waste of time in many others. And if you're not going to ask me about those times when making those improvements is important, then you're not asking questions for a programmer -- you're asking them for a theoretician who can recall syntax.
It's poor practice, and I've seen it everywhere.
Probably because the only person who doesn't lose from this is the interviewer: they get to have fun. Honestly, when you spend all day buried in code, it's fun to play with puzzles for a change.
Perhaps it's time we started optimizing interviews for hiring success rather than interviewer happiness.
I believe this is deeply valuable. For some roles, I would much prefer to hire someone who can quickly see the value of breadth-first search from both ends.
If he/she doesn't happen to know the syntax of Ruby, or Java, etc. it's less important to me.
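For the unfamiliar, "breadth-first search from both ends" is bidirectional BFS: grow a frontier from the source and another from the target, level by level, and stop when they meet, which usually explores far fewer nodes. A sketch, assuming an undirected graph given as adjacency lists:

```
from collections import deque

def bidirectional_bfs(graph, src, dst):
    # Length of the shortest path in an unweighted, undirected graph,
    # searching level by level from both ends; -1 if unreachable.
    if src == dst:
        return 0
    dist = ({src: 0}, {dst: 0})        # distances discovered from each end
    frontier = (deque([src]), deque([dst]))
    side = 0
    while frontier[0] and frontier[1]:
        here, there = dist[side], dist[1 - side]
        q = frontier[side]
        for _ in range(len(q)):        # expand exactly one level
            node = q.popleft()
            for nb in graph.get(node, ()):
                if nb in there:        # the two searches met
                    return here[node] + 1 + there[nb]
                if nb not in here:
                    here[nb] = here[node] + 1
                    q.append(nb)
        side = 1 - side                # alternate strictly, level by level
    return -1

g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
assert bidirectional_bfs(g, "a", "d") == 3
```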
I just don't have the experience or tools or interest for them.
And yet, somehow, in 20 years of business geekery I've never come across a problem I can't solve.
Maybe when writing Tetris for J2ME I would have saved myself ten minutes of googling if I'd had the experience to realise that right-angle matrix translations don't require floating-point maths, and maybe when writing financial indicators I'd have saved myself half a day if I hadn't had to look up integrals, but this sort of stuff is definitely in the minority as far as my experience goes.
The position I was filling is a part-time position for a CS major, sort of like an internship. I devote time to developing their skills, they get real-world experience and a little money to help with the cost of living, and if everything works out, a position could open up for full employment.
I had a pretty good idea what I was looking for: someone who had a good grasp of theory but no coding experience, preferably enrolled at a university. I had 5 applicants, but the only candidate I interviewed is enrolled in Math-CS.
I basically tried to gauge whether he had deep interests, and asked him to code a bit by solving a simple exercise (find me the article with the highest hit count from the day a week ago; I gave him 10 minutes).
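Something like the following sketch would have passed, assuming articles carry a publication date and a hit count (the actual exercise ran against our own data, of course):

```
from datetime import date, timedelta

def top_article_a_week_ago(articles, today=None):
    # Keep only articles published exactly seven days before `today`,
    # then pick the one with the highest hit count.
    target = (today or date.today()) - timedelta(days=7)
    candidates = [a for a in articles if a["published"] == target]
    return max(candidates, key=lambda a: a["hits"]) if candidates else None

articles = [
    {"title": "A", "published": date(2013, 7, 1), "hits": 120},
    {"title": "B", "published": date(2013, 7, 1), "hits": 340},
    {"title": "C", "published": date(2013, 7, 2), "hits": 990},
]
assert top_article_a_week_ago(articles, today=date(2013, 7, 8))["title"] == "B"
```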
He failed the coding test but I made the hire regardless, because of two things out of the four hours we spent together. When I asked him whom he considered the father of CS, he rattled off von Neumann, Dijkstra and Knuth. You can make that argument, I suppose, but he knew who the influential people were. The other thing was: even though he failed the coding test, he failed it by not reading the code examples quite right - he was using my code to try to help himself solve the problem. I'm sure he'll work out.
We as a field should employ internships a lot more than we do, get the college kids and undergrads working on real-world problems a lot more than we do.
Being a developer is 80% Google and 20% actual coding knowledge. We are hackers at the end of the day, not miniature Einsteins with encyclopaedias for brains.
For example: "This database contains 100,000 problems with standardized parameters. The problem definition is defined in the file spec.txt which you can grab from our code repository. Write the code to solve these problems efficiently, passing each solution to a remote service via POSTing to a REST API, the documentation for which you can find here. Bonus points for parallel execution. Feel free to use any editor/IDE and reference online documentation, Stack Overflow, etc. that you want. If anything's not clear or you need a hand with something, just ask as you would if you were an employee already. Ready to get started?"
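A sketch of the shape of a passing answer - the endpoints and the trivial solve() are hypothetical stand-ins for whatever spec.txt actually defines, with the bonus-point parallelism via a thread pool:

```
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical endpoints; the real exercise documents its own.
PROBLEMS_URL = "https://example.com/api/problems"
SOLUTIONS_URL = "https://example.com/api/solutions"

def solve(problem):
    # Stand-in for whatever logic spec.txt actually describes.
    return {"id": problem["id"], "answer": sum(problem["parameters"])}

def post_solution(solution):
    req = urllib.request.Request(
        SOLUTIONS_URL,
        data=json.dumps(solution).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def main():
    with urllib.request.urlopen(PROBLEMS_URL) as resp:
        problems = json.load(resp)
    # Bonus points: solve and submit in parallel with a thread pool.
    with ThreadPoolExecutor(max_workers=16) as pool:
        statuses = list(pool.map(lambda p: post_solution(solve(p)), problems))
    print(f"{statuses.count(200)}/{len(problems)} accepted")

if __name__ == "__main__":
    main()
```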
The great thing is that once you've identified a candidate, you can do remote screen sharing and have them write code before they even have to come into the office. I've interviewed a fair number of remote people this way and it's excellent for weeding out the people who can talk the talk but can't program worth a damn. And it limits bias because you don't care about much beyond their communication ability plus their technical ability.
OK, so there is a difference between computer science and programming. That's why there are two different Stack Exchanges:
It's actually really fucking INCREDIBLE that
* you can know tons of CS without being able to build a decent app
* you can build a decent Facebook clone without having any idea how it works
I feel really bad for Emma. I was a math major, but app developers won't even look at me b/c I'm not a full-stack whatever. So now I'm a Data Scientist at an advertising firm in Puerto Rico.
In most cases an applicant must be able to read English (to google some code to copy-paste and occasionally search through documentation) and able to install and run Eclipse.
The real problem with hiring is that an HR middleman is ignorant and can't tell good code from a restaurant menu. So he can only give a very few simple exercises from common textbooks with known answers.
The even bigger problem is that almost no one needs coders; everyone wants programmers, which is a completely different set of analytical and engineering skills.
Coding is just the process of translating someone else's ready-made, poorly understood (if understood at all) specifications into spaghetti [Java] code by calling poorly understood methods of ready-made classes coded by someone else.
Programming is a process of understanding and describing reality (in terms of design documents, protocol specifications, and then, least importantly, source code in several languages).
The criterion of success for a coder, btw, is when it just compiles (unit tests? what unit tests?) under the industrial-strength, most advanced compiler of the most sophisticated industry-standard statically-typed language (static typing is a guarantee against stupid errors, as everyone knows), which is even verified to run correctly on the most advanced VM incorporating millions of man-hours of optimizations, unless... Never mind.
Success for a programmer is when the result, like nginx or Plan 9 or OpenBSD, is good enough.
The irony is that, in an effort to hire the "smartest" people, they leave out the wisest. Which is arguably more useful.
If a startup asks you to solve math puzzles, it's possible that the work you will be doing heavily involves the creative use of math or information analysis. (This is more broadly valuable than many people recognize.)
Also, it's also possible that that particular startup doesn't know how to effectively interview.
It doesn't sensationally mean that all Startups (capitalization yours) don't know how to interview effectively.
Also, rather than focus on your ability to learn, I would humbly recommend you reconsider the basic nature of employment. An interview should be considered a two-way conversation. You're not selling yourself as a slave, you're entering into a mutually-beneficial, private, voluntary arrangement. Thus, even someone who goes into an interview willing to accept anything and everything they offer could be expected to ask simply, "And what exactly will I be doing?" But better yet, grill them about every nitty-gritty detail you can think of. Although some insecure interviewers may be taken aback (I'm guilty of asserting the interviewer was wrong on more than one occasion, both times still receiving an offer), I for one am impressed when a candidate demonstrates a sharp, critical and skeptical mind in this way.
Because of this I've pretty much given up on hiring graduates based on their technical skills, so instead I'm looking for someone smart, who gets that they've got a lot to learn, who is interested in technology, and who can get on with the other people in the team.
I don't think asking people math questions per se is a great idea, but if they've done a maths degree it's a good way of working out whether they're smart and whether they were paying any attention at all during university.
(Incidentally this may be different in other countries (I'm in the UK) or in a company where you're able to attract the very best who have picked up really solid skills, but for most organisations that's not the case as most graduates spent more of their own time in the bar than coding.)
- After a first non-technical call, we ask the candidate to create a very small project based on our SDK. We send him the documentation and a very small sample. He can use almost any tools he wants to create that small project and, of course, we do not set any deadlines. It allows us to see how the candidate architects his applications, and it gives us a project to discuss during the following call.
- If all goes well, we invite the candidate on site to present our code/project and possibly brainstorm together, so that both parties can see if they can work together and the candidate gets an insight into how we work and what our code looks like.
Clearly, it's far from perfect and we are often considering changing it. Imagine if every company you applied to asked you to create an app from scratch with their SDK. We may lose some candidates, but at least we hire only people who fit the company's culture.
I tend to hate the interviews that ask me to solve math and logic brainteasers because I don't see the value in them regarding my knowledge of programming.
The likelihood of failure of a startup approaches 100%, so you should optimize for likelihood of survival, not for IQ.
If you're not a startup, then the top ranked comment applies. But it doesn't really otherwise.
That aside, one must have a way to measure the abilities of a candidate -- and asking the same set of questions to many people allows you to compare the answers as apples to apples.
I generally don't restrict my people from asking any particular question, but I will ask them to consider what a failed answer really means for the specific job (questions are generally adjusted then).
As an aside, some questions of mine that aren't specifically about coding:
* do you code outside of work (a love of coding translates to good coders)
* send me a link to some code you've written that you are proud of (let see what you got)
* tell me about a problem you had where your solution wasn't correct (how have you dealt with failure).
I would (and have) asked if the interviewer or organization has any evidence to show that interview puzzle performance (or shit like Myers-Briggs) predicts job performance. No? Not surprising. Google did look into it and found no relationship. (http://www.businessinsider.com/how-google-hires-2013-6)
Programmer interviews are so crazy and sometimes sadistic that I catalogued some of the more common interview patterns:
I like to ask "what will I be working on in the next 6 months?" That way you don't rock up and then on the second day they throw you in the deep end of building an iPhone app.
Granted, startups only have a vague idea of what they will be programming beyond short periods, but it helps.
Also ask "what will be my performance indicators". If they don't include "being able to very quickly learn new technologies" its hardly your fault.
Anyone who supports math puzzles (or whatever else) in an interview would have to argue that their perception of the candidate's performance offers a clear enough data point that it doesn't dilute the other information available to them. Given Google's study finding otherwise, they certainly have the burden of proof.
Yet, I have never had the balls to pursue it professionally. I build stuff and usually never launch it. I have learned several times over that marketing is not my strong suit.
That said, I'd actually like to work for a startup. Hit me up if anyone wants to talk.
Programming isn't difficult and you don't need to know complex maths or be able to solve mind bending puzzles to be a great developer.
I really understand that a startup with scarce resources would like to make its best shot. However, as discussed long ago (https://news.ycombinator.com/item?id=2385424), it is really frustrating that asking math puzzles is assumed to be the best way to hire the best person for the job.
+ knowledge - generally mastery of math/CS concepts; can be thought of as potential
+ application skills - modeling a real-world problem into a theoretical, computable, and (ultimately) programmable form
+ execution skills - implementation (coding) of a solution, including the ability to use requisite tools/technologies such as programming languages, DBs, OSes, and so on
That said, a hiring process should cover each of these areas, and programmers should work on all of them as well.
But this one talks about getting the inadvertent benefit of being good at maths when being selected for a programming job, and suffering the consequences later on.
Also, it highlights the importance of what is mostly taken for granted and thought of as the mundane side of programming - the idiosyncrasies, jargon, and best practices of various languages and OS environments.
Not long ago, Facebook measured 4.74 degrees of separation on its network, meaning that on average only about four to five hops are needed to connect any two random people on the network.
You can also find an article on Wikipedia about the "Six Degrees of Kevin Bacon" reference.
I could learn Heroku/RoR/whatever other technology, but new things are always coming out and some people keep up with them so easily. I'm not sure being a dev is right for me if I take so long to understand such basic stuff. But I love coding and algos! I write Python scripts to do all my homework... and then run them in Codecademy Labs because doing it in Unix makes me so confused.
If anyone has had the same problem please let me know how you got over this hurdle. Thanks.
Background: sophomore, CS major, Cornell.
Or I might ask them to describe how an event loop works, or what the I/O path between their program and the disk looks like, in as much detail as they can.
Someone who loves the field is going to have a decent idea about these things even if they've never had to build one before.
Caveat: these examples are very systems-level, but you can substitute appropriate web, financial, etc. domain-specific knowledge.
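To make the event-loop question concrete: a toy answer I'd be happy with is a readiness loop over registered sockets that dispatches callbacks, e.g. this minimal echo server using Python's selectors module (assumptions: TCP, single process, no error handling):

```
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)  # watch for client data

def echo(conn):
    data = conn.recv(1024)
    if data:
        conn.send(data)             # echo it back
    else:                           # empty read: client closed
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("localhost", 9999))    # arbitrary toy port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                         # the event loop itself
    for key, _ in sel.select():     # block until some socket is ready
        key.data(key.fileobj)       # dispatch the registered callback
```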
I was asked, as part of my application, to take a programming quiz. The quiz consisted of a graph theory problem. I did pretty poorly on it, given that I have no real knowledge of graph theory.
Had they asked me a question about statistics (or something similarly related to data analysis), I think I would have actually been able to answer, or at least been at a point where my programming knowledge- not my math knowledge- was what was holding me back.
The reason is that smart people can figure out Git, or databases, or Objective-C, or whatever, in a fairly short amount of time.
For example, my co-founder learned Objective-C from free online video tutorials and built an iOS app (we're talking an app with serious firepower and back-end transaction logic) from start to finish by himself in less than 3 weeks.
That's why we're not as concerned about what you know right now as about what it's possible for you to learn in 3 more weeks.
Check out the last technical interview task that I got:

```
Objective: Write a program that prints out a multiplication table of the
first 10 prime numbers. The program must run from the command line and
print one table to the screen.

Notes:
- DO NOT use a library method for Prime (write your own)
- Use Tests. TDD/BDD
- IMPRESS US.
```
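For reference, the core of that task is small. A minimal sketch (minus the tests and the impressing they asked for):

```
def is_prime(n):
    # Trial division; fine for the tiny numbers involved.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def first_primes(count):
    primes, candidate = [], 2
    while len(primes) < count:
        if is_prime(candidate):
            primes.append(candidate)
        candidate += 1
    return primes

if __name__ == "__main__":
    primes = first_primes(10)
    width = len(str(primes[-1] ** 2)) + 1   # widest cell: 29 * 29 = 841
    print(" " * width + "".join(f"{p:{width}}" for p in primes))
    for p in primes:
        print(f"{p:{width}}" + "".join(f"{p * q:{width}}" for q in primes))
```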
I mean I can impress you but how will this correlate with production code?
This is like solving your submarine problem. Jeez.
Stop asking this fine young lady math puzzles to determine her programming abilities. She is good at solving your seemingly pointless math puzzles because she has been practicing problem-solving since she was ten. But she is not anywhere near as good at programming yet - which caused her problems at the actual jobs she had to do after she was hired.
Isn't XY years of track record in the same field of interest, working for successful companies, a good sign that I can code?!
Ask me theory - pay me to code.
E.g. if somebody hires John Carmack (id Software), nobody will make him do a math test or ask him trivial programming questions.
But you are not John Carmack ;-)
It is like in every other job: if you are not a rockstar you are nobody.
MS hate is vicious on here. I remember recoiledsnake [1, 2] alluding to it, and not just on that particular topic; in fact lots of MS topics are bumped off the front page while having lots of points. And not only on this site: I made a point on NeoGAF debunking a claim about the Xbox One related to a technology that I am very much familiar with. I was ambushed by 15-20 people in a matter of 10 minutes and banned. One single post, nothing inflammatory. On this site, yes, I do see MS hate from lots of members. I do not think I can remain enthusiastic about posting on here. Some of the members call themselves veterans and use that status just to throw barbs. Disagreements are one thing and can be deliberated in a civil manner, but downright unencumbered hate and allegations are another.
- https://news.ycombinator.com/user?id=recoiledsnake - https://news.ycombinator.com/item?id=5716419
Greg has been a believer in Microsoft. He went to all the TechEd conferences and attended every MSDN event he could. Conferences are grand stages that leave an impression. He drank all the Kool-Aid that was served at these conferences.
Things were really good early on; this was the last decade. The computing scene at that time revolved around Microsoft like the many moons of Jupiter. Greg and his team built products with Silverlight, WPF, .NET, Windows Workflow, BizTalk, Remoting, and the like. Every conference offered something new, something exciting. The apps they built worked great, looked great.
Fast forward to now. Greg is a decent programmer, but he wants a new job badly. The problem is that nobody wants to use all that stuff that he knows. People want to build on standards; apps that work on every device - not just on Windows and not just on Internet Explorer. Greg still doesn't get it. He hasn't seen much of the world outside Microsoft, and still wonders why people don't want Silverlight. He still tells me how WPF is so much better than anything else out there. And running only on IE - why is that even a problem? Everybody has IE. Poor Greg, tough times.
There may be many issues with Microsoft. But more than anything else, I would fault them for building their entire ecosystem with total disregard for standards, and for their refusal to work with whatever community existed outside. This probably wasn't intentional; they must have believed in what they told their developers. Even though so much has changed since their glory days, there's a part of Microsoft which still refuses to engage.
There was a Steve Jobs interview from the late 90s in which he said, "The problem with Microsoft is that they have absolutely no taste". Jobs wasn't talking about aesthetics; it is true of pretty much everything from Microsoft. From UIs, to development frameworks, to tools, to shells and even APIs. Back then, having "no taste" was totally fine because people communicated far less.
Now we have a whole bunch of people who are stuck using this stuff. And many of them don't really get it yet.
Edit: I just saw that you work for Microsoft, and specifically Microsoft Research. You guys make awesome stuff. The above is mostly about the Windows platform.
So you posted a real classic flamewar topic here... heh. Enjoy the war. But here's my take on Microsoft Corp, since you asked nicely.
You guys don't play nice. You've never played nice, and the fact that you've gotten better lately seems to be due more to the fact that you've lost dominance and have to interop with other operating systems. I'm not really going to provide significant examples; there are lots out there for a quick search. Things like file formats, threading models, frigging slash directions in filenames, deviations from compiler standards. Not to mention that MS had a terrible rep for aggression and bad ethics in the 80s and 90s (leaving aside the F/OSS fight).
Technically, I find MS offerings to still be catching up to Linux in automatability. Still. Not only that, but since the 90s you have had this obnoxious habit of making "moving targets" of your APIs, so learning one API just meant that I'd have to learn a new one to do the same thing in a year.
I've recently had the opportunity to do heavy .NET development, and my opinion is that, as a developer who's worked for years in Linux, Microsoft technologies wasted my time comparatively - everything from Windows 8 to the shenanigans with IIS to actually get my webapp deployed. I was able to do equivalent work in Ruby on Rails (a language and framework I didn't know) in a fraction of the time I spent fighting C#, the MVC API and IIS; this experience was repeated with Caveman and Common Lisp (a framework I didn't know, a language I did). I cannot believe how painful it is to develop on Microsoft tooling, and how meekly people accept it as the way it is. I don't like having my time wasted.
I'm not going to say Apple or Google (or Oracle, SAP, etc, etc, etc) is blameless, okay? But I don't really like Microsoft policy and technologies, as a rule of thumb. Note that I really respect your arm of the company - MSR - and think that it does great stuff like F#, Pex, and others. That still doesn't obviate my dislike of MS as a corporation.
For context because I'm sure some people (I do think there is an anti-MS bias here on HN, though not as pronounced as OP thinks it is) may think I'm some stereotypical Microsoft-using rube. I grew up in the 80s, programming first on the C64 (BASIC, 6510 Assembler), then Amiga, then various UNIX systems (SunOS, IRIX, AIX, Ultrix, Solaris, later Linux).
I avoided Windows like crazy until Windows 2000 came out, because prior to that release the idea of using an OS where one process could crash the system at any time seemed ridiculous (post my Amiga days), and running NT was only for "enterprises" (at that time). But since I first started using Windows 2000, Windows has always been my "primary" OS, partly for gaming reasons, partly for access to commercial software (currently Adobe Lightroom and Photoshop), and partly because I simply like the look and feel of it.
In the meantime I spent quite a number of years doing Win32 software development (though now I'm mostly doing Android/Java and some embedded stuff with Go), and to this day Visual Studio is still the programming environment by which all others I use are measured and found wanting. There are some pretty decent other ones, but I still miss the absolute power of the VS debugger (against C, C++ and C#), and every time I find myself doing printf-debugging because the FOSS tool I'm using doesn't have solid debugging support, I cry a little inside.
I also still do a lot of Linux-based programming these days - each Windows system I use has half a dozen or more Linux VMs running on it regularly - but I still prefer Windows as an overall primary desktop. It is fast and extremely stable these days (hell, the GPU driver can crash due to Nvidia bugs and Windows will just restart that mofo and keep going; how cool is that?).
All that said, yes, Microsoft hasn't always acted in the best interests of the overall industry (but neither has any other company near their size, and Microsoft has gotten better over the years while some others I won't name have gotten worse, IMO). Also, I still haven't found any good reason to buy a WinRT device. But my overall impression of Microsoft is pretty positive.
After learning several OSS stacks, I have nothing but contempt for Microsoft technologies. I wouldn't say I hate MS - they are what they are - but I am certainly conditioned to be very suspicious of their offerings. I would never take a job working on a MS stack again, ever.
I currently work for a large enterprise that uses a mix of MS and OSS, and I take every chance I get to swap out the MS tech with OSS. The devs love it and it makes me happy.
So is HN basically becoming Slashdot where Microsoft hate occurs by default?
The guidelines for this site suggest that it is bad form to compare HN to other sites, especially when your account is under a year old. They also suggest that users should not complain about downmodding (which you are doing).
I think this would have been a reasonable post if you had found evidence that articles involving Microsoft consumer electronics received more negative comments or flags than articles about other companies. Instead, the post and its comments are just a bunch of unfounded accusations of anti-Microsoft bias.
I would argue that the facts that you assumed articles about the Lumia disappeared because people were maliciously flagging them, that you posted an extremely positive comment about it without disclosing your Microsoft affiliation, that you reposted an Engadget article about it just 6 hours after it was originally submitted, and that at the same time you were insulting people in the original submission's comments are just as troubling.
Can you link to one of these stories that got flagged off the site?
Because of Microsoft's shady business practices over the years (including Elop killing Nokia's most promising phone OS, Meego, and driving the share price down so MS could scoop it up) and the fact their software is just plain bad.
No one actually wants to use MS Windows. Microsoft ruined/killed some of their most beloved franchises (Age of Empires, Flight Simulator, Combat Flight Simulator). Internet Explorer is a joke.
Not to mention the ridiculous licensing terms that come with MS software, the high prices, and the questionable functionality. If paying for Vista was bad, it was worse that Windows 7 wasn't a free update.
And then there's all the attack ads and FUD Microsoft has spread over the years (especially against Linux), which continues to this day with the 'Scroogled' campaign and attacks on non-Windows phones.
Anyhow, a better question would be who actually likes Microsoft? Even OEMs are jumping ship and desperately searching for alternatives (witness all the Chromebooks coming out now)...
I'd really like to stay out of Microsoft's way, but they don't let me. I'd like to show up at any workplace and be asked on the first day "which OS / software stack do you prefer?", or be given a blank box to set up. But I usually just get a Windows box, which I'd choose last (somewhere behind pen and paper).
They do patent extortion (they make more money off Android than off Windows Phone).
They don't contribute much back to the world at large. I don't mind proprietary software, but I insist on open interfaces that let software play together. They don't do that. They don't publish essential specifications, they don't contribute code to the community much, and if you reverse engineer their protocols to provide compatible services, they sue you and extort royalties. And then there are things like OOXML, which they forced through ISO.
Companies working together with Microsoft are regularly burned.
They have repeatedly used very dirty tactics to corner the market, and have been fined for it.
I don't particularly fancy their software (I'm much more comfortable with Linux systems). Automation of Windows software is horrible, and they suffer from a bad form of NIH.
This rubs me the wrong way.
Make them an optional thing in my life that I can avoid and I stop having hard feelings for them.
Having said that, launching the day before the iPad was bound to invite negative comparison without something really special, notwithstanding the good value proposition of this tablet. What has personally held me back is that Windows RT has nothing in particular for me because it can't run any x86 legacy apps, while the Surface Pro seems rather expensive.
As with Google & Android, and MS with many previous versions of Windows, this platform is poor for musicians and not great for visual artists. I know creatives are a small market, but they're a very influential one. I don't like Apple or iOS much, but next time I buy a tablet with a view to making music, what other choice do I have?
MS made a lot of mistakes - so did Apple. I worked Apple tech support - it really sucked not having multitasking and dealing with so many OS issues. Now I make a living off the MS stack.
I think MS sticking with RT is a bad move - go with full Windows support. I would still say a Nexus device is probably the best bang for the buck.
In the end it's just an opinion, but you don't need to jump down a company's throat because they're not the ones in vogue at the moment.
On MS (mostly opinion here, as with most posters in this kind of thread):
I used to love MS in the 80s, because of the MSX; to me that all was very open and nice.
With Windows 3.1 I saw something different; I was used to Unix at university by then, and Windows 3.1 was so horribly unstable and generally completely worthless that I thought the world had gone mad from buying and using that crap. I used to look in pity on the people sitting behind the very frequently crashing 3.1 machines (browsing with Netscape on 3.1 was like pulling teeth) as I sat behind Solaris, which never crashed. Which made MS, to me, the company that releases things into the wild which do not work, and dares to ask money for them. I know they couldn't really help some of that; you could crash 3.1 as easily as DOS, but software under DOS crashed less frequently and wasn't that much of a pain to work with (one open application at a time; good for focusing too :). Matters became worse when, after a while, they had NT and were still peddling, for money!, that 3.1 abomination on humanity.
With 95 things didn't improve much (at the time it seemed they did, and compared to 3.1 they did, but in the big scheme of things it still crashed all the time), and by then a solid version of NT was on the market, so there was not much excuse for releasing '95. I became aware of their dubious business tactics against small companies and with their partners; as a result of the technical crap they released and their tactics, I got 5 SPARCstation 5 machines from my old uni for free and installed Red Hat on my PC.
I try Windows and the ecosystem every so often; I have a Lumia - love the hardware, not the OS; many issues I've written about before. I tried to like Windows 7 and 8; 7 is OK, but not more than that, and 8 is... weird. I wish they would've just had some balls and gone Metro all the way, with no way to go back. Now it's, like the Surface, neither meat nor fish; not a tablet, not a laptop. For a client I had to install the MS-SQL/SharePoint/Exchange etc. stack and write some software on it; I thought I liked it at first, but after a while I ran into the quirks, which had no documentation and not much online relief.
Basically, I try to like MS's products every so often because I think there should be competition; I just don't see any compared to what I use daily. And stuff like the Android patents still stings; unless they turn that back, they haven't changed since the 90s and are still evil.
I don't 'hate' anything though; it's just something they shouldn't do if they want my money.
Add on wasting the purchase of Danger (Sidekicks were amazing), intentionally changing their OS solely to screw with competitors, and the terribleness of embrace-and-extend, and I just can't get excited about anything from Microsoft. Sure, they've behaved better recently, but they've had real competition recently. I stop short of hate, but I'm not excited about their stuff unless it's something as big as a Kinect that can drive my car to the moon.
HN dislikes Microsoft by default the same way Americans dislike government by default. Americans' views of government are shaped by their experience with their local DMV, which leads to the conclusion that "government is a slow, inefficient, bureaucratic, unfriendly mess". Similarly, web developers' opinions of Microsoft are shaped by their experiences with IE - and the past versions of IE that they still need to support.
If Microsoft wants web developers to have ANY respect for them, they need to improve the interactions that web developers have with them. Today, I ran into a JS bug in IE9 that manifests itself 100% of the time when the debugger is closed, and 0% of the time when the debugger is open. Clearly, debugging this is... frustrating. And clearly will never be fixed (as it's an old browser version).
I can't download an ISO - I have two computers in my house and I'm in India on 2 Mbps with 30 GB of bandwidth (which is not cheap). Seriously, why would you do that?
I could not ask my friends for a CD.
The second problem: I can't do a clean install of 8.1 using a Windows 8 key, because Windows 8.1 is supposed to be "an upgrade from Windows 8, if you have an 8 key". So the only way to clean install 8.1 is to clean install 8 and THEN launch the upgrade installer. Combined with the above issue, I'm looking at about 12-15 GB of downloads to install two computers. All because some sales suit thought allowing it was a bad idea. Again, seriously? Look at how Apple did the Mavericks release - the bar is much higher.
I want to like Microsoft - I really think you guys innovated with Windows Mobile (although Win 8 Metro sucks) - but your business practices soon turn that into hate.
I've got a bunch of example topics: MS, the NSA, F/OSS, Google & privacy, CISPA, etc.
In each of these cases, there's usually some voices of reason (grellas, anigbrowl, tptacek, masklinn, gruseom, I'm looking at you) and a lot of people who treat the story as life and death.
CISPA is the END of net neutrality. Google is the END of privacy on the internet. F/OSS is about what's RIGHT and what's WRONG and F/OSS is RIGHT and proprietary software is WRONG. MS SecureBoot isn't about addressing a well-studied security problem by Microsoft and the security industry, it's about PUTTING DOWN the Linux desktop. The NSA is the END OF ALL PRIVACY.
I'm not sure what the reason is but people just overreact.
So here's my take on MS. It's a software company. Use their stuff, don't use their stuff: whatever. The days when it was THE software company are over. If we don't ship you a compelling experience, use something else.
I do not think the important topic of ethics should be brought so low as to help us decide whether or not to flag a link on HN. Let's replace it with "stupid".
Then yes, it is stupid to flag a post just because it relates to a company we do not like.
However, it is not stupid to dislike Microsoft. You might remember Paul Graham's "Microsoft is Dead" (1): for people a bit older than 20, Microsoft is a company which was very frightening, a company which really did try to kill the Internet and force our whole industry onto a nightmarish path where we programmers would all be happy slaves.
Changing the color of their latest make-up will not change this, and I would hope hackers here and there would actually despise Microsoft and other similarly dangerous companies more.
I would guess that 1% of Office sales are to folks who "need" the functionality of (most likely) Excel over what is offered by OpenOffice, Google Docs, etc. All the rest are driven by the effective monopoly of the Office file format "standard."
I am happy to pay for great products, like my MBP, but these aren't great products - even after decades, Word is still a <NSFW> to use.
Disclosure: I've primarily worked as an MS-stack developer and admin for over two decades. But I've also used a full spectrum of other technologies over the years. I've been agnostic and objective when it comes to the industry, but I've eaten enough excuses from Redmond. There are good alternatives, and I'm exploring more of them.
Oh, and sticking a UI designed for touch-screen tablets on ordinary desktops and laptops is just stoopid. They have become a Blackberry-esque laughing stock as far as making terrible business decisions and missing opportunities.
How does your wife think Nokia would be doing if they had released Android phones instead?
I've been browsing Slashdot practically since it started, and if that's your barometer, then most tech websites are 'Slashdot!'. In my opinion, the vast majority of the anti-MS comments can be safely ignored as they are just trolls looking for attention. What's interesting is that the trolls who attempt technical arguments are also wrong the vast majority of the time. And if they bring an ideological argument, then they are some kind of open-source zealot bringing nothing new to the already dead old open-vs-closed source flamewar. (Open source won, BTW :P)
I know several people at MS, and MS has great talent as well as some extremely well-engineered products. With all that said, MS has done some pretty shitty things in the past. And all of those shitty things have been bouncing around in the internet's echo chamber - being twisted into everything from half-truths to complete lies - for about 10 years. There is just too much misinformation entrenched in the community for MS to be able to counter it. I don't know if they deserve it, but it's going to be a long, long time before you can expect any kind of fair treatment from the average geek.
We're not the OSDN. In fact we (as a collective group of users) are not related to the open source movement in any way. Why should MS's closed-source viewpoint matter to us? Are we not here to build businesses in the tech space?
Sometimes the right tool to use is made by Microsoft. Sometimes it's open source.
That's ok, great, you are totally biased in favor of this company.
So you can't understand or respect other people's opinions. "Hate" is a very strong term for not caring enough, or not caring as much as some family with all its members working for the company.
Well. HN has a strong proportion of open-source people, and Microsoft's relationship with the open-source community has been historically poor, in no small part due to ethically-challenged decisions made by Microsoft management. I'd even argue that Microsoft has essentially lost all trust when it comes down to it. Embrace, extend, extinguish, etc. Much as Oracle's brand does not attract the best feelings here.
Though I'll point out that Microsoft Research as a distinct unit produces extremely valuable work, and that many folks talking about the bad quality of Microsoft products haven't touched Windows since the dawn of the century.
> Is it ethical to flag something because the article is related to a company you don't like, even if the source is generally reputable (theverge, engadget, ars)?
No, I wouldn't say that it is.
yeah. this. That.
(I fully endorse discussion about meaningful topics, but I think it's a bit stupid to have a Microsoft vs. not-Microsoft post pushed up to the top of HN. Everyone, post your opinion on this topic now, instead of actually talking to each other~)
Apple is so much worse than Microsoft, business practices wise, in nearly every way. But they're a minority, and they compensate for their rapaciousness with a level of quality and attention to detail that nobody else does, period.
Linux and its ecosystem are like the Borg. Everybody will be open source eventually, and if you don't get on board they'll just clone you. And eventually they'll win. Look at how far GIMP has come. They're gonna get everybody sooner or later.
Google wants to be Microsoft in the worst way, but has not yet achieved the level of hubris that would allow them to forget that a new search engine is just a click away. And it is. If google pisses us off enough, they could be wrapped up overnight. And they know it.
Microsoft, on the other hand, is a mean competitor that doesn't really do quality and has its roots in the nineties when it OWNED EVERYTHING. And it still owns the desktop, and office productivity - which is the company, frankly. Anything else they do is window dressing or a loss leader in search of finding their way back to the center of the universe, which isn't going to happen. And the beef people have with them comes from our remembrance of their tender mercies when they ran everything everywhere. We, the consumers, are vastly better off in a multipolar tech world, and it's difficult to imagine anybody allowing a single company to accumulate that level of monopoly ever again.
Plus Microsoft astroturfs for pr like nobody else, so fans are automatically suspect.
2. The ugliness of Microsoft Office. I don't think each version of M$ Office was ever an improvement. While each did generate excitement, it wasn't until people actually tried iWork that they knew how easy it could be to make beautiful documents and charts. And it wasn't until Office 2007 that they started to react (but that brought the Ribbon, which was an even bigger letdown). And although there have been many improvements since then, I still remember those days of being forced to use Office.
And M$ was really rich. The Richest company at one point in time. That is not to say people hate the riches. It is merely a point that they have so much money why didn't they go and fix things. Things that should have been done long long time ago. And as the Mac Vs PC ads have put it, they put so much more budget inside marketing then fixing bugs!
Their Business Practice is also a point of hate. Using Windows Monopoly to get rid of competing technology by including something similar of their own. Personally I have no problem with that. Honestly if the product offering from Microsoft were superior then people would use it anyway.BUT THEY WERE NOT.
And there were a lots of other little things there and there that shows they are just a huge pile of mess.
Of coz Credit where credits due. There are amazing things Microsoft did. Microsoft Research for instance, i saw the presentation on real time voice recognition and translation. And many other things from Microsoft Research as well. Xbox 360, from PS3 prospective were quite good ( not great, but good ). And Mouse and Keyboard, that is the only competitor against Logitech in consumer range.
And I dont think People are Pro Apple and therefore Anti M$.
I spend about 50% of my time developing .NET apps and the other half on Linux/web stuff, so I'm pretty familiar with the MS tech stack, and it always feels very clunky and outdated.
MS hasn't created a really compelling consumer product since the Xbox and they have just totally lost consumer mindshare, they are the slow, clunky old thing you use at work because you have to, not the thing you buy when spending your own money.
In my view the future is (at least in the medium term) Linux on the server and on mobile devices, Unix on the laptop/desktop in the form of OS X, and maybe MS on the console, with Windows running legacy systems and some servers.
I'm not a particular fan of Apple either (I've never purchased any of their products) and only use Linux (Ubuntu), with Windows in a VM, but it seems to me consumers just don't care about MS any more, and I'm not sure that MS has the skills to change that.
More seriously, I've noticed that a large portion (like in 90%) of the CS department and Statistics department in my Uni (US, Ivy) run mac os x, or linux. Is that a general trend elsewhere? I have my own reasons for preferring OSX, and I have a hard time believing that these 90% picked osx or linux just because of a "vogue" or "trend". Genuinely curious here.
It's been like this for some time. The bias shifts slightly from very Google-centric to Apple-centric and back, but the hate for Microsoft is pretty much constant under the surface. So much for anti-flamewar features; they just encourage groupthink.
I don't think you can fight it much; just don't read HN on the days of Apple presentations and you won't miss anything.
I'm a 90's Linux user, so my hate for MS is self-explanatory and these days mostly irrational. Recently I found myself working with a group of Microsoft employees, and it's tough to "hate" them, their company, or the really nice products they flaunt around (Surface, Windows Phones, etc).
There are a lot of things that continue to feed my dislike for the company though. It's silly things, like the way they continue to ignore the existence of industry standard protocols (ssh! there is BSD code! just copy it!!).
In day to day dealings with the company I sometimes still get a sense of arrogance and not-invented-here type scenarios that prevent a better solution from being pursued.
Also recently I started to see posts from older HN members who don't like what the community is turning into.
Engadget and The Verge are professional plagiarists, it would be bad for this community to adopt those sites as some kind of standard for tech news, and it's awesome seeing them fail over and over again to get a foothold here.
Like Slashdot, HN is a diverse community of people who tend to have strong feelings about technology and tech companies, and negative views are naturally highly visible. So, yes, there's a subgroup from whom consistent Microsoft hate is to be expected, a subgroup from which consistent Google hate is to be expected (including at least two distinct smaller, overlapping subgroups -- one which will refer to NSA collaboration in every Google story, and one which will dismiss every Google story with a reference to the Reader shutdown), and on and on and on.
In early Steve Jobs/Apple meetings there was a lot of hate for IBM. After some time the hate was reserved for Bill Gates/Microsoft. The latest enemy is Google.
Context matters a lot for Microsoft.
Amongst older people, there are enough casualties of Microsoft's success around to warrant a default hatred for the company and its values.
However, the saddest indictment is that even at the height of their success many people didn't like or even hated using their products (take the parody of Windows/BG in South Park the movie in 1999 as an example if you like).
So I think Microsoft has a deserved reputation for treating the enjoyment of their products as separate from the success of their business, at least in the mainstream. It's not to say that Microsoft don't make good products, but it's difficult to regain trust once it is lost.
I guess that Microsoft-related topics are not regarded as supremely interesting to most of the startup scene, which still drives this site in many ways. MS aren't high growth, haven't been a startup for decades, and their stack doesn't seem popular with startups. Even if a few people hate Microsoft, I'd characterize the overall tone I've observed on HN regarding Microsoft as largely indifferent.
It's the number of people who doubt us. We suck, we hate OSS, we are evil, we are incompetent.
Thing is, Microsoft isn't a monolith. It's little startups, small groups and big groups. I went there 5 years ago to do open source and I'm doing it. I can't speak for the other gajillion groups but mine doesn't suck and we work hard doing nice things.
It's tiring to be doubted so consistently.
You will find people that embrace it (large majority all over the world) and others who despise it with a passion and still others who are indifferent. There are those that dabble in it once in a while to see what the fuss is all about and those that actively resist it to make sure their culture doesn't get polluted.
I would say for the most part, the HN community is like Quebec in Canada: largely in love with its own culture and heritage (in this case open source stacks and Linux), with a strong desire to keep it that way. But Quebec also knows that English culture and the English language won't go away, as they're too pervasive, so it does its best to keep them in check. Just as in Quebec, you have people that love the English culture and follow it, but not too publicly.
I have noticed the HN crowd is likely very SF-focused, and the biases tend to skew that way.
Well, so much for the analogy, but at the end of the day it's part of the territory when you become as big and successful as Microsoft. The same ASK HN would be relevant if the company were changed to Google, Apple, IBM and so forth.
Nowadays they're fairly tame, however. It's time to treat them as any other tech company.
I often joke about having the nerve to set up camp in a coffee shop there, whip out my Surface Pro, fire up Visual Studio, sling some C#, and see how quickly I'll get hated on.
I do wonder, will that really happen? Honest question.
1) Its history, both when it was dominant and even with recent stuff like trying to force always-on DRM with the Xbox One. Techies at large just distrust MS. In MS's defense, I feel that both Apple and Google are working their way to the same place, a lot more slowly but surely.
2) The difficulty of developing on Windows while not using the MS stack. Sure, it's gotten a lot better over the years, but it's still not as easy as using OS X or some Linux distro. Even when you do use the Windows stack you get burned; I've known former VB devs as well as .NET devs who were burned by the Windows 8 transition. Your open source ecosystem also really sucks, leading to a lot of unnecessary re-inventions of the wheel, which I don't have to do when using other tech stacks. (Codeplex was really too little and too late.)
I also think the second point is why there's not as much hate for Google and Apple, since their main offerings just work better from the techie perspective.
Microsoft had the great misfortune of being the dominant computing platform in the 90's and early 00's. Computers just weren't as reliable or easy to use back then. Maybe Microsoft could have made Windows better back then, maybe not. Mac System 7/8/9 was by all accounts less reliable than Windows, and anything 'nix would have been unthinkable for a typical user.
So just about everyone has had a crapware-infested computer running something like Windows ME or Vista that crashed every few hours. For many people, the first non-Windows computer they used was an Android or iOS phone. Therefore the association is Microsoft == crappy/unreliable vs Apple/Google == easy to use/dependable. OEMs in a race to the bottom on hardware don't help Microsoft at all.
I actually think Windows 7 was a rock solid OS, and Windows 8 is good once you get over the awkward interface changes. But Microsoft's brand image has been permanently damaged.
I was here in the browser wars of the 90s. I was present when Microsoft destroyed WordPerfect. I was present when it trampled on Netscape and others.
Now I see what they're doing with Internet Explorer and I'm thinking it's just more of the same. They're simply no longer in a position to assert themselves as they used to be.
And I continued to be a fan. But then one day I realized Gmail was so much better than Hotmail. So much free space! No more deleting! Hotmail refused to change, so I switched over. Then one day I realized that Linux was such a great place for me to learn how to program. So I picked it up.
With time I tried other products and, one by one, I realized there were alternatives I preferred to MS's offerings. Today I bought my first MacBook. My first non-Windows machine. I don't plan on purchasing any more Office licenses. And for the first time in a while, I see no MS products on the horizon for me. Not the Xbox One. Not Windows 8 mobile. Not Surface Pro. Nothing.
Why? I prefer their competitors' products. It's not MS hate. They just don't have a single product that excites me. Nothing.
Now if you asked me about the Windows 8 UI, I would tell you that I think it's an abomination. That might look like MS hate.
If you asked me about the Surface, I would tell you I hate how it tries to do so much and fails to lead in just about anything. If you ask me about IE, I would tell you that I think Firefox and Chrome are better, although the new one seems crisp.
Nothing to do with the brand, all to do with the product.
Given the strength of the incumbents, Microsoft is going to have to pull off something pretty special to make a dent in the smartphone market. Microsoft played to its strengths by buying Nokia, but that's the only strength Microsoft has in this brave new world of lightweight, portable, always-on devices, and it's nowhere near enough. The organisation just isn't capable of producing a smartphone consumers will want to own and use.
Take my wife: she's no computer lover, and certainly not a Linux nerd or Mac fanboy. She has to use computers in her day-to-day life, and she finds it stressful and confusing. But she loves her Android Nexus 7. In her mind, the Nexus 7 tablet and her Windows laptop belong in different categories. If I told her someone was trying to merge those categories ("The guys who make Windows and Excel are going to make smartphones. Do you want one?") I know she'd run a mile.
I can understand why we have very different takes on Microsoft. I'm also well aware that Microsoft have solved a whole lot of problems so well that people don't even think about them any more. And I LOVE the awesome work you guys do in MSR. But I don't see enough of that awesome in the products people have to work with every day, and in the smartphone market that's going to hurt!
At least for me, all this has created strong dislike towards that company. It's nothing that cannot be fixed, but not quickly, and I really don't see them trying a lot yet.
Fortunately, the situation these days is much better than in the 90s since now there are real alternatives. What was once hatred is now just suspicion.
Looking at the pattern of adoption by developers, I see more developers walking around with Macs and Linux boxes, running VMs to test IE compatibility, than anything else.
So MS has become just another OS to work with, rather than the OS to develop on, imo.
I had been in the MS ecosystem for 10+ years before moving to other technologies, and it taught me more about the concepts of distributed computing (dos and don'ts) than anything else. I can apply those to any problems I see today. But I will likely never develop another asmx or aspx page.. :(
I'm an OS X user, and before that I was a Linux/Unix user. I'm not really familiar with Microsoft products (especially recent ones). When some cool new tech gets announced, I'm interested by default.
I have nothing against hearing about stuff from "foreign ecosystems", on the contrary. I was an active member on Channel 9 back in the Scobleizer days, and I loved hearing about the interesting things you guys had been working on.
I didn't catch the Nokia article today, but chances are it wouldn't have caught my eye even if it had been among the top ten on the front page. First, it's not actually interesting on a technical level. Second, a lot of us here on HN speculated what MS was doing when you positioned that trojan CEO at Nokia, and then of course it turned out to be true. Not that there is anything wrong with it per se, but I don't see how that dishonest-yet-obvious takeover puts MS in a position to offer anything interesting that it couldn't offer before. The Nokia name accomplishes very little in this case.
It's true that there can be some group hate on HN, however I don't see a lot of it projected at MS - at least not beyond the usual background noise. We as a community are way more hostile towards certain programming languages and startups. Sure, every Apple thread, every MS thread, every time something from 37signals comes up, there are disgruntled people. But enough to single out MS hatred specifically? I don't think so. Disinterest is the more likely culprit.
I still remember when they gave money to SCO to send us letters telling us to stop using Linux, and right now they offer Linux servers. If your market sector is not profitable for Microsoft, you are going to be ignored.
Also, because of Microsoft, our governments spent a lot of our money.
I don't say Microsoft is bad, nor that I hate it. It is just one more.
I know many people who own an iPad, several others who have Android tablets, but zero Windows tablet owners. I guess that should explain by itself why people are not really excited about it. Fuss and hype are not basic human rights.
I definitely remember being anti them when I worked for Lotus, and when they tried to push horrible non-standards on the world.
Now though? I just don't think they're that scarey any more.
Or is it just another 'me too' product? It is.
At least they're transparent about it... Even if by accident.
Being a fan or enemy of a company is stupid, you are just playing into consumerism. I hope every tablet company has success because competition is good for me.
Microsoft has always had animosity with the hacker crowd, and vice versa. This has been going on for four decades.
Otherwise, Microsoft doesn't bother me.
Specifically regarding MS v. Apple - because of Apple's position in the marketplace and the timing of their product releases, I think it entirely reasonable that discussion of their new tablet trumps discussion of one of the host of new Windows tablets.
Apple has a cult following. As such it might be boring, but what the cult is up to is sometimes interesting even to non-cult-followers.
It is important as a community that we keep an open view about the products that we see getting posted here. Microsoft might have made mistakes in the past and produced some lousy products.
But who knows, it is possible that the next best thing in the world may come from one of them - or for that matter, anybody. A prejudiced eye can only have a blurred vision.
Being an Apple shill is a good thing here
This past week, I spilled water on the PC and took it into the shop. I picked up a back-up ASUS, loaded with Win 8.
This was the beginning of my nightmare.
1. Win 8 was just terrible. It took me way too long to figure out how to perform simple actions. Win 8 is Frankenstein - it is Microsoft's attempt to unify the computing experience by (a) copying numerous OS UI elements; (b) slapping a tablet version of their operating system on top of a desktop version; (c) burying elements behind keyboard shortcuts and some gaudy, horrible startup screen that advertises other Microsoft products.
In a nutshell, they have built the perfect operating system for a schizophrenic blind person.
After a day of cursing out Microsoft while trying to figure out basic things (like getting to the "start" menu), I tried installing Office, the entire reason I still use a PC. I rebooted the PC and voila - Win 8 told me it couldn't start Win 8 and I had to revert to an earlier point in time.
I tried doing that and after another hour, Win 8 told me it couldn't do that either and I had to reinstall Win 8.
I put the laptop back in its box and returned it and just started learning how to use Office on Mac.
This is why Microsoft is just pathetic. I feel pathetic for giving Microsoft a chance. This isn't hyperbole: Windows 8 is really a complete and utter failure.
And they lost.
It's -ethical- to flag comments that are false, off-topic, or do not contribute to the discussion; not ones of differing opinion.
If you think people hate MS, ask them specifically what they do not like. Maybe THEY CAN'T EVEN GET TO SAY WHAT THEY DO NOT LIKE about the product and discomfort grows into hate?
Mostly you guys are way ahead of me in knowledge of current software tools, especially on Linux and 'open source software' (OSS -- I had to look up that one!).
But I'm doing a startup the center of which is a Web site. If people like the site, a huge if, then it could grow to be a big thing around the world. Did I mention if people like the site? People might not like the site. But if they do, then I will need to grow a significant server farm, etc.
So far I'm a solo founder and doing all the work.
I'm keeping most of the site architecture and software dirt simple. At the core of some of the server side software are two servers that have some software I wrote implementing some applied math I derived -- still, just as computing, the architecture and software are simple.
For various reasons, I decided to stand on Microsoft's software. Here is my thinking, and where am I on thin ice?
(1) I can understand that if I had the knowledge and/or staff to know some version of Linux and other OSS in detail, then Linux and OSS might offer me more power and flexibility. My concern, forever, would be that I would be getting in the business of operating systems, middleware, and tools, and that is definitely not my business. So, I'm eager to leave that work to a vendor that specializes in such things, and for such a vendor all I could see was Microsoft. So, right, it sounds like I want to pay money for my operating system, middleware, tools, etc., and in a sense that is correct. I.e., if something goes wrong, then I want an 'account executive' to call and ask for help.
(2) Sure, Linux and Unix have a long and powerful background back through Sun, Silicon Graphics, BSD, AT&T, etc. But for my time on x86 I went from PC/DOS to OS/2 to Windows XP, and along that path, each year, I thought that the OS I was using was likely the most suitable for me on x86. E.g., instead of PC/DOS or OS/2 on x86, I was not going to buy a Sun or SGI workstation at several times higher price.
(3) As of now, as a desktop OS on x86, 32 and/or 64 bit addressing, as far as I can tell, XP and/or Windows 7 look okay next to Linux and OSS, without huge advantages either way. Where am I going wrong here?
(4) There are a lot of developers writing for Microsoft, and just what the 'platform' is is fairly clear, e.g., the .NET Framework of some version 2, 3, 4, 4.5 on Windows XP, 7, or Server. So there is some definiteness to the platform. On Linux I would have to learn about the versions of the different 'distributions'. I don't even know what would be involved.
Due to the definiteness and the large number of developers, on the Internet it should be relatively easy to get answers to questions for the Windows platform. Is this roughly correct?
(5) My biggest complaint with Microsoft is the quality of the technical writing in their documentation. It looks like the documentation is from some nerds who know the software but have no idea how to explain it to others, and writers who know spelling, punctuation, and a little more and are highly diligent but, still, don't know how to explain software. My fear is that bad technical writing is common in computing and that in the world of Linux and OSS the situation would be worse. E.g., for serious questions, maybe commonly the solution is just to read the code. Is this roughly correct?
(6) So far I've been pleased with the reliability of the Microsoft software I've been using -- XP SP3, .NET Framework 4, Visual Basic .NET, ASP.NET, ADO.NET, IIS, and SQL Server. And from some of the large, busy Web sites standing on the Microsoft platform, I suspect that Microsoft will be able, maybe if at times I talk to them one on one, to provide what I need from them for my site. Of course then I will be using Windows Server and developing on Windows 7, with XP out'a here.
(7) The Microsoft software is from, right, Microsoft, and since they wrote it and sell it, my understanding is that they support it. Actually, via some forums, I've already gotten some quite good support for free from some Microsoft people apparently assigned to give serious answers to serious questions. But it's been a while since I had such a question. But in the future I anticipate questions, from me and/or my staff (if my site is successful enough for me to have staff), and then I will want the option of getting high quality paid support for serious questions. So, maybe my site is crashing; I don't know why; and I want to call for serious help. I suspect that I can get such help from Microsoft (even if I have to pay) but am unsure just what the situation is for Linux and OSS, where, e.g., where's the company with account executives?
(8) So far I like the Windows Common Language Runtime (CLR) and .NET Framework and the managed code of Visual Basic .NET, C#, etc. So far I'm writing just Visual Basic .NET and am quite happy with it; as far as I can tell C# offers little or nothing more but has just a different flavor of syntactic sugar, one related to C and that I don't like. I believe that, compared with C#, Visual Basic .NET is easier to read on the page, is less prone to bugs due to being more verbose, and will be easier to teach to new staff. Where am I going wrong?
For the world of Linux and OSS, I don't know what programming language I would use that I would like as well as Visual Basic .NET. What would the options be?
(9) From some of what I've seen of high end server farms on the Microsoft platform, the automation of system installation, configuration, monitoring, and management is excellent, but my view has been only from, say, 1000 feet up. If this is so, then I'm impressed. Where am I going wrong?
As a software developer, everything is harder on Windows. I have three choices:
1) Ignore Windows and ignore >50% of desktop market share.
2) Ignore non-Windows OSes, because things get easy if you do everything the Microsoft way.
3) Endure the pain of porting to Windows, which is greater by orders of magnitude than the pain of porting to Android, iOS, or small-market-share OSS OSes like OpenBSD. It's like task one is "support the entire universe except Microsoft," and task two is "support Microsoft."
In my previous life working with telcos, I once tried to teach a particularly huge customer how to use CVS to manage configurations across a 10+ machine cluster. They didn't see any value in it, so they stuck to their good old process of SSHing into each machine individually, "cp config.xml config.xml.20131022", and then editing the configs by hand. It didn't take too long until a typo in a chmod command took down the whole thing (= the node couldn't take down a network interface anymore, so failover stopped working), and they spent several weeks flying in people from all over the planet to debug it... and they still didn't learn their lesson!
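To make the point concrete, here's a minimal sketch (mine, in Python -- hypothetical names, obviously not their actual tooling) of how little automation it takes to beat hand-run cp and chmod on every node: a consistent timestamped backup plus a sanity check that refuses a config that isn't even well-formed XML.

    import shutil
    import xml.etree.ElementTree as ET
    from datetime import date

    def backup_and_check(path="config.xml"):
        # The same timestamped copy they made by hand ("config.xml.20131022"),
        # just done the same way every single time.
        backup = "%s.%s" % (path, date.today().strftime("%Y%m%d"))
        shutil.copy2(path, backup)
        # Refuse to go further if the edited file is not well-formed XML;
        # a stray typo gets caught here instead of in production failover.
        ET.parse(path)
        return backup

A chmod typo wouldn't be caught by an XML check, of course; the point is that every manual step you script is one less place for fat fingers.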
He said he saw the whole dev team just power off and go home at 11am, followed quickly by the rest of the employees. At that point, there was nothing they could do.
The craziest thing is that it went on for so long. No one caught it until their own traders saw it come across Bloomberg and CNBC. They actually thought it was a rival HFT and tried to play against it.
The only people that came out of this ahead were aggressive algos on the other side and a few smart individual traders. A lot of retail guys had stop losses blown through that normally would never have been hit. After trading was halted they set the cap at 20% loss for rolling back trades. So if you lost 19% of your position in that short period of craziness, tough luck.
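To make that cutoff concrete, here's a sketch (in Python; this is my reading of the rule as described above, not the exchange's actual procedure) of why a 19% loss stayed on the books:

    def trade_rolled_back(reference_price, fill_price, cap=0.20):
        # A fill is busted only if it deviates from the reference price
        # by the cap (20%) or more; anything under that stands.
        deviation = abs(fill_price - reference_price) / reference_price
        return deviation >= cap

    print(trade_rolled_back(100.0, 81.0))  # 19% move -> False, tough luck
    print(trade_rolled_back(100.0, 79.0))  # 21% move -> True, busted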
The Knight computer error was spectacular and catastrophic but us humans have a longer track record of making catastrophic financial decisions in the market.
Having said that, we deployed a system that was mostly automated, with a human operator to oversee investments and to shut it down if any out-of-the-ordinary transactions (based on experience) were taking place. She happily sat there approving the recommendations even though they were absolutely outside anything we'd ever generated in the past, and bled the accounts dry in one evening. So sometimes, even with a human observing, you're still boned.
Cool - all you have to do to get away with financial crimes is create a system with no protections against breaking the law.
That is just painful to read. How many times do we hear a company couldn't figure out how to migrate code properly? Do any software engineering programs teach proper code migration?
Next time a manager questions money spent on integration or system testing, hand them a printout of this SEC document and explain how much the problem can cost.
I think you'd be surprised at what happens in large companies. I went through four, count 'em, four major releases with a company, and each time the failure was in load balancing and in not testing the capacity of the servers we had prior to release.
Even after the second release was an unmitigated disaster, the CTO said we needed more time to do load testing and make sure the servers were configured to handle traffic spikes to the sites we were working on. It happened again, TWICE, after he said we needed to do this.
You would think something as basic as load testing would be at the top of the list of "to do's" for a major release, but it wasn't. It wasn't even close.
*Backing away is when a market maker makes a firm offer to buy or sell shares, receives an order to execute that transaction (which they are ethically and legally obligated to do), and instead cancels the trade so they can trade those shares at a more favorable price (capturing enormous unethical profits in fast-moving markets while regulators did virtually nothing to enforce the rules in a meaningful way).
Learn more: http://bit.ly/1ddUzWP
> Sadly, the primary cause was found to be a piece of software which had been retained from the previous launchers systems and which was not required during the flight of Ariane 5.
Seems like as a rule, they're likely to cause instability, and I have a hard time seeing any benefits in them.
Current antibiotics are themselves mostly derived from "natural" chemicals emitted by microorganisms so that those microorganisms survive natural selection to go on reproducing in a world full of bacteria. Many of the early antibiotics, for example penicillin, are derived from mycotoxins produced by fungi. Human medicine can use chemicals from fungi for protection against bacteria because human beings and all animals are more closely related to fungi than either fungi or animals are related to bacteria, so fungi have a biochemical similarity to animals that makes it likely (although not certain) that a mycotoxin that is lethal to bacteria will be relatively harmless to human beings.
And this is the way forward to developing new antibiotics. As we reach a deeper biochemical understanding of the basis of all life, we will eventually understand the differences, which are biochemical differences at bottom, between human beings and bacteria, between human beings and protists, between human beings and fungi (yes, there are some systematic differences between animals and fungi) and between human beings and all other harmful microorganisms. Only human beings have science labs and clinical research studies to come up with new defenses against the thoughtless, largely immobile threats from other living things. We can form hypotheses, test those hypotheses rigorously, and perhaps make some lineages of harmful microorganisms as extinct in the wild as the smallpox virus and rinderpest virus now are. The intelligence that the hominid lineage has evolved gives human beings advantages that bacteria will never possess.
For the first time in the memory of anyone alive today, we're going to see medical science step backwards. We're going to be more vulnerable tomorrow than we are today, and we did it to ourselves.
The other big problem we face is that certain antibiotics are like steroids for farm animals. I believe that they kill the bacteria in the gut of a cow or pig that signal when it should stop eating, resulting in larger stock (or something like that). This increases the exposure of bacteria to the antibiotics, making things less safe for all of us.
But drug companies and farmers aren't to blame for our antibiotic situation. Capitalism encourages profit and doesn't ask questions about how it's made. Corporations have a fiduciary responsibility. And doctors can't be faulted for overprescribing antibiotics either. A sick person is the ultimate debugging task, and most doctors will try anything that could help the patient. I don't know if this problem has a good solution.
If we research new antibiotics, then bacteria will eventually evolve to resist them (kicking the can down the road). If we stop using antibiotics, then more people will suffer, potentially unnecessarily (destroying the village to save the village). Trying to fight evolution is a losing game. I am, however, confident that someone somewhere will come up with a breakthrough in the next few decades that will allow us to temporarily solve this problem once again.
I'm not saying opiate addiction isn't a genuine problem, but it's a largely individual one. There are no widespread negative externalities to prescribing opiates to a patient. Antibiotics, on the other hand, present a classic limited-pool resource allocation problem (the same species of problem as the tragedy of the commons).
It's antibiotics that should require a three-part pad, with one copy sent off to the federal government and investigations into over-prescribers -- not painkillers. It's antibiotics that should be subject to international treaties governing their distribution and use -- not painkillers. It's antibiotics that should have criminal penalties for misuse -- not painkillers.
I am more optimistic than that.
Sure: as long as antibiotic resistance is crucial for bacterial survival, bacteria have a natural need to evolve it. And, they will.
But, this will come with a genetic cost to the bacteria.
The reason that antibiotics work is because they are attacking some function that has deliberately evolved, through natural selection, to be like that. Antibiotic resistance must literally cost bacteria some efficiency in some of their other functions.
This cost was originally such that the bacteria would die. Fantastic. But note: we wouldn't actually benefit from all bacteria dying at the mention of the word antibiotic, and some bacterial resistance is good for us.
Under normal circumstances, bacteria that don't need to carry around antibiotic resistance with them will most likely have a lower genetic cost and thrive better. This may be why we have seen MRSA predominantly in hospitals and rarely in the 'wild'. (If MRSA was necessary or not costly, all SA would be MR all the time).
This gives me some hope - that antibiotic resistance is balanced, genetically forcing bacteria to be less effective in other ways and less competitive in other circumstances.
We humans are not out yet.
The profit motive is almost as blind a watchmaker as natural selection. We've built an environment which encourages bacteria to develop antibiotic resistance. Let's structure a pharmaceutical industry in which antibiotics are profitable.
The problem appears to be myopia. Antibiotics make money for a few weeks, chronic diseases for a lifetime. Fortunately, finance long ago solved the temporal shifting of incentives and payoffs. We need to smooth out the lumpy, often in-the-future demand for antibiotics.
The government could tax the pharmaceutical industry, medical insurers, or the public. The proceeds would fund tax credits for the developers and/or producers of antibiotics. Alternatively, a more elaborate system by which health and life insurers incentivise antibiotic research, perhaps by issuing credit default swaps on pools of their reinsurance liabilities to antibiotic developers, could be structured.
"Defeating the superbugs" (http://www.bbc.co.uk/programmes/b01ms5c6) has a segment showing bacteria developing resistance to antibiotics.
(http://v6.tinypic.com/player.swf?file=24goih4&s=6) (Sorry about the lousy host; YouTube's content sniffing detects this as BBC property and blocks it.)
They have a slab of nutrient jelly. The jelly has sections of differing strengths of antibiotic: a section with no antibiotic, then 10x, then 100x, then 1000x. (They cannot dissolve any more antibiotic into the jelly at that point; they've reached the limits of solubility.)
They drop a bit of bacteria on the zero antibiotic section.
A time lapse camera shows the bacteria growing, and developing resistance to each section. After two weeks the entire slab, all sections, is covered. The bacteria have developed resistance to the antibiotic, at a strength that could not be used in humans.
It's an excellent, scary, bit of video.
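If you want a feel for the dynamics without the time-lapse camera, here's a toy model in Python (my own sketch; the dose numbers mirror the slab's 0/10x/100x/1000x bands, everything else is made up). The dose here only gates where the colony can spread, and rare mutations occasionally jump a lineage's resistance, so the front climbs the gradient band by band:

    import random

    SECTIONS = [0, 10, 100, 1000]   # antibiotic dose in each band of the slab

    def simulate(generations=500, mutation_rate=0.02, cap=2000):
        population = [0.0] * 50      # resistance level of each cell
        colonized = 0                # highest band the colony has reached
        best = 0.0
        for _ in range(generations):
            next_gen = []
            for r in population:
                for _ in range(2):   # every cell divides
                    child = r
                    if random.random() < mutation_rate:
                        child = child * 2 + 1   # rare upward jump in resistance
                    next_gen.append(child)
            # Crude carrying capacity: the slab only supports so many cells.
            population = random.sample(next_gen, min(cap, len(next_gen)))
            best = max(population)
            # The front advances once some cell can survive the next band.
            while colonized + 1 < len(SECTIONS) and best >= SECTIONS[colonized + 1]:
                colonized += 1
        return colonized, best

    print(simulate())   # most runs end at (3, ...): every band colonized

It's a cartoon, not biology -- real selection kills the susceptible rather than merely fencing them in -- but the band-by-band ratchet is the same shape as what the video shows.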
They've reduced MRSA from 18-24 hours down to 6 hours. Salmonella from 24 hours to 30 minutes. Mycobacterium tuberculosis from 21 days to 1.5 hours. Etc.
Stopping these problems before they get the chance to spread is how I believe these infections will be slowed, as antibiotics become less effective.
I was under the impression that in a population of bacteria, genes express themselves in any number of random ways. If we expose the bacterial culture to antibiotics, the bacteria susceptible to the antibiotic die, while the resistant bacteria live on, free to reproduce, leaving the descendant bacteria with resistant characteristics...
My question to the HN scientists is, doesn't this just destroy some subset of bacteria? Is new genetic information produced that did not exist before? Taking this trimming of the tree down the line, wouldn't the "superbug" antibiotic-resistant bacteria have been created/survived and thrived anyway? Or does the antibiotic exposure actually cause "the bacteria to want to survive", in the sense that exposing them to antibiotics leads to more rapid mutation of descendants? Why wouldn't the antibiotic-resistant bacteria be created with or without overuse of antibiotics? Isn't the spectrum of the genetic tree just trimmed?
MRSA was a bit of a wakeup here in the UK, but the main 'solution' was a concentration on cleaning hospitals rather than developing new antibiotics.
It is my opinion that unfortunately it will require high profile people to start dying before support is galvanised.
It would seem from the outside that HIV/AIDS started to be addressed when superstars like Freddie Mercury started succumbing.
If all else fails, I guess we'll depend on the cycles of nature's adaptations and break out a new set of antibacterials every 50 years or so depending on the resistance trends we see crop up and hope we don't lose too many humans in the process. At any rate, I'm glad lots of smart people are working on this problem.
To my knowledge, bacteria don't have an agenda, they don't want to survive and they certainly don't change in order to survive. Instead, they change at random, which sometimes helps an individual to survive and sometimes not.
Are people who have avoided prescribed antibiotics in a better position than those who haven't? Then, what is the effect on such people of the antibiotics taken in by eating meat from animals which have been given antibiotics? Have the people who have been avoiding them completely wasted their time?
Hopefully we'll be able to create some alternatives, but let's stop the bleeding if we can.
All citizens will be required to carry a smartphone or other GPS tracking device that reports their location every 5 mins to a central database run by the National Security Agency. When a new infected person is discovered, National Bureau of Health agents will contact everyone who was close enough to the infected person to have possibly transmitted (given or received) the infection over the previous two months. Those people will be tested, and infected people will be incarcerated in National Health Concentration Centers for healing. They will stay there for life, or until no longer infected.
Will it come to this?
What about mandatory death penalty (plus confiscation of all family assets) for anyone who gives antibiotics to an animal or who supplies antibiotics to a farmer?
I believe this is quite likely a worse issue than the other problem: the blatant overprescription of antibiotics by weak and obsequious family doctors looking to defend themselves from lawsuits and approbation from wealthy and stupid patients with colds and coughs, which has also accelerated resistance.
To put the above into simple English: Our problem isn't that we give antibiotics out like candy, it's that we give them to the elderly, people with AIDS, the poor, etc. This massively increases the chance of antibiotic resistance developing.
What can we do about it? To start with, run the numbers, make some cost-benefit calculations, and think about the problem. There may be technical as well as social solutions.
Not thinking about the problem, making it harder for the healthiest people to get antibiotics, and pretending that you are doing something is also a viable option. It's what we're doing now.
To me this seems like the big problem here. Antibiotic resistance is an inevitability regardless of our usage rates - there's too much selective pressure for it not to happen. To co-opt the Red Queen hypothesis slightly, we have to constantly be developing new antibiotics just to keep pace.
I suspect this problem will self-correct eventually, with the unfortunate side-effect that the cost of effective antibiotics will skyrocket for a while.
That being said, we're obviously not doing ourselves any favors by dispensing them like candy, especially to the agricultural industry. It definitely encourages cycles - Effective antibiotics are rare and therefore profitable so tons of $$ goes into R&D -> Lots of new antibiotics are created -> price goes down because there's so many options/patents expire -> Overuse -> Resistance develops quickly and we're left with few effective options.
^1 who names these things and are they purposely trolling conspiracy theorists?
No more antibiotics, for anyone.
Recently I had a wisdom tooth extracted. The dentist prescribed antibiotics, but (unknown to him) I didn't take them. I healed fine. And so it was in the 20's and before that. Plenty of people survived and thrived before antibiotics. And life will go on when we don't have them anymore.
No doubt these super bugs have had to give up certain advantages to attain what is (for their species) a very specialized survival mechanism. Which means that if we ease off of the drugs for a while, the bacterial populations will compete, and the less drug resistant ones will thrive. Then we can use our drugs again. Or that's the idea.
What I'd really like to see are the internal assessments by big pharma of these gram-negative bugs. Why isn't it economically feasible to create new drugs for them? This article makes it sound like there is a large and growing market of suffering people who'd be more than willing to spend every last cent for a pill to make the pain go away. And if the prospect of people willingly bankrupting themselves for drugs doesn't pique big pharma's interest, I don't know what would.
The only argument I can think of against antibacterial cleaning products would be that our bodies get less exposure or "practice" against ordinary bacteria?
Perhaps silver in nanoparticle form will make a comeback, as bacteria don't seem to be as able to survive the cell wall disruption that silver can cause.
Not sure where the thread is but the story here: http://www.nature.com/news/silver-makes-antibiotics-thousand...
The article's cool though.
If this is the end of the Age of Antibiotics, I hope it's the beginning of the Age of Probiotics: working with the good bacteria and developing more targeted strategies for taking out the bad. Snipers, not nukes. (I'm no scientist, but perhaps learning from how good bacteria fight off bad bacteria is a good place to start.) http://www.sciencedaily.com/releases/2010/03/100324094717.ht...
This seems like another area where libertarianism is crashing against the rocks of reality-- as socialism, communism, and all other political ideologies have already done. I have a profound sense that all political ideologies are failed, and that we're entering a post-ideological age of pragmatism driven by either populism, oligarchy, or technocracy... take your pick.
> bacteria will always change in order to survive.
Excellent article. Everyone should read it from top to bottom twice. Forward it to your entire network. This is a serious matter.
However. I really cringe when I see scientists get loose with language like this. I know he knows perfectly well how evolution works. This is an attempt to make it simpler to swallow for those who might not be up to speed and, perhaps, come to the discussion lacking a minimal scientific background to be able to rationalize it. I get it. Among that population the misrepresentation of the driving mechanisms behind evolution can actually do more harm than good.
Taken far enough you end up with: "Oh, so you mean to say that a monkey WANTED to survive and CHANGE in order to become a human?" Which makes you sound like an insane lunatic, of course.
The mechanism is dead simple: Out of a pool of organisms exposed to an environment some die and some survive. This "environment" can be anything, from an antibiotic at the bacterial level to a flood in a canyon. Of those who survived some did so due to blind chance. Others because they might possess a characteristic that helped them survive the environment. Survivors mate and reproduce. Some mutations occur. The cycle repeats with the new population. If the environmental "attack" (antibiotics, the flood, whatever) remains the same, over time populations will develop that will have better and better resistance to their particular challenges. This is the brutally simple result of the demise of those who simply could not handle whatever was dished out. Over time either the entire population is killed off and game over or those who were resistant, for whatever reason, will --without intent, goals or knowledge-- help evolve populations equipped with increased resistance to what is trying to kill them.
In evolution there is no "wanting" to do anything. There isn't even the idea of wanting to survive. There is no struggle for survival. There is no conscious desire to change or to become something else. It is brutal and simple. Some die. Some don't. Those who survive repeat the cycle. Eventually either all die or you end-up with one or more new species/variants that got past the killing spree and emerge resistant to whatever ailed them. And it goes on. Challenge after challenge.
Part of me wishes people would have a better handle on this very simple scientific fact so we could move on to more important topics. We went to see Richard Dawkins at Caltech this weekend. He mentioned that in the US some 40% of the population think the earth is 6,000 years old and reject evolution. What they reject might very well be what ends-up killing them.
This issue of bacteria evolving past our ability to concoct antibiotics is a very serious one. I've always believed we are all going to be killed-off by something microscopic that nobody is going to see coming. The potential is there for hundreds of millions of people to die over a short period of time. Airplanes will contribute to that greatly, helping take bacteria all over the world before we even realize what's happening.
That's why I don't understand why we don't get behind this --as a planet, not just a nation-- with great force. I see virtually no use for our military and that of other nations. Can't we lobby for the elimination of the horrible waste that is the maintenance of massive military forces and, instead, devote those funds to more worthy causes? Imagine if we, as a nation, devoted half our current military budget to honest medical research. I am not one for huge government programs, but there would be ways to do such a thing without having government bureaucracies devolve the thing into a cash burning furnace.
The point isn't the details but rather the idea that something like this should be priority one. We are looking at the possibility that within the next 25 to 50 years there could be a massive antibiotic-resistant bacteria outbreak that takes out a huge chunk of the human race. We need to be ahead of that event, not behind it. And it is far wiser to throw billions of dollars into medical research of almost any kind rather than into making the latest whiz-bang how-to-kill-more-people-per-round machine.
Utopia. I know. Sad.
EDIT: I neglected to add how I would explain evolution to a general audience without resorting to "want" and "desire" type analogies. In other words, don't be critical without offering a solution. Well, I think it's simple; I sort of did:
When faced with challenges organisms either excel or die. Those who excel go on to reproduce. In reproduction there is mutation: small changes in each and every new organism. Reproduction does not produce clones. Reproduction results in a population of new and distinct individuals with some of the traits of their parents and some new ones. Their offspring, if faced with the same challenges, will just the same survive or die. If none survive, the population goes extinct. Otherwise, over time, the only organisms who will continue to survive are those who continue to carry the traits that made their ancestors survive. This repeats over time and across challenges.
That's not the elevator pitch, of course. So here is that one, applied to bacteria in particular:
When attacked by antibiotics some bacteria survive. These reproduce and produce new bacteria that might carry on some of the traits that allowed the parents to survive. Random mutations might also make some members of the new population even more resistant to the same antibiotics. The process repeats over many generations. Over time new populations emerge with immunity to the antibiotics that killed so many of their ancestors.
The more we expose bacterial populations to wide ranges of antibiotic challenges the greater the effect can be. Over time populations will evolve that will be resistant to anything we have on the shelves to throw at them.
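That whole cycle fits in a few lines of code. A toy sketch of mine (all numbers made up), just to show there's no "wanting" anywhere in the loop: some die, some don't, survivors reproduce with undirected mutation, and the rising pressure does the rest:

    import random

    def one_generation(population, dose):
        survivors = [r for r in population if r >= dose]  # some die, some don't
        children = []
        for r in survivors:                               # survivors reproduce
            for _ in range(2):
                # Mutation: a small undirected change, no goal, no desire.
                children.append(max(0.0, r + random.gauss(0, 0.05)))
        return children[:10000]                           # environment's limit

    population = [random.uniform(0, 1) for _ in range(1000)]
    for step in range(50):
        dose = 0.5 + step * 0.01    # the antibiotic pressure keeps rising
        population = one_generation(population, dose)
        if not population:          # the other outcome: everyone died
            break
    # Whatever is left now tolerates a dose that killed half the founders.

Either the population goes extinct or what remains is resistant -- exactly the two endings described above, with no intent anywhere in between.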
Then for some kind of low annual fee I could ship things in and out as needed.
This service would include pre-scheduled shipments of holiday decoration.
The problem I have is that I forget what is in my attic. On a few occasions I've purchased something only to find out I already own one. It was just buried in the attic and I forgot about it. If I try to buy something on Amazon, this service would remind me that I already own it and ship it to me.
Besides the attic stuff, I also have small random, rarely-used things that I know I'll need in the future, but don't know where to store them so I'll find them in the future.
Someone once suggested that I just keep a running list of items near the attic door. I tried it, but didn't keep up with it.
It would be nice to set some kind of expiration of my stuff as well. If I don't request an item from Amazon Attic in 18 months, it can be sold. Maybe that's a way to offset my fees.
Another idea... This could have a social aspect (what doesn't these days!?). I could give select friends access to my personal Amazon Attic catalog and they can borrow something, again for a low shipping fee. Amazon will send them a friendly email to return it and then charge them eventually if they don't.
(YC, here I come.)
If you are using this as a long-term storage solution you have to be careful because Amazon charges, "A semi-annual Long-Term Storage Fee of $22.50 per cubic foot will be applied to any Units that have been stored in an Amazon fulfillment center for one year or longer...Each seller may maintain a single Unit of each ASIN in its inventory, which will be exempted from the semi-annual Long-Term Storage Fee."
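Running the numbers on that (the $22.50 rate is from the quote above; the 8 cubic feet is my made-up attic load) shows why FBA-as-attic gets expensive fast:

    def annual_long_term_fee(cubic_feet, semiannual_rate=22.50):
        # The fee is charged twice a year on anything stored a year or longer.
        return cubic_feet * semiannual_rate * 2

    # Two decent-sized boxes of attic stuff, roughly 8 cubic feet:
    print(annual_long_term_fee(8))   # 360.0 -> $360/year, before any shipping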
Are there any mobile apps for scanning barcodes on books and automatically building your Amazon catalog?
When the object in question hits a low enough dollar value that the opportunity cost of making a buck off it exceeds the value of your time, why not donate it to Goodwill instead. :)
I am lost for words.
And you can't use the space it takes up, which is probably the most expensive thing about old stuff. If you pay $2000 a month for 1000 square feet, every square foot costs you $24 a year. An old PC taking up 3 square feet for 5 years costs you $360.
And, sure, it probably was going to be empty space. But we do need empty space, just as we need white space. All the clutter has a psychic cost.
Too bad, because this sounds handy. Kind of wish they had even a halfway decent competitor, though.
I sold a laptop though which is the only item I'm worried about being returned. Luckily I listed it as not having a battery and not having a hdd so it's already listed as not in working condition.
You start by finding your product in the store, and they give you prices for different condition levels. You pick a level, checkout, ship your items for free, and await receipt and review. If accepted as the condition you picked, you get an Amazon gift card for the amount. They might even upgrade your items to a higher condition.
If not accepted, your items are returned, free of charge. The only risk is the waste of time.
A month ago, I traded in two nearly-3-year-old iPhone 4's. I listed them as "Good". Both were accepted and one was upgraded to "Like New" for $20 more. I got $380 total, which I was extremely happy with.
Or am I missing something and this wouldn't work?
I looked at the info on Amazon's website and I still have a couple questions.
(1) Amazon charges a fee of something like $0.42 per pound when shipping. Is this just for supersaver shipments? Or does it also apply when Amazon collects from the customer for standard or expedited shipping?
(2) I see Amazon charges fees for storage and shipping, but I don't see where they take any percentage of the sale. Am I missing something?
My simple tips for selling online:
New/Like New -> Amazon
Used/poor/missing things -> eBay
When the top news on HN is how to make $5 selling used computer cables on Amazon, you know coding is dead.
> Because I'm just a single person and I don't want to offer up my spare time doing support or dealing with feature requests that I don't care about myself [...]
Sort of defeats the point of open source, no? Sublime Text's author could argue a similar case about keeping it closed source himself.
Hello, thanks for the interest. I hope some of you will take the time to contribute in the form of pull requests.
I just updated README.md with a screenshot, here's a direct link: http://i.imgur.com/VIpmjau.png
As it stands, it's not likely that many people will build a thing they've never even seen a screenshot of.
> This is a fork of visualfc's qt4 bindings, and several critical bugs are inherited along the way. Until these bugs are fixed, this package is not recommended for any real use. I don't have any time to actively work on this project, but I'll keep reviewing and merging pull requests.
I guess GUI programming in Go isn't really at a point you could consider 'stable'.
1. I couldn't compile completion. There is no build.go in the build directory.
2. I tried compiling lime anyway, but it required go-qt5 even though the repo page says that's optional.
There are some awesome text editors out there that have features I wish were in Sublime Text but sadly aren't.
For example, Rob Pike's acme editor has some really awesome ideas, like mouse chording (contextual right click, middle click command execution, etc.) and 9p vfs interfaces that allow plugins to be written in any language.
And then there are little experimental editors like Conception, with great ideas that are worth checking into.
Sublime Text gets a lot of things right, but in my experience using it and using acme, I often find myself wishing there were a way to take both editors and mash them together, because using one often makes me miss the other.
If Lime is intended for the Mac, it could do worse than starting with the Kod code base.
    $ go get code.google.com/p/log4go github.com/quarnster/parser github.com/quarnster/completion github.com/howeyc/fsnotify
    package github.com/quarnster/completion
        imports github.com/quarnster/completion
        imports github.com/quarnster/completion: no Go source files in /home/oscar/go/src/github.com/quarnster/completion
    $ go version
    go version go1.1.2 linux/amd64
How did you guys install lime?
There's only one feature that I feel could benefit ST3 right now, and that's the ability to move files from the sidebar.
But even then, I can't hate on the creator because that seems a bit outside the purview of a text editor in the first place (IMO that moves into IDE territory). I'd love it, and Sublime pretty much is my IDE already (what with build commands, project files, source control integration plugins, etc), but ultimately I'm fine with it not being there because I have a perfectly capable file manager open in a window right beside it.
I don't mean to rag on OP, I absolutely support the mentality that if you feel like something is broken you should fix it: however, in this instance I just feel like it ain't broke.
Try to solve new problems, don't copy the solution to existing problems.
Best of luck with the project.
that said, i'm looking forward to giving this a go, as an open source sublime text would be perfect.
We have TextMate, we have Sublime Text, we have an endless supply of code editors and IDEs... do we need another just because of "open source"? I can't help feeling this is a wasted effort.
Also, after going through the checklist, the app is supposed to support SublimeText and Textmate snippets, colors, bundles, bindings and more. Seems like the dev might have been better off contributing to these projects rather than trying to push yet another code-editor into an already crowded market.
The question, though, is support. You have a 3-year-old Mac of some flavour and upgrade to this OS; what about the issues you may have?
Another aspect is that by making this level of OS available to all supported models, warranty-wise they make supporting systems a little easier, and they will very likely drop support for the older OSes on those models quicker. That again makes support easier in many ways, as well as making developers' lives a lot easier, especially if they can target a single common-denominator OS and take advantage of all its features instead of restricting themselves to features common to the previous flavours.
Either way a good move. Though you can bet there will be somebody on an older flavour of the OS with some essential application that they must have which has issues. But time will tell.
Good move on many levels I'd say by Apple.
What on earth compelled Nokia to launch their new Windows tablet on the same day?
Edit: Sarbanes-Oxley, not McCain-Feingold. I had forgotten.
I'd like to burn a bootable DVD, and load it up as a non-networked offline install. Is such a thing even possible?
Does this mean I can run the new OS in a VM to check out the Apple ecosystem? Or is this only free (as in beer) to people who have an existing paid license for a previous Apple operating system?
... except before... I think it was... System 7.5? All new OSes used to be free for Mac and Apple II.
Even for free, I'm wary of upgrading my OS. Too scared to take a performance hit.
Very interested to see if Microsoft will follow suit.
iWork is more than good enough for many MS Office use cases. If you work on documents that you often don't need to exchange with co-workers, then you don't really have any barriers to switching. Obviously, financial analysts with their massive models in excel aren't going anywhere anytime soon, but real estate agents like my mother and others like her have few reasons not to switch.
Next time someone goes to upgrade Office, they are going to have to weigh an expensive software license against putting that money into a brand new computer instead, and who doesn't love having a fresh new computer?
One downside is that paid software upgrades provide a useful feedback mechanism: a release must get over the 'is this worth $100' bar to sell. I think this is probably important for iWork too. Between creating revenue versus enhancing the Mac's value and pulling users deeper into Apple-land, I think the right decision could be to go free. OTOH, that would absolve iWork from having to be good enough that users choose to pay money for it.
Still, free OS upgrades seem like the right choice.
As with a mobile OS, the services layer on top of the OS seems as important as the operating system itself. With so much of the value in owning a Mac being iLife, iWork, iCloud, maps, iTunes, the Appstore and so on, could Apple open up OS X in the way Android is open (by which I mean for inspection more than contribution and just the core, not the services)?
Seems against their culture but this takes away one of the big reasons why they wouldn't.
There's also a huge difference in what each update adds when compared to 8.1. Where 8.1 tries to fix all the initial issues with W8 (missing Start button, anyone?), Mavericks adds serious new features (an hour or more of extra battery life, compressed RAM).
The free iWork suite is a direct attack on MSFT Office. Giving it away for free will pay off long term by decreasing Office's market share. Office H&O is $220; buy a Mac/iPhone/iPad and you get the equivalent for free.
I understand the why: the chipset/CPU is 32-bit, and Mavericks is 64-bit only. Bit of a bugger. The user (my wife) won't notice though.
I'm sure the upgrade experience would be better than my Windows 8 -> Windows 8.1 experience of last week. The best way to describe that would be train wreck.
Base SDK = 10.9
Minimum Deployment Target = 10.9
Yes, I am naive like that.
Can anyone point me to an ISO of this?
The competition is already free, and Apple still has a high-security prison as far as the lock-in effect of its technology is concerned, so they're still safe.
The web equivalent would be like claiming that Chrome OS isn't open because the source to Gmail isn't available.
Google is stuck between a rock and a hard place. If they don't try to create incentives for a unified experience, they get bashed for encouraging fragmentation; if they do assert a level of control, they get bashed for not being completely open.
It's all according to the previously openly aired plan. Google keeps all of the existing code open source. Anyone who wants to build a fork can do so. Now if they want a hardware platform to run on, go find one outside the Open Handset Alliance ecosystem. It's fair game -- if a hardware partner thinks that one of Google's competitors can provide a better Android fork, they are free to leave the Alliance and go partner with that competitor. They will still get an enormous amount of code for free in AOSP. They just won't get all of the services that Google is building specifically for its own version of Android. How is any of this maintaining an "iron grip" in any way? Just contrast this with Apple where it is the sole owner of everything to do with the OS and app marketplace.
This is blatantly false. Google bought Android in 2005, two years before the iPhone was announced.
Open source doesn't require you to cooperate with anyone, it doesn't require you to give away access to APIs, it doesn't require you to do anything beyond whatever is explicitly stated in the license.
Google, Canonical, Oracle, IBM, Red Hat, SUSE, etc... aren't required to be good team players or corporate citizens. They're just required to abide by the terms of the licenses on code they use...
If Google didn't do any of this, and was totally altruistic, Samsung and others would already have completely screwed things up.
While it's certainly very much to Google's benefit, it also benefits most users because overall, Google has done a far better job than any OEM regarding user experience.
> This makes life extremely difficult for the only company brazen enough to sell an Android fork in the west: Amazon. Since the Kindle OS counts as an incompatible version of Android, no major OEM is allowed to produce the Kindle Fire for Amazon. So when Amazon goes shopping for a manufacturer for its next tablet, it has to immediately cross Acer, Asus, Dell, Foxconn, Fujitsu, HTC, Huawei, Kyocera, Lenovo, LG, Motorola, NEC, Samsung, Sharp, Sony, Toshiba, and ZTE off the list. Currently, Amazon contracts Kindle manufacturing out to Quanta Computer, a company primarily known for making laptops. Amazon probably doesn't have many other choices.
That is fairly incredible; I'm surprised it is not an anti-trust/competition issue.
Android has come a very long way in the last few years in terms of usability and design. A large part of this has been due to an increasingly uniform design language and feel. That, and the new distribution model for what are basically Android updates (Google Play Services) has made Android feel more polished and actually allowed it to stand on its own against iOS. It also means that developers like me don't have to spend nearly as much time worrying about fragmentation in the traditional sense. Each day the percentage of people using sub-ICS phones falls, and we all get one step closer to the day we can support ICS+ only.
However, companies like Amazon would force me to rewrite the maps integration, the sign-in portion, the wallet, etc... Amazon did a great job of replicating Google Maps API V1 but they have yet to mirror V2 and don't mirror the other components I mentioned.
Aside from fragmentation and developer sanity, the article mentions another key point here:
"[M]any of Google's solutions offer best-in-class usability, functionality, and ease-of-implementation."
Exactly! Google APIs are not perfect, and there are bugs (like when Google Maps broke map markers on high-resolution phones like the HTC One). But generally speaking, I'm really happy with the quality of the APIs and services. In an ideal world, Amazon and Google would work together to provide great and uniform single-sign-on APIs, great maps, etc... As it currently stands, though, I don't believe either party is interested in doing so. Prisoner's dilemma?
This is false. Google wins when more people use the Internet. Android is fulfilling its initial goal incredibly well: offer a free and open-source mobile OS to encourage mobile device proliferation.
Android is doing exactly what it was designed to do.
This statement is utterly false. In-house does not mean free.
I think it's also pretty standard to open-source the core and keep the baubles proprietary. GitHub, for example, made their git interaction library open-source but their git hosting service itself is closed, as far as I know.
If you want to compete with Google, using Android poses a choice: if you make Google-branded Android devices that use Google's proprietary apps, you will have to give those apps up in order to use Android with other ecosystems.
Thirdly, if you want to use the Google ecosystem in a product, you have to use all of it. You can't substitute someone else's location services, to take one example that was actually litigated.
Google could develop Android in the open and retain the same level of control over OEMs, and I think they should.
Google appears to be inconsistent in enforcing restrictions on OEMs. OPhone OEMs also make Android handsets, despite the fact that OPhone is an Android derived product. Maybe that arrangement pre-dates Google's current policies.
It's understandable why Google would lock people out of seeing the back end of their closed apps. But you have to look at the long-term implications of them slowly removing support for AOSP apps. As Google continually pushes out fantastic products that tie in so well to the mobile experience, why would any user or developer want to have/develop [for] anything else? As this power grows, Google can strong-arm phone manufacturers into developing hardware/features/etc. to work with what Google is developing. Manufacturers have to sign contractual agreements to get the top version of Android and are then locked in to keep up the good terms. Google is outsourcing the hardware manufacturing to other companies and ensuring that if a user wants a good phone, they will be using Google's services.
Many people here are claiming any company can leave Google's garden like Amazon did. While some companies may be able to do that, I'm struggling to think of one with the technological background, the money to invest, and the appetite for risk needed to try. Amazon has a huge assortment of media that it can toss at users of its hardware. Other companies don't have a differentiating factor or the software development capability to make a truly competitive product that could drive people away from Google-supported Android. Just look at how much Microsoft, a software giant, is struggling to gain any shred of market share.
No executive in any reasonable company is going to propose investing billions to squeeze into the highly competitive mobile OS market. It's a huge risk that only a startup could swallow, and yet few startups could even raise the money required to topple the Google-supported Android market.
What the future is starting to look like is the one Google was initially afraid of: a Draconian future where one company, one device, one carrier would be [the] only choice. As Google gains more power, the open source part that Android users love is going to slowly disappear. This may or may not happen, and there are many variables that could prevent it, but it is the future that would bring Google the highest return, and that is the goal of all publicly traded companies.
To be successful on mobile you also need a fairly extensive layer of services. Some of those (web, mail and so on) are easy to bolt together, but others such as maps and app stores are far harder and are about data and commercial deals as much as they are about software. While it would be wrong to say that these services can't be opened up, in many cases doing so isn't as straightforward as sharing source code.
It doesn't feel as if Google has changed so much as what it means to be a mobile OS has.
As a user I'm happy that Google is making sure that I can hop between device manufacturers without losing my apps or functionality; if every OEM rolled out their own app store and removed Google's, you would be locked in with the OEM. Now you can safely change to a different phone, and they don't mind you downloading the Google apps when using an alternative ROM.
Android is open source, but does that mean you are not allowed to make money off it by providing closed-source apps and services? Many open source companies do that. The work that went into Android is freely available to competitors. Lots of kernel enhancements went back into Linux, and now you have Ubuntu Touch and Firefox OS, both based on the Android kernel, which in turn is based on Linux. How cool is that?
It's already kind of like Windows, no? It runs on hundreds of different devices. It's often bloated by OEM software that people hate. It's prone to security holes. It's slow and clunky unless you run it on the latest hardware. It bends over backwards for compatibility's sake. It's more and more closed source...
Android is the mobile Windows of the '90s. I hope Ubuntu Mobile will be successful.
When the iPhone debuted, no doubt Google sensed the impact, and Apple's ability to create an effective closed ecosystem had already been proven with iTunes. I believe that Google wanted to undermine the market long enough to understand it. True enough, "android winning" was not the same as "Google winning," but it did mean everyone else "losing." I believe that for Google, Android started as a strategy in search of a goal. It was a smokescreen to prevent Apple from taking a dominant position by default. As the data poured in, they began to understand how to leverage it, and the Nexus line became an expression of such understanding, working to establish more control, and hopefully emerge from the smokescreen they had created.
I'd fully support their modules that connect to the cloud servers being open source / GPL / etc, but to expect them to open them up to unauthenticated requests is untenable and leaves them way open to abuse / lack of rate limiting / making the service a bad time for all involved.
This is the future of smartphones. You pick a phone, and by doing so you pick the walled garden you're going to be most comfortable playing in, pure and simple.
"Since the Kindle OS counts as an incompatible version of Android, no major OEM is allowed to produce the Kindle Fire for Amazon. So when Amazon goes shopping for a manufacturer for its next tablet, it has to immediately cross Acer, Asus, Dell, Foxconn, Fujitsu, HTC, Huawei, Kyocera, Lenovo, LG, Motorola, NEC, Samsung, Sharp, Sony, Toshiba, and ZTE off the list."
Google Apps and APIs are fine and good, but I don't think any company should dictate to an OEM what products they can make for other companies.
This seems like a terrible situation for users. Can someone with a Samsung smartphone confirm this?
If this is the case, how are the apps organized when you first buy the phone - are they all in one big apps list?
It seems that the main problem is the gatekeepers who manufacture phones.
Here's a different perspective:
This is why Microsoft needs to keep building their own hardware like the Surface. As time goes on, if Microsoft does it right, Surface is going to be the best Windows experience. At least, I would hope so.
But both Windows and Linux are more than capable of getting this all sorted. As Google showed with the Nexus 7, focus is all that's needed. It's just harder for Microsoft, considering everything they need to juggle.
Edit: Fun fact: Apple's own Boot Camp drivers disable USB selective suspend on the 2013 Air! Check out powercfg /energy for more fun :)
Edit 2: The Surface 2 uses a Tegra 4 SoC, doesn't it? Microsoft is still limited by Tegra's power characteristics, as far as I can tell from Anand's review. So the integration story is better, but still no match for Apple.
If you want a device that delivers maximum battery life for light web browsing, there's no question that you should get something with an Apple logo on it.
Except the top two champs on the Anandtech chart - champs which are way ahead of a fairly close pack - don't have such logos. If one is honestly trying to illustrate the simple point that Windows has a power problem, it's a strange conclusion to draw.
>The screen is somewhat lower resolution
No, 1920x1080 isn't only somewhat higher than 1366x768; it's 1.97 times the number of pixels, and the panel is PLS, unlike the TN in the MBA. The display is the component that eats up the most battery when doing things like Wi-Fi browsing; even the battery life of the MBA scales heavily with the brightness level.
>not touch capable
It also has another, separate layer for the Wacom digitizer, which has to be constantly emitting an electromagnetic field, and that takes its toll on power consumption.
It's an i5-4250U, with a considerably lower base clock (1.3GHz vs 1.6GHz).
Another fact that all the recent articles about the Surface Pro 2 fail to mention (though it isn't really relevant to what this article is about): with the Power Cover announced at the Surface keynote, it should be able to get 11.45 hours of Wi-Fi browsing if you extrapolate the Anandtech benchmark results, or about 12.9 hours if you extrapolate from the 7:33 it got in The Verge's review. This does bring the weight and thickness of the device above the MBA, though.
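To make that extrapolation concrete, here's a rough sketch of the arithmetic; the ~42 Wh internal battery and ~30 Wh Power Cover capacities (and the underlying measured runtimes) are my assumptions for illustration, not figures from either review:

    -- Scale a measured runtime by total vs internal battery capacity.
    -- Capacities in Wh are assumed, not taken from the reviews.
    withPowerCover :: Double -> Double
    withPowerCover hours = hours * (42 + 30) / 42

    -- withPowerCover 6.68 ~= 11.45 h (the Anandtech-based figure above)
    -- withPowerCover 7.55 ~= 12.94 h (The Verge's 7:33 = 7.55 h)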
Battery life varies greatly between e.g. Google Chromebook systems, running the same software (and between windows systems for that matter).
Some of this is to do with the power usage of the CPU, and whether video decoding is done in low power hardware, or which wireless chipset is used. But just looking at the power usage manager on your Android phone will tell you that the screen uses most of the power.
Windows hardware varies from high-priced ultrabooks (where all is sacrificed for shininess and performance) to bargain-bin systems where using an old backlighting technology saves a few bucks.
Question for Apple users: comparing Windows laptops to your Mac, which shipped with the more aggressive power-saving settings in terms of turning the screen off when not in use?
Also there are some decent performance improvements that would mean less CPU usage (or bursts of it, which is more power efficient).
Are there any benchmarks? Or is it still behind NDA?
The charts are quite ridiculous in the article - comparing an Ivy Bridge, actively cooled laptop-tablet to a Nexus 7? Why?
BTW, the biggest difference is maybe CPU core hotplugging; this exists in Android and iOS but not in Windows RT.
Maybe misconfigurations like these are also causing more power consumption than needed.
I think the answer is that Apple really, really cares and has been extremely focused on power/performance for a number of years. It has the focus, the institutional awareness and know-how that's been built up since around Tiger, and last but not least the people on their performance teams.
That's how you get great battery life.
Now they are turning a full-fledged multi-user OS into a tablet OS. Let's make this tank into a bicycle. History tells us there will be a few painful years.
Smartphones are particularly bad at it. If anyone remembers Palm devices, they could last for more than a week (!) on one charge. With my Nexus 4, even with light usage, I've gotten into the habit of plugging it in whenever possible; it drains that fast.
We all love feature-rich devices, but I think, currently, the promoted features and hardware are way ahead of the battery capabilities. And it's not OK.
Which seems to point to many apps (embarrassingly mostly MS apps) setting the OS timer interval to something like 1ms (from a default of 15.6ms) and not resetting it.
Anyone with any experience with the Windows 8 timer care to weigh in on this issue? I'm well out of my depth when it comes to processor / kernel level power tuning.
Of course, if what you care about is the efficiency of the PM software, looking at total battery life probably isn't the most meaningful factor either, as they all have different-sized batteries, so what's important is power drain as a function of time.
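For example, a minimal back-of-the-envelope normalization (the capacity numbers below are invented for illustration, not measurements):

    -- Average power draw in watts = battery capacity (Wh) / runtime (h).
    avgPowerW :: Double -> Double -> Double
    avgPowerW capacityWh runtimeH = capacityWh / runtimeH

    -- Two machines with the same 9 h of battery life are not equally
    -- efficient if one needs a battery twice the size:
    --   avgPowerW 38 9 ~= 4.2 W   vs   avgPowerW 76 9 ~= 8.4 W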
We see the comparisons: Surface <-> iPad, OS X MBA <-> Win7/Vista MBA
Surface has a different architecture than the iPad, so the battery difference is easily explained; and maybe driver support is just less than stellar, meaning the HW isn't as efficient and/or doesn't scale back quickly enough?
I had an Asus laptop some years ago that would last 3 hours under Vista and would be dead in the water in 1 hour under Ubuntu. I think it was GPU or CPU scaling, or both, that wasn't supported in the Linux drivers I was using.
In any 5 year period, Apple has a tiny list of exact hardware configurations OS X is designed to run on. It's so small, they even use the OS X software update mechanism to push BIOS updates! They have so much room to do better than Microsoft here it's barely even funny. This isn't an excuse for Microsoft's poor performance, but if you try to gloss over the fact as Atwood does here, then you're omitting the full truth.
Any of #2 would impact both.
I'm wondering what difference between the hardware the Surface uses and that of other Windows 8 tablets could explain this problem?
In reality I get 2.5 hours maximum. It's so bad that I actually returned the first one I got because I thought something was wrong with it. Nope, 2.5 hours is it. Not even enough for half a day working in a cafe.
So I guess 5-7h only applies if the screen is turned off, no programmes running and no wifi. Useful. :/
Windows RT on the Surface 2 appears to have better battery life than the Samsung Galaxy Tab (according to his chart).
To look at what it is doing, download Procmon: http://technet.microsoft.com/en-us/sysinternals/bb896645.asp...
Flash has been in decline since the first iPhone, but is still used to track people with unkillable cookies and to make obnoxious ads. Hopefully those days are now over.
I wish Google and Microsoft would follow suit. Google probably will resist the most due to the entrenched interests of DoubleClick and YouTube.
The other main highlights from my perspective: "App Nap" energy-saving API (p. 13), generally better battery life, even on old hardware (p. 18), & support for offline speech-to-text (p. 23).
I have a MacBook Pro and I hook in with my Thunderbolt->DVI connector to get my big monitor.
I can throw an Apple TV onto the monitor with an HDMI->DVI connection and finally go cordless! This is an improvement that means something real to me!
On the other hand, the battery life is definitely better. It's not really worth the performance hit, though...
Trackpad scroll speed on my 13" MBA is also noticeably slower without significant load on the machine. This seems deliberate, but it's a move in the wrong direction for people that already have the trackpad sensitivity maxed out.
Python 2.7.5 (v2.7.5:ab05e7dd2788, May 13 2013, 13:18:45)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1000
Segmentation fault: 11
I thought that would have looked awesome on my iMac!
I think he overlooked the text right on top of the menu, which says "Show these tags in the sidebar:". Pretty obvious to me.
> In the years that have passed since then, the Mac has indeed been on a steady march toward the functional ideal embodied by the iPad, a product that is arguably the culmination of Jobs' original vision of personal computing
Think about it. With the drop of the non-Retina display MacBook Pros today, no Macs are now officially user-upgradeable.
What was the reason given?
2mm in thickness. Two. Fucking. Millimeters, so you can stare at the edge of your laptop with a hard-on. Oh, and that absurdly high resolution display that you'll need a goddamned loupe to appreciate.
All kernel extensions now must be signed in Mavericks. OS features brought about in Lion still bug me, like the absolutely back-asswards autosave system that uses duplication, and the lack of direct manipulation while scrolling. Also, Gatekeeper is a huge uh-oh.
It's the reason why my MacBook is now sitting in a closet, and why I'm using a 2005 Toshiba Tecra with Debian on it. Amazing how Linux news has gotten so rare these days... but stick an Apple sticker on something and it shoots to the top of the front page. Sad.
Really looking forward to upgrading.
If Mavericks is free, why does the App Store need a credit card in order for me to download it?
I do not plan on purchasing anything through iTunes. Never say never, sure, but I don't. Ever.
Guess I can't have Mavericks.
Even though it's free.
Kudos, Apple, you've given me my first reason to feel less than happy about a hardware purchase I reveled in.
Can't tell if he's joking or not.
I made a few sketches myself as part of a larger project I'm working on (a year without cameras): http://crafture.me/post/64711241777/startup-school-2013
For those of you who want a bit more context, here are two sets of notes from this weekend:
There is some overlap between the two but also some differences so I'd suggest reading both.
[Disclaimer: I produced the first set - https://news.ycombinator.com/item?id=6578780]
I think the theme that stands out for me personally from all of these notes is: Find something you can work on almost non-stop, expect to fail a lot (because you will), learn and adapt, keep trying.
Thanks for the great work Greg!
Big fan of sketch notes myself so I'll definitely be forking that repo.
They fixed that problem but it was too late. In my eyes they proved they are not to be trusted with data.
Had they called themselves MangoCache or MongoProbabilisticStorage, fine, they can silently drop writes; I don't care, it's not a database. But telling people they are a "database", tweaking their defaults to look good in stupid little benchmarks, and telling people they are webscale sealed the deal for me. Never looking at that product again.
Most people don't like mongo because 10gen gives the impression that mongo is better than it actually is, many people feel that mongo is not reliable enough for at-scale applications. They're right; it's not. But that's ok, because:
Mongo's really great for rapid prototyping. You don't need to worry about updating the schema at the db level, it can store any type of document in any collection without complaining, it's really easy to install and configure, the query language is simple and only takes a couple of minutes to learn, it's pretty fast in most use cases, it's pretty safe in most use cases, and it's easy to create a replica set once your prototype gets usage and starts scaling.
Mongo does everything well up until you reach the level where you need heavy-hitting, at-scale, mission-critical performance and reliability. Most projects out there (99 in 100?) will never reach the level of scale that requires better tools than mongo. And since the rest of it is so easy to use, that makes mongo a great starting point for most projects. You can always switch databases later, but mongo gives you the flexibility to concentrate on more important things in the early stages of a project.
RDBMS performance was fine most of the time as we're not doing big data really. Our problem was developing and maintaining a schema that holds lots of metadata many levels deep. Our app allows for unlimited user defined forms and fields, some of which may hold grids inside which hold some more fields... Our app also handles lots of logs and large file dumps, which slowly made data, cache and fulltext search management mission impossible. Even though we had considerable previous experience with Mongo, it took us a long time to switch because we were utterly scared. It's nice to sell a product that is Oracle-based, as that sent out a message about our "high-level of industry standardization and corporate commitment" bullshit that (we thought) is quite positive for a startup competing against the likes of IBM, HP, etc.
To our surprise, our customers (some Fortune 500 and the like) were VERY receptive to switching to a NoSQL, open-source database. Surprising especially given it would be supported by us instead of their dreadfully expensive and mostly useless DBA departments. It even got to the point where it changed their perception of our product and our company as next-generation, and surprisingly set us apart from our competition even further.
In short, as many people here know, not all MongoDB users are cool kids in startups that need to fend off HN front-page peak traffic day in, day out. Having a schemaless, easy-to-manage database is a step forward for sooo many use cases, from little intranet apps to log storage to some crazy homebrew queue-like thing. 10gen's superb, although criticized, "marketing effort" also helps a lot when you need to convince a customer's upper management this is something they should trust and even invest in. I can't express my gratitude and appreciation for 10gen's simultaneous interest in community building, flirting with corporate wigs, and getting the word out to developers for every other language. Mongo is definitely a flawed product, but why should I care about the clownshoeness of its mmapped files when it has given us so much for so long?
Every single MongoDB step has had the old timers groaning.
Even with something solid like Tokutek's storage engine in it, it's going to be a hard sell.
None of how MongoDB works is a secret. And just like everything else it has sweet spots and problem areas. And like many others, development continues and it gets better.
The database does not get the job done - it is a tool to help get the job done.
A key store where the "engine" is ZFS works mighty well and is reliable. There is little need for simple solutions like MongoDB if the filesystem rocks.
Funny punchline at the end there too.
Nobody writes about the filesystem like they do the database, and yet they do the same job - store and retrieve data.
* you have to learn to do indexing right later (if you have to scale)
* failures and misses start to occur (as you scale)
* more code to write to manage legacy schema and optional fields
The last one is painful and ugly, whereas if you start out with a good schema that last point is in good shape. When you use SQL, the constraint that repeating "xyz" attributes belong in a new relation pushes you to normalize, whereas with Mongo you'd stuff 20 fields into a single collection. The refactoring is harder.
I will begin to migrate back to SQL for new projects.
Also, the ecosystem is richer in SQL. I have not seen a good ORM for Mongo. MongoEngine is fine, but implementation and db issues make that ORM a bit unusable from time to time. SQLAlchemy is good.
PS: For quick PoC and hackathon projects, sure, prototyping with Mongo is fine.
I've done this before when I was doing work for a client using an existing simple web host with no built-in options for databases. It works well, and the nice part is that there's a simple, obvious way to do any query. The bad part is that anything other than a primary key lookup is slow unless you add a lot of complexity.
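A minimal sketch of that approach, assuming one file per record with the primary key as the file name (the layout and function names are mine, not the parent's actual code):

    import System.Directory (createDirectoryIfMissing, doesFileExist)

    dbDir :: FilePath
    dbDir = "db"

    -- Store a record: the primary key is the file name.
    put :: String -> String -> IO ()
    put key val = do
      createDirectoryIfMissing True dbDir
      writeFile (dbDir ++ "/" ++ key) val

    -- Primary-key lookup is a single file read; any other query means
    -- scanning every file, which is where the slowness comes from.
    get :: String -> IO (Maybe String)
    get key = do
      let path = dbDir ++ "/" ++ key
      exists <- doesFileExist path
      if exists
        then fmap Just (readFile path)
        else return Nothing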
If you use MongoDB in production, you should definitely take the time to learn about the durability options on the database side AND in your driver. By using them appropriately, you can have as little or as much durability as you like. Data sets larger than 100GB are no problem either -- right now I'm running an instance with a 1.6TB database.
As always, use the right tool for the right job. If you need joins/etc. and don't need unstructured data, Mongo probably isn't a great choice (even with the aggregation framework).
I think more often it's easy to poke fun at _how_ it's used.
When any tool or tech is used globally, before knowing its limitations, problems are likely. Attempting to use MongoDB in all storage or persistence scenarios is no more sensible than using MySQL in all cases.
Yes, there is marketing around this product that must be looked at critically - after taking into account that many newly developed technologies won't solve all the problems older tech have worked for decades to solve.
Whether 10gen are vapid spin-meisters or not, even whether they have developed a usable product, seems orthogonal to the question as to whether a schemaless persistent storage layer might be a better fit for some projects than a relational database.
 - http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis
I'm considering moving away from MongoDB before I have to implement what seems to be an incredibly complicated architecture to get it to scale to tens/hundreds of millions of documents.
I believe you are wrong about dismissing that top comment in the other post as snarky, negative and useless. That comment has a lot of very useful information from someone who appears to have been doing the Thailand thing longer than you have.
1) Thai law was brought up a number of times and you do appear to be violating it. This is probably something that needs to be said.
2) You mention how cheap it is while he believes it's more expensive, but you may have gotten a good deal or stayed in areas that others wouldn't want to. It's your experience vs his; I see no reason to dismiss him as 'snark'.
3) He shares a number of anecdotes (sex workers, etc) that differ from your anecdotes. Thailand is a big place, you can both be right, and the more information the better.
You really have no idea how much of a time sink HN really is. If I said it took a fair portion of my day, it would be the biggest understatement of the year.
Love from everyone who reads Hacker News
From my perspective there is simply nothing there(2), and 5 seconds looking at a static page is just not enough information for me to make a decision. That decision is not just to give you my email address, but to "sign-up", which is a huge step too far.
At the very least I need a "find out more" option, and I'd need that without having to give you my email address or other details.
You clearly write very well, so why not tell the Tubelytics story underneath the landing page. Let me scroll down and read the story, see the screenshots, hear about the use cases and experience the success stories.
By the time people get to the end they should know what the product is, how awesome it will be for them to use it, how much it costs and whether or not they will buy.
(Advanced) Ideally I could play with the product and even set it up with my youtube videos(3) without logging in, and once I experience the product then I can save the data by creating an account and, more likely, pay you.
So rather than not getting an email, perhaps there is a better way to get a paid sign-up.
(1) https://tubelytics.com
(2) I'm on OS X Safari with flash block on
(3) I'm not a target customer
I have a Rackspace 256MB RAM slice hosting Wordpress. No caching at all, none.
I also run, on the same server, a teamspeak server for a friend that still plays games.
I peaked at 290 simultaneous people reading my post. Teamspeak server was still working fine.
These numbers tell me I'll have to completely change the landing page, as it's not converting well.
 15,000 uniques led to just 78 trials, 1 paid customer
As much flak as HN gets, I have found that it's the best source of traffic. My post yesterday about building a desk stayed at #2 for half a day and I got about 20,000 views with an average time of 40 seconds. https://news.ycombinator.com/item?id=6566643
2. I agree with your point about the standard top comment on most Hacker News posts being contrarian. It can be very annoying, but having a strong dissenting voice also helps keep BS posts in check. I have noticed a lot more jokes as the top comments within posts, though. I'm not sure if that's good or bad. The first example that comes to mind is a post titled 'You have a 0.000007% chance of becoming a billionaire'. The top-rated comment was the common Reddit joke 'So you're saying I have a chance'.
Good work!
150+ points on HN
170+ comments on HN
280+ points on HN
90+ comments on HN
The sequel has ~2x upvotes, plus a better karma/comment ratio as well...
I made it last year. I needed some feedback and early adopters, and it was great for kick-starting a small community. Also, my project changed its name, and that post made it the #1 result on Google in 3 weeks. There was a handful of serious job offers as a direct result of that post, the last one 3 months later.
It has been a year since my last post on HN. My project is stable enough, I have some pilot customers, and I even made my first profit this month. On the other hand, I have only 50 Twitter followers and no invitations to conferences. So I will probably hit HN again in a few weeks :-)
It's very well designed, has interesting content and it's easy to just read the next article.
It's probably one of the best blogs I've ever come across, and an excellent place for your links (at the end of each article, for a follow-up story).
And, you have converted me to the panda show. Great music!
So congrats and nice job :)
PS. You didn't convert a lot of your users because the post had nothing to do with Tubelytics. It was an awesome read, but it probably missed your core audience's attention :-) (personal opinion, though).
Would you be interested in doing a talk on all this at the next Beercamp at Punspace?
It reads like a case of minimizing dissonance. In other words, it seems the author is attempting to rationalize away another person's viewpoint, by simply characterizing it as snark.
Some more probable explanations why the comment made it to #1, could have been:
- It felt authentic. "I've lived and built two companies in Thailand over the last 14 years."
- The answer expressed a contrary viewpoint, giving HN readers a more balanced view of the topic.
I have a catch-all address set up on my domain so that I can give every site I interact with its own custom email address. In this case it was email@example.com. Since the email address doesn't exist, and they're the only company I've shared it with, they're the only ones with a record of its existence.
When I emailed them asking if they'd had a security breach or if they were selling email addresses, they responded saying they would opt me out of marketing emails. When I responded with the context and header info of the emails I received and asked if this was in fact from them, things turned. About an hour later I got a response; the tone had changed significantly, and they indicated that the incident had been escalated to their security department and that they would be in contact with me as their investigation progressed.
I can say this has been the best response to the dozens of emails I've sent to companies about the same issue. The worst was Best Buy whose response was something along the lines of "Eat Dk, we do what we want."
It would have to be out of the Caribbean or some place with lax data privacy laws, and strict confidentiality laws.
It's the same course/series, but with interactivity, so Haskell can be coded/evaluated from the browser. In fact, one "dir" up, you will find a bunch of similar tutorials here: https://www.fpcomplete.com/school.
1. Yann Esposito's Haskell Fast and Hard (on FPcomplete -- https://www.fpcomplete.com/user/yogsototh/haskell-fast-hard)
2. Learn You a Haskell by Miran Lipovaca http://learnyouahaskell.com/
The latter author decided to write the book based on his experience in learning Haskell. It's definitely one of the simplest and clearest programming books I've read.
What I really really need is something which walks me through doing something significant with Haskell - like, a GUI app on Linux or something (my current focus: I've never really done it, but if I'm learning something new I'd like there to be a practical product at the end).
A bunch of language constructs, while technically interesting, don't help me to grok the language at all.
I'd still say the type system is there to help you in C, C++, and Java, it just doesn't do nearly as good a job of it, and winds up in your way more often because it's less expressive.
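A tiny illustration of that expressiveness gap (the example is mine, not the parent commenter's):

    -- In C or Java a "no result" is typically a NULL or sentinel you can
    -- forget to check; in Haskell it lives in the type, and the compiler
    -- makes every caller handle it.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    showQuotient :: Int -> Int -> String
    showQuotient x y = case safeDiv x y of
      Nothing -> "undefined"
      Just q  -> show q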
Did this stop you, or how did you get past it?
How does Learn Haskell Fast and Hard compare?
At university, the first thing everyone had to do (in programming) was a Haskell course. It felt weird at the time, but in hindsight it was fantastic. It meant everyone had to throw their preconceptions about programming out the window.
It didn't occur to me until recently (10-15 years later), that functional concepts are actually a good thing to apply in any language; that it makes code parallelizable, modular, maintainable, testable, and so on. I just thought functional was functional (i.e. elegant but hard) whereas imperative was imperative (inelegant but easy). Much like the difference between algebra and arithmetic.
So go learn a second language, or even a third. Even if you intend to speak English and Java for the rest of your life. I'd choose Haskell and Spanish.
Off topic but does anyone know of a Rails/Play + Linq to SQL/ScalaQuery equivalent in Haskell?
Beyond that just being able to generate PDF invoices, send out emails and have access to a decent date/time library (like JodaTime) would cover the essentials for web development.
The selection of artwork is pretty nice too.
Very very good. Thank you author and poster.
I shared a number of work arounds with the general haskell community a few weeks ago here: http://www.haskell.org/pipermail/haskell-cafe/2013-September...(there are alternative work arounds, but I only listed the ones which are simple and easy to communicate with other people and be able to expect them to follow the steps correctly.)
First of all, you should be using the `null` function instead of comparing against `[]` with `==`, because the `==` operator only works if your list contents are Eq-able.
But the most important thing is that pattern matching is more type-safe. If you use `head` and `tail`, you, as the programmer, need to make sure that you only call them on non-empty lists, or else you get an error. On the other hand, if you use pattern matching, the compiler helps you make sure that you have covered all the possible cases (empty vs non-empty), and you never need to worry about calling the unsafe head and tail functions.
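A quick sketch of both points (the function names are mine, not from the tutorial):

    -- null needs no Eq constraint, so it works on lists of anything:
    isEmpty :: [a] -> Bool
    isEmpty = null

    -- Pattern matching makes the empty case impossible to forget; GHC can
    -- warn about missing cases, whereas head/tail just crash on [] at runtime.
    summarize :: Show a => [a] -> String
    summarize []     = "empty list"
    summarize (x:xs) = "starts with " ++ show x ++ ", then " ++ show (length xs) ++ " more"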
Although I am not sure about the premise - I doubt Haskell, as a language close to mathematics, can be learned fast. This tutorial seems quite shallow on some things, like monads.
What westerners might call 'casual sex' -- sex without the framework of a relationship that implies various other promises/commitments -- is normal, and also not likely to be spoken about frankly, especially to a reporter, and much less a British one.
Sometimes I'll witness a young woman asked about it at a social gathering (as people have a few drinks and speak more freely). "I don't have anybody... I can't remember the last time I slept with somebody," she might say. What she means is that she doesn't have a steady boyfriend, and thus it is certainly none of your business who she's fucking.
Or I will see a guy asked about it. "Well, romance is too complicated with all I've got going on... I've learned to live without it," he might say, with just the right amount of sheepishness. What he means is that he is seeking only sex that doesn't come with implied commitments and hassle.
These two people might very well end up leaving together.
It's probably too much to ask, but I think there should be something in between a scientific journal and common journalism. In technology we have some pretty insightful articles coming up in blogs now and then, articles where sources are cited and you can go deeper on any subject if you like. Why can't traditional journalists use the web the way it is meant to be used?
"Tomita says a woman's chances of promotion in Japan stop dead as soon as she marries. "The bosses assume you will get pregnant." Once a woman does have a child, she adds, the long, inflexible hours become unmanageable. "You have to resign. You end up being a housewife with no independent income. It's not an option for women like me."
Great, so if a woman marries she loses independence and her own income, and then people don't know why women don't want to marry?
Same thing here:
"Romantic commitment seems to represent burden and drudgery, from the exorbitant costs of buying property in Japan to the uncertain expectations of a spouse and in-laws"
If living and being in Japan has taught me anything, it's that generalising from anecdote is not a good idea.
Case in point, if you visit an outlet mall a few miles outside of central Sendai on a weekend, you'd have a lot of trouble convincing anyone that Japanese people aren't making enough babies. It was very, very difficult to spot single people, or even couples without babies crawling all over them on our one day out there.
In a population of nearly 130 million, if there's any generalisation you want to make about Japanese people, you'll find enough anecdotes to put together into a convincing article.
On the usage of mendokusai, I think the author of the article may have misunderstood in the situation he's describing. I believe that in this situation mendokusai meant "It's tiresome to be constantly propositioned by male colleagues at work" rather than "I would have sex with everyone, but I can't be bothered". IME you use mendokusai whenever you're tired of something, along with describing a task that is tiresome.
Resource depletion is among the highest risks to civilization. Seems straightforward that severe population decline should be desired, and voluntary refraining from reproduction should be welcome.
(Please don't just extrapolate the trend till extinction. See http://xkcd.com/605/. )
I think that being the first rich, advanced large country that isn't western is the reason. On paper, they have a similar relationship with money, technology, their religion and traditional culture. I can't explain the strangeness away with those differences.
When I read something like this about some trend or supposed cultural pathology cropping up, I have nothing to connect it to. I don't intuitively get where it's coming from or why. Not even a hint. I don't know whether to dismiss it as conservatives concerned with something harmless, some fringe phenomenon, or whatever.
I think what the article was, however, was an advertisement for the former dominatrix. Apparently you can pay her to talk to you about women, pay her more to get naked while she talks about women, and maybe if you play your cards right...
Sums up my feelings pretty much exactly. I lack much, if any, parental instinct. I also lack any family or peer pressure to marry and make babies. Without the push to produce offspring, relationships have more downsides than upsides. Children, ten times more so.
I enjoy relationships (well, most of them). But there is so much to do and experience in the world; I'll never get to it all before I die. There's no reason to spend my time on sub-optimal pursuits.
So, I've looked into the government research about it, at least for the income part, as it is tangible.
As for income, there has been some research suggesting that the lifetime unmarried rate correlates with income (mostly for men).
Their research further says: "In 2010, the marriage rate for male permanent employees was 27.7% while for male temp workers it was 6.7%, a difference of about 20 points; for women it was 28.2% for permanent workers and 25.8% for temp workers, a much smaller difference. Therefore the increased share of male temp workers is contributing to the unmarried rate and to marriage at an older age."
This baby dearth, combined with the ability of the Japanese to keep their elites from increasing mass immigration, means less supply of labor, which means higher wages (remember supply and demand? It applies to labor, too). That means that the corporations that support the corporate media via advertising buys will pay more for labor. The media hates that!
Also, fewer people means fewer consumers for the products advertised in the media. The media hates that!
The media is the enemy of the majority working class citizen of the developed nations.
For some reason, being independent (able to buy what I want when I want it, and to work 20 hours a day) is better than raising a family and having children - things that are, after all, the reason for what we, as a species, are supposed to be doing on this planet.
The signal is then clear: fail to do that, and you go extinct. This is exactly what is beginning where these trends are prevalent. Perhaps we need to rethink the roles, and whether being independent is something that in the long term will create more value for the group.
Talk about blurred lines.
o driving cars with tires that last only 15,000 miles,
o driving cars with engines that last only 80,000 miles (due to poor lubrication due to water and gasoline in the oil due to inaccurate fuel mixture and ignition),
o typing on typewriters with carbon paper,
o doing business communications with printing and/or typing on paper sent by USPS,
o an office telephone switchboard manned by employees.
Now enter new technology, and each of these activities uses some automation, is much cheaper, and puts the old employees out of work.
So, have lots of unemployed people with little or no money who want to make money and consume but cannot get jobs.
Why not jobs? For one, the total number of jobs shrank, and as in musical chairs some people don't get one.
For another, to create more jobs need some ideas for new products/services, some capital to get the businesses going, and some qualified employees. The unemployed people will need some new training, that is, investments in 'human capital'.
As we know from depressions, can have a shortage of capital, lots of people not consuming who want to work but no jobs for them to do. Then if have a war, suddenly can have three jobs for everyone who can work at all.
There is also an effect of the ratios of people to land and other natural resources. With high density population, the prices of such resources increase. Maybe when the populations of Japan, Finland, France, Germany, Russia, the US, etc. shrink, the new ratio will make it easier for a couple to form a family. E.g., if the population is low enough that a young couple can easily buy 200 acres of good farm land, then there will be some new alternatives for that couple to form a family. Now at maybe $5000 an acre, let's see, 200 acres would be $1 million. She's 18; he's 22; and where are they going to get $1 million, or even a down payment plus farm equipment and materials for a house?
Presumably in time startups will find new products and services that people want and that can make use of the unemployed.
But, in that town, when automation put the typists out of work, the jobs of some people were not affected and, first cut, by having the company spend much less on typing, could pay their remaining employees more. So, some people are doing well.
And in Japan, the article seems to suggest that real estate prices are so high that no one can afford them! No! Instead, real estate prices are so high just because enough people actually can afford them. So, some people are doing well, and maybe they, or their lucky heirs, are forming traditional families.
So there should be not just 'friction' but 'love' (with the help that Mother Nature provides) with commitment, caring, affection, intimacy, passion, joining of lives, vows, romance, trust (he doesn't return from a business trip and discover that she's drained the checkbook and savings account, took the best of the household belongings and the dog, and is gone), respect, responsiveness (they respond to each other), supportive families (e.g., he gets a job in her father's business), a collection of activities, memories, and traditions that they don't want to lose, can't get anywhere else, and that cause 'lock in', homes ("where the heart is, where you are loved even when you are wrong"), and children ("the most rewarding thing we did").
Joining? For a few hundred years in Western Civilization, standard marriage vows started off with "We gather together to join this man and this woman with the bonds of holy matrimony," and there is wisdom there.
Or, for a more direct explanation: she has two legs. He has two legs. If she breaks a leg, then she is down to one leg to help her broken leg get well. But joined with him with good vows, commitment, etc., they have four legs, with three good legs to help her broken leg get well. The three legs are three times better than just the one. Can rattle off 10,000 such cases faster than can say them.
People don't want to be alone. Being alone is scary. Nearly all baby mammals know this. Being joined to someone is much more secure.
This can be great stuff, some of the best in life, even without children, and one night stands are nothing like the same. The article omitted nearly all the really good stuff.
(Whoever gets "officially" issued that VID is going to whine when they notice it's already being used for hobbyist purposes anyway, which means that the technique of just picking one will guarantee uniqueness.)
IIRC they're not the first, either. And there've also been organizations freely allocating unique ethernet addresses out of their OUI space.
The amount of trouble they receive from the USBIF/IEEE/whoever seems to vary a lot from case to case, though. I expect it depends on which individual person the situation comes to the attention of. In some cases the USBIF or IEEE has actually revoked the VID/OUI assignment, leaving everyone who tried to play by the rules effectively squatting on an unassigned ID anyway.
Some ancient history: https://forum.sparkfun.com/viewtopic.php?t=931
"Since other USB device vendors such as Microchip and FTDI give away USB PIDs for free"
Does that actually mean they give them away for free? If so, how can they do that? Why does VTM allow them to do it? And what is the actual problem at all if you can get them for free?
If they can't responsibly offer their product without revoking it for simply using it as intended, they are racketeers. The Open Source community has not broken any legal agreements in simply publicizing the idea of a shared VID. The USB Forum are completely in the wrong for their behavior.
MCS is in Holland, where both their jurisdiction and the fact that they licensed their VID from the USB-IF a long time ago make it impossible to enforce the prohibition against reselling PIDs. For EUR 10 each, it's worth it just to kick sand in the face of the asshats at the USB-IF.
I use lots of hobby stuff with USB ports. I have to look up the vendor ID to make it read/write in Linux by default.
Presumably getting a proper ID somehow makes this pain point go away for consumers?
What's the gain? I don't understand it.
Where can I donate to Arachnid Labs?
Also, if an open source project is popular enough that a unique VID is needed for proper driver support, I'd say the project will easily get funding to go commercial (just like Red Hat).
At first I'm thinking, oh, I wonder how they convinced Apple to let them use some private APIs, and then... curiosity turns to revulsion as soon as I saw that proxy diagram. Good god... LinkedIn MITM IMAP. That is truly terrifying.
How would you even go about installing that on the user's phone? Oh, that's in there too... they ship a 'configuration profile' which adds a new email account, so your password is leaving the device in cleartext and being used to create the profile server-side which is then shipped back to the phone and installed, how exactly?
This just gets worse and worse if I understand correctly... I'm surprised that configuration profiles can be shipped to an arbitrary device from a third party this way without the user manually installing LinkedIn's certificate as trusted. In other words, it should be a lot harder to "Accept" these profiles outside an enterprise setting, because it sounds exploitable. What else can you configure "so easily" I wonder?
<s>Thanks LinkedIn... really, I'm impressed. When exactly did Walter Bishop start working for you?</s>
P.S. I look forward to following your pending class-action lawsuit for violation of US federal wiretapping laws. Cheers!
The value for LinkedIn to vacuum up my email is immense! They'll know everyone I email and the content of the emails as well. They'll know where I shop and what I purchase. If I send a private email to a friend who has this installed, I've now unknowingly bcc'ed LinkedIn. Not only that, but they know this for the entire history of my email account! The person I stopped emailing 7 years ago... LinkedIn has access to that as well.
But in this case I don't think the value prop for the user is big enough to make me overcome this large of an ask.
I appreciate LinkedIn addressing this in their Privacy Pledge, but so long as they retain the right to change it at any time, I'm too uncomfortable to install this. But, I'm still in awe of the creative work-around. :)
I have never joined LinkedIn and have never been interested in any position that requires an easily gamed LinkedIn profile instead of meatspace references.
I think the privacy concerns of having your mail (potentially) available over yet another server in exchange for modest convenience makes it unlikely that I would use this, but I'm sure many will find the trade-off acceptable and desirable.
> I ran some tests with two brand new mailboxes, and it seems that LinkedIn accesses both the Contacts and the Sent Items.
Will customers be explicitly told that all of their emails will be going through and stored on LinkedIn servers? I doubt it. I do envision a dialog box along the lines of "Click Here to make your experience better". Sadly people will click without realizing the implications.
If you don't trust LinkedIn, fine. Don't use it.
But please, don't assume that LinkedIn is universally not trusted, the same way you assume that Microsoft is universally hated.
This is a neat feature, and I'm sure that many people trust LinkedIn enough to think that the trade-off is worth it. Would you prefer to not have the choice to have access to this feature, and prevent others from having it too?
I don't see this kind of reaction when 99% of other services ask for access to a third-party API. Why is this so different? Is it because they have access to emails? What makes email SO MUCH more important than any other data that it's in a category of its own? I don't think you can draw a line; it's pure subjectivity.
Surely the service itself is not the problem. If Google did the same thing, you would all think it's the best thing since sliced bread. Why? Because most people already trust Google with their emails (and everything else), and accept that they know everything about them.
So please, don't criticize the solution, don't blame the hack (unless you can suggest a better way to do it). The only good reason not to use it is lack of trust in LinkedIn, and nothing else.
I've had enough of your drama-seeking behaviors, and I don't think I'm the only one. Grow up.
It's a cool hack, however.
I burst out in laughter at that point. Yeah, that silly presumptuous email client assuming an email is some kind of text message that doesn't change every time you read it!
What I hope is going to prove truly impossible is doing anything like this without requiring the user to explicitly accept the configuration profile. Even so I expect they will trick many into allowing "enhancement" of their email.
LinkedIn has a history of abusing email. From the early days* where they would email all of the contacts on your machine if you didn't read carefully enough to today where you can click unsubscribe many, many times and still get "important updates". It's a wretched hive of scum and recruiters, and they will never get between me and my email.
*spoke too soon! looks like they still do it: http://community.linkedin.com/questions/10106/i-want-linkedi...
I have noticed that on websites that clearly don't intend that behavior, and it's quite annoying. Does anyone have any details about the exact circumstances required for this phenomenon?
Cute web hacks. I don't understand the problem with simply using their mobile app if you were really looking for work.
It sounds like an unnecessary feature for people who are looking and an annoyance to people who are not. That seems to be LinkedIn's problem in general: they harass those who are working with vague and misplaced job requests in an attempt to expand their reach.
I also hate iFrames. Cool trick though.
This is probably the most blatant disregard for privacy and security for the smallest possible benefit that I have ever seen. Well, next to giving LinkedIn the password to your email so that they can spam your friends and hack your account.
Everyone needs to stop using this piece of shit service. They're incompetent and malicious. LinkedIn is the Zynga of HR. I'm gonna go buy some puts.
Again, VERY cool how they did it, but it requires quite a bit of trust in a company that I don't find very trustworthy.
I'm just not comfortable giving my email credentials out when access to my email is effectively a skeleton key for the rest of my accounts via password resets.
The author is being a bit arrogant; there is more complex stuff out there than modifying Gmail on the fly (remember Greasemonkey?).
(Specifically, iframes in emails have been stripped from most modern email clients for years)
EDIT: not an app apparently.
To all those who consider this a cool hack - it's not. It's ugly as hell. Sometimes you need to do this kind of shit to get the job done, it's true, but you know this is the kind of thing that you look at after a couple of months and think "Oh God, I should get another job. They shouldn't force me to create THIS. Oh God, I feel so miserable.".
Looks like it is time to dump Linkedin.
All the privacy issues it raises have already been discussed.
Isn't the standard library the place where packages go to die?
Isn't the reason pip is actually useful that it has a nice, healthy release cycle (http://www.pip-installer.org/en/latest/news.html) and isn't frozen into the standard-library ice age of the past?
Won't this make it even harder to build a compliant version of python that runs on mobile devices where you can't install packages (>_> iOS)?
I get that it's convenient; I'm not 100% convinced it's a good idea.
Edit: Reading the PEP in detail, it's now clear that this is not bundling pip itself with Python (see 'Including pip directly in the standard library'). It's bundling a pip installer - the ensurepip bootstrap module - as part of the standard library. Much better.
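(A rough sketch of what that bootstrap looks like, going by PEP 453; the ensurepip name and flags are the PEP's proposal, not something already shipping:)

    # PEP 453 bundles a bootstrap module, not pip itself: it installs a
    # bundled copy of pip on demand.
    import ensurepip
    ensurepip.bootstrap(upgrade=True)
    # Or, from a shell: python -m ensurepip --upgrade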
Nice reminder that pip is a dev tool and should be used as such. It makes sense for it to be included with Python.
About time really.
This sounds like a great step forward to me in making Python packages easy to create, install, and uninstall!
easy_install can install from binary installers or eggs. I'd like to see that added to pip.
Next up: Local package installs.
What's going on here is that Python has added support for another kind of polymorphism known as "single dispatch".
This allows you to write a function with several implementations, each associated with one or more types of input arguments. The "dispatcher" (called 'singledispatch' and implemented as a Python function decorator) figures out which implementation to choose based on the type of the argument. It also maintains a registry of types -> function implementations.
This is not technically "multimethods" -- which can also be implemented as a decorator, as GvR did in 2005 -- but it's related.
Also, the other interesting thing about this change is that the library is already on Bitbucket and PyPI and has been tested to work as a backport with Python 2.6+. So you can start using this today, even if you're not on 3.x!
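(A minimal sketch of singledispatch in use, per the functools API described above; describe() is a made-up example function:)

    from functools import singledispatch

    @singledispatch
    def describe(arg):
        # Fallback implementation for unregistered types.
        return "something else"

    @describe.register(int)
    def _(arg):
        return "an integer"

    @describe.register(list)
    def _(arg):
        return "a list of %d items" % len(arg)

    print(describe(42))      # -> an integer
    print(describe([1, 2]))  # -> a list of 2 items
    print(describe("hi"))    # -> something else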
"Text is always Unicode. Read it in as UTF-8. Write it out as UTF-8. Everything in between just works."
This was not true up through 3.2, because Unicode in Python <= 3.2 was an abstraction that leaked some very unfortunate implementation details. There was the chance that you were on a "narrow build" of Python, where Unicode characters in memory were fixed to be two bytes long, so you couldn't perform most operations on characters outside the Basic Multilingual Plane. You could kind of fake it sometimes, but it meant you had to be thinking about "okay, how is this text really represented in memory" all the time, and explicitly coding around the fact that two different installations of Python with the same version number have different behavior.
Python 3.3 switched to a flexible string representation that eliminated the need for narrow and wide builds. However, operations in this representation weren't tested well enough for non-BMP characters, so running something like text.lower() on arbitrary text could now give you a SystemError (http://bugs.python.org/issue18183).
With that bug fixed in Python 3.4, that removes the last thing I know of standing in the way of Unicode just working.
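(A quick way to see the difference; the exact 3.3 failure mode varied by input, so the comments are approximate:)

    s = '\U0001040F'  # a character outside the Basic Multilingual Plane
    # Python 3.2 narrow build: len(s) == 2 (a surrogate pair), and slicing
    # could cut the character in half.
    # Python 3.3+ (flexible string representation): len(s) == 1 everywhere.
    print(len(s))
    # On 3.3, case operations on some non-BMP text could raise SystemError
    # (bug 18183); with the fix in 3.4 this just works.
    print(s.lower())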
But ever since I found out about algebraic data types from other languages, I keep wanting those. There's not quite a good way to do them in Python. (I've used both "tuple subclass" and "__slots__", but both of those have their own little quirks.)
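(For anyone who hasn't hit this: the usual "tuple subclass" approximation of a sum type looks roughly like the sketch below - Circle/Rect are made-up names - and the quirks are exactly the complaint: dispatch is by hand and nothing checks exhaustiveness.)

    from collections import namedtuple

    # Shape = Circle(radius) | Rect(w, h), approximated with tuple subclasses.
    Circle = namedtuple('Circle', ['radius'])
    Rect = namedtuple('Rect', ['w', 'h'])

    def area(shape):
        # Manual dispatch; nothing forces this to cover every variant.
        if isinstance(shape, Circle):
            return 3.14159 * shape.radius ** 2
        if isinstance(shape, Rect):
            return shape.w * shape.h
        raise TypeError(shape)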
Thanks! Now I don't have to do this on every box I use Python on.
"This PEP removes the current limitations and quirks of object finalization. With it, objects with __del__() methods, as well as generators with finally clauses, can be finalized when they are part of a reference cycle."
This has been a notorious limitation in Python forever, and in 3.4 it's finally solved.
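(A tiny demonstration; the before/after notes in the comments are my reading of PEP 442:)

    import gc

    class Node:
        def __del__(self):
            print('finalized')

    a, b = Node(), Node()
    a.partner, b.partner = b, a  # a reference cycle of objects with __del__
    del a, b
    gc.collect()
    # Python <= 3.3: prints nothing; the cycle is parked in gc.garbage,
    # uncollectable, because the GC won't guess a safe __del__ order.
    # Python 3.4 (PEP 442): prints 'finalized' twice and frees the cycle.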
Is there anyone else who is still procrastinating on moving their workflow to Python 3?
I have recently switched my work to Python and just started development on what will become a series of web projects all done in Python + Django. Yes, when it comes to Python I am a noob.
Looking at it with fresh eyes it seems that the most useful ecosystem is solidly rooted in the 2.7.x branch. Books and online courses promote the idea of using 2.7.x. My kid enrolled in the MIT intro to CS edX course and they use 2.7.5. Codecademy, same thing.
From the perspective of developing a number of non-trivial web-based products, how should I view the 2.7.x and 3.x ecosystems? Do you see a timeline to a transition? How should one prepare for it (or not)? What should one avoid?
At the moment it seems safe to pretty much ignore 3.x. I kind of hate that because I have this intense desire to always work with the latest stable release of any software I use. Here things are different due to the surrounding ecosystem of libraries and tools. I'd certainly appreciate any and all help in understanding this a bit better.
They have a set of beliefs that they tell powerful stories about, in various mediums, to people who already believe or want to believe these same beliefs.
And in this context they also offer products that make things like remote work collaboration easier. But you're not even bothered by that, because they are clearly experts at doing this well (they wrote a book about it and are doing it themselves) so it all just fits together.
It's thought-leadership in a very practical way. A lot of good lessons to be learnt.
For example, individuals engaged in highly creative multi-disciplinary endeavors -- like the writers, artists, and technicians who together make a Pixar film -- seem to produce great results when they are regularly interacting with each other face-to-face. Steve Jobs, in fact, forced a redesign of Pixar's headquarters to promote face-to-face encounters and unplanned collaborations.
Where remote work starts to fall apart is when there is a dedicated office where 75% of the team works.
In these cases the remote workers miss the critical "let's grab a room" meetings where decisions are made. I think the reason is that it's much easier to make decisions with people when you can see them face to face rather than in group chat.
Once I entered Silicon Valley, I learned that the above is the exception rather than the rule. And I think that's where all the horror stories arise. For I no longer get to choose my co-workers, and there's all sorts of pointy-haired edicts that come down the pipe from various Peter Principals(tm) inflicting their view of reality upon the collective they oversee. Just forget trying to run a remote team this way - it's incurably broken - and from this perspective, I agree with Marissa Mayer's tough choice to eliminate telecommuting as an option. Sadly, this movement is wreaking havoc on the daily commutes of everyone here.
My own solution however has been to seek work where I'm an IC and I have someone immediately above me covering my back. That's hard to find, and you're going to be perceived as a job hopper during the hunt, but it's worth the temporary downside IMO.
Or even better, just start your own shop with people you trust...
I can take a break when I need to and be in my comfortable space. I don't have a commute. I can open my door and see my daughter when I go downstairs for a coffee. I can take a walk to the local coffee shop and sit down with the other freelancers and remote workers and catch up on local happenings. If I need a break or I'm done for the day I'm not just watching the clock and waiting for a few other people to leave first.
I contribute regularly to open source projects and so I know what it takes to collaborate remotely. We're experimenting with sqwiggle, screenhero, and use github and pivotal. We email each other constantly keeping the teams up to date. We have a VPN and use our ssh-keys for as much as we can. It works really well and I think the tools have come a long way.
It's good to see that the world we were promised in the 60's is finally coming to fruition.
My main problem is when I need to interact with office culture. People in large conference rooms with a shitty speaker in the middle of it. When more than one person talks at once, it all becomes totally inaudible. Accents also become much harder to discern under the fuzzy sound quality that this kind of environment produces. I never have these problems with predominantly remote teams.
How to bridge that culture gap? Does the office world just need better speaker phones?
* How about same-time communication? E.g. I need to discuss things with my designer, oh wait, he's gardening at the moment, or he works at night.
* How do you know when to start/stop your day? E.g. In "normal" office, it's usually 9 to 5.
* How do you schedule projects? Fixed time simplifies scheduling ("This project is 5 man-days").
* Who pays for supporting equipment (desk, LCD, internet, nice coffee) if I work from home?
* Specific to 37signals: How did they make the video? Did the video guy travel to each and every employee in the video to tape them?
Is the target audience employees who work remote or want to, or is the target market managers who need to understand remote employees?
We already use chat rooms, individual chats, tickets, video conferencing, email, dashboards, whatever.
But sometimes the bandwidth of just sitting with someone and looking at a problem together is so much higher that you can solve big issues in a very short time.
I think some psychologists could study this, it's right up their alley - what is lacking in remote co-operation devices. I personally think it's partly that you need to see what the other person is looking at and also you must be able to observe their body language. A way to do "Look here, this is important in about 0.1 seconds." But I have no evidence.
In my case, I traded the city for a modest home on a beautiful lake in rural Minnesota. It also lets me be near my autistic son who attends online public school (school at home is another digression, but off topic).
Yes, there are challenges, like deciding if today is a shave day or not.
All kidding aside, working at home is a tool that great companies with great employees can wield effectively to obtain excellent results and acquire talent they can't find locally. Companies, like Yahoo, that don't understand this are not great companies and probably never will be.
Long live working at home!
If anybody has links to research (rather than anecdote) around the topic, I'd love to have additions. Especially any that show distributed teams performing at the same level as (or better than) co-located ones. Thanks ;-)
I've worked with remote developers (via video) before, and it didn't come close to the interactions I have in-person with people at the office. Would be interesting to hear some comments from remote developers. Related (from a blog post of mine): http://henrikwarne.com/2013/04/02/programmer-productivity-in...
I run a genomics start-up with two offices, one in Europe and one on the East Coast. The #1 wishlist item would be a way for the two offices to be physically together.
Remote works as well as a long-distance relationship. It may work in a mature environment, 5-10 years into your relationship. Not Day #1. Start-ups battle many odds, remote is not one you want to tackle from the get go.
The big question is how much does this scale? I hope this is addressed in the new book.
Having less than 50 employees allows the flexibility of doing many things differently. I would argue that this is one of them.
I don't know what the magical number is for size of a company where working remotely becomes a negative investment.
The bias with 37signals is very strong. They actively seek talent and find people that are not only able to work remotely but enjoy doing so. It also works well to have staff that can work 24 hours across time zones.
How relevant is this to a company with 1000 employees that is not technology related? I can't answer that definitively, but based on my experience: not too much.
If you're a developer and want to work remotely, we're backed by great investors and hiring! We also use the latest tech like WebRTC for our video conferencing.
But also, from talking to a lot of Silicon Valley friends, they don't have much work-from-home either... unless you're a sales rep, sales engineer, or a consultant who likely has to spend most of the time on-site (away from the corp office)...
One thing I'm hoping to accomplish at my new company is a culture of work-hard/play-hard remote workers. Those who want to live in, say, mountain towns, but who want a real career as they get older... I'd not even have a problem with the idea of a "Powder Day" where they don't log in until noon. With the right happy workers...
While there were certainly benefits, I experienced a few downsides that also drove me crazy.
As a product manager, I was in the sticky situation of needing to coordinate with a bunch of different people and hit certain deadlines that the rest of the company may or may not have been aligned with. I felt like I lost much of my day-to-day ability to get shit done, especially as I was competing for time & resources with other projects.
I also found it easier to stop caring as much, since the emotions & passion weren't as readily communicated remotely.
Definitely some personal shortcomings in there as well, but there are definitely issues to watch for if working remotely.
To be noted, but not as the basis for a new world order.
Anecdotal evidence of my own:
- Stop thinking about labor in a capital-intensive business
- I can't do a damn thing with collaboration without a real whiteboard
- I don't work well with too much technology.
- Time zones make things tricky: half of "our" team is in Europe, but deals with a different group of companies. FTSE vs NYSE trading hours make things difficult.
- The one work-from-home person on "our" team was let go due to inconsistent quality of output
- Living in a city is a part of work/life balance for pretty much anyone under age 30
Working remotely, as I understand it, means that you have to transform the company-employee relationship so that it works over a rather abstract or technical infrastructure or interface. The company has to know, or learn, and define what kind of services or deliverables it can expect from its employees. The employee likewise has to learn how to present his services or deliverables in a way that gets noticed and impresses someone very far away.
It is sort of a bidirectional API that both sides have to serve and use.
My question now is: what is the reason for a company to have employees if the service they need is so well defined that it could be offered by anyone capable of serving that API? Why have employees if you can have contractors? The same question holds for the employees: if you are capable of offering your service in such a well-defined way, why not turn into a company yourself and offer that service to anyone willing to pay?
The only bummer for me was that I was a contracted employee, so basically I was paid for the hours I worked (my full-time job is salaried, so it's nice to know I can expect X dollars each month). And since I was doing it in my free time, it started to get difficult to balance the extra work time with family time, as I ended up no longer having much free time at all.
If a company offered a $90,000+/yr salary + health and retirement benefits to work remotely then I think a lot more people might be interested, but in some cases (when you're working simply as a remote contractor) that's definitely not nearly as good.
The initial transition to remote working is especially hard - you need infrastructural and organisational changes to accommodate remote workers, and there's an up-front cost (not just financial) that dissuades getting started.
I previously worked in an environment that had a culture of face-to-face meetings, informal chats and the like, and it really would have required a total change in culture to implement frequent remote working. By contrast, about 25% of my current engineering colleagues frequently work from home - we've got Google Hangouts and the appropriate equipment and infrastructure to pull it off.
There are real upsides and downsides though - obviously a remote worker saves on a commute, but they do tend to miss out on the more social aspects of an office. Like, "Let's go get lunch," or "It's Matt's birthday, let's all have some cake." Those are definitely some of the perks of working with pleasant colleagues.
I've ended up thinking that remote working is for people who are traveling a lot or people with a family they would like to stay at home and spend time with.
We're hiring strong PHP devs. www.veerwest.com/jobs <-- The Makers of FormAssembly.
I've since come to believe that reading the specifications, and having the attention necessary to delve into these kinds of details and ask the right questions, is important for mastery. It seems to me that learning 1-2 languages to this level of detail is worthwhile. I've been thinking of cutting back the number of languages I "know" down to just those for which I am familiar with the specifications and how they're compiled, assembled, etc. Everything else is superficial.
Sometimes all you need is just a cursory knowledge to get something done and the ends justify those means. However if you really love your craft then mastery should be the goal, no? It seems to be the difference between, "getting something working," and, "pushing the boundaries of what is possible."
In an interview, I would much rather hear a person say something about a statically declared variable with no initialization being poor code to leave behind for the next person than some arcana about the standard.
"What is the point of having a virtual destructor on a class like this? There are no virtual functions so it does not make sense to inherit from it. I know that there are programmers who do inherit from non-virtual classes, but I suspect they have misunderstood a key concept of object orientation. I suggest you remove the virtual specifier from the destructor, it indicates that the class is designed to be used as a base class - while it obviously is not."
She's right that the class described on the slide probably shouldn't have a virtual destructor.
A base class should have a virtual destructor if and only if objects of its derived class are to be deleted through base class pointers.
The following four statements are wrong:
1. A class with other virtual functions should have a virtual destructor.
2. A class without other virtual functions should not have a virtual destructor.
3. A class designed to be a base class should have a virtual destructor.
4. You shouldn't inherit from classes that don't have virtual functions.
It's narrow-minded to say that programmers who inherit from "non-virtual" classes have "misunderstood a key concept of object orientation." Which key concept is that, by the way? Object orientation isn't the be-all and end-all of C++. There are reasons to use inheritance that have nothing to do with run-time polymorphism. Maybe you just want to reduce redundancy and organize your data types in terms of each other.
Another gripe is that on slide 369, she says:
"When I see bald pointers in C++ it is usually a bad sign."
Naked pointers should usually be avoided for memory management. That's true. But they make great iterators, and they're useful, along with references, for passing objects to functions.
On the other hand, seeing the keywords "new" and "delete" in code is usually a bad sign. Resources (not just memory) should be managed by resource management classes. If you try to do it manually, especially in the presence of exceptions and concurrency, it's very easy to cause an inadvertent resource leak.
There is also the fact that unspecified or implementation-defined behavior very often does not produce a warning, so the lesson is never reinforced by the compiler. A warning or error leads to questions, which lead to Google and some learning; without one, you can wander deep into the pampas of undefined behavior for years, until someone with a superior attitude shows up at your company, detects it, and calls you a moron in a PowerPoint.
And this also leads to code that is very hard to write sometimes: if you want to do some serious IEEE 754 work in C/C++, you are basically pitting the spec of the language against the spec of numerical computation in a ring.
In 'foo(b++, a++ && (a+b));' the sequence point introduced by the '&&' only guarantees that the 'a' in '(a+b)' sees the effect of 'a++'; the 'b' might not see the 'b++' (function arguments are unsequenced relative to each other in both C and C++).
But I'm very aware of "smart code" and we shouldn't be writing code that relies on the details (especially ones that might change between compilers)
The majority of this is undefined behavior or implementation details of your compiler. If you rely on that, I don't want your code anywhere near my machine.
Hermione (I'm sure that's her) rates her C++ knowledge at 4-5, and Stroustrup himself at 7!?
Bullshit. Either they are poorly calibrated, or they are displaying false modesty. Sure, they probably still have plenty to learn about C++, but come on - Hermione is already at something like the 97th percentile of language lawyering.
Wanting to be stronger is good. Not realizing you're already quite strong is not so good.
"If you compile in debug mode the runtime might try to be helpful and memset your stack memory to 0"
This is a poor explanation (pages are set to 0 when they are recycled by the OS so that you don't end up with data from dead processes mapped into your memory, with all the security implications that would carry). Also, randomizing memory in a debug context would actually be more helpful for triggering those initialization bugs...
People who think they know everything are a lot more dangerous than people actually aware of their limitations and safely working within them.
Because 1) for some reason or another we _must_ use C or C++, 2) we're coding for a single CPU platform, and 3) we need to get the friggin' job done this century.
There really is a lot more to programming than just algorithms, UI design or syntax.
That or they're assholes.
Either way, walk.
For this reason, a lot of compilers have options to not strictly enforce the aliasing rules.
Also, contrary to popular belief, C and C++ compilers are not permitted to reorder struct members: C mandates declaration order, and C++ mandates it at least within an access specifier.
    i = i + 1;    i += 1;    i++;    ++i;
    a[i]          *(a + i)
    a->foo        (*a).foo
The most productive code I have ever written generally involves me working around the constraints of the language to implement a paradigm which is missing at compile time, or juggling macros and templates so that I can reduce boilerplate down to a template with a macro to fill the gaps the template is too featureless to cover (and vice versa: the template is there because macros aren't expressive enough either).
It's good to understand this deep language stuff, though, because you can understand why C/C++ are limited. For instance, the C sequence points limit the compiler in its ability to perform optimisation, as do struct layout rules and many of the other weird and wonderful specifics...
What saddens me most, though, is that nobody has offered anything to improve C and C++ in these areas which matter most to me... it's not even hard. Just let the compiler order structs, because most programmers don't understand struct layout rules.
It's not a good thing that these things are so explicitly specified for the language - it's gimping the compilers, which is limiting me. It also results in pointless interview questions about sequence points.. :P
Deep understanding of the language used will make you do a better job.
This reminds me of a story I ran into once. A junior developer wanted to raise PHP's memory_limit parameter because his code crashed almost every time while writing big file contents to the output. He didn't know what output buffering was, or that he could turn it off and stream the file directly to the output. :D
Why I really, really like it: if you're building something in a namespace-polluted environment (say, a component for a popular CMS), this is a damn godsend.
I was on the fence before; from now on, it's Font Awesome all the way for all my projects. They've done an amazing job so far and I'm looking forward to what they do next.
We don't all agree with a site's font choices.
And at least img made an attempt at semantic meaning and accessibility with alt= attributes. The failure modes for img are actually quite good. There's no such concern in the minds of those who gave birth to icon fonts.
These collections are trying to cater to everyone by growing in size when the better solution here is a modularizer.
I can create my own font file with IcoMoon so it only contains the icons that I need for my site. The only issue I have is that there is no way to save the created font file so that I can modify it later.
I'm about to decide on one to use.
I wonder how much faster it is.
A new generation of web developers using Wingdings when designing pages in Microsoft FrontPage... sigh.
For more practical CSS madness, I'd recommend Ana Tudor's creations. She had a cool talk at CSSConf.eu about the math behind building some of her CSS creations.
<div id="b7"> <div id="b71"> <div id="b71"> <div id="b71"> <div id="b71"> ...
Is there some tool they used to make this? Because my human brain can't imagine how this came to exist.
This is one of the first CSS animations I've seen that works flawlessly for me, at least on the iPhone. I also noticed the non-unique ID attrs as is noted below, but let's be real. With that amount of CSS to conceive of and write, would you really glance twice at the twenty lines of HTML you're using as a fly-by-night DAG? For a non-commercial passion project? The creator of this was in the ZONE!
The pure insanity makes me grin and long for the pre-teen days where there was time for this. All the ANSI art, the HyperCard stacks, the strange games made using dirt-cheap language implementations. Sigh, but a nice sigh. Also makes me damn grateful for open source and standards.
Anyways, I hope ad people don't catch on to how CSS is a bit harder to block than JS.
Is anything known about the author or license of this work? Will Comedy Central likely object?
I've gotten into CSS3 and JS recently but I'm not sure how this works.
Can someone explain to this old C dog the principles of how it works, though? I thought this would require JS to work?
I apologize in advance for being out of touch. :(
Just in time for halloween.
"This page has been locked by Wikipedia in response to deceptive practices paid for by Engulf and Devour to circumvent our community standards and mislead readers."
If you want this to stop, you have to give the clients a disincentive. That will drive the good clients out and these firms will be left with erectile dysfunction flim-flam as their market.
People who want to help Wikipedia improve as unpaid volunteers have a number of channels for doing that. One thing that would help Wikipedia's goal of better content quality is adding more reliable sources to articles. I try to help that process by compiling source lists in user space that any Wikipedian can use for updating articles. It's a long slog to fight the rot on Wikipedia. Reading Wikipedia takes a sharp eye for propaganda and advertising in disguise.
I hope every single one of their spurious sockpuppet accounts gets deleted.
OUR AFFILIATES MAKE BIG MONEY.
Just leave us your name.
Or worse, if Wikipedia's trustworthiness is tarnished beyond repair. I remember when I was in high school 5 or 6 years back Wikipedia was kind of seen as a joke by my peers. Now it's taken as near fact. Although I think skepticism of anything read on the Internet or elsewhere is healthy, I would hate to see it revert to the first state because of greedy "PR" firms.
How well did average wikipedians deal with the editors and their clients? Was anyone turned into a useful editor? Or were more people left frustrated and baffled by the WP process?
That said, I can see why e.g. Microsoft, the East India Trading Company and BMW should be recognized in an encyclopedia. And there are examples of products (or product lines) that could/should be mentioned in a vast online encyclopedia as well (e.g. Windows, the BMW 3 Series), because they influenced industries/trends/zeitgeist and/or lives.
But why, for the love of god, should every consultancy, contractor, forrester and his second cousin have an entry on this site?
Oh Jesus Christ, come on now - we're a community of entrepreneurs & hackers; someone just create a new startup that's Wikipedia for people.
PeoplePedia.com is taken, but I've got http://www.infopag.es so it's perfect for something like InfoPag.es/ChrisNorstrom.
If someone wants to join in, reply to this comment. Basically I'm envisioning a wiki for people. However, there are 2 routes I can go down:
a) Anyone can create a page on a person and anyone can edit and add onto or delete content from that page. (lots of growth, but lots of potential for abuse)
b) People must register to create a page on themselves, anyone can edit that page and add onto or delete content but the registered owner must approve the edits.
Which sounds better?
The basic argument hinges on the three counts, related to three laws he was alleged to have broken: the Economic Espionage Act, the National Stolen Property Act, and the Computer Fraud and Abuse Act. The third charge was dismissed by the district court because it rested on the claim that he had either accessed systems he was not authorized to access or exceeded authorized access. However, he was authorized to access the source code in question, and what he did with it afterwards has no bearing on whether he exceeded his authorization, so it doesn't fall under the CFAA.
The district court did convict him on the first two counts, but the Appeals Court reversed. Their argument is that the National Stolen Property Act doesn't apply because it applies only to actual physical goods, not mere intangible ideas. Had he photocopied the source and walked out with it, or loaded it onto a thumb drive at the office and taken that with him, it would have counted as stealing a physical good, but merely uploading it to a server and downloading it onto a thumb drive later does not count.
The court further argues that he did not violate the Economic Espionage Act because the clause he was prosecuted under specifically requires that the "trade secret ... is related to or included in a product that is produced for or placed in interstate or foreign commerce". Since Goldman Sachs' HFT trading system is entirely proprietary and internal, and not produced for or placed in interstate commerce, that particular law does not apply. Apparently Congress specifically intended this restriction, because earlier drafts of the statute had broader language that merely included "proprietary economic information having a value of not less than $100,000". The fact that Goldman Sachs uses the product for interstate commerce is not compelling; the product itself had to be produced for or placed in interstate commerce.
That last part is interesting. It implies that if you run proprietary, internal code that is not sold or intended to be sold in the future, you appear to lose federal criminal trade secret protections. It's interesting that they tried to prosecute him on theft, trade secret infringement, and exceeding authorized access, but not copyright infringement. From my reading even unpublished work is subject to copyright.
Neither the original conviction nor the appeals court opinion ever addressed the copyright issue. In order for him to have stolen something, it would have had to be something of value; so why wasn't he further prosecuted for copyright violation? From the documents I read (not all are available on PACER), the copyright question never even came up.
More documents from the case:
Motion to dismiss the original case in district court: https://ia600209.us.archive.org/9/items/gov.uscourts.nysd.35...
Government's response to the motion: http://www.archive.org/download/gov.uscourts.nysd.358303/gov...
Affidavit of the investigating officer: http://www.archive.org/download/gov.uscourts.nysd.358303/gov...
District court's opinion dismissing the third count but refusing to dismiss the first two: https://ia600209.us.archive.org/9/items/gov.uscourts.nysd.35...
List of files requested in discovery, to demonstrate that what he took was insubstantial and not proprietary: http://www.archive.org/download/gov.uscourts.nysd.358303/gov...
A few of the things he had downloaded were part of their version of the Erlang platform, which is available under the Erlang Public License, a derivative of the Mozilla Public License. So it would be more fruitful to debate the merits of that license, not the GPL.
All of the currently uploaded items in the docket: https://ia600209.us.archive.org/9/items/gov.uscourts.nysd.35...
> The GPL does not require you to release your modified version. You are free to make modifications and use them privately, without ever releasing them. This applies to organizations (including companies), too; an organization can make a modified version and use it internally without ever releasing it outside the organization. But if you release the modified version to the public in some way, the GPL requires you to make the modified source code available to the program's users, under the GPL.
Distributing the code within a company/organization does NOT constitute 'distribution', and does NOT require you to release your code.
Edit - and in the licence itself - Version 3 of the GPL
> The Program refers to any copyrightable work licensed under this License. Each licensee is addressed as you. Licensees and recipients may be individuals or organizations.

And
> To propagate a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
I learned relatively early that above some level (that's not even all that high), you'll find that you're dealing primarily with people whose sole purpose in life seems to be to serve their love of money and quest for validation. They are like insects drawn to a light and about as thoughtful.
Suffice to say that when you find that you have to play harder and harder to make up for the stress of your daily work life, it's time to take a look at what you really want and why you're doing what you're doing.
He is active on the Erlang mailing list. He wrote a cool C++ to Erlang interface. Here is his Github account:
I am happy for him. He is a great asset to the open source community.
I can't help smile at the thought of what non-CS people might mistake a "subversion repository" for.
"On 8 September 2005, the Seoul Central District Court ruled that the GPL was not material to a case dealing with trade secrets derived from GPL-licensed work. Defendants argued that since it is impossible to maintain trade secrets while being compliant with GPL and distributing the work, they are not in breach of trade secrets. This argument was considered without ground."
PS: If I remember correctly, hadn't Sergey resigned from GS by then? I thought he was just hanging around an extra week or two to train his successors.
It pretty explicitly details how the "backup" program he wrote had flags to select what to copy and some of those flags specifically copied GS option pricing code that he had never worked on.
There is a lot of collateral damage when they do things like this - damage that hurts them even when it comes to developers like me who've never written a single line of code that integrates with Google Checkout.
P.S. It's kind of funny that one of the alternatives they pushed at the time of the initial announcement was Braintree, who then went on to be acquired by PayPal. Shows you how fluid/active this space is.
No one in Ecommerce wants to touch their tech stack during the only quarter that matters to them.
Apparently the shutdown was announced in May. Regardless, it probably would have been better to shut it down in September than right before Cyber Monday.
I hope he updates the page now that we have this additional data point, I'd love to see how his predictions performed.
It sucks to have to close things down, but sometimes you do, and back in May they gave everybody 6+ months of notice to migrate; but being disingenuous about the best alternative is mealy-mouthed and "un-Googly".
* Merchants selling digital goods may transition to Google Wallet for digital goods
* Merchants selling through Google-hosted marketplaces (e.g. Google Play) will be unaffected
* Merchants selling physical goods will need to switch to third-party alternatives (see below)
We need stronger and tougher competition in the online payments space. PayPal seriously can't be the only globally working payment and checkout option around; maybe Google has a similar service that's stronger and more feature-packed up their sleeve? I hope so.
HN discussion: http://news.ycombinator.com/item?id=5740447
Is there any way that Google can even be thought of as not evil anymore?
I'm slowly shifting everything away from Google. I'm using an iPhone 3G as my main smartphone, have switched to DuckDuckGo as my search engine, and have moved my email to private hosting for anything confidential and Outlook for public email.
There's really nothing they could do, good or bad, to get me back at this point. They've proven, time and time again, that they don't care about the customer, and frankly, I'm tired of it.
The problem with their approach of having many half-finished products is that they were actually helping the competition: they were discovering and validating markets which were then captured by smaller companies.
And my prediction is that the following Google products will probably die out soon:
* Google App Engine - competition is improving at a rapid pace and it is hard to keep up without focus.
* Google Blogger - they will probably just make something similar under Google+
* Google Groups
There are probably more... but these are the ones I see.
However, I wouldn't recommend trying to learn from it in its current form, unless you already know a lot.
Just skimming over it, I found that it was teeming with grammatical errors and typos, with entire sentences garbled to the point where I couldn't tell what the authors had intended to say. I saw syntax errors in the code samples, which means that they weren't all verified to run as printed. Again, not a real problem except for beginners, but beginners are the target audience.
Finally, there is some utility to the concept of prerequisites, which the authors seem to avoid.
This book tries to teach the beginner everything that he might need to know, from what "ls" does in the shell to how to use git and set up virtual environments. Maybe it is practical to go from never having seen the command line to deploying working, secure Django projects just by using a (cleaned up version of) this book, but doesn't it make more sense to learn things in a more solid progression of stages? It's OK to expect the student to already know some basics, and build on those. You don't find a tutorial on arithmetic in a book about topology.
I've tried to get into both Rails and Django twice now (I'm a PHP guy usually), but every time I seem to get going, I get bogged down by StackOverflow after StackOverflow that have seemingly contradictory information or offer a third party solution rather than solving the problem within the framework.
In Rails, for example, Rails itself is easy enough, until you're dealing with RVM*, Passenger*, and installing the correct dev versions of database drivers.
In Django, you deal with South migrations when you want to update your database schemas, virtualenv*, virtualenvwrapper*; in fact, I've heard that one of the criticisms of Django is that, in order to get to production quality, you essentially have to switch out every component of it for a third-party one.
* The starred apps don't technically have anything to do with the framework; they're more utilities for managing multiple framework instances. Still, you're likely to find tutorials that use them as a de facto standard, which only adds to noob confusion.
I've started reading Michael Hartl's Rails tutorial, which seems promising. I found that the highly-recommended "Two Scoops of Django" book was a little too narrative for me (just tell me what to do, dammit!); there's definitely a need for more Django tutorials than just the Django website's basic one -- kudos to the author for that.
Official tutorials and docs don't cover nearly all of the accepted standard practices. As a relative outsider, it seems a lot of this Django/Python knowledge is taken for granted by the tightly knit community of skillful developers who interact with each other and exchange tips, while a beginner who is not embedded in the community misses out on all that and picks it up only when it's widely enough used that it hits the blogs and podcasts in bits and pieces.
I used to struggle with Django, so I started to look into Flask, and I really feel like I finally understood what I was doing.
Flask tutorial: http://flask.pocoo.org/
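(For a sense of why Flask feels graspable: its canonical quickstart is essentially this, give or take:)

    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def hello():
        return 'Hello, World!'

    if __name__ == '__main__':
        app.run(debug=True)  # development server only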
Which framework should I focus on if my priorities are professional development and ease of workflow? I realize this is a hard question to answer, but I'm interested in hearing different perspectives.
Quick comment: here, you are teaching render_to_response (http://www.tangowithdjango.com/book/chapters/templates_stati...) while you should be using render instead (https://docs.djangoproject.com/en/1.5/topics/http/shortcuts/...), which is simpler than render_to_response (where you have to use the horrible context_instance=RequestContext(request) to be able to do certain things in the template, which confuses a lot of people).
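(A minimal sketch of the difference; the view and template names are made up:)

    from django.shortcuts import render, render_to_response
    from django.template import RequestContext

    # Old style: context processors only run if you remember to pass a
    # RequestContext explicitly.
    def index_old(request):
        return render_to_response('index.html', {'items': []},
                                  context_instance=RequestContext(request))

    # Preferred: render() takes the request itself and does the
    # RequestContext plumbing for you.
    def index(request):
        return render(request, 'index.html', {'items': []})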
Thanks for this contribution to the django community!
So far I have made it to the Ajax page.
Why don't you use render in your views, and url tags with the name in urls.py?
Also, there are a lot of typos. Is the site open source?
Wondering why it didn't really get out there, though.
The rest of the website has more information about the journey, and an actual fake Skype client.
See page 4 [PDF]: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2...
For interoperability purposes, "reverse engineering is a legal practice, and the distribution of software using a protocol found in this way is not illegal".
This can be compared to letting people try to reverse engineer some indigenous language, in my eyes.
I don't know if it's just because the judges are more sane, but well done. I think I respect the French slightly more now :)