This is why Microsoft needs to keep building their own hardware like the Surface. As time goes on, if Microsoft does it right, Surface is going to be the best Windows experience. At least, I would hope so.
But both Windows and Linux are more than capable to get this all sorted. Like Google showed with the Nexus 7 - focus is all that's needed. It's just harder for Microsoft considering everything they need to juggle.
Edit: Fun fact: Apple's own Boot Camp drivers disable USB selective suspend on the 2013 Air! Check out powercfg /energy for more fun :)
Edit 2: The Surface 2 uses a Tegra 4 SoC, doesn't it? Microsoft is still limited by Tegra's power characteristics, as far as I can tell from Anand's review. So the integration story is better, but still no match for Apple.
Battery life varies greatly between e.g. Google Chromebook systems running the same software (and between Windows systems, for that matter).
Some of this has to do with the power usage of the CPU, whether video decoding is done in low-power hardware, or which wireless chipset is used. But just looking at the power usage manager on your Android phone will tell you that the screen uses most of the power.
Windows hardware varies from high-priced ultrabooks (where everything is sacrificed for shininess and performance) to bargain-bin systems where using an old backlighting technology saves a few bucks.
Question for Apple users: comparing Windows laptops to your Mac, which shipped with the more aggressive power-saving settings in terms of turning the screen off when not in use?
Also, there are some decent performance improvements that would mean less CPU usage (or shorter bursts of it, which is more power efficient).
Are there any benchmarks? Or is it still behind NDA?
The charts in the article are quite ridiculous: comparing an Ivy Bridge, actively cooled laptop-tablet to a Nexus 7? Why?
BTW, maybe the biggest difference is CPU core hotplugging; it exists in Android and iOS but not in Windows RT.
Maybe misconfigurations like these are also causing more power consumption than needed.
Now they are turning a full-fledged multi-user OS into a tablet OS. Let's make this tank into a bicycle. History tells us there will be a few painful years.
Any of #2 would impact both.
could explain this problem?
Windows RT on the Surface 2 appears to have better battery life than the Samsung Galaxy Tab (according to his chart).
In reality I get 2.5 hours maximum. It's so bad that I actually returned the first one I got, as I thought something was wrong with it. Nope, 2.5 hours is it. Not even enough for a half-day of working in a cafe.
So I guess 5-7h only applies with the screen turned off, no programmes running, and no wifi. Useful. :/
We see the comparisons: Surface <-> iPad, OS X MBA <-> Win7/Vista MBA
Surface has a different architecture than the iPad, so the battery difference is easily explained; and maybe driver support is just less than stellar, meaning the HW isn't as efficient and/or doesn't scale back quickly enough?
I had an Asus laptop some years ago that would last 3 hours under Vista and would be dead in the water in 1 hour under Ubuntu. I think it was GPU or CPU scaling (or both) that wasn't supported in the Linux drivers I was using.
In any 5-year period, Apple has a tiny list of exact hardware configurations OS X is designed to run on. It's so small, they even use the OS X software update mechanism to push BIOS updates! They have so much room to do better than Microsoft here it's barely even funny. This isn't an excuse for Microsoft's poor performance, but if you try to gloss over that fact, as Atwood does here, then you're omitting the full truth.
- I already keep notes in markdown using nValt. It's fast. And it's free.
- I sync these notes, which are just .markdown files in a directory on my machine, with Dropbox. Now I can edit these notes on my iDevices.
- Your service costs me money to do what I do for free.
- If I need more bells and whistles, I use Evernote. Evernote is also free for me.
- Emailing myself notes with tags in the subject line is also free.
Why should I spend $5/mo to use a digital journaling service, and then more time to make this service work with the rest of my workflow, when I already have things in place that take care of my note-taking problem?
I would need a great incentive to switch how I take notes. I'm thinking a bunch of other developers thought the same thing.
Finally, at least you shipped something. Nice retrospective. Keep at it.
Edit: It's interesting seeing this post and "Ask HN: How do you store and organize your startup ideas?" on the front page at the same time :)
You say you built a landing page with Mailchimp signups for your new product. Did you get many signups? And I suppose these were just signups, no money involved, right? Did a good chunk turn into paying customers?
I would love to know more.
It was a significant amount of money for this dev, but it could have been done even cheaper with the AWS free tier and an SSL cert from somewhere like PositiveSSL/Comodo.
If you start small, write glowing reviews of products and company directives so that you get invited to pressers and tours, and maybe someday you too can get invited to go work inside the very companies the public trusted you to cover in an unbiased fashion! Be careful though, anything under 4.5 out of 5 stars and you might get a phone call from the PR department expressing their disappointment while they take you off of their most exclusive lists.
Congresspeople who take jobs as lobbyists after their terms in office are the counterparts of journalists who join companies they covered "in the public's interest" during their professional careers.
(Disclaimer: I'm not saying that this was Pogue's position re: Yahoo. This is a rampant problem in new and old media in general.)
I guess it is showing a shift from Yahoo to be a quality content producer.
Is that true? Is it really visited more often than Facebook, Google, Twitter, or YouTube? Alexa doesn't agree.
According to a recent article, Yahoo is the most popular website in 2 regions: Japan and Hong Kong.
Voltaire said something like: "God, please protect me from my friends. I take care of my enemies."
Fun fact about that: this number is actually bigger than the French population (~66M).
NSA spook in black shades hands a USB stick to the minister: "Your info on Angela's strategy for the next EU summit."
French foreign minister (sotto voce): "Oh, thank you very much."
Snark aside: what may have ruffled French feathers is the fact that the NSA spied on politicians too. I bet they have lots of blackmail material.
"CryptoSeal Privacy Consumer VPN service terminated with immediate effect
With immediate effect as of this notice, CryptoSeal Privacy, our consumer VPN service, is terminated. All cryptographic keys used in the operation of the service have been zerofilled, and while no logs were produced (by design) during operation of the service, all records created incidental to the operation of the service have been deleted to the best of our ability.
Essentially, the service was created and operated under a certain understanding of current US law, and that understanding may not currently be valid. As we are a US company and comply fully with US law, but wish to protect the privacy of our users, it is impossible for us to continue offering the CryptoSeal Privacy consumer VPN product.
Specifically, the Lavabit case, with filings released by Kevin Poulsen of Wired.com (https://www.documentcloud.org/documents/801182-redacted-plea...) reveals a Government theory that if a pen register order is made on a provider, and the provider's systems do not readily facilitate full monitoring of pen register information and delivery to the Government in realtime, the Government can compel production of cryptographic keys via a warrant to support a government-provided pen trap device. Our system does not support recording any of the information commonly requested in a pen register order, and it would be technically infeasible for us to add this in a prompt manner. The consequence, being forced to turn over cryptographic keys to our entire system on the strength of a pen register order, is unreasonable in our opinion, and likely unconstitutional, but until this matter is settled, we are unable to proceed with our service.
We encourage anyone interested in this issue to support Ladar Levison and Lavabit in their ongoing legal battle. Donations can be made at https://rally.org/lavabit We believe Lavabit is an excellent test case for this issue.
We are actively investigating alternative technical ways to provide a consumer privacy VPN service in the future, in compliance with the law (even the Government's current interpretation of pen register orders and compelled key disclosure) without compromising user privacy, but do not have an estimated release date at this time.
To our affected users: we are sincerely sorry for any inconvenience. For any users with positive account balances at the time of this action, we will provide 1 year subscriptions to a non-US VPN service of mutual selection, as well as a refund of your service balance, and free service for 1 year if/when we relaunch a consumer privacy VPN service. Thank you for your support, and we hope this will ease the inconvenience of our service terminating.
For anyone operating a VPN, mail, or other communications provider in the US, we believe it would be prudent to evaluate whether a pen register order could be used to compel you to divulge SSL keys protecting message contents, and if so, to take appropriate action."
I ask because in a recent blog post from Silent Circle (a secure comms company), they explicitly state "we are not a U.S. firm". I'm beginning to think any company that wants to offer security products like this has to place its global HQ outside the US's legal jurisdiction. I doubt it solves all the problems, but it probably helps to some extent.
mike@glue:~$ wget -q -O - https://privacy.cryptoseal.com/ | gpg --verify
gpg: Signature made Mon 07 Oct 2013 12:38:07 BST using DSA key ID D2E0301F
gpg: BAD signature from "Ryan Lackey <email@example.com>"
mike@glue:~$
Amazon said they didn't want anything powered by FreeBSD in AWS. There are currently negotiations about running on a larger instance type that supports HVM and avoids the 'Windows tax', but there are significant usage fees for that tier (today that's the cluster compute and M3 instances) as well.
We could release a variant of the AMI as a "public" AMI. It wouldn't be in AWS then, but it would be available. If your account is new enough, it would allow a completely free VPN service on Amazon's "free tier".
It would also allow people to set up their own VPN service (OpenVPN and IPsec are both fully supported). Hosting on top of EC2 isn't perfect (there are possible key-recovery attacks from others hosted on the same infrastructure), but, correctly configured, law enforcement would need more than a pen register order to obtain anything beyond the enclosing IP packet data. Since, in theory, you would be your own provider, the FBI (or an equivalent in other EC2 zones) would face a higher burden to install even a pen register.
My question is: should we bother? Anyone with sufficient clue could set up a Linux instance to do the same thing.
Didn't they have an IPSec cert for each individual subscriber?
If not.. I wouldn't have wanted to go anywhere near them if they were using one keypair for all traffic.
Public-facing websites are usually dependent on a single server cert because they can't easily provide a separate client cert for everyone who visits. A private, subscription-based service should not be using that model and thus should not have encountered the 'Lavabit Paradox'.
Neither should Lavabit, but I digress.
Stanford is a diverse place. Just because a small number of students are dropping out to start companies doesn't mean that the end of the university is nigh.
Shouldn't it be a place to drift, to think, to read, to meet new people, and to work at whatever inspires you?
This interaction between industry and academia has long been one of Stanford's greatest strengths. There is no Silicon Valley without Stanford, and that didn't happen overnight or even lately.
Sure, CS is once again an absurdly popular major as it has been in the past. But with Stanford you don't have to choose between Harvard & MIT, you get both ['04 CS, color me biased].
Is Stanford a strong STEM school? Definitely. But if you take two seconds to actually look at the degrees that students end up earning (http://ucomm.stanford.edu/cds/2011.html#degrees), you get a picture of a much more well-balanced environment.
Clinkle's not worth taking seriously in any context, and its existence does not establish anything, much less "the end of Stanford".
Up with the Blue and Gold, down with the red!
The review article by Frank L. Schmidt and John E. Hunter, "The Validity and Utility of Selection Models in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings," Psychological Bulletin, Vol. 124, No. 2, 262-274 sums up, current to 1998, a meta-analysis of much of the huge peer-reviewed professional literature on the industrial and organizational psychology devoted to business hiring procedures. There are many kinds of hiring criteria, such as in-person interviews, telephone interviews, resume reviews for job experience, checks for academic credentials, personality tests, and so on. There is much published study research on how job applicants perform after they are hired in a wide variety of occupations.
EXECUTIVE SUMMARY: If you are hiring for any kind of job in the United States, with its legal rules about hiring, prefer a work-sample test as your hiring procedure. If you are hiring in most other parts of the world, use a work-sample test in combination with a general mental ability test.
The overall summary of the industrial psychology research in reliable secondary sources is that two kinds of job screening procedures work reasonably well. One is a general mental ability (GMA) test (an IQ-like test, such as the Wonderlic personnel screening test). Another is a work-sample test, where the applicant does an actual task or group of tasks like what the applicant will do on the job if hired. (But the calculated validity of each of the two best kinds of procedures, standing alone, is only 0.54 for work sample tests and 0.51 for general mental ability tests.) Each of these kinds of tests has about the same validity in screening applicants for jobs, with the general mental ability test better predicting success for applicants who will be trained into a new job. Neither is perfect (both miss some good performers on the job, and select some bad performers on the job), but both are better than any other single-factor hiring procedure that has been tested in rigorous research, across a wide variety of occupations. So if you are hiring for your company, it's a good idea to think about how to build a work-sample test into all of your hiring processes.
Because of a Supreme Court decision in the United States (the decision does not apply in other countries, which have different statutes about employment), it is legally risky to give job applicants general mental ability tests such as a straight-up IQ test (as was commonplace in my parents' generation) as a routine part of hiring procedures. The Griggs v. Duke Power, 401 U.S. 424 (1971) case interpreted a federal statute about employment discrimination and held that a general intelligence test used in hiring that could have a "disparate impact" on applicants of some protected classes must "bear a demonstrable relationship to successful performance of the jobs for which it was used." In other words, a company that wants to use a test like the Wonderlic, or like the SAT, or like the current WAIS or Stanford-Binet IQ tests, in a hiring procedure had best conduct a specific validation study of the test related to performance on the job in question. Some companies do the validation study, and use IQ-like tests in hiring. Other companies use IQ-like tests in hiring and hope that no one sues (which is not what I would advise any company). Note that a brain-teaser-type test used in a hiring procedure could be challenged as illegal if it can be shown to have disparate impact on some job applicants. A company defending a brain-teaser test for hiring would have to defend it by showing it is supported by a validation study demonstrating that the test is related to successful performance on the job. Such validation studies can be quite expensive. (Companies outside the United States are regulated by different laws. One other big difference between the United States and other countries is the relative ease with which workers may be fired in the United States, allowing companies to correct hiring mistakes by terminating the employment of the workers they hired mistakenly. 
The more legal protections a worker has from being fired, the more reluctant companies will be about hiring in the first place.)
The social background to the legal environment in the United States is explained in various books about hiring procedures, and some of the social background appears to be changing in the most recent few decades, with the prospect for further changes.
Previous discussion on HN pointed out that the Schmidt & Hunter (1998) article showed that multi-factor procedures work better than single-factor procedures, a point we can find summarized in the current professional literature, for example in "Reasons for being selective when choosing personnel selection procedures" (2010) by Cornelius J. König, Ute-Christine Klehe, Matthias Berchtold, and Martin Kleinmann:
"Choosing personnel selection procedures could be so simple: Grab your copy of Schmidt and Hunter (1998) and read their Table 1 (again). This should remind you to use a general mental ability (GMA) test in combination with an integrity test, a structured interview, a work sample test, and/or a conscientiousness measure."
But the 2010 article notes, looking at actual practice of companies around the world, "However, this idea does not seem to capture what is actually happening in organizations, as practitioners worldwide often use procedures with low predictive validity and regularly ignore procedures that are more valid (e.g., Di Milia, 2004; Lievens & De Paepe, 2004; Ryan, McFarland, Baron, & Page, 1999; Scholarios & Lockyer, 1999; Schuler, Hell, Trapmann, Schaar, & Boramir, 2007; Taylor, Keelty, & McDonnell, 2002). For example, the highly valid work sample tests are hardly used in the US, and the potentially rather useless procedure of graphology (Dean, 1992; Neter & Ben-Shakhar, 1989) is applied somewhere between occasionally and often in France (Ryan et al., 1999). In Germany, the use of GMA tests is reported to be low and to be decreasing (i.e., only 30% of the companies surveyed by Schuler et al., 2007, now use them)."
Before the interview, I ask them to write some code that accesses an HTTP endpoint containing exchange rate data (USD, EUR, GBP, JPY, etc.) in XML, and parses and loads that data into a relational database. Then, to build a very simple HTML-form front end that lets you input a currency and convert it into another currency.
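A sketch of the shape of solution I'd expect for the parse-and-load half (the endpoint, XML layout, and schema here are made up for illustration; the real exercise fetches the feed over HTTP):

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical XML payload -- in the real exercise this would be fetched
# from the HTTP endpoint with urllib or requests.
RATES_XML = """<rates base="USD">
  <rate currency="EUR">0.92</rate>
  <rate currency="GBP">0.79</rate>
  <rate currency="JPY">149.50</rate>
</rates>"""

def load_rates(db, xml_text):
    """Parse the rate feed and upsert it into a relational table."""
    db.execute("CREATE TABLE IF NOT EXISTS rates (currency TEXT PRIMARY KEY, usd_rate REAL)")
    root = ET.fromstring(xml_text)
    db.execute("INSERT OR REPLACE INTO rates VALUES ('USD', 1.0)")
    for node in root.findall("rate"):
        db.execute("INSERT OR REPLACE INTO rates VALUES (?, ?)",
                   (node.get("currency"), float(node.text)))

def convert(db, amount, src, dst):
    """Convert between two currencies, using USD as the pivot."""
    (src_rate,) = db.execute("SELECT usd_rate FROM rates WHERE currency = ?", (src,)).fetchone()
    (dst_rate,) = db.execute("SELECT usd_rate FROM rates WHERE currency = ?", (dst,)).fetchone()
    return amount / src_rate * dst_rate

db = sqlite3.connect(":memory:")
load_rates(db, RATES_XML)
result = round(convert(db, 100, "EUR", "GBP"), 2)
```

The HTML form on top of this is a few lines in any web framework; the conversion logic and the database round-trip are the part worth reviewing.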
I ask them to send me either a link to a repository (Git, SVN etc.) or a zipball/tarball. If the job specifies a particular language, then I obviously expect it to be in that language. If not, so long as it isn't in something crazy like Brainfuck, they have free rein.
If the code works and is basically sane, that goes a long way to get them shortlisted.
During the interview, I'll pull up the code they sent on a projector and ask them to self-review it. If they can figure out things that need improving in their code, that weighs heavily in their favour. Usually these are things like comments/documentation, tests, or improving the structure or reusability. If it's really good, I'll throw a hypothetical idea for refactoring at them and see how they think.
The reason this works is that, despite Hacker News/Paul Graham dogma to the contrary, "smartness" isn't the only thing that matters in programmers. It's actually fairly low down the list. When hiring programmers, I want people who are actually able to do the daily practical job of writing code, modest and self-critical enough to spot their own mistakes, and socially capable enough to actually communicate their decisions and mistakes to the people they work with.
I interviewed a guy who was intellectually very smart and understood a lot about CS theory. I asked him why the PHP code he sent me didn't have any comments. "I don't believe in comments because they slow the PHP interpreter down." Sorry, he can be smarter than Einstein but I ain't letting him near production code.
One example I use is having the candidate write CRUD, list, and search controller actions for a simple category data structure. Given a basic category data model (e.g. Name, Parent), the candidate starts with the CRUD actions.
CRUD actions aren't meant to be difficult to solve and serve as a basic screener to verify the candidate has working knowledge of the basics. The only edge case I look for the candidate to ask about is whether orphaning child nodes is allowed (i.e., updating a parent node, or deleting a node with children).
The list action(s) start getting more interesting, since recursion comes into play. A basic implementation of an action that can load the tree given an arbitrary category as a starting point is expected. If the candidate has some prior experience, a follow-up question is what performance concerns they may have with loading the category tree. The tree-loading algorithm is then expected to be revised to handle an optional max-depth parameter. An edge case I look to be considered is how to signify in the action response that a category has one or more child nodes that weren't loaded due to the depth restriction.
The search action implementation has a degree of difficulty scaled to the candidate's experience level. All candidates have to write an action that returns a collection of categories matching a search string. Those with previous experience are asked about a paging solution. Senior-level candidates are asked to return matching categories in a format that indicates all ancestors (for instance, a "Category 1 -> Category 1.1 -> Category 1.1.1" result for search string "1.1.1").
For an added degree of difficulty, candidates can be asked to recommend data model tweaks and algorithms to support tree-versioning requirements, allowing the category tree's state to be loaded at a given point in time.
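A minimal sketch of the tree-loading and ancestor-path pieces of this exercise, with an in-memory dict standing in for the category table (names and layout are invented):

```python
# Hypothetical flat category table: id -> (name, parent_id)
CATEGORIES = {
    1: ("Category 1", None),
    2: ("Category 1.1", 1),
    3: ("Category 1.1.1", 2),
    4: ("Category 1.2", 1),
}

def children_of(parent_id):
    return [cid for cid, (_, pid) in CATEGORIES.items() if pid == parent_id]

def load_tree(root_id, max_depth=None):
    """Recursively load the subtree rooted at root_id. When the depth
    limit cuts recursion short, the node is flagged so the response can
    signify that unloaded children exist."""
    name, _ = CATEGORIES[root_id]
    node = {"id": root_id, "name": name, "children": [], "truncated": False}
    kids = children_of(root_id)
    if max_depth is not None and max_depth <= 0:
        node["truncated"] = bool(kids)
        return node
    next_depth = None if max_depth is None else max_depth - 1
    node["children"] = [load_tree(c, next_depth) for c in kids]
    return node

def breadcrumb(category_id):
    """Render a category with all its ancestors, senior-level style."""
    parts = []
    while category_id is not None:
        name, parent = CATEGORIES[category_id]
        parts.append(name)
        category_id = parent
    return " -> ".join(reversed(parts))
```

For example, `load_tree(1, max_depth=1)` loads the two direct children and marks "Category 1.1" as truncated, and `breadcrumb(3)` yields the "Category 1 -> Category 1.1 -> Category 1.1.1" format mentioned above.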
The candidate's performance on this exercise seems to give some insight into their level of experience and their ability to implement algorithms for a common real-world example, without having to ask much trivia or many logic problems.
1) I think a lot of start-ups want to hire "smart" people because they expect the new person to eventually wear many hats: Objective-C, Java, Android, CSS, server-side concurrency, monitoring. And we've all seen the Hunter and Schmidt reference that tokenadult usually posts when talk of interviewing comes around, and it does seem that a general mental ability test (like an IQ test) combined with a work sample predicts the future performance of an employee. Well, except that one can't just straight-up give an IQ test to job applicants (there is a court case about that). So we are left with a work sample (which many forget to give, as the author points out). But instead many focus on the GMA and create proxies for it: cute little puzzles about blenders, round manhole covers, and other such silly things.
2) Those interviewing don't know the technical stuff and are afraid you'd out-bullshit them. "How does an Ajax request work?" Well, if the interviewers themselves don't quite know the details, they might not be able to evaluate the answer properly. They could have it written down, but some technical questions have many different levels of depth that a candidate might descend to, so a quick written answer might seem wrong when really the candidate is just more advanced. So puzzles seem generic and "easier" to handle.
This problem was addressed nicely in this functional pearl by Jeremy Gibbons, et al.: http://www.cs.ox.ac.uk/jeremy.gibbons/publications/rationals... . As interesting as the result is, however, it's a pretty well-made point that research-level ideas from the programming languages community are not really software engineering interview material in the vast majority of cases.
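For the curious, the punchline of that line of work (enumerating the positive rationals, each exactly once, with no duplicates) fits in a few lines; here's a Python rendering of Newman's recurrence for the Calkin-Wilf order:

```python
from fractions import Fraction
from itertools import islice

def rationals():
    """Enumerate every positive rational exactly once, in Calkin-Wilf
    order, via Newman's recurrence: next = 1 / (2*floor(q) - q + 1)."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * (q.numerator // q.denominator) - q + 1)

first_six = list(islice(rationals(), 6))  # 1, 1/2, 2, 1/3, 3/2, 2/3
```

Elegant, yes — and exactly the kind of thing you'd look up rather than re-derive under interview pressure.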
This is yet another example of "rockstar developer"-itis, wherein startups are given to believe that they need the best of the best when in fact they do not. This particular example is entirely egregious because they asked her about something that requires enumerating the rationals when what they really wanted was an iOS code monkey. Then they fired her, based on their own shoddy interview.
Certainly, asking only math questions is stupid as well, people should know at least a little about the stuff they're supposed to work with, but teaching an actual language to a smart person eager to learn is a breeze compared to teaching problem solving to someone who memorized the reference manual.
If you really want to know if someone has the capacity to pull their weight as an engineer, ask them about what they've built. Even if they are fresh out of college, the best engineers will have projects they can talk about and explain. Ask how they approached/solved specific problems. Ask what they're most proud of building. Ask what was most frustrating.
Those are the kind of questions that will provide insight into a person's problem solving capabilities and offer a decent picture of what they're capable of doing.
Interviewer: "How can we optimize the character replacement in a string such that we use no extra memory?"
Me: "We do this and that and this. But, should we consider what situations we would need this optimization?"
Interviewer: "What? Why?"
I can now use this as a filter as I interview organizations. Optimizing algorithms by creating your own core data structure classes (instead of using the built-in ones) is great in certain circumstances, but an absolute waste of time in many others. And if you're not going to ask me about those times when making those improvements is important, then you're not asking questions for a programmer -- you're asking them for a theoretician who can recall syntax.
It's poor practice, and I've seen it everywhere.
This would test a programmer's ability to learn a new language.
Being a developer is 80% Google and 20% actual coding knowledge. We are hackers at the end of the day, not miniature Einsteins with encyclopaedias for brains.
Probably because the only person who doesn't lose from this is the interviewer: they get to have fun. Honestly, when you spend all day buried in code, it's fun to play with puzzles for a change.
Perhaps it's time we started optimizing interviews for hiring success rather than interviewer happiness.
I believe this is deeply valuable. For some roles, I would much prefer to hire someone who can quickly see the value of breadth-first search from both ends.
If he/she doesn't happen to know the syntax of Ruby, or Java, etc. it's less important to me.
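A rough sketch of what "breadth-first search from both ends" looks like in practice (the graph representation and function names here are mine, not the parent's):

```python
from collections import deque

def bidirectional_bfs(graph, start, goal):
    """Shortest path length in an unweighted graph, searching from both
    ends. Expanding the smaller frontier keeps the two searches balanced,
    so roughly 2*b^(d/2) states are touched instead of b^d."""
    if start == goal:
        return 0
    dist_s, dist_g = {start: 0}, {goal: 0}
    frontier_s, frontier_g = deque([start]), deque([goal])
    while frontier_s and frontier_g:
        # Pick the smaller frontier to expand next.
        if len(frontier_s) <= len(frontier_g):
            frontier, dist, other = frontier_s, dist_s, dist_g
        else:
            frontier, dist, other = frontier_g, dist_g, dist_s
        best = None
        for _ in range(len(frontier)):  # expand one full level
            node = frontier.popleft()
            for nbr in graph.get(node, []):
                if nbr in other:  # the two frontiers met
                    cand = dist[node] + 1 + other[nbr]
                    best = cand if best is None else min(best, cand)
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    frontier.append(nbr)
        if best is not None:
            return best
    return -1  # disconnected
```

The point in an interview isn't reciting this code; it's seeing that searching from both ends shrinks the explored state space, and being able to talk through why.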
I just don't have the experience or tools or interest for them.
And yet, somehow, in 20 years of business geekery I've never come across a problem I can't solve.
Maybe when writing Tetris for J2ME I would have saved myself 10 minutes of googling if I'd had the experience to realise that right-angle matrix transformations don't require floating-point maths, and maybe when writing financial indicators I'd have saved myself half a day if I hadn't had to look up integrals, but this sort of stuff is definitely in the minority as far as my experience goes.
The position I was filling is a part-time position for a CS major, sort of like an internship. I devote time to develop his/her skills, s/he would get real-world experience, and a little money to help with cost of living. If everything works out, a position could open up for full employment.
I had a pretty good idea what I was looking for: someone with a good grasp of theory but no coding experience, preferably enrolled at university. I had 5 applicants, but the only candidate I interviewed is enrolled in Math-CS.
I basically tried to gauge whether he had deep interests and asked him to code a bit: solve a simple task (find the article with the highest hit count from the day a week ago; I gave him 10 minutes).
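For reference, the little exercise could be answered with something like the following (schema and data invented; a fixed reference date stands in for "today"):

```python
import sqlite3

# Hypothetical articles table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (title TEXT, hits INTEGER, published TEXT)")
db.executemany("INSERT INTO articles VALUES (?, ?, ?)", [
    ("Old news",  500, "2013-10-14"),
    ("Big story", 900, "2013-10-14"),
    ("Yesterday", 950, "2013-10-20"),
])

# Article with the highest hit count from the day exactly a week before
# the reference date (in real use: date('now', '-7 days')).
row = db.execute("""
    SELECT title, MAX(hits)
    FROM articles
    WHERE published = date('2013-10-21', '-7 days')
""").fetchone()
```

A one-table query with a date filter and an aggregate — about the right size for a ten-minute screen.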
He failed the coding test, but I made the hire regardless, because of two things out of the four hours we spent together. When I asked him who he considered the father of CS, he rattled off von Neumann, Dijkstra, and Knuth. Yeah, you can make that argument I suppose, but he knew who the influential people were. The other thing: even though he failed the coding test, he failed it by not reading the code examples quite right; he was using my code to try to help himself solve the problem. I'm sure he'll work out.
We as a field should employ internships a lot more than we do, get the college kids and undergrads working on real-world problems a lot more than we do.
For example: "This database contains 100,000 problems with standardized parameters. The problem definition is defined in the file spec.txt which you can grab from our code repository. Write the code to solve these problems efficiently, passing each solution to a remote service via POSTing to a REST API, the documentation for which you can find here. Bonus points for parallel execution. Feel free to use any editor/IDE and reference online documentation, Stack Overflow, etc. that you want. If anything's not clear or you need a hand with something, just ask as you would if you were an employee already. Ready to get started?"
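A sketch of the skeleton such an exercise invites (the problem format and API here are invented, and the POST is stubbed out so the parallel structure is what's on display; swap post_solution's body for a real requests.post against the documented endpoint):

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Stand-in problem set: each "problem" is a list of integers and its
# "solution" is their sum. The real spec.txt would define the format.
PROBLEMS = {i: list(range(i + 1)) for i in range(100)}

def solve(problem):
    return sum(problem)

posted = []

def post_solution(problem_id, solution):
    # Stand-in for requests.post(API_URL, json=payload).
    payload = json.dumps({"id": problem_id, "solution": solution})
    posted.append(payload)  # list.append is atomic under CPython's GIL

def main():
    # Bonus points: solve and submit in parallel with a thread pool.
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(lambda p=pid, pr=prob: post_solution(p, solve(pr)))
                   for pid, prob in PROBLEMS.items()]
        for f in futures:
            f.result()  # surface any worker exceptions

main()
```

Even a skeleton like this surfaces a lot: how the candidate structures work units, whether they think about error handling, and whether "parallel" means anything more to them than a buzzword.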
The great thing is that once you've identified a candidate, you can do remote screen sharing and have them write code before they even have to come into the office. I've interviewed a fair number of remote people this way and it's excellent for weeding out the people who can talk the talk but can't program worth a damn. And it limits bias because you don't care about much beyond their communication ability plus their technical ability.
OK, so there is a difference between computer science and programming. That's why there are two different Stack Exchanges:
it's actually really fucking INCREDIBLE that:

* you can know tons of CS without being able to build a decent app
* you can build a decent Facebook clone without having any idea how it works
I feel really bad for Emma. I was a math major, but app developers won't even look at me b/c I'm not a full-stack whatever. So now I'm a Data Scientist at an advertising firm in Puerto Rico.
Among people who can code, don't you think that those who understand math can do something deeper (e.g. prove to themselves that code does what they think) than those who can't?
I suppose it would be fair to exclude math majors without a CS background as false positives, and make them prove they can code as well. But the OP's CS degree ("So I became a math and CS double-major") should have more than taught them how to code.
If you have a major in CS then you should know how to code. It sounds like the real thing that let OP down is receiving the title of CS major without the skills and understanding to back it up. How did this happen?
How did the OP become any less qualified than any other CS major? Was it by taking math classes instead of database classes to fulfill the requirements?
It sounds like OP was a bit let down by the school's granting of a major in CS (as opposed to a minor), and by employers using it as a false signal to not even check for code: of course, if someone is a CS major, you won't make them actually code anything up. You just want to know if they can think or not.
Does this describe a CS major to you:
Some more relevant things they could've asked me, but didn't:
How comfortable are you with Unix?
I can change directories, list things, and run Python programs. Oh! And rename files. Probably.
Describe how Ajax calls work.
Umm, isn't that what Gmail uses? It's like, refreshing the page without refreshing the whole page?
What version control systems are you familiar with?
Oh, we used SVN for our senior thesis project. I accidentally triggered a conflict one time. Someone fixed it for me.
How would you implement a [deck of cards, public garage, hotel reservation system] using object-oriented design?
Variables. Variables everywhere. With counts. And maybe some strings.
Absolutely anything at all about databases.
It sounds like her school seriously should consider their requirements for a double-major!
The likelihood of failure of a startup approaches 100%, so you should optimize for likelihood of survival, not for IQ.
If you're not a startup, then the top ranked comment applies. But it doesn't really otherwise.
That aside, one must have a way to measure the abilities of a candidate -- and asking the same set of questions to many people allows you to compare the answers as apples to apples.
I generally don't restrict my people from asking any particular question, but I will ask them to consider what a failed answer really means for the specific job (questions are generally adjusted then).
As an aside, some questions of mine that aren't specifically about coding:
* do you code outside of work (a love of coding translates to good coders)
* send me a link to some code you've written that you are proud of (let's see what you've got)
* tell me about a problem you had where your solution wasn't correct (how have you dealt with failure).
But this one talks about getting the inadvertent benefit of being good at maths when being selected for programming jobs, and suffering the consequences later on.
Also, it highlights the importance of what is mostly taken for granted and thought of as the mundane stuff of programming - the idiosyncrasies, jargon, and best practices of various languages and OS environments.
Because of this I've pretty much given up on hiring graduates based on their technical skills so instead I'm looking for someone smart, who gets that they've got a lot to learn, who is interested in technology and can get on with the other people in the team.
I don't think asking people math questions per se is a great idea, but if you've studied a maths degree it's a good way of working out if you're smart and if you were paying any attention at all during university.
(Incidentally this may be different in other countries (I'm in the UK) or in a company where you're able to attract the very best who have picked up really solid skills, but for most organisations that's not the case as most graduates spent more of their own time in the bar than coding.)
Check out the last technical interview task that I got:

```
Objective: Write a program that prints out a multiplication table of
the first 10 prime numbers. The program must run from the command line
and print one table to the screen.

Notes:
- DO NOT use a library method for Prime (write your own)
- Use Tests. TDD/BDD
- IMPRESS US.
```
I mean I can impress you but how will this correlate with production code?
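For what it's worth, the task itself fits in a dozen lines; here's a minimal sketch in Haskell (trial division for the primality test, since library methods are off-limits), with tests and the "IMPRESS US" part left as an exercise:

```haskell
-- Trial-division primality test (no library method, per the task).
isPrime :: Int -> Bool
isPrime n = n > 1 && all (\d -> n `mod` d /= 0) [2 .. isqrt n]
  where isqrt = floor . sqrt . (fromIntegral :: Int -> Double)

-- The first 10 primes.
primes :: [Int]
primes = take 10 (filter isPrime [2 ..])

-- The 10x10 multiplication table.
table :: [[Int]]
table = [[p * q | q <- primes] | p <- primes]

-- Print one space-separated table to the screen.
main :: IO ()
main = mapM_ (putStrLn . unwords . map show) table
```

Which rather makes the point: none of this says anything about how the candidate handles production code.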
I would (and have) asked if the interviewer or organization has any evidence to show that interview puzzle performance (or shit like Myers-Briggs) predicts job performance. No? Not surprising. Google did look into it and found no relationship. (http://www.businessinsider.com/how-google-hires-2013-6)
Programmer interviews are so crazy and sometimes sadistic that I catalogued some of the more common interview patterns:
Anyone who supports math puzzles (or whatever else) in an interview would have to argue that their perception of the candidate's performance offers a clear enough data point that it doesn't dilute other information available to them. Given Google's study finding data otherwise, they certainly have the burden of proof.
The irony is that, in an effort to hire the "smartest" people, they leave out the wisest. Which is arguably more useful.
- After a first non-technical call, we ask the candidate to create a very small project based on our SDK. We send him the documentation and a very small sample. He can use almost any tools he wants to create that small project and, of course, we do not set any deadlines. It allows us to see how the candidate architects his applications, and it gives us a project to discuss during the following call.
- If all goes well, we invite the candidate on site to present the code/project and eventually brainstorm together, so that both parties can see if they can work together, and the candidate gets an insight into how we work and what our code looks like.
Clearly, it's far from perfect and we are often considering changing it. Imagine if every company you applied to asked you to create an app from scratch with their SDK! We may lose some candidates, but at least we hire only people that fit the company's culture.
I tend to hate the interviews that ask me to solve math and logic brainteasers because I don't see the value in them regarding my knowledge of programming.
Programming isn't difficult and you don't need to know complex maths or be able to solve mind bending puzzles to be a great developer.
Yet, I have never had the balls to pursue it professionally. I build stuff and usually never launch it. I have learned several times over that marketing is not my strong suit.
That said, I'd actually like to work for a startup. Hit me up if anyone wants to talk.
Not long ago, Facebook measured 4.74 degrees of separation on its network, meaning that on average only about 4.74 hops are needed to connect any two random people on the network.
You can also find an article on Wikipedia about the "Kevin Bacon" reference.
Stop asking this fine young lady math puzzles to determine her programming abilities. She is good at solving your seemingly pointless math puzzles because she has been practicing problem-solving since she was ten. But she is not anywhere near as good at programming yet - which is what caused her problems at the actual jobs she had to do after she was hired.
I really understand that a startup with scarce resources would like to take its best shot. However, as discussed long ago (https://news.ycombinator.com/item?id=2385424), it is really frustrating that asking math puzzles is assumed to be the best way to hire the best person for the job.
This is like solving your submarine problem. Jeez.
Isn't XY years of track record in the same field of interest, working for successful companies, a good sign that I can code?!
Ask me theory - pay me to code.
E.g. if somebody hires John Carmack (id Software), nobody will make him do a math test or ask him trivial programming questions.
But you are not John Carmack ;-)
It is like in every other job: if you are not a rockstar you are nobody.
One of the comments at the direct link to the film in the archive is shocking in its ignorance and bigotry. On the whole, viewing this film is a good discussion-starter for thinking about current issues.
I have lived in another country (Taiwan) when it was a dictatorship so I appreciate genuine democracy. In the community I live in now in the United States, there is a high degree of general respect. And I have substantial practical power to resolve social wrongs and improve the well being of my family and myself. I'm glad to report that Taiwan, the former dictatorship I lived in, has since become a genuine democracy, and also rates well as to respect and power to all the people.
EDIT: Also, it's kind of scary how topical this video is.
Organizational metrics, which are not related to the code, can predict software failure-proneness with a precision and recall of 85 percent. This is a significantly higher precision than traditional metrics such as churn, complexity, or coverage that have been used until now to predict failure-proneness. 
Also, how about appropriately titling the link instead of paraphrasing the opening line?
I think I found that method.
Run `gpedit.msc`. Navigate to:
Computer Policy > Administrative Templates > Windows Components > Bitlocker Drive Encryption > Operating System Drives
Allow Secure Boot for integrity validation
Use enhanced Boot Configuration Data validation profile
But of course the option to disable secure boot was grayed out, and it took me some searching on the Internet to find a solution: first you have to go to the key management window, and delete all the keys there. Then you're allowed to turn off secure boot.
If they wanted it to be usable, they'd just offer me an option 'boot once without secure boot' (and ask for the BIOS/EFI administrator password if set).
After this experience, my hypothesis is that the main purpose of "secure boot" is to discourage the user from installing anything non-default (aka Linux).
In particular, the bit about wanting to dual-boot WP8 on Android phones: https://news.ycombinator.com/item?id=6497126
Oh. I know what a good word for it is: douchey.
Maybe that's because a prominent reason to disable secure boot is to use Windows Loader to pirate Windows. Of course, there may be an updated version soon that tricks Windows into thinking it booted securely.
Because an important security feature (in their eyes, DUH) is disabled.
> If a message is needed, at all, then why not display it on the System View Basic Information about your computer control panel item.
Instead of fixing this -- and to maintain backward compatibility -- they've always applied security models further up the tree, closer to the apps and the user. As a result MS has more and more complex security controls but is less secure. This complexity and security bloat results from trying to patch a boat that's full of holes in its fundamental design.
Secure boot is needed for the same reason lots of other controls are needed-- to make it harder to permanently screw the system once you've gotten malware onto it. This is so important because it is historically so easy to get malware onto Windows.
In today's world of wonderfully powerful machines, a Windows XP or Windows 7 installation in a seamless VirtualBox machine will solve most Windows-related problems: proprietary apps at work, a photobook-creation Windows app, or maybe an old game or so. For everything else there's at least one Linux distribution that works.
I install Mint 15 Mate for my retirees and with it they can surf, bank, write e-mail and word process. After installing it I never hear about viruses, trojans or weird popups telling them that something is out of date.
Now if only younger people would have the courage to try something other than Windows for once. Unfortunately you won't be playing the latest Call of Duty or Madden 2047 there, but them's the breaks.
Then when they came out they might really believe it and be thrilled. This would help for the next visit. Though it might be disappointing to figure out years later.
Lay in this tunnel. You're immobilized and can't move. Oh and no one can hear you call for help because of the loud jack-hammer like sounds. Wear these head-phones and we'll blast music into them so you'll feel better. Squeeze this little thing if you panic and we may come and help you.
I've had two MRIs when I was a child. For some reason the "thunks" are relaxing to me and I fell asleep during both procedures. I even pressed the "panic button" by mistake when I had a sleep spasm! MRIs are nap time for me :)
To reduce the noise, I wonder why they haven't investigated active noise cancelling like the engine mounts used by some auto makers. The computer introduces waveforms that are the inverse of the frequencies they want to reduce. Or at least added mass to panels that conduct noise, using material like Dynamat eXtreme (common in car audio).
MRI machines are incredibly loud and "thunk" repeatedly.
On topic: His solution was pretty smart. One of those elegant solutions that seem obvious in retrospect. Hopefully it helps make the process a lot less stressful for some kids.
I see opportunities for gamification quite often. If we took any tedious or daunting task and broke it down into a fun, easy, and simple problem for everyone, society in general would benefit from harvesting a lot of wasted productivity.
Instead of seeing dozens of people on the subway playing Candy Crush, if they enjoyed answering questions on StackOverflow just as equally, how much faster could we advance our knowledge and solve unique problems?
It's the same course/series, but with interactivity, so Haskell can be coded/evaluated from the browser. In fact, one "dir" up you will find a bunch of similar tutorials: https://www.fpcomplete.com/school.
First of all, you should be using the `null` function instead of comparing with `==`, because the `==` operator only works if your list contents are an instance of Eq.
But the most important thing is that pattern matching is more type safe. If you use `head` and `tail`, you, as a programmer, need to make sure that you only call them on non-empty lists, or else you get an error. On the other hand, if you use pattern matching, the compiler helps you make sure that you have covered all the possible cases (empty vs non-empty), and you never need to worry about calling the unsafe `head` and `tail` functions.
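A tiny illustration of that contrast (the function names and example are mine, not the parent's):

```haskell
-- Partial style: correctness depends on the null check guarding every
-- use of head/tail; refactor it away and `head []` crashes at runtime.
sumUnsafe :: [Int] -> Int
sumUnsafe xs = if null xs then 0 else head xs + sumUnsafe (tail xs)

-- Pattern-matching style: total by construction. Compiling with
-- -Wincomplete-patterns makes GHC warn if a case were missing.
sumSafe :: [Int] -> Int
sumSafe []         = 0
sumSafe (x : rest) = x + sumSafe rest
```

Both compute the same sum, but only the second lets the compiler check that every shape of list is handled.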
I'd still say the type system is there to help you in C, C++, and Java, it just doesn't do nearly as good a job of it, and winds up in your way more often because it's less expressive.
What I really really need is something which walks me through doing something significant with Haskell - like, a GUI app on Linux or something (my current focus: I've never really done it, but if I'm learning something new I'd like there to be a practical product at the end).
A bunch of language constructs, while technically interesting, don't help me to grok the language at all.
Did this stop you, or how did you get past it?
How does Learn Haskell Fast and Hard compare?
At university, the first thing everyone had to do (in programming) was a Haskell course. It felt weird at the time, but in hindsight it was fantastic. It meant everyone had to throw their preconceptions about programming out the window.
It didn't occur to me until recently (10-15 years later), that functional concepts are actually a good thing to apply in any language; that it makes code parallelizable, modular, maintainable, testable, and so on. I just thought functional was functional (i.e. elegant but hard) whereas imperative was imperative (inelegant but easy). Much like the difference between algebra and arithmetic.
So go learn a second language, or even a third. Even if you intend to speak English and Java for the rest of your life. I'd choose Haskell and Spanish.
Off topic but does anyone know of a Rails/Play + Linq to SQL/ScalaQuery equivalent in Haskell?
Beyond that just being able to generate PDF invoices, send out emails and have access to a decent date/time library (like JodaTime) would cover the essentials for web development.
The selection of artwork is pretty nice too.
Very very good. Thank you author and poster.
Although I am not sure about the premise - I doubt Haskell, as a language close to mathematics, can be learned fast. This tutorial seems quite shallow on some things, like monads.
supports all major distros: openSUSE, Fedora, Debian-based, even Arch.
I hope people will see the importance of not putting all of your eggs in one authentication basket.
As with all tech media discussion about patents these days, this article talks about what the patent "covers" without so much as an idea of what claims are.
For me, Apple has already moved into "prefer competitors' products" territory, but they keep pushing me closer to all-out boycott.
If the patent is upheld, and Apple, with the hate it has for Android, tries to ban Android phones, then in the end all it does is hurt the users who want a decent alternative to Apple products. People expect multi-touch on their phones, and non-physical keyboards. I can just see my mom asking me why her new Android phone doesn't swipe to scroll when her last phone did.
Such a shame.
It is so frustrating, knowing of Microsoft Surface's prior work with multi-touch and pinch-to-zoom before the iPhone.
I would love all Android phones to just stop being sold in protest immediately. Though this would never happen. The very idea that this was something invalid in September and now passed through smells of some outside pressure.
The only two companies that this would apply to are Samsung and Google/Motorola.
Procog too much?
I found the dialup number to the MIT media lab and tried logging in as 'RMS', and voilà - no password, and I had my first shell account on an internet-connected Unix machine, although I was only a teenager and didn't attend MIT.
RMS's act of charity benefited me greatly, I was relatively poor growing up in inner city Baltimore, and his account was a life line to a new world of the internet and away from the crackhouse infested streets.
I find it interesting that he has changed his standpoint from one of radical transparency to techno-privacy.
Remember, RMS is the guy who hacked LCS's computer lab password file, decrypted all the passwords, and emailed everyone suggesting they change their password to empty string. Now, I get that what he really wanted was to allow anyone to have access to LCS resources, and that would have been better served by just allowing anyone to create an account. But some early GNU accounts nevertheless did not have passwords, and I could read their email, shell histories, etc.
I think there is an interesting question as to the extremes of privacy and transparency in a democracy. If, for example, it were not possible to discriminate against people, and if the government could not abuse any information gained on someone, then it might be the case that society would be better off if there was very little privacy, because private distributed abuse amongst non-state actors would then be the biggest danger. If, on the other hand, the state is far more abusive, then the fraud and violence perpetrated by small actors uncaught by surveillance is dwarfed by the damage done by the state having this information.
The question is, is it black and white, or is there some level of justifiable dragnet surveillance? Can democracy also tolerate Cryptoanarchy?
The current situation is neither of those. It's a large expensive system of state oppression .. that acts on remarkably few people(+). There is a gulag archipelago, Guantanamo, but it contains only 46 prisoners now. Outside it, hundreds of millions of people live pretty free lives in the western world. So there's little public appetite for doing anything about it. If you're not reading about it in the news you can ignore it entirely.
Perhaps the main output of the surveillance program is the targeting information for drone strikes. This results in thousands dead .. but they are a long way away, in a part of the world that has its own problems with violence.
Your actual chances of being victimised by the surveillance state for engaging in nonviolent leftwing politics are very small. But perhaps it's worth noting that radical leftwing groups seem more likely to be investigated by law enforcement than radical rightwing groups that advocate all kinds of crazy things, including actual violence against the government (second amendment supporters).
(+) (note that I'm talking about just surveillance here, as distinct from the War on Drugs, the horrifyingly high American prison population, racism in the police, or heavy-handed public order policing)
(note 2: I'm from the UK, which has its own problems with official support for surveillance, occasional brutal policing, and particularly the state's role in violence in Northern Ireland has not been properly dealt with nor atoned for).
I understand the concern. As someone who has advocated for stronger electronic privacy regulation, no one likes someone having the ability to look through their stuff. However, the answer is more likely a balance than a denunciation of all surveillance in any form. Surveillance with restriction is fine and probably a good thing. It helps prevent crime and can help catch criminals once crimes happen. Just as it's easy to argue that a government with information will misuse it, bad people with a closed communication network will use it to commit crime. Sure not everyone is going to plan a terrorist strike, or organize a gang online but some will. Is it worth enabling that kind of behavior?
Also, what should and should not be private also has to do with where/when the information is collected. If the government were hacking into all of our computers and keeping backups of our hard drives, that's very different from collecting things that are sent on the internet. Legally there is currently a big debate about how to treat something that is taken from a stored medium - like a hard drive - vs one that is captured in transmission - like an email being sent. As it currently stands, the government would have a hard time justifying accessing your computer remotely without a warrant, but an easy time reading emails once they left your computer. Why? Because the sent email is akin to yelling something in a public place. Once it leaves your computer, it's not private while it's being transmitted. If this sounds like a stupid distinction, that's because it is.
This stood out for me in the article. And I think this applies to all forms of data.
Very sadly, he is completely wrong.
This is the beginning:
"How Much Surveillance Can Democracy Withstand?
The current level of general surveillance in society is incompatible with human rights. To recover our freedom and restore democracy, we must reduce surveillance to the point where it is possible for whistleblowers of all kinds to talk with journalists without being spotted. To do this reliably, we must reduce the surveillance capacity of the systems we use."
Not true. Not even slightly true. It's so tragic.
In principle, I could not agree with him more, but that does not appear to be a reality the vast majority can be bothered with.
Why? The vast majority simply do not care. Worse still, a huge chunk of society, on a daily basis, gives up more personal information than any government could possibly hope to ask for. We in the UK groan when the census comes up every 10 years. But the average Facebook profile contains more personal information than any census has ever asked for. And many, many people update FB daily. Imagine if a government asked us to document our lives daily? I could go on, but that's the general thrust.
So, how much surveillance can society withstand? Loads more.
Is it incompatible with human rights? Well, humans don't seem overly bothered; in fact they offer up more information than the government could ever ask for. Hence the NSA/GCHQ slurping.
Because the government will only ever use the data in a small, targeted way, it will never negatively affect the vast majority of people. So they will never be inconvenienced by it. Only "those" people will be affected, and "they" are guilty evildoers. So there will never be an uprising or revolt, because most people are unaffected.
See, even the people most outraged by this agree that it's good if they can round up terrorists, pedophiles, or whoever the current bogeyman is. Well, while we accept that, we accept the method, and therefore that "evil" must exist. When it exists, it can be easily and silently abused. The exceptions are the gaps through which evil seeps. This is why we are, or try to be, absolute about torture, chemical or biological weapons, racism, and so on. We know that if we allow it in any way, mission creep will happen.
Of course, the real hypocrisy of people is that when something bad happens, we blame government for not having enough control over circumstances. We immediately say, "why didn't they do this that or the other. They failed." What if all this slurping of data could have prevented 9/11?
But in the end, from what I have seen, society can easily withstand a hell of a lot more surveillance. We allow it, government moderates its use such that most never see the downside, government loves control, and we expect government to have that control.
Truth is, really, people want more surveillance so that they can live nice risk free lives. Frankly, I'm not sure people really want real freedom at all. They want a freedom, or their freedom, one that suits their daily lives. But are only too happy to deny freedom to others as long as their freedoms are preserved.
If this slurping is really that evil and unacceptable, incompatible with human rights, why haven't millions of people descended on Washington and London, rioting in the streets, bringing down our respective governments?
Or are all these outraged people trusting that democracy and the ballot box will sort it out?
Or, is it that really they don't care?
Like, I just loaded the page. Is it down for other people, or does this trigger just for me?
Edit: Oh nevermind, this website thinks x-forwarded-for is my real IP-address. I set it to '"\ which occasionally triggers database errors on php or asp.net websites, highly amusing :P
Edit2: Also interesting is when hackernews crashes just after I re-enabled my header modifier and try to save the previous edit.
That said, society can withstand 100% surveillance, total transparency. But that surveillance has to be done by the people and be publicly shared.
I was hoping that this was something I could pay for. It's still a free web browser.
I have a catch-all address set up on my domain so that I can give every site I interact with its own custom email address. In this case it was firstname.lastname@example.org. Since the email address doesn't exist, and they're the only company I've shared it with, they're the only ones with a record of its existence.
When I emailed them asking if they'd had a security breach or if they were selling email addresses, they responded saying they would opt me out of marketing emails. When I responded with the context and header info of the emails I received and asked if this was in fact from them, things turned. About an hour later I got a response; the tone had changed significantly, and they indicated that the incident had been escalated to their security department and that they would be in contact with me as their investigation progressed.
I can say this has been the best response to the dozens of emails I've sent to companies about the same issue. The worst was Best Buy whose response was something along the lines of "Eat Dk, we do what we want."
It would have to be out of the Caribbean or some place with lax data privacy laws, and strict confidentiality laws.
The web equivalent would be like claiming that Chrome OS isn't open because the source to Gmail isn't available.
Google is stuck between a rock and a hard place. If they don't try to create incentives for a unified experience, they get bashed for encouraging fragmentation; if they do assert a level of control, they get bashed for not being completely open.
It's all according to the previously openly aired plan. Google keeps all of the existing code open source. Anyone who wants to build a fork can do so. Now if they want a hardware platform to run on, go find one outside the Open Handset Alliance ecosystem. It's fair game -- if a hardware partner thinks that one of Google's competitors can provide a better Android fork, they are free to leave the Alliance and go partner with that competitor. They will still get an enormous amount of code for free in AOSP. They just won't get all of the services that Google is building specifically for its own version of Android. How is any of this maintaining an "iron grip" in any way? Just contrast this with Apple where it is the sole owner of everything to do with the OS and app marketplace.
This is blatantly false. Google bought Android in 2005, two years before the iPhone was announced.
If Google didn't do any of this, and was totally altruistic, Samsung and others would already have completely screwed things up.
While it's certainly very much to Google's benefit, it also benefits most users because overall, Google has done a far better job than any OEM regarding user experience.
Open source doesn't require you to cooperate with anyone, it doesn't require you to give away access to APIs, it doesn't require you to do anything beyond whatever is explicitly stated in the license.
Google, Canonical, Oracle, IBM, Red Hat, SUSE, etc... aren't required to be good team players or corporate citizens. They're just required to abide by the terms of the licenses on code they use...
Android has come a very long way in the last few years in terms of usability and design. A large part of this has been due to an increasingly uniform design language and feel. That, and the new distribution model for what are basically Android updates (Google Play Services) has made Android feel more polished and actually allowed it to stand on its own against iOS. It also means that developers like me don't have to spend nearly as much time worrying about fragmentation in the traditional sense. Each day the percentage of people using sub-ICS phones falls, and we all get one step closer to the day we can support ICS+ only.
However, companies like Amazon would force me to rewrite the maps integration, the sign-in portion, the wallet, etc... Amazon did a great job of replicating Google Maps API V1 but they have yet to mirror V2 and don't mirror the other components I mentioned.
Aside from fragmentation and developer sanity, the article mentions another key point here:
"[M]any of Google's solutions offer best-in-class usability, functionality, and ease-of-implementation."
Exactly! Google APIs are not perfect, and there's bugs (like when Google Maps broke map markers on high resolution phones like the HTC One). But generally speaking, I'm really happy with the quality of the APIs and services. In an ideal world, Amazon and Google would work together to provide great and uniform single-sign-in APIs, great maps, etc... As it currently stands though, I don't believe either party is interested in doing so. Prisoner's dilemma?
If you want to compete with Google, using Android poses a choice: If you make Google-branded Android devices that use Google's proprietary apps, you will have to give that up in order to use Android with other ecosystems.
Thirdly, if you want to use the Google ecosystem in a product, you have to use all of it. You can't substitute someone else's location services, for an example that was litigated.
Google could develop Android in the open and retain the same level of control over OEMs, and I think they should.
Google appears to be inconsistent in enforcing restrictions on OEMs. OPhone OEMs also make Android handsets, despite the fact that OPhone is an Android derived product. Maybe that arrangement pre-dates Google's current policies.
This is false. Google wins when more people use the Internet. Android is fulfilling its initial goal incredibly well: offer a free and open-source mobile OS to encourage mobile device proliferation.
Android is doing exactly what it was designed to do.
This statement is utterly false. In-house does not mean free.
I think it's also pretty standard to open-source the core and keep the baubles proprietary. GitHub, for example, made their git interaction library open-source but their git hosting service itself is closed, as far as I know.
It's understandable why Google would lock people out of seeing the back end of their closed apps. But you have to look at the long-term implications of them slowly removing support for AOSP apps. As Google continually pushes out fantastic products that tie in so well to the mobile experience, why would any user/developer want to have/develop [for] anything else? As this power grows, Google can strong-arm phone manufacturers to develop hardware/features/etc. to work with what Google is developing. They have to sign contractual agreements to get the top version of Android, and are then locked in to keep up the good terms. Google is outsourcing the hardware manufacturing to other companies and ensuring that if a user wants a good phone, they will be using Google's services.
Many people here are claiming any company can leave Google's garden like Amazon did. While some companies may be able to do that, I'm struggling to think of one with the technological background, money to invest, and appetite for risk willing to try. Amazon has a huge assortment of media that it can offer users of its hardware. Other companies don't have a differentiating factor or the software development capability to make a truly competitive product that would drive people away from Google-supported Android. Just look at how much Microsoft, a software giant, is struggling to gain any shred of market share.
No executive in any reasonable company is going to propose investing billions to squeeze into the highly competitive mobile OS market. It's a huge risk that only a startup could swallow, and yet few startups could even raise the money required to topple the Google-supported Android market.
What the future is starting to look like is the one Google was initially afraid of: users facing a draconian future, one where a single company, a single device, and a single carrier would be the only choice. As Google gains more power, the open-source part that Android users love is going to slowly disappear. This may or may not happen, and there are many variables that could prevent it, but it is the future that would bring Google the highest return, and that is the goal of all publicly traded companies.
> This makes life extremely difficult for the only company brazen enough to sell an Android fork in the west: Amazon. Since the Kindle OS counts as an incompatible version of Android, no major OEM is allowed to produce the Kindle Fire for Amazon. So when Amazon goes shopping for a manufacturer for its next tablet, it has to immediately cross Acer, Asus, Dell, Foxconn, Fujitsu, HTC, Huawei, Kyocera, Lenovo, LG, Motorola, NEC, Samsung, Sharp, Sony, Toshiba, and ZTE off the list. Currently, Amazon contracts Kindle manufacturing out to Quanta Computer, a company primarily known for making laptops. Amazon probably doesn't have many other choices.
That is fairly incredible, I'm surprised it is not an anti-trust/competition issue.
To be successful on mobile you also need a fairly extensive layer of services. Some of those (web, mail, and so on) are easy to bolt together, but others such as maps and app stores are far harder and are about data and commercial deals as much as they are about software. While it would be wrong to say that these services can't be opened up, in many cases doing so isn't as straightforward as sharing source code.
It doesn't feel as if Google has changed so much as what it means to be a mobile OS has.
As a user I'm happy that Google is making sure that I can hop between device manufacturers without losing my apps or functionality. If every OEM rolled out its own app store and removed Google's, you would be locked in with the OEM. Now you can safely change to a different phone, and Google also doesn't mind you downloading the Google apps when using an alternative ROM.
Android is open source, but does that mean you are not allowed to make money off it by providing closed-source apps and services? Many open-source companies do that. The work that went into Android is freely available to competitors. Lots of kernel enhancements went back into Linux, and now you have Ubuntu Touch and Firefox OS, both based on the Android kernel, which in turn is based on Linux. How cool is that?
It's already kind of like Windows, no? It runs on hundreds of different devices. It's often bloated by OEM software that people hate. It's prone to security holes. It's slow and clunky unless you run it on the latest hardware. It bends over backwards for compatibility's sake. It's more and more closed source...
Android is the mobile Windows of the '90s. I hope Ubuntu Mobile will be successful.
When the iPhone debuted, no doubt Google sensed the impact, and Apple's ability to create an effective closed ecosystem had already been proven with iTunes. I believe that Google wanted to undermine the market long enough to understand it. True enough, "android winning" was not the same as "Google winning," but it did mean everyone else "losing." I believe that for Google, Android started as a strategy in search of a goal. It was a smokescreen to prevent Apple from taking a dominant position by default. As the data poured in, they began to understand how to leverage it, and the Nexus line became an expression of such understanding, working to establish more control, and hopefully emerge from the smokescreen they had created.
I'd fully support their modules that connect to the cloud servers being open source / GPL / etc, but to expect them to open them up to unauthenticated requests is untenable and leaves them way open to abuse / lack of rate limiting / making the service a bad time for all involved.
This seems like a terrible situation for users. Can someone with a Samsung smartphone confirm this?
If this is the case, how are the apps organized when you first buy the phone - are they all in one big apps list?
It seems that the main problem is the gatekeepers who manufacture phones.
Here's a different perspective:
I see how this code would be appropriate in larger apps, but when things are just getting started I would rather see an if/else than have to go into three different structures to find out when a certain block of code is rendered.
* managing consensus among many nodes (as he says, "where is my libPaxos?");
* testing distributed code (for example, to see how the system behaves under unusual/unexpected loads or scheduling scenarios); and
* devops (for example, monitoring and managing throughput across all nodes to avoid 'TCP incast'-type problems).
InfoQ wants me to log in to access the slides. :(
Nice project though.
A longer thread about Nimrod: http://news.ycombinator.com/item?id=6272600.
In particular, I don't like the fact that for-loops are often used in first examples. I had the same issue when I first looked at tutorials for D and Go. My initial reaction was "Seriously?"