When my friends ask for laptop-buying advice, I tell them that if they like the keyboard and the screen, it's just plain hard to be disappointed with anything new.
I think I can pinpoint when this happened: it was the SSD. Getting an SSD was the last upgrade I ever needed.
Beyond that, PCs aren't necessary for a lot of people, because people do not need $2,000 Facebook-and-email machines. For the median person, if you bought a PC in 2006, then got an iPad (as a gift or for yourself) and started using it a lot, you might find that you stopped turning on your PC. How could you justify the price of a new one then?
Yet if there was a major cultural shift to just tablets (which are great devices in their own right), I would be very worried. It's hard(er) to create new content on a tablet, and I don't really want that becoming the default computer for any generation.
I think it's extremely healthy to have the lowest bar possible to go from "Hey, I like that" to "Can I do that? Can I make it myself?"
I think it's something hackers, especially those with children, should ask themselves: would I still be me if I had grown up around primarily content-consumption computing devices instead of more general-purpose laptops and desktops?
Tablets are eating into low-end PC sales, but we as a society need the cheap PC to remain viable if we want to turn as many children as possible into creators, engineers, tinkerers, and hackers.
The limiting factor is whether your computer's feedback loop is tighter than your brain's perception loop. If you can type a letter and the letter appears, your computer is fast enough for word processing. But if you can run a data analysis job and it's done before you release the Enter key, that just means you should really be doing better analyses over more data. Certain use cases grow, like goldfish, to the limits of their environment.
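That "feedback loop vs. perception loop" framing can be made concrete with a quick timing check. A minimal sketch, where the ~100 ms threshold and the sample workload are my own illustrative assumptions, not from the comment above:

```python
import time

# ~100 ms is a commonly cited upper bound for an action to "feel instant"
# (an assumed figure for illustration, not a hard constant).
PERCEPTION_THRESHOLD_S = 0.1

def feels_instant(task):
    """Run task() once and report whether it finished inside the perception loop."""
    start = time.perf_counter()
    task()
    elapsed = time.perf_counter() - start
    return elapsed < PERCEPTION_THRESHOLD_S

# Summing a million integers is comfortably "instant" on any modern machine,
# so by this measure the hardware is already fast enough for that workload.
print(feels_instant(lambda: sum(range(1_000_000))))
```

If the job finishes before you've perceptibly waited, upgrading buys you nothing; the fix, as the comment says, is a bigger analysis, not a faster machine.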
A lot of people don't want to cook, so are happy with smartphones and tablets.
Why buy a desktop or laptop when an iPad will do everything you need for a fraction of the price? That's what people mean when they sound the death knell for the PC.
The only time I felt like I've needed an upgrade is while playing Planetside 2, which is/was very CPU bound for my setup. However, when it was initially released, Planetside 2 ran like a three-legged dog even on some higher end rigs. It's much better after a few rounds of optimizations by the developers, with more scheduled for the next month or two.
I dual-boot Linux on the same machine for my day job, 5 days a week, all year. For this purpose it has actually been getting faster with time, as the environment I run matures and gets optimized.
As good as it is now, I remember struggling to keep up with a two-year-old machine in 2003.
The post-PC devices (tablets/smartphones) are it for the majority of folks from here on out. They are easier to own, since the upgrade path is heading toward "buy a new device, type in my password, and all my stuff loads onto it." If I want to watch something on the big screen, I just put a device on my TV. Need to type? Add a keyboard.
The scary part of all this is that some of the culture of the post-PC devices is infecting the PCs. We see it in the restrictions in Windows 8.x with the RT framework (on both x86 and ARM), the requirements imposed on ARM machines, and Secure Boot. We see it in OS X 10.8+ with Gatekeeper, sandboxing, and App Store requirements tied to iCloud.
The PC culture was defined by hobbyists before the consumers came. The post-PC world is defined by security over flexibility. Honestly, 99% of the folks are happier this way. They want their stuff to work and not be a worry, and if getting rid of the hobbyist does that then fine. PC security is still a joke and viruses are still a daily part of life even if switching the OS would mitigate some of the problems.
I truly wish someone were committed to keep building something for the hobbyist, but I am a bit scared at the prospects.
1) Yes, I'm one of those that mark the post-PC devices as starting with the iPhone in 2007. It brought the parts we see together: tactile UI, communications, PC-like web browsing, and ecosystem (having inherited the iPods).
2) I sometimes wonder what the world would be like if the HP-16c had kept evolving.
My dad went to Walmart and bought a computer and monitor for $399 (why he didn't just ask me for advice, or ask whether he could have one of my spare/old machines, I don't know).
It's an HP powered by an AMD E1-1500. It's awfully slow; it chokes on YouTube half the time. My dad is new to the online experience, so he basically uses it for watching streaming content.
I could have grabbed him a $99 Athlon X4 or Core 2 Duo on Craigslist and it would be better than this thing. I'm not sure he'll ever experience a faster computer, so I don't think he'll ever get frustrated with this machine, but it's amazing that they sell an utter piece of shit like this as a new machine.
Just because it doesn't sit in a big box doesn't mean it's a different class of system. The difference is really the openness of the platform; compare something like iOS to Win 8 Pro.
That said, many tablets are basically what we would have thought of as PCs before. Consider something like the Samsung 500T or similar, or the ThinkPad Helix. Components are small and cheap enough that they can be packed behind the LCD, and you have essentially a laptop that doesn't need its keyboard.
Will iPads take over from PCs? No. They are too limited, not because of hardware, but because of OS limitations. Will tablets take their place, though? Quite possibly. The portability is quite handy. That I can dock a tablet with a keyboard and have a normal PC experience, yet have it portable when I need it, is a selling feature.
The obvious caveat is that a limited OS is fine as long as the majority of data is cloud-based. In that case even development can be done on a closed platform, and the tablet becomes something more akin to a monitor or keyboard: more of a peripheral than a computing device. We might get to that point, but that's not the cause of the current trend.
In high school I recall lusting after a $4,500 486DX2 66MHz machine with an astounding 16MB (not GB) of RAM and a 250MB hard drive. A few months ago I spent a little less than that on a laptop with 2,000x that amount of RAM, 8,000x that amount of hard drive space, and a processor that not so long ago would have been considered a supercomputer.
I for one am glad that we have continued to innovate, even when things were good enough.
It's that when tablets hit the scene, people realized they don't need their PC for 90% of what they do on a "computer". Email, social networking, shopping, music, video etc.
We old geeks who swap hardware, play PC games, tweak OS settings, and generally use yesterday's general-purpose PC will be the ones remaining who keep buying new hardware and complete machines.
The general public meanwhile will only buy a PC if their tablet/smartphone/phablet needs expand beyond those platforms.
The market will shrink but it will turn more "pro". The quicker MS evolves into a modern IBM the better.
I'm still running fine with my 2007 Macbook, but I think my iPhone has extended its life because now my laptop almost never leaves the house and sometimes doesn't even get used in a day, whereas pre-smartphone I used to cart my laptop around rather frequently and use it every day.
They bought a Windows machine for what to them is a lot of money (more than an iPad); it didn't last long before it got slow, and it's got extra toolbars and all sorts of rubbish. What's worse is that this happened the last time they bought a PC, and the time before, and the time before that. They are not going to add an SSD, because that's not how they think, plus they don't know how, plus it's throwing good money after bad, plus they are dubious of the benefits.
The iPad, in contrast, exceeded expectations, and in the year or two they've had it they've had a better experience. They can't get excited about another Windows machine because it's expensive, more of the same, and not really worth it.
Back in Q1 2010 I got an Intel Core i7 980X which benchmarked at 8911 according to http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7+X+980+...
Now in Q2 2013 (3 years later), the very top-of-the-line processor available, an Intel Xeon E5-2690 v2, is only about 1.8x as fast, at 16164: http://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5-2690+v...
It used to be that things got faster at a much faster rate. And until this new E5-2690 v2 was released, the fastest CPU was only 14000 or so, which is less than 2x as fast.
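Running the numbers on those two benchmark scores makes the slowdown concrete. The scores are from the comment above; the period length and annualized rate are my own back-of-envelope:

```python
old_score = 8911    # Core i7-980X, Q1 2010 (PassMark score from the comment)
new_score = 16164   # Xeon E5-2690 v2, roughly 3.5 years later

speedup = new_score / old_score      # total improvement over the period
annual = speedup ** (1 / 3.5) - 1    # compound annual improvement

print(f"total: {speedup:.2f}x")      # about 1.81x, i.e. short of a doubling
print(f"per year: {annual:.1%}")     # roughly 18-19% per year
```

Compare that with the roughly 50% yearly single-thread gains commonly reported for the 1990s and early 2000s, and it's clear why a three-year-old CPU no longer feels obsolete.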
This is the number one reason why I love the PC above any other kind of computing machine. Need more disk space? Sure, go get a new disk, you may not even need to remove any of the others. Want a better graphics card for that new game? Easy as pie. Your processor died because the fan was malfunctioning? Too bad, but luckily those two are the only things you'll have to pay for. The list goes on.
I bought my current PC in 2009. The previous one still had some components from 2002.
---Why does nobody talk about them? Because nobody wants them, that's why. Imagine somebody brings you a personal desktop computer here at South By, they're like bringing it in on a trolley.
Look, this device is personal. It computes and it's totally personal, just for you, and you alone. It doesn't talk to the internet. No sociality. You can't share any of the content with anybody. Because it's just for you, it's private. It's yours. You can compute with it. Nobody will know! You can process text, and draw stuff, and do your accounts. It's got a spreadsheet. No modem, no broadband, no Cloud, no Facebook, Google, Amazon, no wireless. This is a dream machine. Because it's personal and it computes. And it sits on the desk. You personally compute with it. You can even write your own software for it. It faithfully executes all your commands.
So if somebody tried to give you this device, this one I just made the pitch for, a genuinely Personal Computer, it's just for you. Would you take it?
Even for free?
Would you even bend over and pick it up?
Isn't it basically the cliff house in Walnut Canyon? Isn't it the stone box?
Look, I have my own little stone box here in this canyon! I can grow my own beans and corn. I harvest some prickly pear. I'm super advanced here.
I really think I'm going to outlive the personal computer. And why not? I outlived the fax machine. I did. I was alive when people thought it was amazing to have a fax machine. Now I'm alive, and people think it's amazing to still have a fax machine.
Why not the personal computer? Why shouldn't it vanish like the cliff people vanished? Why shouldn't it vanish like Steve Jobs vanished?
It's not that we return to the status quo ante: don't get me wrong. It's not that once we had a nomad life, then we live in high-tech stone dwellings, and we return to chase the bison like we did before.
No: we return into a different kind of nomad life. A kind of Alan Kay world, where computation has vanished into the walls and ceiling, as he said many, many years ago.
Then we look back in nostalgia at the Personal Computer world. It's not that we were forced out of our stone boxes in the canyon. We weren't driven away by force. We just mysteriously left. It was like the waning of the moon.
They were too limiting, somehow. They computed, but they just didn't do enough for us. They seemed like a fantastic way forward, but somehow they were actually getting in the way of our experience.
All these machines that tore us away from lived experience, and made us stare into the square screens or hunch over the keyboards, covered with their arcane, petroglyph symbols. Control Dingbat That, backslash R M this. We never really understood that. Not really.---
These days I just don't see that. Graphics cards seem to improve by 30-50% each generation, and because so many games are tied to consoles now, they often aren't even taking advantage of what's available. With multicore processors and the collapse of the GHz race, there's no easy selling point as far as speed, and much less visible improvement (now all that useless crap can be offloaded to the second core!), and most consumers will never need more than two cores. Crysis felt like the last gasp of the old, engine-focused type of game that made you think "man, I really should upgrade to play this"... and that was released in '07. Without significant and obvious performance improvements, and software to take advantage of them, why bother upgrading?
I will use my 2011 smart phone until it physically breaks. If a 1.2GHz device with a 300MHz GPU, 1280x720 screen, and 1GB of RAM can't make calls and do a decent job of browsing the web, that's a problem with today's software engineering, not with the hardware.
And if Google decides to doom my perfectly good device to planned obsolescence, fuck them, I will put Ubuntu Touch or Firefox OS on it. The day of disposable mobiles is over; we have alternatives now, just like we do on PCs.
Yes, PCs aren't ageing as fast as they used to.
But they are obsolete for reasons beyond 'not being portable'.
Here is why tablets are winning:
1. Instant on. I can keep my thoughts intact and act on them immediately. No booting, no memory lags, no millions of tabs open in a browser.
2. Focus. Desktop interfaces seem desperate to put everything onto one screen. I have a PC and a Mac (both laptops). I prefer the PC to the Mac; better memory management for Photoshop and browsing, and I love Snap. But that's where the usefulness stops. With an iPad, I have no distractions on the screen.
3. Bigger isn't better. That includes screens. Steve Jobs was wrong: the iPad mini is better than the bigger variants, hands down. Same goes for desktop screens. I want a big TV because I'm watching with loads of people. I don't need a big screen for a PC, because the resolution isn't better than an iPad's and I'm using it solo. Google Glass could quite possibly be the next advancement in this theme.
4. Build quality. PCs look and feel cheap, including my beloved Sony Vaio Z. The iPad in my hand could never be criticised for build quality.
5. Price. The iPad doesn't do more than 10% of what I need to do. But I do those 10% of things 90% of the time. So why pay more for a PC when the iPad has no performance issues and takes care of me 90% of the time?
I used to think shoehorning a full desktop OS into a tablet was what I wanted. Seeing the Surface, I can happily say I was wrong. I don't want to do the 90% of things I do 10% of the time. That's inefficient and, frankly, boring. PCs and Macs are boring. Tablets are fun. There's one last reason why tablets are winning:
6. Always connected. It strikes me as absurd seeing laptops on trains with dongles sticking out. It takes ages for those dongles to boot up. I used to spend 5-10 minutes of a train journey waiting for the laptop to be ready. My iPad mini with LTE is ever ready. And cheaper. And built better. And more fun.
The PC isn't dead, but it will have next to no investment going forward, so will suffer a mediocre retirement in homes and offices across the world.
Note: I love my PC. I just love my iPad mini more.
Today, the calendar says it's time for me to upgrade again. Yet the pain of obsolescence of a five-year-old laptop in 2013 just isn't the same as in 2008: USB 3.0? What new applications is it enabling? Anything I need Thunderbolt for? Not yet. New Intel architectures and SSDs at least promise less waiting in everyday use... but I'm hardly unproductive with my old machine.
Intel, AMD, etc. might want to consider slowing their desktop product cycles down a tad. Instead of spending extra to bring every incremental performance gain to market as soon as possible, perhaps longer product cycles would bring down costs.
Personally, I think of these hardware market developments with an eye toward interplay with the software market. Historically, software developers had to consider the capabilities of consumer hardware in determining feature scope and user experience. Hardware capabilities served as a restraint on the product, and ignoring them could effectively reduce market size. The effect was two-sided though, with new more demanding software driving consumers to upgrade. Currently, in this model, the hardware stagnation can be interpreted as mutually-reinforcing conditions of software developers not developing to the limit of current hardware to deliver marketable products, and consumers not feeling the need to upgrade. In a sense, the hardware demands of software have stagnated as well.
From this, I wonder if the stagnation is due to a divergence between the difficulty of developing software that can utilize modern computing power in a useful, marketable way and the difficulty of advancing hardware. Such a divergence could be attributed to a glut of novice programmers who lack experience in large development efforts, and to the increasing scarcity of (and growing demand for) experienced developers. Alternatively, the recent increase in the value of design over raw features could inhibit consideration of raw computing power in product innovation. Another explanation could be that changes to the software market brought about by SaaS, indie development, and app-store models seem to promote smaller, simpler end-user software products (e.g. web browsers vs. office suites).
I wouldn't be surprised if this stagnation is reversed in the future (5+ years from now) from increased software demands. Areas remain for high-powered consumer hardware, including home servers (an area that has been evolving for some time, with untapped potential in media storage, automation and device integration, as well as resolving increasing privacy concerns of consumer SaaS, community mesh networking and resource pooling, etc), virtual reality, and much more sophisticated, intuitive creative products (programming, motion graphics, 3d modeling, video editing, audio composition, all of which I instinctively feel are ripe for disruption).
The CPU is slow by current standards, but a Core 2 Duo isn't slower than the low-clock CPUs in many Ultrabooks. The 3-hour battery life could be better, but I can swap batteries, and many new laptops can't. The GPU sucks, but I don't play many games anyway. DDR2 is pricey these days, but I already have my 8GB. SATA2 is slower than SATA3, but I'm still regularly amazed at how much faster my SSD is than spinning rust. It's a little heavy, but really, I can lift six pounds with one finger.
So the bad parts aren't so bad, but nothing new matches the good parts. The screen is IPS, matte, 15" and 1600x1200. Aside from huge monster gaming laptops, nothing has a screen this tall (in inches, not pixels) anymore. I can have two normal-width source files or other text content side by side comfortably. The keyboard is the classic Thinkpad keyboard with 7 rows and what many people find to be the best feel on a laptop. The trackpoint has physical buttons, which are missing from the latest generation of Thinkpads. There's an LED in the screen bezel so I can view papers, credit cards and such that I might copy information from in the dark, also missing from the latest Thinkpads.
Interestingly, it seems like some would love to run their old OS on new hardware. My dad sort of crystallized it when he said, "I'd like to get a new laptop with a nicer screen, but I can't stand the interface in Windows 8, so I'll live with this one." That was pretty amazing to me: not being able to carry your familiar OS along is now a downside. It reminded me of the one set of Win98 install media I kept re-using as I upgraded processors and memory and motherboards. I think I used it on 3 or 4 versions of machines. Then a version of XP I did the same with.
I wonder if there is a market for a BeOS like player now when there wasn't before.
The mysterious K Mandla gives 10 reasons not to buy a new computer
The TOPLAP project (a real hack: give a teenager an old laptop and Ubuntu Studio or similar, light the blue touch paper, retreat). By the way, if anyone has resources for live-coding in Pure Data, please post here.
The Zero Dollar Laptop Project  and current progress 
Now, I made a major discovery over the summer: I am actually more productive on a laptop than on a desktop with a large screen. Strange but true, so I am donating the desktops and adopting a couple of ThinkPads off eBay (an X60 from Dec 2006 and an X200s from March 2010) as my major computational devices. One with stock Debian and the other with gNewSense 3.0, for a giggle.
I agree that the increased (functional) life of PCs is a contributing factor to slowing unit sales, but it's laughable to attribute it to the idea that people who once would have bought a new PC are now just buying more RAM and upgrading internals.
The percentage of people who would have any idea how to do that, or even consider it a viable option, is far too small to have any real impact on demand.
The motherboards for PCs built 5 years ago are completely different from those built today, and the CPU sockets have changed every other year. New processors from Intel will be soldered on.
The performance of a PC from five years ago is probably adequate for web browsing and office tasks. For anything more demanding, the advances in power consumption, execution efficiency and process node are huge leaps from five years ago.
1. Buy a mid-range processor with a lot of L2 cache.
2. Find a mobo that supports lots of RAM and stuff it to the max.
3. An SSD is a must.
4. Buy the second card down in the high tier (the cut chip from the most recent architecture; in their time those were the 7950, 570, etc., but with NVIDIA's current branding a total mess, it may require some reading if you are on team green).
5. Any slow hard drive will be enough for torrents.
6. In 2 1/2 years, upgrade the video card to the same class.
In 5 years, if the market is the same, repeat. If it is not, let's hope there are non-locked, self-assembled devices on the market.
I have been doing that since 2004 and never had a slow or expensive machine.
The PC market isn't dead; it is slowly receding, and it won't stop. It's because of the new alternatives: assuming a finite budget, when you get one of the alternatives, which cost roughly as much as a consumer-level laptop, you don't have enough left for another PC that you don't need.
The article seems to me extremely narrow in both its outlook and scope. People don't care about processing power not because it's a marketing gimmick, but because they simply don't care. The people who do care are the ones who know enough to care, and they will always be a minority.
1) I don't need to buy a new PC every two years anymore.
2) Someone should make a tablet with slots so it can be upgraded like a PC.
At no point during the 4-year tenure of the 3GS did it stop being astonishing to me that I had flat-rate, always-on internet in my pocket, all my music, ebooks and audiobooks, videos that I took of my wedding, and photos that I took of our first child, who's now inherited it and mostly uses it for In the Night Garden.
Personally I think that because of the reduced horizons of smartphones, they're actually every bit as long-lasting as your PC. Sure, at some point OS updates stop coming, and with that app upgrades, but the performance of the 3GS was fine, and I'm not afraid to admit that part of the latest upgrade was just embarrassment at having such a naff old phone, as much as I loved it.
I'm hoping that a new generation of largish (24-27") 4K displays will lead to a rebirth in desktop PCs, if only because we depend on them so much for professional work where they've fallen behind in experience when compared to high-end laptops, which shouldn't be the case!
That is neither end consumers nor businesses. Enthusiasts who were building and upgrading their computers were always a small market.
The article talks about upgrading repeatedly, but I don't think the author can extrapolate their own expertise to the rest of the traditional desktop users.
It's wasteful to be throwing away computers constantly. In the PC world, I've noticed that it's particularly prevalent among "gamers" that are convinced that they need a new computer every couple of years.
Personally, I upgrade incrementally, and I still use my PC on a regular basis. The machine I have now is a hodgepodge of parts from different eras. I have an Intel Q6600 but DDR3 RAM, and a modern, quite beefy graphics card that I bought when it was in the upper echelons in early 2013. It runs most modern games pretty well. I have an SSD for most software but also three big HDDs, one of which I've had since my first build in 2004.
Now do the math. If everyone (smart, average, stupid, young, old) is buying tablets and smartphones, then of course this makes PC sales look like death.
It's more like a "post-PC-avoidance" world we're in now. A lot of stupid people avoided using PCs back in the day. Now all those people own tablets and smartphones and use them for entertainment.
I think this article gets it about right - I've started enforcing a 3 year cycle for both phone and laptops because they were costing me too much (in a mustachian sort of way) - and I've stuck to it with laptops (I made 3.5 years on a 2009 MBP) and will be doing so with the iPhone (due for replacement spring 2015.) If the nexus devices keep getting cheaper and awesomer, then I might jump to those a bit earlier (particularly if I can sell the 32GB 4S for an appreciable fraction of the new phone cost.)
Working with the 3.5 year old laptop got slightly painful (re-down-grading back to snow leopard from lion was essential, I even tried ubuntu briefly) but perfectly bearable for coding and web browsing. I'll see how slow the phone gets, but I'm quite relaxed about not having the latest and greatest iOS features (I've not seen anything compelling since iOS 5; I only did 6 because some new app requested it.)
Or rather, one was, and then I gradually replaced all the parts until I had a whole spare PC to sell on eBay; one mobo bundle later, I'm still using it with no problems, playing games, etc.
My old T400 was "dying" until I put an SSD in it. Blew my mind how significant an upgrade that was. When it started "dying" again I maxed out the RAM @ 16GB.
The CPU is a bit lacking now that I want to run multiple VMs side by side, and the chassis has seen perhaps a bit too much wear, so a replacement is coming -- but I've managed to put it off for years, with relatively inexpensive upgrades.
I used to update for gaming and 3d almost entirely.
I also used to update more frequently for processor speed/memory that were major improvements.
If we were getting huge memory advances or processor speeds still there would be more reason to upgrade. Mobile is also somewhat of a reset and doing the same rise now.
I'm inclined to believe that mobile sales are "artificially" inflated by these subsidies to a large degree.
Of course, if this business model is sustainable over the long term I guess it doesn't matter for mobile h/w manufacturers.
But for s/w developers the fact that people upgrade h/w every 2 years because of subsidies doesn't mean that those h/w sales are translating into a greater user base.
Yes, on paper, the latest processor is faster than the one released two years ago but you have to be doing specific types of workloads with it to really make a big difference.
However for those of us that use our computers 8 hours+ every day, I think it makes good sense to upgrade to the newest hardware every 2-3 years.
I just assembled a computer from new parts myself, and it's nice now to have a fully encrypted workstation with zero performance hit. A Q87 motherboard with TPM (ASUS Q87M-E) + UEFI BIOS + a UEFI GOP-compliant video card (EVGA GeForce GTX 770) + an M500 SSD + BitLocker + Win2012R2 (or Win8.1) means you can enable the built-in hardware encryption of the M500 SSDs. It gives me a certain peace of mind to know that a burglar won't be able to grab my personal files and source code if my computer is ever stolen. I also imagine the TPM + Secure Boot combo will make it harder for a rootkit to go unnoticed.
Not to mention the lower idle power usage resulting from the 22nm Haswell CPU and 32nm Lynx Point chipset.
My friends at work seem to think I'm crazy for replacing a 2-year-old computer :) Although, as I pointed out to one of them, he spent more than twice as much on a new mountain bike, and I'm sure I spend a lot more time on my computer than he does on his mountain bike ;)
I have a desktop with twice the processing speed and twice the ram, but for all intents and purposes, it runs almost exactly the same as the little Acer. Unless I am playing a game or running illustrator, I simply don't need the power.
If anything, what is dead is software's need for Moore's law.
The netbook handled just about everything I threw at it, and with FreeBSD and dwm it ran faster than it did when I first bought it.
Unfortunately, I'm not too pleased with the HP Envy 15. The AMD A6 Vision graphics aren't so bad, but support for the Broadcom 4313 wifi card is sparse in the *nix world...
Soon I'll be tearing it apart to swap out the bcm 4313 for something supported by FreeBSD, but for now, I'll not be purchasing a new PC any time soon.
Games can always use more resources. AFAIK there is still a lot of progress being made with GPUs. 60fps on a 4K display will be a good benchmark. The funny thing is that GPU makers have taken to literally just renaming and repackaging their old GPUs, e.g. the R9. As for the game itself, there is a looming revolution in gaming when Carmack (or someone equally genius-y) really figures out how to coordinate multiple cores for gaming.
But yeah, most everything else runs fine on machines from 2006 and on, including most development tasks. That's why Intel in particular has been focused more on efficiency than power.
 Tom's Hardware R9 review: http://www.tomshardware.com/reviews/radeon-r9-280x-r9-270x-r...
 Carmack at QuakeCon talking about functional programming (Haskell!) for games and multi-core issues: https://www.youtube.com/watch?v=1PhArSujR_A&feature=youtu.be...
I had a 2005 iMac before acquiring this 2011 iMac, and in between I've bought MacBooks and a MacBook Air. I'm thinking of getting my next desktop in 2015.
Thing is, when I go to my parents' house, I see computers from 2003. I think this reality applies to many families: parents don't care about speed, and they get by because their needs are less computational and more casual, like browsing, Facebook, and Skype. The trend I'm seeing in Spain of getting iPads for parents is notably strong. All my friends, instead of upgrading their parents' desktop PCs, are buying iPads, and the parents love it. Are you having the same experiences?
It's really nice when some build process takes less time because of better hardware. Also, try running some upcoming games on an old PC. Obviously the need for some hardware depends on what you are planning to do.
SSDs just changed the game, and it was about 2009 when that started.
There was a time when you felt like a new PC was obsolete the second you took it out of the box. But that was because we were just scratching the surface of what we could do with new hardware. We're now at a point where it's hard to find consumer and business applications for all the spare hardware that you can afford.
Mobile adoption has been so quick because everyone is buying devices for the first time (tablets), or there is an incentivized two-year replacement cycle (phones). But I'm still using an original iPad that works just fine, and a 3 year old cell phone with no reason to upgrade. Eventually, I think we'll start to see the same leveling off in mobile as well.
For laptops it's a different story. The big push seems to be in reduction of power consumption for longer battery life, which sounds pretty sensible to me. I guess if battery life is a big concern for a PC user, then it makes sense to go to a smaller process. That does seem like a pretty small reason to upgrade, though.
Another good indicator that the PC "game" has changed is that the two major commercial PC OSes just released their latest versions (Mavericks & 8.1) for free.
Microsoft and its SharePoint platform will keep SharePoint developers upgrading their desktops upon every release.
Tablets and those funky phones are popular today; something else will get popular after them. The PC may never be as popular as they are, but it's here to stay.
Haswell architecture couldn't have hit the market at a better time for laptop owners, with more powerful integrated graphics and low power use. I'm sure it isn't a coincidence.
Saying that the PC is dead is, in a sense, correct. Almost everyone I know buys a laptop instead of a desktop PC. I know a lot of people who do not have a desktop, but I don't think I know a single person who doesn't have a laptop.
It's like saying the Novel is Dead. Plenty of novels are being written, but it is really not the one major form of art that people are discussing. That is being replaced by television and film. Will there be novels written fifty years from now? Most definitely. But still, the idea that the novel is the one true form where the greatest art occurs is over.
Although one could argue that network bandwidth is still an area that affects the "everyday stuff".
1. Consumer-affordable monitors. You'll need a better GPU, and probably DisplayPort. I don't expect most consumers to want a 30" 4K display. They'll want 22-27" displays at 4K resolution, a la Retina (PPI scaling). Everything stays the same size as people are used to (compared to 1080p), but everything is as sharp as Retina.
2. 4K adoption of multimedia on the Internet. The more 4K videos that pop up on YouTube, the more people who are going to want to upgrade their hardware. This one isn't specific to PCs though, it could apply to mobile devices as well.
Go to YouTube and find a 4K video (the quality slider goes to "Original"). Now look at the comments. Many of the comments in 4K videos are people complaining how they can't watch the 4K video because of their crappy computer (and sometimes bandwidth).
I'm thinking of my parents - they will use that 2000 PC until it no longer boots, and only then will they worry about upgrading.
Intro will enable LinkedIn to have the IP address of all of your staff using it, and thus (from corp Wifi, home locations of staff, popular places your staff go) they will know which IP addresses relate to your staff members (or you individually if you are the only person on a given IP).
This means that even without logging onto LinkedIn, if you view a page on their site they can then create that "so and so viewed your profile", which is what they're selling to other users as the upgrade package to LinkedIn.
Worse than that, as a company you can pay to have LinkedIn data available when you process your log files, and from that you know which companies viewed your site. And that isn't based on vague ideas of which IPs belong to a company according to public registrar info; this is quality data, as the people who visited from an IP told LinkedIn who they were.
Think of that when you're doing competitor analysis, or involved in any legal case and researching the web site of the other party.
And VPNs won't help you here, as you'd still be strongly identified on your device and leaking your IP address all the time.
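To make the risk concrete, here is a minimal sketch of how a vendor-supplied IP-to-organisation mapping would turn ordinary access logs into visitor identification. All of the IPs, names, and the `orgs_visiting` helper are invented for illustration; nothing here is a real LinkedIn product or API.

```python
from collections import Counter

# Hypothetical mapping a vendor might sell, built from strongly
# identified users previously seen on those IPs. Entries are invented.
ip_to_org = {
    "203.0.113.7": "Acme Corp",
    "198.51.100.22": "Example LLC",
}

def orgs_visiting(access_log):
    """Count hits per organisation from (ip, path) log records."""
    return Counter(ip_to_org[ip] for ip, _path in access_log if ip in ip_to_org)

log = [
    ("203.0.113.7", "/pricing"),
    ("203.0.113.7", "/careers"),
    ("192.0.2.1", "/"),  # unknown IP: ignored
]
print(orgs_visiting(log))  # Counter({'Acme Corp': 2})
```

The point is how little work the log owner has to do once someone else has done the identification for them.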
There are so many reasons why this LinkedIn feature needs to die a very visible and public death, and very few why it should survive. It's a neat hack for sure, but then so were most pop-up and pop-under adverts, and the neatness of overcoming the "impossible" is no reason this should survive.
1. Attorney-client privilege.
I'm guessing most law firms use third-party email servers, anti-virus, anti-spam and archive/audit systems, which this would also apply to. It would also apply if you're using Rapportive, Xobni or the like (or integrated time-tracking, billing, CRM, etc.).
2. By default, LinkedIn changes the content of your emails.
Irrelevant. Unless you read your emails in plain text every modern email client changes how email is displayed.
3. Intro breaks secure email.
Yes. Except iOS mail doesn't support crypto signatures anyway.
4. LinkedIn got owned.
Yes. LinkedIn adds an extra point of vulnerability.
5. LinkedIn is storing your email communications.
Well, metadata, but yes.
7. It's probably a gross violation of your company's security policy.
Yes. As is using LinkedIn itself. Or Dropbox. Or GitHub. Or Evernote. Or Chrome. Or any enterprise software that uses the bottom-up approach.
8. If I were the NSA
The NSA has access to your emails if they want them anyway. Email isn't a secure protocol against a well funded adversary.
9. It's not what they say, but what they don't say
10. Too many secrets
These all seem to be questions that can either be answered by testing or ones that LinkedIn would probably be happy to disclose, but unlikely to be major issues to mainstream users.
So fundamentally it comes down to two points: granting LinkedIn access to your email creates a new point of attack, and LinkedIn themselves might use your email in ways you find undesirable.
So it's essentially a trade-off for the benefits you get from the app versus those risks. For a personal account which you use for private emails, personal banking, etc. the evaluation is obviously going to be very much different from say a salesperson's work account which they use for managing communication with leads.
In the latter case, they may already be trusting LinkedIn with similar confidential information and already use multiple services (analytics, CRM, etc.) that hook into their email, so the additional relative risk might be smaller.
As people with technical expertise we shouldn't use scare-mongering to push our personal viewpoints upon those with less expertise, but rather help people understand the security/benefit trade-offs that they're making so they can decide for themselves whether to take those risks.
It's important to treat the wider non-technical community with respect and as adults capable of making their own judgements and not as kids who need to be scared into safety.
This is really just a case of well-branded spearphishing. You should already be protecting against that.
We wanted to provide additional information about how LinkedIn Intro works, so that we can address some of the questions that have been raised. There are some points that we want to reinforce in order to make sure members understand how this product works:
- You have to opt-in and install Intro before you see LinkedIn profiles in any email.
- Usernames, passwords, OAuth tokens, and email contents are not permanently stored anywhere inside LinkedIn data centers. Instead, these are stored on your iPhone.
- Once you install Intro, a new Mail account is created on your iPhone. Only the email in this new Intro Mail account goes via LinkedIn; other Mail accounts are not affected in any way.
- All communication from the Mail app to the LinkedIn Intro servers is fully encrypted. Likewise, all communication from the LinkedIn Intro servers to your email provider (e.g. Gmail or Yahoo! Mail) is fully encrypted.
- Your emails are only accessed when the Mail app is retrieving emails from your email provider. LinkedIn servers automatically look up the "From" email address, so that Intro can then be inserted into the email.
Really? I guess you better have your own SMTP server set up then, or hope your email provider is willing to go to bat for your rights...
> 8. If I were the NSA
Yeah, it sounds like they definitely have needed it so far...
Five of the other items are basically the same point, remade in five different ways. This is a really weak list. There are certainly concerns, but most of these problems are symptomatic of our email system as it is. And have we all forgotten how crazy everyone went when we found out Google was going to start advertising in Gmail?
What does the sig it appends look like? I will have to make sure to never send email to anyone who has the tell-tale "I opt into spyware" flag.
Serious questions though, if you are an IT shop - how do you defend against this trojan horse app?
At first I thought this was an app running in the background on your phone; I would have called that doing the impossible. This is just a MITM, and not a very good one at that.
It's interesting that this "blog post" came from a professional security company that makes its money from scaring individuals and companies about security threats.
Is it just me, or is this firm even worse than LinkedIn?
If it's modifying the message, it likely breaks DKIM too, meaning your messages will be more likely to be flagged as spam.
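For intuition on why modification breaks DKIM: the signer records a hash of the canonicalised message body (the `bh=` tag), and any injected footer changes that hash. The sketch below is a toy approximation of the "simple" body canonicalisation from RFC 6376, not a full implementation; the message bodies are made up.

```python
import base64
import hashlib

def dkim_body_hash(body: bytes) -> str:
    # Toy "simple" canonicalisation: reduce trailing empty lines
    # to a single CRLF, then hash. Real DKIM is more involved.
    canonical = body.rstrip(b"\r\n") + b"\r\n"
    return base64.b64encode(hashlib.sha256(canonical).digest()).decode()

original = b"Hello,\r\nSee you Tuesday.\r\n"
modified = original + b"-- \r\nInjected footer with profile link\r\n"

# The bh= value computed at signing time no longer matches,
# so verifiers treat the signature as invalid.
assert dkim_body_hash(original) != dkim_body_hash(modified)
```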
More generally, this is the catalyst for me leaving LinkedIn. They've never generated any new business (not even a single lead), and if I'm honest the only reason I use it is more about my ego than anything useful.
"LinkedIn Founder says 'all of these privacy concerns tend to be old people issues.'"
The bit about privacy starts at the 13 minute mark.
After all, we are talking about the same team more or less, and surely the same company who owns Rapportive today.
If my concerns are real, one might find it ironic that Rapportive was backed by YC and Paul Buchheit, the creator of Gmail, and that now this very company is violating Gmail users' privacy.
How did the C-suite even find out such a thing is possible? Some intern who had just learned how mail works was probably flapping his jaw too much.
> These communications are generally legally privileged and can't be used as evidence in court, but only if you keep the messages confidential.
"They went after high ranking military officers. They went after members of congress. The Senate and the House - especially on the intelligence committees, and on the armed services committees and judicial. But they went after other ones too. They went after lawyers and law firms. Heaps of lawyers and law firms. They went after judges. One of the judges is now sitting on the supreme court that I had his wiretap information in my hand. Two are former FISA court judges. They went after state department officials. They went after people in the executive service that were part of the White House - their own people! They went after anti-war groups. They went after US companies that do international business around the world. They went after US banking firms and financial firms that do international business. They went after NGOs like the red cross and people like that that go overseas and do humanitarian work. They went after a few anti-war civil rights groups...
Now here's the big one. I haven't given you any names. This was in summer 2004. One of the papers that I held in my hand was to wiretap a bunch of numbers associated with a 40-something year old wanna-be Senator from Illinois. You wouldn't happen to know where that guy lives right now, would you? It's a big White House in Washington DC. That's who they went after. And that's the President of the United States now. And I could give you names of a bunch of different people they went after that I saw! The names and the phone numbers of congress. Not only the names but it looked like staff people too, and their staff. And not only their Washington office but back home in their congressional offices that they have in their home state offices and stuff like that. This thing is incredible what NSA has done. They've basically turned themselves - in my opinion - into a rogue agency that has J Edgar Hoover capabilities on a monstrous scale on steroids."
--former nsa officer Russ Tice...
June 20th interview on Boiling Frogs: http://www.youtube.com/watch?v=CPyxeqcCjkc (full 1hr+ radio interview)
or watch 11 minute RT interview http://www.youtube.com/watch?v=d6m1XbWOfVk
Privacy is one of the hardest things to get folks riled up about. It erodes slowly, and for "good" reasons, like defending the country against terrorism. But privacy is critical to a meaningful democracy. Strangely, many of the members of Congress fail to understand how important it is, and that compromising our privacy for security is a huge mistake. Particularly since those compromises are not necessary.
The fact that the NSA is monitoring the calls of world leaders is also worrying. But it's more of a foreign policy issue, damaging international relations and making it more difficult for countries to trust the US. I think it's foolish, and needs to stop, but it doesn't threaten our freedom directly.
Jay Carney issued a statement that said the US "is not monitoring and will not monitor" the German chancellor's communications.
Reminds me of a Spaceballs scene:
Colonel Sandurz: Now. You're looking at now, sir. Everything that happens now, is happening now.
Dark Helmet: What happened to then?
Colonel Sandurz: We passed then.
Dark Helmet: When?
Colonel Sandurz: Just now. We're at now now.
What would this information be useful for? Why was the NSA collecting this information and at whose request? Is the same being done to US politicians?
The most useful applications of this I can think of are betraying allies, manipulating negotiations with rival trade blocs, economic espionage, and of course protecting the power of the agencies who perform this surveillance and the lucky few who are given strictly limited access to it.
If the POTUS is given this intelligence and makes most of his decisions based on it, how does he know that he is being given the truth, rather than a carefully edited version of it?
It seems surveillance is no longer focussed on terrorism, if it ever was (indeed, a few terrorist attacks have occurred in the US without detection in spite of all this surveillance). It's telling that even the NSA has given up using that excuse as it becomes more and more clear where the focus of their intelligence gathering is directed.
Is the NSA (and the US by proxy) using the information it collects as a way of protecting and expanding its power? Is this inevitable if you give an organisation that much power over our lives and very little oversight?
Are all allies of the US mistrusted so much that they must be spied on? Should they in return shut down trust of the US and repudiate treaties they have with it like the one sharing SWIFT data or details of people visiting the US? Can the EU trust the products of American internet companies, or should they set up rivals?
It seems information has become more and more synonymous with power as our economies in the west become information economies, and the greatest power of all has been handed to an agency without significant legal limits and without any sort of public accountability, led by a member of the military.
I find the NSA's domestic spying to be appalling... but this is the sort of thing everyone knew the NSA was responsible for since its inception.
Now, I do not condone Orwellian spying on the citizens of your own republic, but this really is their job, and it's quite impressive that they're this good at it. Especially given that historically the US has not been completely invested in espionage, and has favoured building up capacity after key events and quickly dismantling the apparatus once the emergency has passed. What these scandals offer is a glimpse into a dramatic shift in the way the US conducts its affairs, and that in and of itself is quite noteworthy.
The character of the US government in general, and the NSA in particular, is apparently that of a rotten, sneaking, dishonest liar.
I cannot say that I am surprised, human nature being what it is, but I am very disappointed.
On the bright side, it is in good international company.
Domestic spying violates constitutional rights. More to the point, I don't want to support an institution with programs that violate my privacy, no matter what benefits such programs provide.
But isn't international spying different? Honestly, I don't mind supporting an institution with an external espionage program. Isn't it in my best interest? Does it harm me? What are the concrete repercussions of spying on foreign officials? Are these officials really going to renege on international alliances because they have a chip on their shoulder? If they have anything to hide, it's by definition counter to US interests; if our allies are making plans behind our backs, I want our government to find out. (And to be totally honest, if our government is making secret plans behind our allies' backs, I would want our allies to find out as well.)
I'll repeat this, because it is a real question, and the answer could have a real effect on my opinion: What are the concrete repercussions of spying on foreign officials?
The problem is the mass slurping of the data of everyone else.
There are exceptions! If your nation could plausibly be on deck for the next military-industrial complex fundraising activity, you might want your leaders to secure their communications against NSA. Of course, if they're not doing anything wrong, they might want that fact to be observed, on the off chance it might make a difference.
And newsflash ... we probably bug your embassies too.
Definitely Snowden:0 and NSA:1 in this case.
It seems that the United States has lost touch with reality and lapsed into an egocentric/ethnocentric disease.
How would the U.S. react if they found out such a thing the other way around?
I appreciate the sentiment of those that want to protest against this and I can understand the spoon-fed arguments about how the NSA must go after the kiddie fiddlers, terrorists that want to blow up innocent kiddies (as in the ones that haven't been fiddled with, yet) and do all that mysterious national security stuff.
However, instead of same-old, same-old, can we work on a technological solution? Something that will work for you and I as well as Mrs Merkel?
We can let go the network analysis stuff, who is in contact with whom as right now there is no easy way to prevent the NSA slurping that stuff up. But, as for the content, can't that be encrypted properly, without the NSA having the key and without there being secret courts where keys get handed over in secret? It is just code we need, and with it we can get a reasonable compromise where our conversations are secure.
Hell, no one, regardless of nationality, in or out of government, can feel secure in any communication of any kind with a US government person.
It's my understanding that the British were thought to have stolen submarine detection technology from the French, and the French were widely accused of industrial espionage against US companies in the 1990s. I also vaguely remember a 60 Minutes piece in the 1990s about Germans fulfilling their military service obligations by committing industrial espionage against US companies.
It seems to me that politicians are playing to public opinion, while knowing full well that this is how the international relations game has been played for decades, if not forever.
Therefore it's not reasonable to apply the same expectations of how it acts as we do to pleasant little countries like Norway or the Netherlands (who are probably only independent because the US defended them against the Germans & Soviets).
Irrespective of what one thinks of it (and I do not think favourably of it), how is it surprising that an organisation that is established specifically to spy on people is in fact spying on people?
Which, of course, was a blatant lie.
Gonna call my senator and congress woman tomorrow. And then I'm going to tell everyone I know to do the same.
Intellectual Ventures and these other pantywaist dirtbags are going to be lobbying hard against this bill, so the only thing we can do (unless you have some millions of dollars to spare on lobbying) is to call people and spread the word. That worked for SOPA, so maybe it can work now too.
Go, go, go!
As an aside, there are a lot of parallels between the litigation system under the federal rules and computer systems. In patent litigation, you have a phase that is extremely slow and expensive (claim construction). How can you minimize the average cost? One way is to try and filter out as many easy cases early in the pipeline so you hit the slow path as little as possible.
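The analogy can be made concrete with a toy cost model (the figures below are invented for illustration): if a cheap early screen resolves some fraction of cases before the expensive claim-construction phase, the average cost per case drops roughly linearly with the filter rate.

```python
def average_cost(filter_rate, cheap_cost, expensive_cost):
    """Expected cost per case when `filter_rate` of cases are resolved
    by a cheap early screen and the rest hit the slow, expensive path."""
    return cheap_cost + (1.0 - filter_rate) * expensive_cost

# Hypothetical figures: a $10k early screen vs a $500k slow path.
no_filter = average_cost(0.0, 10_000, 500_000)    # 510000.0
with_filter = average_cost(0.6, 10_000, 500_000)  # roughly 210k
assert with_filter < no_filter
```

The same reasoning applies to any pipeline with a fast path and a slow path: the payoff of the filter scales with how many cases it diverts.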
As always, the topic is so much more nuanced than "good" or "bad". The first result, "Patent Troll Myths" by Michael Risch, is a good start.
Sure, you will find the papers by Bessen et al. where the "trolls cost the economy $29 billion" meme comes from. But you'll also find a paper (by Schwartz and Kesan) that debunks Bessen's paper, which got nearly zero coverage in the press. You'll even find a paper showing trolls have better patents than average! But these tend to get settled quickly, so typically the poorer ones go to trial, and so you get papers (like those from Lemley) showing that trolls lose more cases than average.
You'll also find papers arguing the benefits of trolls, debunking some of the common arguments against trolls, and introducing new previously unconsidered harms of patent trolls.
And of course, just like there's no clear definition of "software patents", there is no clear definition of "patent trolls" either, and you'll find papers discussing this.
And because they use different data sets, different papers look at the same problem at the same time and reach completely opposite conclusions.
And further, because the authors are almost never practitioners in the field, you get some really obvious findings being reported... and then misconstrued! For instance there's a paper showing litigation has shot up since 2007, presenting various theories but completely missing the MedImmune v. Genentech decision that effectively upended the rules of patent licensing. And there's the paper that argues patent quality is dropping because more patents were being issued, without being aware of the end of the misguided "reject, reject, reject" unofficial policy instituted by former USPTO head Jon Dudas (http://www.ipwatchdog.com/2009/03/16/prespective-of-an-anony...)
And as always, it's helpful to keep in mind where the authors' funding comes from. Bessen of the "29 billion" fame, for instance, is funded by the "Coalition for Patent Fairness". Check out the list of supporters. It's almost ad hom, but hey, if we can point out that studies showing the harms of piracy are often funded by the MPAA, we can point this out too.
Yes, there are clear bad actors like Lodsys, but there are so many more variables out there, and many are arguably helping more than harming.
Yet, somehow, it's only one small side of the story that gets told.
As this is a hot-button topic, we should take an objective look at the data. Because, quoting from one of the papers above, "Without a better understanding of the many complicated effects of patents in high technology markets, we run the very real risk of misguided policy decisions."
One of the reasons trolls have been successful is that the patent office is understaffed, and of the staff it does have, not enough are experts in software related matters. This means things get through that might not have if the PTO had more and better trained examiners.
If reforming to eliminate patent trolls, how about tossing in a nice big retroactive tax on patent trolls to help fund improvements in patent examining?
What we need is actual reform of the patent system, not just sweeping the problem under the rug by singling out "trolls".
That's good, but the pessimist in me thinks IV could probably find a way around this too, but maybe not. Modifying the law to somehow identify patent troll originators (IV) and barring them from disbursing patents to NPEs would seem like some added protection.
A couple of years ago when I was building a product, our board convinced us to apply for a patent. After a provisional application and following it up with a proper submission, we finally had an offer that granted us the patent. Never pursued it. I know, it makes sense to protect your ideas; but we had Whatsapp, Pinger and other apps kicking ass in the space.
when $100k is considered a low cost, someone is living in cuckoo land...
how about charging people this for failed patent applications? or just no patents at all?
nearly all of the arguments for patents are trivially counter to the interest of the wider public... frankly it's an embarrassment that the system exists at all, much less in the way that it does
I'm pessimistic like that.
What is shown here is the LZ77 factorization of a string. Compression in general works by exploiting redundancy in the source, and natural language is highly redundant, since many words repeat often. Hence the factors in the factorization often look like frequent words or n-grams.
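A naive, quadratic-time sketch of the factorization being described: each factor is either a fresh literal character or a (position, length) back-reference into the already-seen prefix. Real implementations use suffix structures to do this in linear time; this is just the idea.

```python
def lz77_factorize(s):
    """Greedy LZ77 factorization: at each position take the longest
    match starting anywhere in the prefix, else emit a literal."""
    factors, i = [], 0
    while i < len(s):
        best_len, best_pos = 0, -1
        for j in range(i):
            l = 0
            while i + l < len(s) and s[j + l] == s[i + l]:
                l += 1
            if l > best_len:
                best_len, best_pos = l, j
        if best_len > 0:
            factors.append((best_pos, best_len))
            i += best_len
        else:
            factors.append(s[i])
            i += 1
    return factors

def lz77_decode(factors):
    out = []
    for f in factors:
        if isinstance(f, tuple):
            pos, length = f
            for k in range(length):  # copies may overlap their own output
                out.append(out[pos + k])
        else:
            out.append(f)
    return "".join(out)

print(lz77_factorize("abracadabra"))
# ['a', 'b', 'r', (0, 1), 'c', (0, 1), 'd', (0, 4)]
```

Note how the final factor is the whole repeated word "abra", which is exactly why LZ77 factors of natural language tend to look like frequent words.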
A recent line of research is grammar compression, which tries to turn a text into a tree of rules that generate that text. While still not very good at general-purpose compression, the generated trees are much more interesting than the LZ77 factorization, since they "discover" something that looks like a syntactic parsing of the string, finding syllables, words, phrases, sentences...
In Craig Nevill-Manning's Ph.D. thesis  introduction there are several examples of the inferred grammar of pieces of text, music, fractals, source code, etc... While the algorithm presented there (Sequitur) is now kind of obsolete, the thesis is very interesting because it makes some considerations with a linguistic perspective.
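Sequitur itself builds its grammar online in a single pass; as a simpler offline illustration of the same idea, here is a Re-Pair-style sketch that repeatedly replaces the most frequent adjacent symbol pair with a fresh rule. The test string is arbitrary.

```python
from collections import Counter

def repair(text):
    """Re-Pair-style grammar compression: replace the most frequent
    adjacent symbol pair with a new nonterminal until no pair repeats."""
    seq, rules, fresh = list(text), {}, 0
    while True:
        counts = Counter(zip(seq, seq[1:]))
        if not counts:
            break
        pair, n = counts.most_common(1)[0]
        if n < 2:
            break
        sym = ("R", fresh)
        fresh += 1
        rules[sym] = pair
        out, i = [], 0
        while i < len(seq):  # replace non-overlapping occurrences
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    """Recursively expand a symbol back into its terminal characters."""
    if sym in rules:
        a, b = rules[sym]
        return expand(a, rules) + expand(b, rules)
    return sym  # terminal character

seq, rules = repair("sing a song of sixpence")
assert "".join(expand(s, rules) for s in seq) == "sing a song of sixpence"
```

Inspecting `rules` on real text shows the effect the thesis describes: repeated fragments like "si" or " s" get their own rules, and larger rules compose them into word-like units.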
I've always wondered if this is true: If we approach infinite computational power, does the amount of information we need to represent data decrease? (Excuse any incorrect use of my terminology here.) I think about a number like pi that, as far as we know, has an infinite number of digits, and theoretically speaking every message is contained in that number. So if we just get the pointer of where that message is contained in the number and then calculate the number up to that pointer then we'll have ourselves the message. Hence, more computational power, less information needed.
Usually a word is either lowercase, capitalised or uppercase. In more complex and rare cases this could be efficiently encoded bitwise (0 = keep, 1 = swap case), so HAMLEt becomes 011110 plus a pointer to Hamlet.
I wonder if any compression algorithm does this. Probably not, because the benefit is likely minimal at significantly increased de/compression time.
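A sketch of the scheme being proposed, using the comment's own HAMLEt example: the mask records, per letter, whether to keep or swap the case of a canonical dictionary form.

```python
def case_mask(word, canonical):
    """Bitmask against a canonical form: 0 = keep its case, 1 = swap it."""
    assert word.lower() == canonical.lower(), "must be the same word"
    return "".join("0" if w == c else "1" for w, c in zip(word, canonical))

def apply_mask(canonical, mask):
    """Reconstruct the original casing from the canonical form + mask."""
    return "".join(
        c.swapcase() if bit == "1" else c for c, bit in zip(canonical, mask)
    )

assert case_mask("HAMLEt", "Hamlet") == "011110"
assert apply_mask("Hamlet", "011110") == "HAMLEt"
```

Since most words are all-lowercase, capitalised, or all-uppercase, the mask is almost always all-zeros, one-then-zeros, or all-ones, so an entropy coder would encode it in a fraction of a bit on average.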
Gzip is a unix utility, LZ77 is an algorithm, this distinction is not pedantic.
This is what happens when you go to "hacker school" before regular CS school.
How about a VMware/VirtualBox image set up explicitly for that purpose? Not feasible for Windows due to licensing issues, I guess.
Also, huge kudos for the effort going into this work. Thanks!
Why? What is the benefit of doing so when everyone wants a deterministic build?
... as long as you also trust the compiler not to introduce any backdoor... (cf. Reflections on Trusting Trust)
Import the .asc file into the keyring (File > Import certificates). Now you should mark the key as trusted: right-click on the TrueCrypt Foundation public key in the list under the Imported Certificates tab > Change Owner Trust, and set it to "I believe checks are casual". You should also generate your own key pair and sign this key, to show you really trust it and get a nice confirmation when verifying the binary.
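The same verification is shorter on the command line; the filenames below are placeholders for whatever you actually downloaded, not the real distribution names.

```shell
# Import the TrueCrypt Foundation public key (filename is a placeholder).
gpg --import TrueCrypt-Foundation-Public-Key.asc

# Optionally record how carefully you checked the key
# (the "trust" subcommand prompts interactively).
gpg --edit-key "TrueCrypt Foundation" trust

# Verify the detached signature against the downloaded binary.
gpg --verify truecrypt-setup.sig truecrypt-setup
```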
In 2009, in the US, there were 14.9 million bicycles purchased  vs. just over 13 million passenger vehicles.
I would not be surprised to hear that more paper and pens are sold in Europe than computers.
Where I live, biking on the road borders on dangerous. When a road has a speed limit of 45 mph, turns, and no shoulder at all, it's difficult to bike. And this is not a rural area.
When I was living in Vienna we had 2 cars and 3 bikes for the family.
Bikes were replaced more often, I needed to buy a bike twice in 5 years due to the original ones being stolen. Kept my car the whole time.
Not exactly sure what this article is trying to show. Bad statistics?
You see and experience more from a bike and it clears the mental cobwebs for me. Maybe my bobble head Yoda on the handlebars helps too :)
Pedaling on in California.
So what kind of laws does that get us? In most cities electric bikes are illegal. Sidewalks and bike lanes, more often than not, only cover partial lengths of road. It's actually illegal to ride a bike on the sidewalk, but the bike lanes in the States are so unbelievably dangerous that no one in their right mind would choose the bike lane over the sidewalk. You can't ride a bike along an interstate highway - that's illegal too. Oh, some rich cities have nice new bike paths, but they go almost nowhere useful, and it is illegal to ride on those at night.
And to top it all off, there's a cultural stigma that if you ride a bike for more than exercise or a leisurely pedal around the block, then you are a worthless bum.
Ownership of the two is not mutually exclusive.
Usage habits of the two vary wildly, probably mostly favoring cars - partly because keeping an unused car is far more expensive and complicated than keeping an unused bicycle, and partly because people don't tend to buy cars the first week of January to work off those holiday pounds.
I'm not sure why this article matters much.
>We decided to delve a little deeper into the figures and see which of these countries had the highest rates of bicycle-to-car ownership.
I'm more interested in historical bike sales, historical car sales, and the correlations and causations we could find, than which countries had the highest rates of bike-to-car ownership.
I wonder if that's why Belgium is the only country where this isn't true (well, besides Luxembourg).
Dear Westerners suffering from Stockholm Syndrome: Stalin called people like you "useful idiots".
Furthermore, bikes break more easily, get stolen more often, and are much cheaper...
From my experience in the local road racing scene, a majority of the 200 or so cyclists that participate in the weekly training racing have at least four different bikes. I suspect that the percentage that only owns one bike is very, very small.
In Almost Every European Country, Apples Are Outselling Oranges
Teslas are the ultimate commuter car - great for short distance, back and forth, stop and go. If Europeans are already doing these sorts of commutes on bicycles, what is the point of a Tesla? If what they really need cars for in Europe is long-distance travel, a Tesla probably isn't the right match.
This is not a displacement market. Few people chose bike OR car. It's more like car AND a few bikes. A family of four can do with one car yet needs four bikes.
As others pointed out, the case is the same in the US.
BAL0001  BY AND LARGE LLC  C & C MARINE & REPAIR LLC  2010  Freight Barge                2164.0  249.6
BAL0010  BY AND LARGE LLC  C & C MARINE               2011  Passenger Barge (Inspected)  2520.0  260.1
BAL0011  BY AND LARGE LLC  C & C MARINE               2011  Passenger Barge (Inspected)  2520.0  260.1
BAL0100  BY AND LARGE LLC  C & C MARINE               2012  Freight Barge                2164.0  249.6
The OP is a much better article (The Press Herald is garbage).
My original guess for the barge-building in Portland was an ocean-based prison facility for the government to use for interrogations. But it's seeming like the floating data center idea is much more likely.
My second thought: "Meh, the job interview probably involves writing a breadth-first search algorithm to search for known pirates in the graph of nearby ports."
My wacky guess would have been off-shore offices for employees that can't get H1Bs.
Google was granted a patent in 2009 for a floating data center
The article keeps saying "huge", but really that's tiny for a ship. Google could buy an old cruise liner and fit several of those inside. Or maybe Google is going to buy more little ships and plonk one of these in each?
I hope that they have adequate security, because all that gold and copper is worth stealing for some people.
Any chance that this will mean the Govt can't subpoena data from this data center if it is floating far enough from the US coast? Or does it not matter because clearly the US will be the closest harbor?
> Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway. -- Andrew S. Tanenbaum, Computer Networks, 4th ed., p. 91
Due to speed-of-light delays, some of the ideal locations for hosting high-speed trading arbitrages are in the middle of the ocean. Being half way between two exchanges would allow you to notice a small difference in price of some commodity in two different exchanges and buy and sell microseconds before your competitors on the mainland know that there's an arbitrage opportunity. See http://www.physicscentral.com/explore/action/stocktrade.cfm
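The claim above is easy to sanity-check with back-of-envelope arithmetic. The distance is a rough great-circle figure and the estimate uses vacuum light speed; signals in real fiber travel roughly a third slower, which only strengthens the point.

```python
# Back-of-envelope check of the speed-of-light arbitrage argument.
C_KM_PER_S = 299_792      # speed of light in vacuum
NY_LONDON_KM = 5_570      # approximate great-circle distance

def one_way_ms(km):
    return km / C_KM_PER_S * 1000

# A server in NY learns of a London price move only after a full
# crossing; a mid-Atlantic server learns of moves on EITHER side after
# half a crossing, so it spots a divergence before anyone on shore.
full_hop = one_way_ms(NY_LONDON_KM)
half_hop = one_way_ms(NY_LONDON_KM / 2)
print(f"NY<->London: {full_hop:.1f} ms, midpoint: {half_hop:.1f} ms")
print(f"midpoint advantage: ~{full_hop - half_hop:.1f} ms")
```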
In a dystopian future, when the world as we know it is covered in water, one mythical ship traverses the oceans, powered by stratosphere kites or nuclear power. Nobody knows where it's from.
All we know is that many years ago, it became sentient.
And its name is Google.
There's a book named "Avogadro Corp: The Singularity Is Closer Than It Appears".
For example, an "Orkut data center" would have been very useful to Google in 2008, when Google announced it was moving the service's operations to Brazil because, somewhat unpredictably, that is where its core user base had developed the most: http://en.wikipedia.org/wiki/Orkut
You can probably charge a premium for easily accessible highly secure data I would imagine.
... assuming megalodon isn't still around
Also, does wave power really yield sufficient energy?
"Boy that was quite a storm last night...oh wait...gmail just floated away and is halfway to hawai'i"
I wonder how the hardware will hold up to the environmental harshnesses of the open water? Or even just sea spray...
Is this true? Does anyone know why? The article seems to assume it without saying why.
The Mac Pro ships with enough bandwidth to drive three 4K desktop monitors, yet Apple's most recent monitor -- the 27" Thunderbolt Display -- dates to 2011 and has the same resolution as the current 15" Macbook Pro. Bluntly, this is disgraceful. Serious video folks are going to be buying Mac Pros and then paying ASUS three times as much for the monitors! Where's the Apple 4K Thunderbolt Display?
A keyboard cover -- like the Logitech Ultra-thin Keyboard Cover, or Microsoft's Surface keyboards -- would be nice. (I suppose Apple are relying on the after-market, as witness the startlingly fast announcements by Belkin et al.)
Finally, the "software is free" announcement ... yes, they're taking aim at Microsoft, but iWork 5 on OSX turns out to be a mixed blessing; there are regressions all over, especially in scripting (they've virtually dropped AppleScript from their office apps). What is this, an attempt to build market share for MS Office? (The mind boggles.) What other power user features have they planed away in the pursuit of a clean and consistent user interface across all platforms? (Which in practice seems to mean dumbing down the apps on the Pro platform -- OSX -- for compatibility with the mass market platform -- iOS.)
The iPad was released less than 3 years after the iPhone. Now we're three years past the release of the iPad with nothing new to talk about. I'm sure there are exciting things happening under the hood at Apple, but the event was a bit boring.
Apple should also rethink their television ads. The style they popularized has become trite and they ooze with self-importance. The iPad mini video with the pencil reminded me of Facebook's terrible Chair ad. I miss the lightness and humor of watching a John Hodgman riff with what's-his-name.
This is false. Android tablet browsing is substantial and growing fast. It looks like 25% in July 2013 in this graph, up from 15% in July 2012.
This irritates the hell out of me. Who is this "we"? Fine if Marco wants to suggest that they suck, but I'll take a shot in the dark and say he hasn't even tried to give one a fair shake...would love to hear from him if he actually has.
I have never found a use for a tablet, but I have several around my house including iPad Mini, iPad 3, Nexus 7 (1st gen), and now a Surface. The Surface is the best of those devices and the only one I can see myself continuing to use going forward.
I would challenge anyone to just open their minds if they haven't tried one and jump in completely for a week or so then make up your mind. Definitely not saying the device is perfect, there are some things (both hardware and software) I'd like to see added but it's a damn nice device!
The Mac Pro is absolutely hilarious in its pricing. When converted back to USD, it's almost 30% more expensive in Australia for absolutely no understandable reason. The fact that it wasn't released is very strange too, along with its very vague "December" date. Makes me feel like they expected to be releasing it but ran into problems with their process.
The complaint seems to be that this event, despite all the time spent on the usual "The things you fans bought have indeed turned out to be very popular, yay for you" message, didn't deliver the same sense of materialist cult excitement that some people had become accustomed to.
And that apparently is generally viewed as a criticism worth making, worth discussing. It's considered important.
Hm. Well what do you know.
- The Mac Pro is still not available. I don't believe it's ever been like Apple to pre-announce something this far out.
- The iPad update was the first not to make me want the new one. I'm perfectly happy with my iPad 4 and see no reason to update yet.
- An iPad Mini with a Retina display is nice, but I've never been attracted to that screen size so it doesn't do much for me.
- There was no "One more thing..." or anything more surprising than them making all of their consumer software free.
- There were brief mentions of new versions of both Aperture and FCPX, but that was it. I only found out later that the Aperture update is just a small dot update and now requires Mavericks.
Really? I use my Android tablet all the time and love it. So much so, I'm switching from an iPhone to an Android phone. In turn, this also makes using a Mac computer far less important for me.
They introduced official gamepad support coming to iOS 7 at WWDC, both standalone gamepads and iPhone/iPod-wrapping cases. A couple of MFi partners even teased things to come. And then... nothing.
It became actually real in the release of iOS7. The iPhone event even dedicated some serious stage time to gaming and a few higher-profile apps were updated to support it. But, still, nothing.
The iPad event came and went and they didn't even mention the iPod Touch, let alone gaming. I don't think they've ever talked about the iPad without talking about gaming.
So I wonder if the event was "off" because a tent pole feature, something that encompassed phones, tablets, ipods and maybe even the appleTV, just wasn't ready to go.
The lines were so tightly scripted that the presenters often stumbled off-script slightly, and rather than rolling with it naturally, they'd just jump back and awkwardly retry the line.
Jobs was such a perfectionist in message delivery that anyone else doing that on behalf of the same company just is not going to measure up. The expectations are so high, and nobody carries that persona. I'd rather personally see the voice of Apple change to something I can identify with, and that voice just isn't there. If anyone at Apple is listening, just so you know...the company has no voice at the moment.
The rock-and-hard-place is the product offering. Frankly, the products haven't really advanced all that much in the past few years. There have been some improvements, but improvements are to be expected, and everyone tends to deliver incremental improvements. Those improvements certainly don't measure up as a premium. The days of massive lines for product releases, the waiting all night for the next iThing...I just have a hard time expecting that those are going to be on the order-of-magnitude to what we've seen in the past.
No, the product changes are still the right ones:
I have an iPad 3, but I bought an iPad 2 for my parents. Whenever I visit them and use it, I'm impressed by its slightly lighter and thinner feel.
Now the iPad Air is significantly lighter and thinner than iPad 2. If you have any other iPad, wait to try the iPad Air, then tell me if you still think it's not a big improvement.
Ditto for the iPad mini. If you have the present one, wait until you can try the new one, then tell me it's not significant. I'm quite certain I'm going to buy it, just to take it with me to the places for which I consider the "full" iPad too big. Now that it's retina, I'm sure it's the best device of that size. Is it too little? I'm considering best as "best that money can buy," not "best when I want to give as little money as I can." And if you're not using Apple tablets then this won't change your mind: others make cheaper stuff, and that remains true.
> Let us continue to believe that these are relevant industry events rather than giant commercials!
Why? Oh, why is it so hard to confront the reality that is right in front of their eyes? IT IS A GIANT COMMERCIAL, FOR FUCK'S SAKE!
This is the point where it becomes impossible to avoid comparisons to religion. You have a basic admission of someone who wants to keep believing in an illusion rather than exercising any kind of critical thinking.
Also, I think it was one of them saying something like "it's just gor- beautiful." He probably realized he'd used "gorgeous" in the previous sentence, so he changed it to "beautiful"... Well, I didn't believe him.
Otherwise, I didn't watch live, but I wasn't particularly disappointed or anything. Despite the hype, Apple events are mostly dry affairs you can catch up on later with just a few minutes of reading. With the exception of new product line launches, which obviously can't happen three times a year.
This isn't the first event since Jobs' death, but I think 2 years is about right for the momentum that he left behind to start running down.
Even if Jobs was pushing Apple to build shiny consumer-oriented gadgets, he was still pushing. Nobody can replace what he brought to the company.
P.S. I'm not saying it's the end of Apple. I'm sure they can keep making good stuff for a long time. I'm saying that this is an inflection point, where Apple is now moving away from Jobs' vision and towards someone else's. Anything that started under Jobs is wrapped up now, and what we're seeing today is wholly the product of this next phase of Apple.
I know there is a "CEO must do these" thing, but I would prefer if they left the keynotes to Phil Schiller and Craig Federighi. The rest can appear in the videos.
If Apple was more of a games oriented company and concerned itself with the market I think we would have seen the controller API years earlier, actual gamepad hardware from Apple and a more powerful Apple TV with a games oriented App Store.
I realize Apple/Steve nailed the presentation format and many are trying to copy it (and some, like Samsung, are trying to stray from it). But maybe it's time to shake it up a bit. Every event feels exactly the same, even the general structure and collection of stats and retail store openings. Apple is creative and smart. It should figure out the next format/style.
The $229 / $299 price reduced the demand for the Touch. I'm surprised they haven't found a way to get a sub-$200 Touch.
For the iPhone announcement, I would have agreed. That was unsurprising due to supplier leaks.
For this event, it was completely Apple's fault, because there was nothing really that surprising. A lot of "that is some very nice engineering" but nothing to really make competitors go "uh oh, we gotta go back to work and catch up."
I do think they've announced major refreshes - it seems to me that many products brought in features that have been years in development (e.g. Touch ID, 0.5 lb off the Air, the Mac Pro, etc). I'm not saying these things were huge - it's just that getting any kind of multi-year effort to line up while still keeping the normal plane flying is really hard work.
I personally think the current lineup is really good. Sure, there are a few bits missing (notably, some apps in Mavericks missed polish, and Touch ID needs to be everywhere), but it feels to me like each of their hardware lines is now at a really rock-solid iteration.
Software wise, the lineup feels even more integrated if you're an all apple customer.
TL;DR - it feels like they're getting their lineup up to a solid level baseline before using that as the base for the next set of awesome stuff, but hey - I could be wrong :)
I personally was very surprised that they raised the price of the iPad mini.
I also found that a bit jarring. A 4-to-5 price ratio relative to the latest model, which has a much better processor, screen, and weight ... it's hard to justify.
Perhaps it's because of the cheapest Mini price acting as some sort of backstop.
I'm glad that this was only a minor point, and that the main issue, that the speakers currently seem to lack vivacity (with the exception of Federighi), was highlighted as a major one.
I realized a few months ago that Hacker News has become boring. I don't really care much for incremental updates, which is what the entire hardware industry has become. Even the internet has become pretty boring.
We're all excited for the promises of the future, and as usual they're taking a lot longer than we want them to.
What a waste, Apple.
Something felt a bit off about this week's Apple event. [Was: Off] (marco.org)
Tim Cook's number one priority should be untangling the Jobs cult of personality from Apple Inc. And I definitely don't envy him.
Apple is somewhat famous for summarily killing off whole product lines in the interest of technological innovation. I get it. No issues there.
However, as their installed base expands it will be increasingly hard for the average person to stomach the idea of their expensive computers or iOS devices becoming obsolete. Not everyone lives on the bleeding edge. In fact, most people don't.
It'll be interesting to watch what happens. It sure feels like the rate of innovation might have slowed down a bit. Thinner and lighter only go so far.
There are a few surprising things here and there. For example, I can't understand why Apple didn't acquire Bump and tightly integrate that capability into both iOS and OS X. Google grabbed them instead. We'll see what happens.
Nevertheless, when the rewards for laziness are so high what incentive is there to take on risk? There are negative incentives, in fact, because any amount of effort or resources spent pursuing something risky will likely come at the cost of working on something safer. If the safe and lazy thing is sure to bring in billions in profit then even if the risky things succeeds it might end up being a short-term loss due to opportunity cost.
It's obvious that things like the iPad are the harbingers of the future. But at the same time it's just as obvious that the iPad does not represent anywhere near the final evolution along those lines. It's clear to me that consumer OSes will increasingly be like modern mobile OSes, with managed apps, streamlined UI, and even more streamlined administration. But the idea of there being such a gulf between a desktop with a keyboard and mouse on the one hand and a touch-only tablet on the other is mostly an accident of history. As well, the idea, from Windows 8, that there should be a single UI model that spans both portable (touch only) and stationary (keyboard and mouse) realms is ridiculous.
There should be a lot more innovation, a lot more development, and a lot more trial and error out in the market today. But until the market dynamics change we'll likely be stuck with a lot of lazy designs for a while.
Furthermore, advertising income often provides an incentive to provide poor quality search results. For example, we noticed a major search engine would not return a large airline's homepage when the airline's name was given as a query. It so happened that the airline had placed an expensive ad, linked to the query that was its name. A better search engine would not have required this ad, and possibly resulted in the loss of the revenue from the airline to the search engine. In general, it could be argued from the consumer point of view that the better the search engine is, the fewer advertisements will be needed for the consumer to find what they want. This of course erodes the advertising supported business model of the existing search engines. However, there will always be money from advertisers who want a customer to switch products, or have something that is genuinely new. But we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.
The main difference seems to be that today even getting the top organic search result doesn't provide enough clicks for advertisers, so they feel obliged to purchase ads for their own brand names even when they already rank first. If people searching for Southwest Airlines on Google aren't ending up on the Southwest Airlines website without a huge great banner ad (despite it being ranked at the top of the results) then something is going badly wrong on the Google search results page.
In fact, spending money on their own brand keywords generated significant negative ROI (1).
So my guess is that this strategy from Google is designed to provide brands with a first step to generating actual value from Google search results.
I can see brands making these out-sized spends when able to provide their customers w/additional value like interactivity within the goog results, etc.
I think this is a pretty disingenuous analysis of what's going on. It's obvious from the comparison to the [Virgin America] search that this is a bigger change than just adding a "banner ad".
Notice that for [Virgin America] there are _two_ spots that bring you to virginamerica.com, the ad and the first organic result. This is redundant, wastes space, and probably is confusing to some users. I don't know why a company buys ads for navigational queries where it's already the top result, but they do, and I'd argue it's bad for users.
On the [Southwest Airlines] query you can see that there's no redundant ad anymore - the navigational ad and the first organic result are combined. Calling that whole box an ad, when it contains the same content that the former top organic result used to, is misleading, but makes for a much more sensational headline when you want to claim that most of the screen is ads.
I'm not sure about the experiment, that's not my area, but my guess is that this is part of an attempt to avoid this ad+organic confusion for navigational queries by allowing the owner of the first result of a nav query to merge the ad with the result into a professional and official-looking box. Maybe that'll work, maybe not, which is most likely why it's an experiment.
This sounds to me like a complete non-issue. If you don't like ads, install AdBlock. Of course if you need clicks for your website, carry on.
Same story (but no real discussion) was submitted here:
Call me cynical, but I suspect it will still be upvoted and discussed here because any comments on that earlier discussion will get lost in the noise of the close to 200 comments already there.
Important boss man also wants to get good results for 'acme blue widgets', 'tough widgets Alabama', 'naughty widgets' and whatever but only really cares about those secondary searches when someone else has told him to care about it. It is the main company name, in the search box that matters.
I think this is going to work well for all concerned and I don't share the cynicism most people seem to have about this.
First, probe the outrage machine for banners for particular brands. Then for a huge price tag, add lightweight widgets to the SERP for brands so searchers can e.g. buy tickets from the Google Search page. This is hailed by the brands as increasing sales dramatically. Demand for this feature grows.
Once significant numbers are using the SERP widgets, make the banners/widgets part of general non-brand search. Natural next step. A little bit of outrage, but at this point it just gets muffled by the masses. Life goes on.
All of these brands are getting increasingly dependent on Google's SERP widgets, which gives Google huge leverage. One deal leads to another, and before you know it Google starts buying up airlines to streamline everything.
So in 2030 we're flying Google Air using a Google phone to buy tickets to the Google Movies, to see a film made by a studio wholly owned by Google.
I'm not even saying this is a Bad Thing (tm). Just that if I were heading Google this would totally be my game plan.
1. Users tend to ignore the small ads on the right (anecdote: I do)
2. Users do notice and click on search results beneath the top query, even when they originally intended to arrive at their exact branded query
3. Search results beneath the top result are for competitors
Solution: Put in a huge "ad" to draw attention and also to knock competitors' listings to the very bottom of the screen or off the fold completely
If 1-3 hold true, then I could see it making sense competitively to shove those other results down the page.
Edit: aresant pointed out a good article that could explain the intent. Yay! Also, it wasn't my intent to hate on Google, just a thought experiment.
Ignoring that, it's unfair to use one example and say that search results are 12%. Is it 12% average, 12% median, or 12% for navigational queries only?
I just did a few searches for educational topics, got no ads. ... I would say there isn't a problem...
So long as Google only returns these sponsored ads for searches for the company name, I don't see this as being a problem at all, given the fact that many users are using the address bar integrated search in place of bookmarking or typing URLs.
Where this would become a problem is if they start expanding this to searches beyond simply the company name, and I think there is a bit of a gray area there. As someone else pointed out in this thread, showing the Southwest banner in response to a search for "cheap airfare" pretty unambiguously crosses a line, but what about something like "book southwest airlines flights"? One could argue that the user was attempting to get to the Southwest Airlines website to book a flight, so showing the Southwest banner would be appropriate; however, companies like Expedia, Kayak, and so on, whose links would now be much further down the page, would likely disagree.
It's ironic that every time one of these "omg, google is pushing organic search results off the page" posts comes up, it's the general public who's obsessed with dollars, whereas Google seems to be concerned for the user. Google makes a ton of money off of advertising because they know how to provide useful user experience. Which isn't surprising really, they have a lot more vested interest in making sure they provide such an experience than arstechnica do.
Sure they want to find ways to align their incentives with the user's incentives, but come on people: think of the people they saved clicking through to www.cheapair.com and www.insanelycheapflights.com
There will be no banner ads on the Google homepage or web search results pages. There will not be crazy, flashy, graphical doodads flying and popping up all over the Google site. Ever.
Speaking of "high quality ads": The second Cheap-O-Air Ad is for flights to Southwest not on Southwest Airlines - Deceptive IMHO.
As someone who works in advertising, even I dislike banner ads. They are obtrusive, annoying, and take away the attention. Google should go back to AdWords and make those better rather than anything else.
This ain't a big deal actually, it's a test to get more from their Adwords when people really search for the companies. But behold the future :( (investors, stocks, it will never be enough).
Eight years passed...
"There will be no banner ads on the Google homepage ... ever, excepting one large ad at the top of the page."
After that it did not seem strange when the pigs who were supervising the work of the farm all carried whips in their trotters.
Google are so big and powerful that it's easy and tempting to think of them as invulnerable and immortal, but remember... people have thought that about many companies in the past, more than a few of whom are no longer with us.
Edit: OK, IF this really is only for brand names and doesn't show up for more general searches ("cheap airline tickets", etc.) then maybe it won't be received so badly. That said, I still believe that, in general, "big honkin' banner ads" are NOT going to be well received on Google search result pages. I guess time will tell.
Then again, cartopy is only a year or two old, so it doesn't have the traction that basemap does. It's gained a fairly large following very quickly, though.
Google is smearing the smartphone market, at the expense of Apple's cash engine, Microsoft is smearing the Search market at the expense of Google's cash engine and Linux is smearing the operating system market at the expense of Microsoft's cash engine. Seems like there is a lot of pressure to diversify.
- 1st quarter of last year: $16.01 billion
- 2nd quarter of last year: $21.46 billion
- 3rd quarter of last year: $20.49 billion
- 4th quarter of last year: $19.90 billion
- 1st quarter of this year: $18.53 billion (the "record" one)
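A quick arithmetic check of the figures as quoted above (a sketch, not audited numbers) shows why "record" is doing some work here: the quarter is a record only against the same quarter last year, not sequentially.

```python
# The quarterly revenue figures quoted above, in billions of USD.
last_year = {"Q1": 16.01, "Q2": 21.46, "Q3": 20.49, "Q4": 19.90}
this_year_q1 = 18.53

# Year-over-year, Q1 is indeed up...
yoy = (this_year_q1 - last_year["Q1"]) / last_year["Q1"] * 100
print(f"Q1 year-over-year growth: {yoy:.1f}%")  # 15.7%

# ...but it still trails three of last year's four quarters.
print([q for q, v in last_year.items() if v > this_year_q1])  # ['Q2', 'Q3', 'Q4']
```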
MSFT is both a tech company and a utility.
It has growth potential (phones, surface, search, xbox) but it is also completely essential for global business (servers, AD, SQL Server, Exchange, Sharepoint).
In that sense it is a utility. If you took out all the MSFT software in the world everything basically stops. Your electricity probably doesn't work, you probably can't get on a train to get to work and if you manage to get to work you can't login to anything.
People say "but my company has BYOD!" that might be true, but MSFT is still the infrastructure it is running on. You can bring your AAPL car but you're still driving on an MSFT road.
Critics of Microsoft are wrong to call its enterprise business a dinosaur. There is no reason to think Microsoft won't continue to grow this business for decades to come.
But I would like to be able to own this as a pure play, not mixed up with XBOX. Let's call this company "Azure" and spin it off, like HP did with Agilent (which should have been called HP), and let the "devices and services" part screw around with reinventing itself.
RIM was a one trick pony. Microsoft has several billion dollar businesses.
For comparison there is the trend for 'iphone' and 'android'.
Sure Microsoft are doing loads of exciting things but people aren't typing 'Microsoft' or 'Windows' into the search engine box of Google as much as they used to. Make of that what you will.
And, of course, the de facto server-side OS is Linux. The so-called desktop will be theirs for a long, long time, not because the OS is any good - it is meaningless bloatware - but because of word.exe and excel.exe, which seem to be here forever.
As for their services, well, forcing sheeple to use IE and Bing by rewriting their browsers' settings doesn't even count as popularity. IE is crap compared to Chrome, and even some sheeple could see that, but most of them just don't know any better, so they're stuck with IE and Bing.
(And to appreciate the absurdity, just look at what is happening with all those Java apps, which are supposed to run everywhere, with each new release of Windows, which is supposed to be 100% backward-compatible.)
I'm interested in following those Surface numbers over the course of the next year. If they can get that revenue up to about a billion, they will have done very well. And I think they can do it.
1. Accelerate Ballmer booting out process. Why's he still there?
2. Boost Cloud.
3. Boost enterprise services and everythings.
4. Stop wasting resources on stupid consumer widgets department.
Also known as the Wile E. Coyote Syndrome.
I'm talking about service providers moving their operations to more privacy-friendly jurisdictions, and improving protocols with e.g. perfect forward secrecy to make this sort of attack impractical.
So everyone suffers under an adverse decision in this case:
- The US economy suffers because businesses seriously concerned about privacy choose to locate elsewhere.
- Law enforcement suffers because those businesses are no longer reachable when they have a legitimate reason to obtain the communications of spies, terrorists, or plain old criminals, and get a narrow warrant that properly protects the privacy of innocent bystanders.
- Individual liberty suffers because a precedent will make it easier for people who don't care about privacy and use domestic providers subject to these overbroad warrants to be caught up in a surveillance dragnet.
That being said, Congress, not the courts, is the proper venue to address those practical arguments. Will anyone care outside of technophile bubbles like HN? Unfortunately, I think we all know the answer.
Seriously, that is one heck of a broad warrant, namely the private key used to decrypt all business records of all customers.
(PS: Great job EFF!)
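The perfect-forward-secrecy point above can be sketched with a toy ephemeral Diffie-Hellman exchange. Everything here is illustrative: the modulus is absurdly small and insecure, and real deployments use vetted multi-thousand-bit groups or elliptic curves.

```python
import secrets

# Toy ephemeral Diffie-Hellman sketch of why forward secrecy blunts a
# Lavabit-style key seizure. Parameters are illustrative only.
P = 4294967291  # 2**32 - 5, a small prime (insecure, for demonstration)
G = 5

def ephemeral_keypair():
    """Fresh per-session secret and its public value."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each session mixes fresh secrets into the session key...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()
assert pow(b_pub, a_priv, P) == pow(a_pub, b_priv, P)  # same session key

# ...and then discards them. A long-term server key seized later says
# nothing about session keys whose secrets were already thrown away.
del a_priv, b_priv
print("session key agreed; ephemeral secrets destroyed")
```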
For background, Lavabit filed their appeal a few weeks ago. Ars covered it, and it was discussed here on HN as well.
0: [pdf] http://cdn.arstechnica.net/wp-content/uploads/2013/10/gov.us...
I tried to post a comment on the NSOP (Not So...), but first I got "HTTP internal error" and then I got "duplicate comment" but it still hasn't shown up, so I'll post it here.
"The private bit is important; although various techniques have been created for shared (multi-master) access to the interconnect, all were relatively expensive, and none are supported by the consumer-grade drives which are often used for scale-out storage systems."
I was working on multi-master storage systems using parallel SCSI in 1994. Nowadays you can get an FC or SAS disk array for barely more than a JBOD enclosure. Shared storage is neither new nor expensive. It's not common at the single-disk layer, but it's not clear why that should matter.
The idea of network disks with an object interface isn't all that new either. NASD (http://www.pdl.cmu.edu/PDL-FTP/NASD/Talks/Seagate-Dec-14-99....) did it back in '99, and IMO did it better (see http://pl.atyp.us/2013-10-comedic-open-storage.html for the longer explanation).
"Don't fall into the trap of thinking that this means we'll see thousands upon thousands of individual smart disks on the data center LANs. That's not the goal."
...and yet that's exactly what some of the "use cases" in the Kinetics wiki show. Is it your statement that's incorrect, or the marketing materials Seagate put up in lieu of technical information?
"they don't have to use one kind of (severely constrained) technology for one kind of traffic (disk data) and a completely different kind of technology for their internal HA traffic."
How does Kinetic do anything to help with HA? Array vendors are not particularly constrained by the interconnects they're using now. In the "big honking" market, Ethernet is markedly inferior to the interconnects they're already using internally, and doesn't touch any of the other problems that constitute their value add - efficient RAID implementations, efficient bridging between internal and external interfaces (regardless of the protocol used), tiering, fault handling, etc. If they want to support a single-vendor object API instead of several open ones that already exist, then maybe they can do that more easily or efficiently with the same API on the inside. Otherwise it's just a big "meh" to them.
At the higher level, in distributed filesystems or object stores, having an object store at the disk level isn't going to make much difference either. Because the Kinetics semantics are so weak, they'll have to do for themselves most of what they do now, and performance isn't constrained by the back-end interface even when it's file based. Sure, they can connect multiple servers to a single Kinetics disk and fail over between them, but they can do the same with a cheap dual-controller SAS enclosure today. The reason they typically don't is not because of cost but because that's not how modern systems handle HA. The battle between shared-disk and shared-nothing is over. Shared-nothing won. Even with an object interface, going back to a shared-disk architecture is a mistake few would make.
This has real promise so long as it stays as radically open as they are claiming it will be. When I can grab an old scrub machine, put a minimal debian on it and apt-get seagate-drive-emulator and turn whatever junk drives I've got laying around into instant network storage (without buying magic seagate hardware), I'm sold (and then might think about buying said hardware).
Apple has been using IPv6 for local network services for years now, like file sharing and Time Capsule backups, and it works great.
The important, actual TLDR: "Kinetic Open Storage is a drive architecture in which the drive is a key/value server with Ethernet connectivity."
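To make that TLDR concrete, here is a toy sketch of a "drive" that is nothing but a key/value server on a TCP socket. This is illustrative only: the real Kinetic wire protocol is protobuf over TCP, and the line protocol, commands, and class name below are invented.

```python
import socket
import threading

# Toy model of "the drive is a key/value server with Ethernet
# connectivity". Invented line protocol; the real Kinetic API is
# protobuf-based. No filesystem, no block layer -- just keys and values.
class ToyKineticDrive(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.store = {}
        self.sock = socket.socket()
        self.sock.bind(("127.0.0.1", 0))  # ephemeral port
        self.sock.listen(1)
        self.addr = self.sock.getsockname()

    def run(self):
        conn, _ = self.sock.accept()
        with conn, conn.makefile("rw") as f:
            for line in f:
                parts = line.rstrip("\n").split(" ", 2)
                if parts[0] == "PUT" and len(parts) == 3:
                    self.store[parts[1]] = parts[2]
                    f.write("OK\n")
                elif parts[0] == "GET" and len(parts) == 2:
                    f.write(self.store.get(parts[1], "NOT_FOUND") + "\n")
                else:
                    f.write("ERR\n")
                f.flush()

drive = ToyKineticDrive()
drive.start()

io = socket.create_connection(drive.addr).makefile("rw")
io.write("PUT color blue\nGET color\n")
io.flush()
print(io.readline().strip())  # OK
print(io.readline().strip())  # blue
```

The point of the sketch is what's missing: there is no SCSI, no partition table, and no filesystem between the application and the "drive".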
Imagine a CD player, turntable, receiver, preamp, etc., that all have only two connectors: power, and Ethernet. You wouldn't have problems anymore with running out of connections on the back of your receiver. That incredible rat's nest of disparate wires and cables would be gone. No more RCA cables, coax cables, HDMI, optical cables, composite video, S-Video, component video, BNC, various adapters, etc.
No more fumbling around the back trying to figure out which socket to plug the RCA cables into, which is input, which is output, etc.
Comments along the lines of "Backups? Snapshots? RAID? How are they handling this, then?"
Hmm, which raises the question: how much RAM should a hard disk have? In a regular architecture, that database lookup could be meaningfully cached (and you could design and provision exactly to ensure your entire set is cached). An opaque K/V "disk" seems less appealing from this angle.
I can imagine that once these are SSD drives, paired with reasonably powerful (likely ARM) chips, that we'll have massively parallel storage architectures (GPU-like architectures for storage). We'll have massive aggregate CPU <-> disk bandwidth, while SSD + ARM should be very low power. We could do a raw search over all data in the time it takes to scan the flash on the local CPU, and only have to ship the relevant data over (slower) Ethernet for post-processing.
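A rough sketch of that scatter/gather pattern (data and names invented; threads stand in for drives, dicts stand in for their key/value stores):

```python
from concurrent.futures import ThreadPoolExecutor

# Each "smart drive" scans its own data at local flash bandwidth and
# returns only matching records, so only the relevant data crosses the
# (slower) Ethernet link back to the host for post-processing.
drives = [
    {"k1": "alpha", "k2": "needle one"},
    {"k3": "beta"},
    {"k4": "needle two", "k5": "gamma"},
]

def local_search(drive, term):
    # This part would run on the drive's own (likely ARM) CPU.
    return [(k, v) for k, v in drive.items() if term in v]

with ThreadPoolExecutor() as pool:
    matches = [m for part in pool.map(lambda d: local_search(d, "needle"), drives)
               for m in part]

print(sorted(matches))  # [('k2', 'needle one'), ('k4', 'needle two')]
```

The aggregate scan bandwidth scales with the number of drives, which is exactly the GPU-like property described above.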
I'd love to get my hands on a dev-kit :-)
First of all: Hyperscale? I'm not a retarded non-technical manager or MBO, so I just stopped listening to your entire pitch. Second: You're still selling storage infrastructure, and I still have to support it. The expense just has a different name now.
"Companies can realize additional cost savings while maximizing storage density through reduced power and cooling costs, and receiving potentially dramatic savings in cloud data center build outs."
How does reducing my power and cooling costs maximize my storage density? Oh, by getting me to spend more money on your product instead of power and cooling. Nice try, buddy; give me the cost comparison or stfu.
Their whole pitch here is "throw away your key/value servers and use our key/value server instead". I wonder which will be more expensive: something I throw together with commodity PCs, or a SAN developed by Seagate.
Otherwise this will be hampered by the fact that the 6Gbps of SATA III is already too slow to take maximum advantage of many SSD devices (hence OCZ's experiments with effectively extending PCIe over cables to the devices).
2) Various posts pooh-pooh'ing this development (including the current top post) here are committing the classic mistake described in that book made by incumbents which leads to disruption by new entrants to the market.
3) Seagate is doing something right. It doesn't guarantee that they'll win the next phase of the storage battle but they are doing something radically different which has a plausible marketing story appealing to a large base.
JFS file system, and the only access we then allow to this totally self-contained storage building block is through HTTPS running custom Backblaze application layer logic in Apache Tomcat 5.5. After taking all this into account, the formatted (useable) space is 87 percent of the raw hard drive totals. One of the most important concepts here is that to store or retrieve data with a Backblaze Storage Pod, it is always through HTTPS. There is no iSCSI, no NFS, no SQL, no Fibre Channel.
Nerdy me likes the idea of a PoE hub and a bunch of drives doing their own thing.
Also a pretty good time to start writing support for this into the Linux kernel and developing support apps.
On the other hand I see an opportunity as shared storage for mobile and lightweight devices. Using a single, simple protocol, compared to NAS, could open a new technology domain and market. Of course it also requires a good integrated authentication and access control system, because on Ethernet this data might be open to the world.
More than just advertising, this represents an element of curation on such search terms, to get you to the place you're really looking for. It'll help avoid situations like when that one blog post appeared at the top for [facebook login] and suddenly bunches of users couldn't find Facebook.
Like any technological tool, it could be misused for evil, and so will require vigilance in the court of public opinion if not in actual courts.
(Disclaimer: I'm a potential Googler, currently in the interview pipeline, but these views are my own.)
"There will be no banner ads on the Google homepage or web search results pages. There will not be crazy, flashy, graphical doodads flying and popping up all over the Google site. Ever."
That Southwest Airlines screenshot doesn't look like a banner ad as described by Marissa Mayer. It doesn't seem to be flashing or flying around or popping up.
And, arguably, Google has been doing things that flash and fly around with the Google Doodles on the homepage for years. And no one freaked out about that.
Granted some might consider this to be a minor breach of a promise, some might not. The point still stands.
For any query, a smaller and smaller percentage of the first page is dedicated to organic results.
E.g. a silly example: https://www.google.com/search?q=trash+can is 25-30% organic; the rest are ads.
And that then is what I think the real "problem" is. You reach a point where your biggest money maker, search advertising, by at least one and possibly two decimal orders of magnitude, is no longer growing. And all of the things you've ever done which were never as successful as search advertising are supposed to give you the growth that your stockholders are looking for. Interesting place to be for a company like Google I expect.
This is just another example of how that process is coming along. It will be interesting to see what happens if it starts damaging their brand.
"Google breaks promise" followed by "Google is testing banner ads" in the first paragraph. So umm, "breaks" is the wrong verb, more like "thinking of breaking"
The reality is, Google runs hundreds, perhaps thousands of experiments all the time and only a few make it.
If all ads were such high quality I'd have no problems with this (but would still probably use adblock!).
While some see it as a social utility, Google is a $350B public company that generates its revenues from advertisements. 8 years ago, the world of advertising (and the world in general), was a different place. Holding Google accountable for something so far in the past by someone who is no longer there is a seemingly unfair standard.
The entire point of a capitalist corporation is to make more profit, year after year. That is the entire idea behind the stock market. To think that they'd forever avoid exploring every avenue available to make more money is mad.
When Orkut was all the rage, Google claimed that Orkut would never be merged with the Google core and would remain separate to Google.
The same seems to be happening with YouTube. Sure, they still allow users to keep their YT & Google identities separate, but IDK how long that will last.
Remember when they claimed their motto was 'Don't be evil'?
(NB: Before you come screaming at me for making vague accusations, please take that previous sentence with a pinch of '/s'. Thank you.)
This is replacing Southwest's search result. It's noteworthy to me that only "some" of this content would appear without sponsorship. So not only are they showing "banner ads" in search results (that's a little bit of a stretch), but it's a bit like them allowing compensated reordering of search results.
How's Bing these days?
Also, I first saw this over on search engine land yesterday. It's possible the Guardian author remembered that Marissa Mayer quote (and blog post) on their own, but it seems unlikely. It's pretty shitty to take a story and not even cite where you got the idea.
https://www.google.com/search?q=SouthWest%20Airlines doesn't do it for me at the moment, from the UK.
1 big ass ad
3 "News" items
5 genuine "Search Results" (with no heading or any way to know where the ads and nonsense stop). One of the 5 is a link to the Southwest Airlines Android app.
3 "In-Depth Articles" (I don't know what this is, I guess long blog posts?)
This honestly looks more to me like a domain squatting BS ad page that we hate on ISPs for than a research tool (which is what I used to think of google search as).
Google isn't what it was, and Google wants many more users than they can get from keeping just the early adopters.
People use Google like a portal. This is just Google giving in.
Thank you DuckDuckGo for taking a stand for users.
But this version doesn't allow open pay-for-play access, only one preferred buyer is invited per search term.
"There will be no banner ads on the Google homepage or web search results pages. There will not be crazy, flashy, graphical doodads flying and popping up all over the Google site. Ever." -Marissa Mayer
That "banner" is not a "crazy, flashy, graphical doodad". That is pretty much the company's logo.
Don't be Evil (2005) >>
Don't be Evil over short period of time (2013) >>
Oh screw it, now we are Evil enough. Let us plunder the hell (2021).
Now they will show ads for the keyword "Southwest Airlines". Next it will be a whole flashy ad when you search for a flight, then when you merely start to think about flying, or your girlfriend sends an email about flying to someone's funeral. But these profits too will dwindle after a point. Then they will start selling your profiles: what you read, what you think.
For a corporation, privacy and trust, or any other values, are only as important as the profit they can bring. It's only a matter of time before a company erodes its own values when profit is what it's maximizing. This is all the more likely when it is ambitious.
And then you repent it, and the cycle is complete.
Google workers mass flagging this submission? Don't be evil.
Still, seems like a slightly regressive strategy... I thought the traditional Google homepage was becoming less of a revenue driver compared to all the other ways search results drive traffic?
At one point on stage, you brought up the possibility of open sourcing your code, and Paul cautioned you that you may want to follow game industry conventions.
There are two reasons the game industry tends to keep their code closed-source. 1) It has been lucrative for game studios to sell licenses to their closed-source engine. Some game studios, such as Id Software, have made hundreds of millions of dollars (if not $1B) from licensing their engine. This is the main reason game studios tend to keep their source code closed. 2) There is strong institutional bias against releasing source code precisely because nobody else releases source code.
If you're not planning on licensing your engine, then I just wanted to reassure you that it's not a bad idea to go open source. You own codecombat.com, and hence you own the pipeline of users. Even if someone uses your code to launch their own version of CodeCombat, it's very unlikely that you'll suffer any problems for it. The only possibility is if your servers go down and theirs don't. But anyone who tries cloning your idea is going to suffer the wrath of the gaming community. E.g. see what happened to "War Z," a videogame that was blatantly ripping off the recent hit "Day Z." The War Z developers were basically tarred and feathered for it. Gamers may be fickle, but they are loud and they are loyal. I can't imagine them defecting to some competitor who steals your code.
Beyond code, there's art assets. You could release the code with a permissive license, and release art assets with a restrictive license. Nobody will be able to catch up to you if they have to develop all new art for their clone.
I wanted to speak up as a voice from inside the game industry: Don't follow industry conventions out of fear. Their conservatism wasn't derived from experience. Rather, it's because no studio wants to take any risks whatsoever.
Let's put it this way. If Notch (the creator of Minecraft) hesitated to follow his instincts, he would've tried to write Minecraft in C++ rather than Java. If, before Minecraft was written, he tried to convince any professional gamedev that using Java was a good idea for writing a multiplayer 3D game engine, everyone would've laughed in his face. And everyone would've been mistaken, as Notch wound up demonstrating. Java turned out to have many unexpected advantages new to the gamedev industry (e.g. the ability to deploy the game through a web browser and the ability to edit code without recompiling the engine).
So if you see an advantage in open sourcing your code, go ahead and do it. Don't second guess yourself just because it goes against conventional industry wisdom. The conventions are just groupthink, not pragmatism.
The parallels, I think, really help demonstrate how the CC concept has the potential to change young people's lives.
I like your website and concept very much. Great idea, may you go places.
Edit: What languages will I be able to learn through this?
Which is too bad, because I'd love to show her that programming isn't as "hard" as she thinks it is.
Here's a small bug from the couple of minutes I spent playing with levels 1 / 2: while it does execute the code on the right perfectly even if it's not the expected optimal entry, the camera focus during a playback will lose sync with the "spells" if you add a few extra calls like moving left and right.
The idea is to code the behaviour of a critter that can move / attack / eat and reproduce.
So a species that survives well can grow and invade a terrarium.
But the cool factor is the blue ball. It is actually a teleporter that sends critters randomly to someone else's terrarium, so your critter can invade other terrariums too :)
here is an earlier site called RubyWarrior that works similarly.
I knew I recognised the names from somewhere.
Best of luck with this new venture!
1. It adds very little security: 16 bits is not much, and the result is not 256 bits (say) of SSH key plus 16 bits equals 272 bits, but instead effectively still 256 bits, or 256 plus 8-10 bits.
2. The security it adds is itself bad (sent in cleartext, easily brute-forced)
3. These problems come on top of the many drawbacks previously discussed (complexity, confusion, etc.).
And the final argument: if increased security is what you want, simply increase your key lengths and/or password lengths, and you will get much more than 8-10 bits of security, without any of the above problems.
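Back-of-the-envelope arithmetic behind points 1 and 4 (the 8-10 bit effective figure is the comment's; the alphabet choice below is mine):

```python
import math

# A TCP port is a 16-bit value, but since the port travels in cleartext
# and a full port scan finds it in one pass, its effective contribution
# to security is closer to 8-10 bits in practice.
port_bits = math.log2(65536)
print(port_bits)  # 16.0

# Each extra random character from a 62-symbol alphabet (a-z, A-Z, 0-9)
# adds about 5.95 bits to a password.
char_bits = math.log2(62)
print(round(char_bits, 2))  # 5.95

# Three extra password characters already exceed even the port's full
# 16 bits of "secrecy" -- with none of the operational drawbacks.
print(3 * char_bits > port_bits)  # True
```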
2. Next he talks about this non-root listener issue. He claims that you shouldn't run your SSH daemon on a non-privileged port because anyone can spin up a daemon there. Great point, except they can still do that even if you run your main one on 22.
Are you sure you understood the original post's point?
user@host:~$ nc -l -p 14
nc: bind to source :: 14 failed: Permission denied
nc: bind to source 0.0.0.0 14 failed: Permission denied
nc: failed to bind to any local addr/port
user@host:~$ nc -l -p 1414
^C
(Edit: The original blog entry has now been edited to slightly clarify the wording. But the update mostly seems like an attempt to rapidly justify the author's original point.)
1. In the real world, security resources aren't free.
2. Security decisions are made by users.
3. Humans will engage in risk compensation.
4. Setting policy doesn't change people's brains, it just tells them what to do.
5. It doesn't matter what you intend, it matters what users actually do.
The upshot of this is that any security policy that is highly visible and highly inconvenient will reduce your security, and has to have a substantial benefit to justify its cost. You can say "I'll do stupid port reassignment tricks, and I'll also mandate that passwords are forbidden, and require that private keys be managed properly" but at three in the morning when the whatever is overdue and not working what you're gonna get is:
I'll just do password auth with root:root, nobody ever hits port 24601 anyway. Besides, look at this page: using a strange port makes me invisible like the Predator and makes me four thousand times more secure! I really want to believe this, so I do.
Also, subverting scanners is an anti-security move, not a pro-security one. Scanners are a helpful tool to identify what the hell is running on your network. Your security efforts have to find every hole, the bad guys only have to find one. Don't put yourself at an even bigger disadvantage by making your systems harder to analyze.
In most cases the dimension is IP range: an automated process moves from IP address to IP address, examining port 22 for any common vulnerabilities. Rarely do these processes check all ports. Moving your SSH daemon to a different port prevents those automated processes from ever hitting your security layer on whichever port you are running.
The other dimension of attack is when an attacker is focusing on your IP address specifically. Then he probably is going to nmap your IP and discover which port(s) SSH is running on. Changing the default port for SSH doesn't help here, but this use case is far less common.
Like others have said, changing port doesn't remove the need for security measures (cert-based/passwordless login, disable root, fail2ban) but it reduces any of those even being tested in the first place when most of your attempted attacks are IP-range based.
It's reasonably clear to your average net malfeasant that any host running recognizable services is going to be running sshd.
So why not do both?
Put a dummy sshd on 22/tcp, deny all auth attempts, log whatever keeps you swimming in interesting data.
Then run real sshd elsewhere, possibly filtered, possibly port knocked, and hopefully permitting key-based auth only.
The really good idea is running a VPN in front of all of your servers and never allowing SSH access from the outside world. I have two ports (at most) open on all of my servers: 80 and 443. OpenVPN takes less than an hour to set up. There's no reason not to set it up!
In addition to, as the author encourages, being "weary of the 'by obscurity'" argument (as I'm sure we all already are), I would also advocate being wary of it :)
The benefit of this is that it can allow you to tunnel through an HTTP proxy (e.g., like in a corporate environment). Many HTTP proxies only allow traffic through to port 80 and port 443. The benefit of ssh on port 443 is that if the proxy is handed a CONNECT verb, it will transparently just transmit data between your client and the remote server, irrespective of what that content is. In fact, this behaviour is what makes HTTPS remain secure when going through an HTTP proxy.
You can use this to tunnel ssh through an HTTP proxy. Putty supports this out of the box, but if you're using openssh, you'll need corkscrew also.
You can always try to tunnel to an ssh server on port 22, but most proxies will hand you an HTTP 403 on any CONNECT request to a port other than 443.
More info at http://daniel.haxx.se/docs/sshproxy.html.
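With openssh the corkscrew hookup is a one-line ProxyCommand; the host and proxy names below are placeholders for your own setup:

```
# ~/.ssh/config (host and proxy names are hypothetical)
Host home
    HostName myserver.example.com
    Port 443                # sshd listening on 443 on the far end
    # corkscrew asks the HTTP proxy for CONNECT %h:%p, then pipes the
    # raw ssh stream through the tunnel the proxy opens
    ProxyCommand corkscrew proxy.example.com 8080 %h %p
```

After that, `ssh home` works from behind the proxy exactly like a direct connection.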
Port 22
Protocol 2
PermitRootLogin no
StrictModes yes
MaxAuthTries 1
PasswordAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
UsePAM yes
2. PAM_ABL (auto-ban by account after three retries)
3. IPTables (auto-ban by IP after three retries)
So in the above implementation an attacker has three attempts, max. This means the logs are quiet, yet accurately depict intrusion attempts. This also stops brute force attempts in their tracks and requires no exemptions to normal workflow.
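For reference, the "auto-ban by IP after three retries" layer can be wired up with the iptables recent module; the list name and thresholds here are my own choices, not necessarily the exact setup described above:

```
# Count every new connection to port 22 per source IP...
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --name ssh --set
# ...and drop the 4th new connection seen within 10 minutes,
# i.e. allow three attempts, then silence.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
    -m recent --name ssh --update --seconds 600 --hitcount 4 -j DROP
```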
If, under the above circumstances, I were to obscure the port as well, this would serve no purpose other than to completely side-step script kiddie brute force attempts (as minimized as they would be in this configuration), with the horrific side effect of forcing my users to maintain (at the least) a config entry for the custom port assignment. Which, by the way, would become perpetually worse with the number of servers and users in play.
This is why obscuring the port is such a bad idea.
And if you still want to obscure the port because the server, or network device, in question should only have occasional access by an extremely limited group of people, then just throw on a white list and possibly restrict access only through another server. Both provide more security than moving the port.
And moreover, this article isn't even about SSH. It's about the semantics surrounding the usage of the term "security through obscurity" in the previous article. Which is hilarious to me, as both articles are full of shit. For one, the security implications of non-privileged ports are moot, as the attacker already has access. And two, being a less likely target is still being a target. Those five people who found the port in the test sample are the ones most likely to exploit you; not the thousands of script kiddies brute-forcing you.
Your time would be much better spent obscuring the actual version information for the service than the access point to it ...
(Reposted here, as the original site went down.)
* Because anything but the IP address of your office or VPN connection should be blocked at the firewall level for that port
The condescending opening is a tip off ("people who almost understand the topic").
If a remote vulnerability is discovered in the server (it's happened in the past, don't rule it out for the future), you will be attacked, and it won't be a brute force attack to be blocked by fail2ban or similar. You can be scanned at any time, put in a database as "having ssh version x running on port y", and kept ready for future use.
And while simple port knocking could be defeated by inspecting your traffic, there are variants like fwknop that are resistant to that kind of interception or replay.
Running services on non-standard ports will make the next admin that takes over this server want to track you down and smother you in your sleep.
Don't understand how this article got to the main page and it's still here after more than 11 hours.
b: Uhh the port number means nothing. Host keys are there for a reason... Someone does not understand the functions of SSH. http://www.snailbook.com/ <- great book
c: If you are not investigating fingerprint issues when logging in via SSH and you call yourself a sysadmin, please stop. You are going to be the reason your company ends up in the news because your shit got owned and 2,000,000 user account hashes were leaked blah blah.
d: If you are not using key based auth and you have a fly-by-night keystore policy. Which means you have a keystore - stop. The whole keystore for SSH shit irritates me. I cannot tell you how many times I have heard sysadmins say that a single private key is a "best practice". It is not a best practice, it is a stupid practice, and it really prevents you from protecting against unauthorized logins on other machines, for the obvious reasons.
Put your public keys on bitbucket.com or source management. Put your private keys on an encrypted disk in an encrypted archive if you must. This is still dumb imho because it is not needed. Leave one account (root) with console only/no ssh access that will allow for keys to be revoked/recreated when users need new keys.
f: Spinning up daemons is a big deal for non-priv users? So spinning up a remotely accessible Lisp out of emacs from a screen that is running in the background is bad? Hmm, here I thought that computers were meant to be tools for humans to get work done... Sorry, background processes are part of getting shit done. Users should be able to spin up the stuff they want to spin up in the network segments they have access to without the bureaucracy of misguided fools making the jobs of others more difficult because they think spinning up a gunicorn process or a custom daemon is worse than their unpatched kernel, apache tomcat and mysql listening on a publicly accessible address. Stateful firewalls and hosts allow/deny are there for a reason.
Sorry for the snarky reply here but there are a lot of people chiming in that obviously have very little knowledge about managing *nix ops and remote access. I have pretty strong opinions about this kind of stuff. Especially the single key stupidity and not checking host fingerprints.
Then I agree.
Otherwise, hell no.
(Actually, I appear to have stopped doing this. But it's something to consider if you are on weird networks on a regular basis.)
Changing ports reduces the threat surface in limited but practical ways; far more effective, however, would be secure port knocking (say, fwknop with GPG, which is also time-based).
Secure port knocking and changing ports together would be perfectly valid. In fact, I have deployed these for openbsd jumpboxes guarding core infrastructure. So breaking in would require defeating fwknop with GPG and ssh.
(If anything needs public auditing, it's GPG and SSH. VPN code too, considering the logic often makes OpenSSL look simple.)
It just depends on which are the tradeoffs between the antithetic goals that you have when you do any kind of security hardening.
Aside from that, since many already mentioned port knocking as another layer in the pile of this game, let me point out that not all port knocking (-like) implementations are that weak, look e.g. at knockknock [ http://www.thoughtcrime.org/software/knockknock/ ].
Very little hassle, no crap in the logs. Is there a drawback I'm missing?
So why not solve the problem with something a little more proactive like turning off password auth and go for sshkeys only. Maybe toss in something like fail2ban if you want to interrupt kiddies scanning your boxen.
That said high port ssh can be nice if you're frequently on restrictive networks and getting out on port 22 is impossible.
1) I don't run HTTPS on the box I SSH into
2) I might hit an overly restrictive WiFi that only allows traffic out over HTTP and HTTPS
Which is another reason why you might not want to run SSH on another port. You might not be able to reach it.
That was my biggest reason not to bother changing the port.
Is there any real reason beyond that? (I do use fail2ban to block repeated attempts.)
The chance of getting hacked is way higher. Why would you not want to lower the risk?
Hofstadter should be COLLABORATING with all those other researchers who are working with statistical methods, emulating biology, and/or pursuing other approaches! He should be looking at approaches like Geoff Hinton's deep belief networks and brain-inspired systems like Jeff Hawkins's NuPIC, and comparing and contrasting them with his own theories and findings! The converse is true too: all those other researchers should be finding ways to collaborate with Hofstadter. It could very well be that a NEW SYNTHESIS of all these different approaches will be necessary for us to understand how complex, multi-layered models consisting of a very large number of 'mindless' components ultimately produce what we call "intelligence."
All these different approaches to research are -- or at least should be -- complementary.
There have been attempts to understand intelligence with intelligence (logic, symbols, reasoning, etc.) for 30 years, to not much effect; now AI and machine learning are advancing quite steadily, so why the snark? All evidence suggests that the way the brain itself learns things is statistical and probabilistic in nature. There are also new disciplines now, like Probabilistic Graphical Models, which are free of some of the traditional downsides of purely statistical methods, in that they can be interpreted and human-understandable knowledge can be extracted from them. This really seems promising, and to some extent it is a union of the old and new approaches, despite the claims of a big division, but it is hard to see much promise in purely symbolic methods invented merely by some guy somewhere thinking very hard.
I for one am very happy that people seek inspiration in the way the human brain works; that's what science is. If you just come up with things without consulting the real world, it's not science, it's philosophy, the one discipline that has yet to produce a single result.
This comparison between complementary approaches is an apt analogy for most fields, where the focus shifts every once in a while, when one of the approaches largely hits a wall and most people switch to the other one. A while later, the trends will almost inevitably reverse and draw inspiration from other approaches. The unfortunate thing is that there's no dialogue between the two camps, which makes it that much harder to port good ideas from one context to the other.
I could provide examples from physics research, or for that matter, trends in static-vs-dynamic blogs :P Also, the more "applied" the field, the shorter these cycles are.
I find the analogy to Einstein at the end of article especially funny. I think it's much more likely that people will look upon current defenders of "good old fashioned AI" like they now do upon people who still looked for ether after Einstein's discoveries.
DH is the most well known guy of a small, stubborn group of AI developers who still believe that "human thought" can be reasoned about and can be understood in isolation, and that we can build intelligence without simply reducing it to statistics or to brain anatomy.
I applaud his efforts, and find some of the programs he's written both creative and refreshing.
See, that's the point: as incredibly awesome and useful as the anti-gravity elevator might be, mankind can't wait around for someone to invent it just to raise stuff or travel in the vertical dimension. And hence all our modern AI systems (including Google, Siri, robots, warehouse management systems, etc.) are powered by this approach.
So should we scrap stairs and elevators in pursuit of anti-gravity? Certainly not; we NEED them right NOW. But does this mean we should stop dreaming about, and working towards, anti-gravity? HELL NO!! We need that too.
And hence, as much as I LOVE Hofstadter (I have had the same approach to AI ever since I was a kid), I still have a very PROFOUND respect for modern approaches, because they help me create some functionally amazing software.
Hofstadter's lecture about analogy on YouTube: http://www.youtube.com/watch?v=n8m7lFQ3njk
Also some earlier work on the subject
I have also written a review on this very interesting book "Surfaces and Essences: Analogy as the Fuel and Fire of Thinking"
When I was in college (and GOFAI was still alive) GOFAI researchers themselves portrayed him as very much an outsider.
I cannot recommend "Creative Analogies" more. I have purchased no less than four copies (two for myself; two for others, including K. Barry Sharpless, who once made a remark about AI that was reminiscent of some of the ideas in CA) over the years. It's even better than "Surfaces".
If anything, the human mind seems to me to be a particular algorithm that is flexible, but trades that flexibility for capability in certain problem areas. Using a transportation metaphor, it's like walking versus air travel. Walking is incredibly flexible when it comes to where you can go, but air travel is by far the optimal route to get from coast-to-coast, although you are limited to travelling between airstrips. I feel like focusing on the human brain as the "true" intelligence is like claiming that walking is the only true transportation, instead of focusing on optimal routes for each problem.
I was under the impression that Wilhelm Wundt was the father of psychology.
So the result is that there are three sites which do not incorporate third-party connections whatsoever (DDG, HN, fefe). Without the addons, the other sites form a connected graph. With Disconnect, the graph is less strongly connected. With only NoScript, it starts to fall apart. With both activated, the primary sites are disconnected. (But the combination apparently breaks something, since a second Guardian primary node appears.)
A few caveats: first of all, this is of course not reproducible, since it depends on my whitelists for NoScript and Disconnect. And the test set is of course not representative of anything except itself. And absence of an edge in the graph does not mean absence of a connection. But with this in mind, I found it quite interesting how connected even a small test set is.
Test set:
guardian.co.uk
zeit.de
blog.fefe.de
reddit.com
http://natmonitor.com/2013/10/24/ghostly-shape-of-coldest-pl... (from reddit)
duckduckgo.com
http://linuxreviews.org/kde/screenshot_in_kde/ (from DDG search)
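A rough sketch of how the connected components of such a graph could be computed from crawl data, using union-find. The site names come from the test set above, but the edges here are invented purely for illustration, not measured:

```python
# Sites are nodes; an edge means two sites share a third-party connection.
# Edges below are HYPOTHETICAL examples, not real tracking data.
def components(nodes, edges):
    """Group nodes into connected components with union-find."""
    parent = {n: n for n in nodes}

    def find(n):
        # Walk up to the root, halving the path as we go.
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components

    groups = {}
    for n in nodes:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

sites = ["guardian.co.uk", "zeit.de", "reddit.com",
         "duckduckgo.com", "news.ycombinator.com", "blog.fefe.de"]
# Made-up "no blockers" edges: the first three share trackers.
edges = [("guardian.co.uk", "zeit.de"), ("zeit.de", "reddit.com")]

comps = components(sites, edges)
print(len(comps))  # 4: one cluster of three tracked sites, three isolated
```

With blockers enabled, you would simply re-run the same function on a sparser edge list and watch the big component fall apart.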
We reserve our copyright as to commercial applications but please contact us if you are interested in licensing for non-profit or educational uses. Our source code is available to review for your assurance.
However, this doesn't seem like a good way to collect good quality crowd-sourced data. It can be easily poisoned, and there are simpler alternatives, such as crawling and analyzing the links by themselves. (I am assuming that an entity like Mozilla would have sufficient resources for that).
I suspect the people who end up not getting what they want tend to be the ones who don't put in the effort, especially taking the emotional risks in attracting others, initiating relationships, and making the relationships work the way they want.
Many men and many women put in this effort. Many men and many women also don't put in this effort. I suspect the former group has much more success than the latter group in the long run whether male or female, though I suspect they face a lot more rejection and emotional pain in the short run. I suspect the latter group faces less short-term pain and rejection, but is lucky to get what they want from relationships if they ever do.
The emotional challenges of making yourself vulnerable are harder for most than pursuing a career or hobbies so many men and women go the emotionally easier route of working hard at their jobs. In my experience, along with overcoming those challenges comes tremendous emotional growth.
(Btw, I disagree with the zero-sum mentality of winners and losers, because people can have more than one deep, meaningful relationship and relationships come in many forms, but I adopted it for consistency with the article.)
I think a lot of this is self-inflicted, though. Professional women often still carry with them some of this 1960's mentality and refuse to "date down." As the demographics change and women become overrepresented among the college-educated, this puts them at the wrong end of a supply/demand imbalance.
On the other hand, some of the voluntary decisions are due to unfair social pressures. I think women wouldn't wait so long to get married if doing so didn't start a timer on their downshifting their career. My wife and I got married at 26/27 and had a baby shortly thereafter. My wife is a corporate lawyer and gets a lot of flak for working long hours, especially from family. Nobody ever gives me flak for working long hours. Painting in broad brush strokes, men tend to find that when they get married, society reinforces their career ambitions. Women tend to find that when they get married, society chips away at their career ambitions. Other women, particularly other moms, are the worst about it.
Don't get me wrong, I made some SERIOUS mistakes of my own, and had some important problems to work through. She has her side of the story too, and neither of us could tell a simple story.
But money and earning was a huge problem, even when I was making two or three standard deviations more than the US average. And it was a shock to me to see how many people supposedly rejected the rat-race values, and knew my relationship with my kids, yet couldn't be bothered to call to see how I was doing. People say they want dads who focus on their kids, but I haven't noticed much effort to support those guys in the tight spots. At least not when Mom declares the guy a loser.
Again. No representation of personal perfection is made or intended. I had a lot to work on in the marriage, found more in the divorce, still finding more yet. Yay.
I hope this doesn't come across bitter. I think there is a lot of confusion in the discussions of gender roles and career and child raising, and I think a lack of candor is part of that. So it is important to notice that a great deal of the values declared, are declared for the nobility of the declaration and don't prove to mean much. A good many people are smarter / wiser / more careful than I was, and don't take those declarations at face value, and so find themselves making better decisions and on firmer foundations. But it is impossible to really talk to most of those people about their attitudes, because they all know that some of their opinions could bring a lot of flak. Why pay that price to be candid? Especially when, let's face it, many of these conversations are begun with an intended conclusion in mind.
From what I've seen, personally, we need a LOT more honesty in our discussions of gender roles and careers and child raising. And a LOT of that has to come from people on the "progressive" side of the discussion.
I've been dumped over the hazy prospect of something better more than once. AFAIK, those exes are still searching. But, I'm 31 now, and really happy as a husband to a wonderful wife.
Dating is severely overrated. The best thing you can do is get in and get out without becoming cynical from it. Long-term relationships are satisfying in a way that dating can never compare to.
This formal meeting where both sides are constantly analyzing their partners and odds, i.e., dates, inevitably leads to this paradigm. It is just like job interviews: you can't possibly get to know each candidate deeply, so you have to create artificial proxies that will help you choose wisely. And these proxies are wrong most of the time.
But when finding a partner you can actually take the time to get to know another person more deeply. In fact, you do that all the time: at work, in your neighborhood, with friends and friends of friends. But that is no longer a possibility if you have spent all your life not interested at all in the people surrounding you, waiting for the time when you will choose a person from a shelf to marry you.
Tl;dr: I don't think it's so much about successful women's high expectations, but about women who didn't have much interest at all in other humans and now think of a partner as a product. And they don't get that this 'product' has a mind of its own.
Before people say "why don't men do the same?" here is why: http://jobs.economist.com/article/when-women-dare-to-outearn...
"For the couples themselves, the dynamic may be a problem. As long as the woman earns less, her income does not cause trouble in the marriage. Once she earns more, however, marriage difficulties jump and divorce rates increase. Interestingly, it does not seem to matter whether she earns only slightly more, or substantially more, an indication that it is not female income per se, but the mere fact of earning more, that causes trouble."
I doubt that is changing anytime soon.
A store has just opened in New York City that offers free husbands. When women go to choose a husband, they have to follow the instructions at the entrance:
You may visit this store ONLY ONCE! There are 6 floors to choose from. You may choose any item from a particular floor, or may choose to go up to the next floor, but you CANNOT go back down except to exit the building!
So, a woman goes to the store to find a husband. On the 1st floor the sign on the door reads: Floor 1 - These men Have Jobs. The 2nd floor sign reads: Floor 2 - These men Have Jobs and Love Kids. The 3rd floor sign reads: Floor 3 - These men Have Jobs, Love Kids and are extremely Good Looking.
Wow, she thinks, but feels compelled to keep going. She goes to the 4th floor and the sign reads: Floor 4 - These men Have Jobs, Love Kids, are Drop-dead Good Looking and Help With Housework. "Oh, mercy me!" she exclaims. "I can hardly stand it!" Still, she goes to the 5th floor and the sign reads: Floor 5 - These men Have Jobs, Love Kids, are Drop-dead Gorgeous, Help with Housework and Have a Strong Romantic Streak.
She is so tempted to stay, but she goes to the 6th floor, and the sign reads: Floor 6 - You are visitor 71,456,012 to this floor. There are no men on this floor. This floor exists solely as proof that you are impossible to please. Thank you for shopping at the Husband Store.
To avoid gender bias charges, the store's owner opened a Wife Store just across the street.
The 1st floor has wives that love sex. The 2nd floor has wives that love sex and have money. The 3rd through 6th floors have never been visited...
...making those assumptions, I'm trying hard not to feel just a little smug when realizing I'm one of those approaching-middle-aged men who's suddenly a lot more attractive (effectively) than ten years ago.
I don't sleep around rampantly, and never have, but I am with a 20-something woman (and part of that first paragraph I wrote comes in because I didn't pick her to settle down or because she's a 10, nor is that why she's with me). The description of relationships in that age range did make me think a bit.
I'm trying not to feel smug because that's a terrible reaction: it's the same way you'd expect a hot 20-something girl to feel knowing she can get any guy she wants, at least temporarily. And feeling smug about this ignores the fact that, whether women who do fit this profile were jerks in their younger years or not, they're now more mature, more experienced, and facing prospects that just aren't pleasant and make the rest of their lives -- which they've worked hard for -- a lot more uncertain than they had reason to expect before. Regardless of how carelessly or inconsiderately you conducted the romantic pursuits of your younger years, if this is the problem you face, I can manage at least some sympathy.
That said... I still can't shake the doubts I expressed at the start. I obviously haven't seen the data or anything, but it's hard to look at this and say "yep, I have no doubt their methods are good and their conclusions are representative."
And to add to this, why are so many guys douchebags? Because we've been treated like shit by women from our teenage years, and now know we have the upper hand.
Of course, I'm not like that (probably would be if I was single), my wife is very sweet and very pretty, and I'm glad to be out of the dating game.
I must say that I haven't seen anything as weird as I read here with people I know; guys getting their own back as some kind of revenge for their missed 20s? Maybe it happens; luckily I don't know these guys.
I'm 38, like Greg the writer from the article; unlike Greg, I had some idea how basic things work. You know: supply and demand. In (and a bit before) my 20s I was a big guy with glasses, a beard and long hair. I listened to metal music. So I should go to rock concerts and rock bars to meet girls? Of course not; that would be stupid. I went to parties with clean-shaven, nice-smelling, well-dressed, upper-class-talking students: people studying law, business, etc. In those days (still? no idea) math/physics/cs students were the real geeks and they didn't go to those things. All the girls there wanted these guys, and there were ONLY these guys; they all looked the same. There would always be about 1-3 girls who went along with friends or just out of boredom, but who hated the kind of guy there, either because of looks or attitude. They went for me, automatically, every time. I would talk about physics and they would sleep with me; I had/have great relations with some of them. It still works now (I'm happily married, but it still is flattering).
I ran a successful dating site for a while and often explained to people that if you all fish in the same pond, nothing will happen. That's just useless disappointment if you're not Don Juan.
I'm 38. I get more attention from early 20's women now than ever before.
We're trying to fit a square peg into a round hole. Marriage was born a long time ago and used for very very different purposes than what the 21st century 30-something career woman is hoping to use it for. Of course said demographic isn't getting what they want from it.
Both the criteria for a spouse and the reasons for getting married are either too superficial or overly vague. You need to first ask yourself why you want to get married and then develop criteria for a partner based on that.
Why do you want a spouse? To have sex? You don't need a lifetime commitment for that, it'd be a wiser life style decision to move to a place with legal prostitution. Do you want a spouse because you're lonely? Then make more friends. There's no need to make the relationship legally binding, go to places where people congregate with similar interests. Do you want to have kids? There's tons of charities out there where you can mentor children and make a very real impact on our society without creating new children with a spouse whom you selected based on criteria that are terrible predictors of being a good parent.
We as a society are never going to be able provide healthy guidance on marriage until we start to be very honest with ourselves about what marriage is for and what its purpose is for each us.
Some bits I don't really know if I should dismiss: "It's wall-to-wall arseholes out there." It's too easy to find negative anecdotes and sentiments that things are getting worse, especially among the nonvoluntary singles.
"Women with degrees want a smaller group of men with degrees" will fix itself. A degree isn't what it used to be in exclusiveness. Women might even be doing more degrees specifically because they are under a little less pressure to earn.
I think preferences at different ages play a bigger part. Women tend to be at peak attractiveness in their 20s, men in their 30s. Both want to settle down in their 30s. Also, men can have kids later, so even though their attractiveness goes down they have longer. This makes it easier for men in the settle-down phase and women in the play-the-field phase. The 30-something women's complaint (can't find a nice guy to marry) just seems more reasonable than the 20-something men's.
Damn it... it's just like the app store.
After all the crap women put us through, most men will read this article and cheer.
2. Behavioral psychology says we all tend to "high-water mark": we want our eventual partner to be better looking, smarter, more successful, etc. than the partners we had previously. Especially if you're getting older and your appeal is perhaps declining, that's going to make it very difficult to find someone who meets your standards, because your standards have risen over the course of your dating life.
That's weird. It all sounds rather medieval to me. I hope this is all just the kind of selective stream of colorful anecdotes that journalists are so fond of.
Each year, about 500,000 men in the US get a vasectomy, with rates higher among more educated and higher-income men.
Why? Because birthrates are declining? Or because people "need to" have a partner?
"Men put in more work up front -- making the first move, taking the women out and showing them a good time, etc. Women put in more work once the relationship gets going (i.e. after much sex has been had). They put up with their guy's frustrating habits and work to advance the relationship forward."
Now, this is a description of the MAJORITY of interactions, not all, of course. Some women chase men (or are more open to advances from men). Some men are very marriage minded. But the majority behaves as I described. To see why, I highly recommend this article: http://denisdutton.com/baumeister.htm
Now, how does this affect the marriage market? Well, the conclusions follow directly from observing the trends that are occurring in the last 50 years:
* More women work
* Women work longer hours
* More women are educating themselves
* More young women are independent financially
* In fact, young unmarried women make more than their male counterparts
However, in some ways the situation is pivoting again:
* Technology is making traditional college educations less useful
* The internet will soon disrupt college education
* Income inequality penalizes wage earning in favor of capital (running a business with clients is more inflation-resistant). Entrepreneurs are the new finance guys.
All this should combine to once again change women's perspective on who's dateable.
Women respect risk-taking men (see the article), and want to have children with a successful man whose risks paid off.
And it looks like the humble folks on HN with their lifestyle businesses or those in successful startup cities will have the advantage in terms of earning potential, freedom to choose, and also women.
If they would just work out more... :)
Eastern cultures (india/china etc) have a relative advantage now, precisely because of their culture. Think of all the lost productivity from emotional hardships from most men in their 20s and lots of women in their 30s in western countries.
The only thing I can suggest to all you of guys out there (and I know that 90%-95% of the readership here is 'guys' and not 'chicks') is to get educated.
Go read these blogs:
http://dalrock.wordpress.com/
http://therationalmale.com/
http://heartiste.wordpress.com/
http://www.rooshv.com/
And why not some books:
http://www.amazon.com/Models-Attract-Through-Honesty-ebook/d...
http://www.amazon.com/The-Rational-Male-ebook/dp/B00FK901R8/
http://www.amazon.com/The-Art-of-Seduction-ebook/dp/B0032BW5...
And why not a Reddit too:
http://www.reddit.com/r/TheRedPill
The problem is that you must have the confidence to write your own rules in life. If you speak with confidence, move with confidence, dress with confidence, and act with confidence, you'll have your choice among women.
The irony of it is that you only get confidence from past success. You must move beyond your nerd persona from high school. If you adopt the mantra "I AM the prize" and actually truly believe it, women will believe it too.
I cleaned up my act. I was just a cubicle nerd in Silicon Valley. (Although I must have had something going for me: I became a manager.) I hired a personal trainer and started to hit the gym like a wild animal. My abs came out: I hadn't seen them since high school. I changed my diet. (Hint: The Paleo/Atkins diet works.) I started a relationship with a tailor and ordered a lot of made-to-measure clothing. I subscribed to GQ.
I read HN every day. The technical articles are fascinating, and the writing brilliant. Yesterday I spent a large section of my day reading a set of about 450 slides about subtleties in the C language that was linked from this site. Today I spent a good fraction of my morning reading about elliptical-curve cryptography. I am a nerd at heart.
Yet, I'm not a nerd in the sense that you think of. When I meet a new girl, my frame is "I'm going to bend you over my kitchen table and fuck you like the dirty ho that you are." She knows it just by my speech, my body language, and how I act. Obviously, some women won't step into that frame. It doesn't matter: the thing you have to realize is that men display, and women select. The key to catching women is approaching more women. Depending on your perceived status, a certain fraction of women will select you. Don't waste time with women who don't select you. Focus on the ones who do.
I can already hear the shrill cries of "oh, no GOOD girls would select a guy like that." It's a fallacy. Women are emotional, and when they step into the strong frame of a man with whom they resonate, all bets are off. The nice HR girl you took to dinner at a fashionable restaurant on University Ave. in Palo Alto will screw a guy in the back seat of a car if he has high enough status. Give up your good girl/bad girl dichotomy.
The problem I have now is described as the player's curse. The sheer number of women riding the cock carousel (i.e., slut it up in your 20s, find the beta provider in your early 30s) has distorted the market. (If you don't believe me, shut the fuck up and go read the reference sources I cited above.) In my 20s I dreamed of children and family. Lots of men are simply dropping out of the mating market and jerking off instead of dating, because women in their 20s don't select their twentysomething equivalents. A man in that situation has two choices: kick up his game a notch, or retreat into porn and World of Warcraft. The paradox is that a man of willpower and clarity who can put effort into cleaning up his act can break into the side of the selected and score plenty of vj. Once you understand that, you can see modern-day feminism for the hoax that it is. It is a pox on the civilized world.
I can already foresee that some white knight jerkoffs out there are going to call me a misogynist. For the record, a misogynist is someone who HATES women. I'm not a misogynist. Roissy/Heartiste is not a misogynist. Roosh is not a misogynist. Usually, the misogynist stick is used to say, "you're not being politically correct." If you want to say, "greenlander, you're not being politically correct," I'll accept that. I'll accept it even if you want to say, "greenlander, you're a self-absorbed, narcissistic, self-deluded dickhead." It's the truth. But don't call me a misogynist: I love women. One must simply see them for what they are.
The great thing is that it doesn't matter how many people out there slander me with politically-correct ad hominems. A man who is ready to see the truth will follow the path if even a tiny morsel of the truth is laid before him. And if I even help one nerd change his life for the better by nudging him in the right direction, the past hour I've spent writing this post will not have been in vain.
Being focused is key when you want to be successful as an entrepreneur. Searching until you've found the right one is the most distracting thing.
- women want direction in life from men (not feelings).
- men want beauty, support and kindness from women (not careers).
No one is giving the other what they want/need so they seek it in themselves, making the situation even worse.
Look, guys and gals, there's some quite good evidence that the genes of people of descent in Western Europe, Russia, and East Asia are essentially the same as the genes that were successful, say, 10,000 years ago, although we could likely push that back to 25,000 years ago.
So, think what tribal or village life was like, say, beside a river in Europe 10,000 years ago. Right: The women gathered together and tended to the children, prepared food, and made clothing. The men and the boys old enough did men things: hunting, tool making, building, and fighting.
The talents of men for those men things led to more tools, fire, wheels, metals, ..., Windows 7, and these things enormously changed the economy and culture, built by men in ways convenient for men, wildly different from what the women did 10,000 years ago and not so convenient for women. E.g., a single woman or a woman in a suburban house with 2-3 kids is in a very different situation than the women in the tribe/village 10,000 years ago.
In simple terms, the women were happier with their lives 10,000 years ago, assuming there were no problems with disease, injury, hygiene, food, childbirth, etc.
Then, women of 35, sorry: You are too late to the game. Way, way, way too late. How much too late? At least 15 years, more like 20 years, and for a really good answer on when to start looking for a husband, let me be clear (assuming good nutrition and rate of maturation): 22 years. Right: Congratulations on your abilities at arithmetic; 10,000 years ago you would have been looking for a husband at age 13 or so and getting married at age 14-16. Did I mention that you are late?
There's more from the side of the men: He wants her cute, sweet, pretty, precious, darling, adorable, something to cherish and protect. How to know? Easy: Look at the faces. Hmm? Right. Look at the faces of human females over the years starting at age 1. There they elicit their support from Daddy, uncles, etc. with their faces, facial expressions, and expressions of endearing emotions. That's just how it works. And (simple argument) that's how it worked 10,000 years ago (proof left as an exercise). Then look at the faces over the years. Notice something? Right: At age 10, with some work on hair style and makeup, she can look 17. Or, to be more clear, a young woman of 17 still tries to look like an endearing 10. And even more so for a young woman of 13-16. Why? Endearing. She's not trying to be independent, autonomous, self-sufficient, and equal, crashing through glass ceilings, adopting and hiring a nanny, etc. Instead she's trying to be endearing, cute, sweet, meek, darling, adorable, precious, to be cherished, protected, and cared for by her husband as she has babies.
But, woman of 35, on endearing, etc., you just don't ring his bell, just don't arouse his protective, caring emotions, are way, way out of the game. Any pretty girl of 14 can totally blow you off the field of competition.
The way of the world. And the result? Right: In the more developed societies the average number of children per woman is significantly under 2.1. E.g., in Finland it's 1.5, which means that in 10 generations 30 Finns will become 1. We're going extinct, literally, quickly.
Why? It's not nice to try to fool Mother Nature.
The way of the world. That's just how it works and has worked for at least 10,000 years. That's how it worked for all the women you descended from for nearly all of the last 10,000 years. So, go back to 13, and let's try again, if you can find a way to do that.
Today 13? Right: She has to (1) find him, (2) get into boy/girlfriend with him, (3) go steady or some such with him, (4) get a diamond, (5) get married, all by about age 17-19. E.g., Lady Di decided at 15 that she wanted to catch Prince Charles, and she did, married him at age 20. Age 25, 35, etc. to start looking for him? You gotta be kidding! Uh, honey, there are sperm banks -- check one out!
Wish I'd known this, this clearly, when I was 15. Very much wish that.
"Marriage is about offspring, security, and care taking." -- extra credit for knowing the source!
I also predict that those men will do so in a very masculine way.
Finally, I hypothesize that a lot of the demand for good breadwinners from women who already have all the bread they need is cultural rather than hormonal.
The best indicator that this is true is the massive difference in how couples behave in public (keep with the norms) and in private (endless variety). Another observation is that lots of "cultural revolutions" are simply formerly private activities becoming public.
Don't go on dating sites, except for maybe okcupid. But even then, things like Reddit meetups, concerts, meetup.com meetups, are where you should meet people. Don't go to the bar either.
If you're in public, go up to someone and say "Hi, I'm x and y" followed by something relating to wherever you are. I've had great success with this, at the very least, you'll get a coffee date, at worst, a fake number.
It's not hard, you just need to put yourself out there. Screw rejection.
I would recommend watching this video if you want a different, non-mainstream perspective on the philosophy of the sexual, marriage, and 'dating' marketplaces.
You don't have to agree with it, but it's worth a watch, especially if you are having trouble 'understanding' women. Women are actually very simple biological creatures, like men. They just operate under a different set of constraints, which are generally invisible to men.
In my opinion he is able to see and convey things from the perspective of women, which is valuable insight for an audience of men.
Sometimes it seems as if HN-users are feeling so guilty about the whole bro-gramming topic, that they up vote anything that has to do with women..
The problem with NIST Dual_EC_DRBG is simpler than the article makes it sound. A good mental model for Dual_EC is that it's a CSPRNG specification with a public key baked into it (in this case, an ECC public key) --- but no private key. The "backdoor" in Dual_EC is the notion that NSA --- err, Clyde Frog --- who is confirmed to have generated Dual_EC, holds the private key and can reconstruct the internal state of the CSPRNG using it. I think this problem is simple enough that we may do a crypto challenge on a toy model of Dual_EC.
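A minimal sketch of that mental model, using modular exponentiation as a stand-in for elliptic-curve point multiplication (real Dual_EC works on x-coordinates of EC points; every constant below is made up for illustration):

```python
# Toy analogue of the Dual_EC backdoor. The spec publishes two "points"
# P and Q; the suspicion is that the designer knows d with P = Q^d.
p = 2**61 - 1            # a prime modulus (hypothetical parameter)
Q = 5                    # public base (hypothetical)
d = 123456789            # SECRET backdoor scalar, known only to the designer
P = pow(Q, d, p)         # second public constant: P = Q^d mod p

def step(state):
    """One PRNG step: emit an output and advance the internal state."""
    output = pow(Q, state, p)      # what consumers of the PRNG see
    new_state = pow(P, state, p)   # hidden internal update
    return output, new_state

# Designer-side attack: one observed output reveals the next state,
# because output^d = Q^(state*d) = P^state = new_state.
s0 = 987654321
out, s1 = step(s0)
recovered = pow(out, d, p)
assert recovered == s1   # all future output is now predictable
```

Whoever holds d turns a single observed output block into the generator's entire future state, which is exactly the worry about the NIST constants.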
Nobody in the real world really uses Dual_EC, but that may not always have been true; the circumstantial evidence about it is damning.
The NIST ECC specifications are in general now totally discredited. If you want to see where the state of the art is on ECC, check out http://safecurves.cr.yp.to/.
You should never, ever, never, nevern, nervenvarn build your own production ECC code. ECC is particularly tricky to get right. But if you want to play with the concepts, a great place to start is the Explicit Formulas Database at http://www.hyperelliptic.org/EFD/; the fast routines for point multiplication are mercifully complicated, so copying them from the EFD is a fine way to start, instead of working them out from first principles.
Doing 2048 bit private rsa's for 10s: 1266 2048 bit private RSA's in 9.98s
Doing 256 bit sign ecdsa's for 10s: 22544 256 bit ECDSA signs in 9.97s
Doing 2048 bit public rsa's for 10s: 42332 2048 bit public RSA's in 9.98s
Doing 256 bit verify ecdsa's for 10s: 4751 256 bit ECDSA verify in 9.92s
f : x -> pow(x, pubkey) mod m
g : x -> pow(x, privkey) mod m
The "big breakthrough" result was actually proven by Euler hundreds of years ago!  The innovation of RSA was building a working public-key cryptosystem around Euler's result, not the result itself.
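As a concrete toy illustration of that f/g pair, here is textbook RSA with tiny made-up primes; no padding and no security, just Euler's identity doing the work:

```python
# Textbook RSA with toy parameters (all numbers hypothetical; never use
# unpadded RSA or primes this small in practice).
p, q = 61, 53
m = p * q                        # public modulus, 3233
phi = (p - 1) * (q - 1)          # Euler's totient of m, 3120
pubkey = 17                      # public exponent, coprime to phi
privkey = pow(pubkey, -1, phi)   # modular inverse of pubkey (Python 3.8+)

f = lambda x: pow(x, pubkey, m)    # public operation (encrypt / verify)
g = lambda x: pow(x, privkey, m)   # private operation (decrypt / sign)

# Euler's result gives x^(pubkey*privkey) = x (mod m),
# so f and g invert each other in both orders.
msg = 42
assert g(f(msg)) == msg
assert f(g(msg)) == msg
```

The "cryptosystem" part is everything around the identity: choosing m so that factoring it is hard, and publishing pubkey while keeping privkey (and phi) secret.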
Not a cryptography expert here, I don't know how to respond to these.
Sure in the example the key was so small it could only do one character at a time. With a larger key you wouldn't know the length of bytes to decode in one go. But that would only slow things down a bit.
There must be more to it than that?
I couldn't figure out why all of her search and homepage settings had changed, and how they were so resilient that they were re-applied.
I did find SearchProtect, and eventually managed to remove it (uninstalls, + registry hacking, + force deleting files, + nuking the browser installs and re-installing).
But I hadn't figured out where it had come from as my girlfriend didn't believe that she'd installed anything and although I saw uTorrent I thought nothing of that since I didn't believe it installed such `add-ons`.
For those who encounter this, SearchProtect is really nasty. Really hard to remove.
- Starts very light, bare bones, downloads torrents and that's all
- Gets bloated with more and more features that nobody wants
- Partners with a shady company
Off to alternatives I go.
The day uTorrent pushed the update that tried to install a browser extension I was absolutely done with them. I do not support malware in any shape or form.
1.6.1 is light weight, unmolested, and still worth using.
As for uTorrent, it's been going down this path for a while, gradually introducing crap into the app. And this one is the last for me, as well.
Btw, apparently, they turned off registration on the forum to ward off the mounting complaints. When I go to https://forum.utorrent.com/register.php, I'm greeted with "Get lost spammer, we don't need your kind here." And of course the topic is closed. Well done.
runs fast, no ads, no issues, just works!
This is fuckyou-ware. Software that serves a reasonable purpose, but does it with utter contempt for the user.
uTorrent is gone in our case, I've moved her over to using Qget with our Qnap NAS and while it's not as feature rich as uTorrent it's a much better option. And it's one I can watch and control a little better as well!
My point is, torrent usage is synonymous with piracy, infringement, and other illegal activities. So perhaps it is this tendency that makes the people at uTorrent think it is not totally wrong to rip off people who are ripping off content and software makers. In my experience, I never fully trusted uTorrent. It is simply difficult to trust something that allows advertisement of malware, porn, and fraudulent sites. It started off quite well, but it has been on my watch list for quite some time now.
I'd been looking for a simple replacement for uTorrent for a while now. I've been using Linux for years and was surprised how awful it became while I was off Windows.
Now it's just another parasite on the internet.
I've also used Deluge, but there's nothing too special about it in my eyes.
I stopped using it about 2 yrs ago for similar reasons. It's malware-seeding garbage now.
It was a beautiful bit of software.
Let's be clear here: the user was still given a choice, but the user "trusted" uTorrent to not force them to make one. Give me a break.
The problem with most lock screen enhancements is that anything you put there is outside your phone security "firewall" and available to anybody who picks up your phone. The 4.2 lock screen widgets work fairly well with this (eg: you can open the camera app without unlocking the phone, but attempting to swipe over the gallery forces you to unlock). However they are (I assume) using the core framework APIs to do that and I presume support for it is coded into the apps, while this seems to be doing it for any app.
What happened to the dreams of a computer in your pocket that knew what you wanted to do?
All anyone can think of is to complain about privacy? Really?
I once did a brainstorm session with a facilitator who taught me a great technique. Whenever someone suggests something, you aren't allowed to say "No" or "But"; instead you should say "Yes" and "And".
Try it for a second:
This application tries to predict what you will need when you pick your phone up. Currently it uses serverside processing to help with that. Yes, and imagine what else it could do with that serverside power! No battery constraints to worry about!
Privacy problems are a great way to kill good ideas. Put those concerns aside for a minute and imagine the portability of handheld devices merged with the power of always-on servers.
How do I know you're not sending my usage patterns upstream to CoverCorp? How do I know that you're not reading the Android Music Provider database, and sharing my data back?
All I want is a way to put the current weather on my lockscreen under the time, and to put immediate access to camera, flashlight, and Google Now there. Everything else I'm perfectly comfortable doing myself. Any suggestions for an app that does that?
I hate the idea it needs all sorts of server connections for their business model. I don't know a way around that, but if they or another company figure out how, that's what people will gravitate toward. Especially given the paranoid climate.
This makes me want an Android. Great job, guys!
I've never liked Android's implementation of home/app screens (widgets + some apps, tap to reveal all your apps).
I guess if you want a lot of clocks, Android is great.
This adds another app/button layer...
How well does it work with some kind of lock-screen security? The UX for that is always a hassle, and I'd love to find someone who is doing it well.
Are those interactions simulated though? I'm not an Android user so when I saw how thin the bezel was on that white phone they use I had to look it up.
Turns out it's the S4 Play Edition without the Samsung logo. That bezel isn't right though; the S4 bezel is pretty thin, but the video makes it look razor thin. Also: I want razor-thin bezels, let's get there.
It would be good to be able to define actions based on location (either by which wifi I connect to or GPS) - as well as time of day.
(I'd like to have my screen auto dim at 10PM)
I still find it amazing to see how true 1984 is becoming. I am sure the next phase is thought control, because "crimes" start there and have to be prevented at all costs. Let's get inside the minds of people and put CCTVs and audio recording devices everywhere. Under tables in restaurants, in cars, in buses, every possible place. Crime has to be prevented.
The future is scary.
"Civil government, so far as it is instituted for the security of property, is in reality instituted for the defense of the rich against the poor, or of those who have some property against those who have none at all." (Adam Smith, The Wealth of Nations)
The firehose, which BlueJay presumably collects from, doesn't capture geolocations that aren't already in the public data, right? So it looks like the end of the road for criminals who tweet about their #meth lab and have let Twitter geocode their tweets. Hopefully that covers the majority of villains the police have to deal with.
I am writing this after coming back from the bar with friends. I would actually consider myself someone who drinks too much. I have written code for years, tried and failed on over four startups. I have spent a week in a hospital for suicide and depression. I currently work a cushy job where I make great money and write code that a first year CS student could write.
This man was depressed. He tried to find outlets, to self-medicate, whatever we want to call his actions. He needed help. Rather than criticize his failures, let's note how fragile our human psyches are and work towards helping one another cope with our internal battles.
There is probably no line between self and business for a guy like PK. At that time you could build a significant program or even a game singlehandedly. Most likely he obsessively made his better ARC program for its own reward (you could use it for free) and he was surprised both by the commercial success and the subsequent legal attack. Very personal indeed, and if he wasn't already a completely tortured soul, that would be more than enough to take away any shred of sense he had made of the world. Give enough cash and free time to someone who has been cracked like that and he quite easily can end up dead from an existential crisis with no practical boundaries.
Considering this was some 25 years ago now, if he thought about intellectual property issues at all, the mindset at that time was very reasonable in that your source code and executable was considered copyrightable like a book - it didn't matter if it provided the same functionality as someone else's program as long as you wrote the code. Just consider the fate of the original spreadsheet for confirmation of this.
To summarize, before you take any stand against the tragedy that is the life of PK, consider that there is probably a huge concentration of people very much like the early Phil Katz right here on HN. The man simply needed help, and he didn't get it.
The team that owned ARC was even smaller than Katz's, and PKARC was based directly on ARC.
The takeaway is that Katz optimized existing code, his mother ran the pkzip business, they defamed the arczip guys, and Katz himself died a paranoid, drunken wreck. The problem with the doc is that pretty much nobody is there to defend Katz. It's an old war, and really doesn't matter now.
Each morning I read a note to myself: "The high score isn't money, it's people who love you".
Coders in general are susceptible to alcohol for these kinds of reasons. Add in flexi hours (hey it doesn't matter what time I show up as long as I ship code, right?), and it's practically the ideal job for a functioning alcoholic.
I've seen dozens of fellow programmers slip down the slope. It's particularly bad in the financial sector in London, where "trader culture" of hard drinking, drugs and women is seen as acceptable. Usually it's well concealed until early to mid-thirties. Often their situation rapidly degenerates after a relationship breakup or family bereavement. What's fun and social when you're with your friends in your 20s isn't so much when you're 35 and lonely.
It's striking how casual and uninformed the general attitude to this drug is in our industry, e.g. http://zachholman.com/posts/how-github-works-creativity/
I wonder if alcohol served him as an unfortunate remedy for his introversion.
And on unrelated note:
He got real good at optimizing programs, and he learned to get the job done with the least amount of instructions and running times.
I like the culture of code bumming back in the day. Although we now live in a time of abundant CPU cycles and memory, there's still value in that, even above many layers of abstraction. Sadly, a growing number of programmers don't care about, or aren't even aware of, their programs' resource footprint on the hardware.
The brightest of us are the hardest to reach and the most difficult to persuade, but they're also the most painful to lose.
To anyone reading this: if someone that you care about needs help, don't wait until tomorrow. Don't make excuses. Don't fuck around.
Me: Why would I pay 40 dollars for zipping software I get for free? You guys are totally late to the game, winzip and winrar already exist.
Rep: ....... Yea, we started the industry, and our founder died from alcohol abuse...
Me: Good joke...
It's a sad tale indeed.
These days, every time a startup reaches an IPO, we get a movie, book, or long series of articles. We learn the guys revolutionizing social media are total basket cases.
Distros that use Python 3 by default (such as Arch and Gentoo) still allow for Python 2 to work side-by-side with Python 3. Proper separation using virtualenvs works seamlessly, and there's no problem at all to work simultaneously on Py2 and Py3 projects.
IMHO, conflating these roles, providing or expecting one install of a language to fit both simultaneously, has been something that's irritated me about Unix-like OSes for a long time, much as I love them.
Yes I know there are plenty of tools for installing dev versions of tools side by side with the system components. IMHO doing so should be the default assumption unless you really are developing system scripts or scripts that you explicitly expect to be limited in scope to that OS.
Can someone with knowledge of the Python ecosystem explain what took a major distro so long, given that you can run different versions of Python in parallel (or can't you?) for the big professional software that needs the old version? - http://python.org/download/releases/3.0/
Otherwise no big news, since all other major distros are switching to default python 3 too.
I just wish Python 3 didn't benchmark so much slower than Python 2 in some of my use cases (though I still aim for compatibility with it).
Couple of relevant search results:
Has anyone else experienced this?
This will be interesting.
What our systems found was definitely a compromised JS file, and others on this thread have posted something similar to what we saw. This is not a false positive.
We have detailed help for webmasters in this kind of situation:
One thing that I strongly suggest to any webmaster in this situation is to look for any server vulnerability that allowed this file to get compromised in the first place. We sometimes see webmasters simply fix the affected files without digging into the security hole that allowed the hack, which leaves the server vulnerable to repeat attacks.
Happy to answer questions.
It reports google.com for 142 exploit(s), 131 trojan(s), 98 scripting exploit(s)
Thank you, thank you, ladies and gentlemen, I'll be here all week!
And now forevermore the icon for that site in the url-bar dropdown is the warning icon, and I have not been able to find out how to change it back to the normal one.
tl;dr: Relevant services moved to new servers; investigation continuing. Post mortem to follow once that's done.
- easier to read
- mobile-browser friendly
- auto refreshes
- preserves articles that make it to the front page, and in (reverse) order of the time they made it to the front page, so no need to constantly check the front page and parse all of its contents to see if new articles are posted
a big Thank You and kudos to its author(s) and maintainer(s)-- it works well and consistently!
-top X posts by comments from a day/week/month
-top X posts by votes from a day/week/month
What I don't like about Hacker News is that interesting things fall off the front page too quickly and discussion dies. I prefer an interface where interesting stuff stays at the top longer (number of comments in the last week approximates it well, in my view).
My theory on this is that it was caused by loading articles a certain number of hours back from the current time, and then grouping by day before sorting to the top X.
For example, if the last 24 hours were loaded, and grouped into today and (part of) yesterday, you would get an accurate top X for today so far, and an accurate top X for the portion of the previous day it had fetched.
This was particularly noticeable when I hadn't visited for a few days (I've since rectified this aberrant behavior of mine) and loaded a few past days to review missed submissions. Seeing something that caught my eye disappear as it loaded older content drove me nuts.
It looks like the problem is fixed now, but it's hard to be sure, as it may be more or less likely depending on the time of the day you visit.
I sent a bug report to the developer when I noticed this (in February 2013), but never heard back. I'll happily go back to using this interface if it's fixed though, I found it generally more pleasant to use.
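The load-then-group theory above can be sketched in a few lines (a hypothetical reconstruction with made-up data, not the interface's actual code): fetch the last 24 hours, group by calendar day, then rank per day.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the suspected bug: fetch posts from the last
# 24 hours, group by calendar day, then take each day's top by votes.
now = datetime(2013, 2, 15, 18, 0)
posts = [  # (timestamp, votes)
    (datetime(2013, 2, 14, 9, 0), 500),   # yesterday's real top post
    (datetime(2013, 2, 14, 20, 0), 120),  # yesterday, inside the window
    (datetime(2013, 2, 15, 10, 0), 300),  # today
]

window = [p for p in posts if now - p[0] <= timedelta(hours=24)]
by_day = {}
for ts, votes in window:
    by_day.setdefault(ts.date(), []).append(votes)
top = {day: max(vs) for day, vs in by_day.items()}

# The 500-vote post falls outside the fetch window, so "yesterday's
# top" is misreported as the 120-vote post until older pages load.
```

This also explains why items would disappear from a day's top X as older content loaded in: the per-day ranking was only ever accurate for the portion of the day already fetched.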
BTW another project of Wayne's is: http://clara.io
Also, I want to know what the settings are compared with the "official" HN frontpage ordering.
"about" is broken, where I had hoped to find a FAQ
Now I just use regular HN with it.
* Mockup, nothing works.
#FBFBFB for instance.
The problem, then, is that general functions have no (essential) bandlimit. Remember that differentiation acts as multiplication by a monomial in the frequency domain. Non-constant polynomials always eventually blow up away from 0, so in differentiating, you're multiplying a function by something that blows up in the frequency domain. This means that, in the result, higher frequencies dominate lower frequencies, at a polynomial rate.
Let me be clear, the problem with numerical differentiation is not just that rounding errors accumulate, it's that differentiation is fundamentally unstable, and not something you want to apply to real-world data.
It depends very much on what your application is; however, I think generally a better approach to AD is to redefine your differentiation by composing it with a low-pass filter. If designed properly, your low-pass filter will 'go to zero' faster (in the frequency domain) than any polynomial, thus making this new operator bounded, and hence numerically more stable. It's not a panacea, but it begins to address the fundamental problem.
One example of such a filter is Gamma(n+1, n x^2)/Factorial[n], where Gamma is the incomplete gamma function.
To see why this is a nice choice, notice the special values of the incomplete gamma function (last link below). This filter is simply exp(-x^2) (the Gaussian) multiplied by the first n terms of the Taylor series of exp(+x^2) (1/the-Gaussian). Since this series converges unconditionally everywhere, the filter converges to 1 for fixed x as n -> +infinity; however, since it's still a Gaussian times a polynomial, it always converges to 0 as you increase x but fix n.
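The instability, and the low-pass fix, can be demonstrated numerically. This sketch uses a plain Gaussian kernel rather than the incomplete-gamma filter described above, and the noise level and kernel width are arbitrary choices of mine:

```python
import math, random

# Differentiate noisy samples of sin(x) directly, then again after a
# Gaussian low-pass, and compare both against the true derivative cos(x).
random.seed(0)
h = 0.01
xs = [i * h for i in range(2001)]
f = [math.sin(x) + random.uniform(-1e-3, 1e-3) for x in xs]

def central_diff(y):
    # second-order central difference; drops one point at each end
    return [(y[i + 1] - y[i - 1]) / (2 * h) for i in range(1, len(y) - 1)]

def gaussian_smooth(y, sigma):
    # normalized Gaussian kernel truncated at 4 sigma
    half = int(4 * sigma / h)
    k = [math.exp(-0.5 * (j * h / sigma) ** 2) for j in range(-half, half + 1)]
    s = sum(k)
    k = [v / s for v in k]
    return [sum(k[j + half] * y[i + j] for j in range(-half, half + 1))
            for i in range(half, len(y) - half)], half

raw = central_diff(f)
sm, half = gaussian_smooth(f, sigma=0.05)
smooth = central_diff(sm)

raw_err = max(abs(raw[i] - math.cos(xs[i + 1])) for i in range(len(raw)))
smooth_err = max(abs(smooth[i] - math.cos(xs[i + half + 1]))
                 for i in range(len(smooth)))
# Raw differencing amplifies the noise by roughly a factor of 1/h;
# the low-pass version stays much closer to the true derivative.
assert smooth_err < raw_err
```

The smoothing introduces a small bias (it attenuates the signal slightly), but that bias is tiny compared to the 1/h noise amplification of the unfiltered difference.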
This is my area of research, so if anyone's interested I can give more details.
 https://en.wikipedia.org/wiki/Band-limit https://en.wikipedia.org/wiki/Fourier_transform#Analysis_of_... https://en.wikipedia.org/wiki/Incomplete_gamma_function https://en.wikipedia.org/wiki/Incomplete_gamma_function#Spec...
For any function that is not a combination of polynomials, you need to have its Taylor expansion up to the desired order of derivatives, so you can't just take an "arbitrary" function and use this method to compute its derivative in exact arithmetic.
So for anything other than polynomials, you just reword the problem of finding exact derivatives to finding exact Taylor series, and in order to find Taylor series in most cases, you have to differentiate or express your function in terms of the Taylor series of known functions.
Edit: Indeed, take the only non-polynomial example here, a rational function (division by a polynomial). In order to make this work, you have to know the geometric series expansion of 1/(1-x). For each function that you want to differentiate this way, you have to keep adding more such pre-computed Taylor expansions.
They're efficient enough for first-order derivatives. For example, they are used in Ceres, Google's library for non-linear least-squares optimization.
All values tried so far agree with Wolfram Alpha, so color me surprised and happy for learning something new.
Encoding power series as matrices is sometimes convenient for theoretical analysis (or, as here, educational purposes), but it's not very efficient. The space and time complexities with matrices are O(n^2) and O(n^3), versus O(n) and O(n^2) (or even O(n log n) using FFT) using the straightforward polynomial representation (in which working with hundreds of thousands of derivatives is feasible). In fact some of my current research focuses on doing this efficiently with huge-precision numbers, and with transcendental functions involved.
The number a + b*epsilon can be encoded as the 2x2 upper-triangular matrix [[a, b], [0, a]]; matrix multiplication then reproduces dual-number arithmetic, e.g. squaring gives [[a^2, 2ab], [0, a^2]], matching (a + b*epsilon)^2 = a^2 + 2ab*epsilon.
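The matrix encoding can be checked numerically; a small sketch with plain Python lists standing in for 2x2 matrices:

```python
def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dual(a, b):
    # a + b*epsilon encoded as an upper-triangular 2x2 matrix
    return [[a, b], [0, a]]

# Evaluating p(x) = x^2 at x = 3 + 1*epsilon gives [[9, 6], [0, 9]]:
# the value (9) and the derivative (6) of x^2 at x = 3.
m = mat_mul(dual(3, 1), dual(3, 1))
assert m == [[9, 6], [0, 9]]
```

As noted above, this representation costs O(n^2) space for order-n derivatives, versus O(n) with the direct polynomial representation.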
This really does fall in the realm of algebraic geometry, since this method only works for rational functions - as he implemented it.
To numerically compute sin(x + epsilon) you need the Taylor series.
"...but to give you an overview, the idea is that you introduce an algebraic symbol epsilon such that epsilon != 0 but epsilon^2 = 0"
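That construction can be sketched in a few lines of Python (a minimal forward-mode AD sketch; the class and rule names are mine, not from the article, and nonpolynomial functions like sin need their derivative supplied explicitly, as noted above):

```python
import math

class Dual:
    # a + b*epsilon, with epsilon^2 == 0
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # value, derivative coefficient
    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

def dsin(x):
    # sin(a + b eps) = sin(a) + b cos(a) eps; this rule hard-codes
    # sin's first-order Taylor data, illustrating the point above
    return Dual(math.sin(x.a), x.b * math.cos(x.a))

x = Dual(1.0, 1.0)                     # seed derivative dx/dx = 1
y = dsin(x * x)                        # d/dx sin(x^2) = 2x cos(x^2)
assert abs(y.b - 2 * math.cos(1.0)) < 1e-12
```

Every primitive (sin, exp, division, ...) needs its own such rule, which is the "pre-computed Taylor expansions" objection raised earlier in the thread.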