The one thing I wish it had is either the ability to not save plans automatically, or at least a button to clear the history. As it is, I just pull up a console from time to time and do localStorage.clear()
So if you just want to check out the interface, you can click to load up an example or two.
How feasible would it be to port this over to MySQL / MariaDB? I know EXPLAIN output on MySQL is much simpler than what you get out of Postgres so my gut feeling would be that it wouldn't be possible.
C - https://github.com/lfittl/libpg_query
Ruby - https://github.com/lfittl/pg_query
Go - https://github.com/lfittl/pg_query_go
Node - https://github.com/zhm/pg-query-parser
Python - https://github.com/alculquicondor/psqlparse
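To make it concrete, here is a minimal sketch of what these bindings give you, using the Python one listed above (psqlparse); the parse()/tables() calls are as I remember its README, so treat the exact API as an assumption to be checked:

    # Illustrative use of psqlparse (Python binding of libpg_query, linked above).
    # Assumes `pip install psqlparse`; the parse()/tables() API is taken from its
    # README and should be double-checked against the current version.
    import psqlparse

    statements = psqlparse.parse("SELECT id, name FROM users WHERE active = true")
    for stmt in statements:
        # Each statement wraps PostgreSQL's own parse tree, which is what makes
        # these bindings handy for linters, rewriters, and analysis tools.
        print(stmt.tables())  # e.g. {'users'}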
Excerpt: "With reproducible builds, multiple parties can redo this process independently and ensure they all get exactly the same result. We can thus gain confidence that a distributed binary code is indeed coming from a given source code."
I'm very grateful for the work that this project has done and continues to do. Thank you!
I tripped over this a couple weeks ago and was both amused and annoyed, since it seemed that packages were being listed in the file in a random order. I'm asking here because it might already be fixed; we're using a slightly old version of the package/repository tools.
Right now we can sign source code, and we can sign binaries, but we can't show that a given source produced a given binary. I would feel much happier about installing code if I knew it came from a particular source or author.
I think the solution is to give those devs who favor such techniques a separate but easy to use fuzzing tool set that they can run just like their unit tests, separate from their usual 'build' command. Give them their ability to discover new bugs, but make it separate from the real build.
Except that the moment you "share them across devices", at least one large company will silently grab your contacts anyway. And several others will try to, too, with one excuse or another.
Thinking about trees as a supervised recursive partitioning algorithm or a clustering algorithm is useful for problems that may not appear to be simple classification or regression problems.
> Maximum depth of tree (vertical depth): The maximum depth of trees. It is used to control over-fitting; higher values prevent a model from learning relations which might be highly specific to the particular sample.
Shouldn't it be lower values, i.e., shallower trees, that control over-fitting?
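For what it's worth, a quick sketch with scikit-learn (purely illustrative, not from the article) shows the usual behaviour: a lower max_depth gives a shallower, more constrained tree, while an unrestricted tree can memorize the training sample:

    # Illustrative: shallow vs. unrestricted decision trees with scikit-learn.
    # A small max_depth constrains the tree and typically reduces over-fitting.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for depth in (2, None):  # None lets the tree grow until the leaves are pure
        clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
        print(depth, clf.score(X_tr, y_tr), clf.score(X_te, y_te))

    # The unrestricted tree scores ~1.0 on the training split but usually worse
    # on the held-out split than the depth-limited one; that gap is over-fitting.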
How governments can be enthusiastically cheering on a return to pre-industrialisation labour practices is beyond me - unless they have a vested interest in doing so, or are doing their damnedest to mollify an increasingly agitated populace while toeing the line their donors and lobbyists demand. Trump & Brexit are both symptoms of the increasingly widespread anger and despair, and if nothing fundamental changes (UBI?), a descent into a totalitarian/terrorist (where the terrorist is anyone who opposes unfettered state power) dichotomy is inevitable.
You already see the rhetoric in the press - traitors, saboteurs, etc. - and the political and economic landscapes are inseparable, despite what many wishful thinkers believe.
This is the sole reason why I don't think it will be successful in its current state. With most successful tech or services or whatever, the core idea is often super simple to grasp and you can instantly see the benefit. I don't see this with Ethereum.
Even with Bitcoin, which is complex, the benefits are immediately obvious to the common man. Decentralized system, no single entity controls it. There is a fixed amount of bitcoins, so like a mineral its value is probably going to be stable in the long run, and each bitcoin will increase in value as more people get interested. It's easy to send coins to anyone in the world, at any time.
What does Ethereum do? "Smart contracts" is probably the key phrase, but I don't understand how they work or how they will benefit me. Why bother?
If you're reading this, Kite: I now have a negative view of your product. We cannot allow corporations to take over open source tools. Donating is perfectly fine and encouraged, but the above example is a downright takeover. If you want another tool, then create one; don't take over an existing one and use the community's trust in that tool to promote your product.
It is a featured Atom package, which may point to whom GitHub is endorsing in this issue, though we could see a more direct response from them regarding both minimap and autocomplete-python.
After reading sadovnychyi's reaction to the autocomplete engine selection screenshot, I think forking is also the only remaining step for autocomplete-python.
I've never heard of such a thing before. Could someone explain how they would use machine learning for building coding tools?
It's a real shame as the service was good, but nothing is good enough to justify advertisements in my work-space. The fight against distraction is hard enough as it is without having to think carefully about where I'm clicking due to dark-pattern UI.
I've reported several of these issues; sometimes all I get is a single reply months later saying "fixed"... mostly, nothing.
Once I found a SQL injection in a courier service's (very broken) web portal. This was very serious because any idiot could drop all the tables, so I sent an email to the member of their tiny, yet already bureaucratically structured team with the most important-sounding title. I followed up several times because I knew someone saw my email (I embed beacons in my emails) but gave up after the sixth time. Three months later someone else replied saying "thanks Amin, we've fixed it".
On a separate occasion, a large government agency's emails routinely ended up in my spam folder. It was a huge problem, and they acknowledged it and said they couldn't figure out what was wrong. I took five minutes and found the problem to be a misconfigured server on the domain: the server sending the email identified itself as `server-a.governmentdomain.com`, but there were no DNS entries pointing that subdomain to the server. I reported this problem with clear instructions to test and fix the issue, but despite the instructions I was called multiple times to explain the issue in my own words over the phone. That was two years ago; last I checked, the issue was still present.
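The check itself takes seconds; something like this (dnspython, with a placeholder hostname since the real one isn't mine to publish) is enough to confirm whether the name a mail server announces actually resolves:

    # Quick check: does the hostname a mail server announces actually exist in DNS?
    # Placeholder hostname; requires `pip install dnspython`.
    import dns.resolver

    hostname = "server-a.example.gov"  # hypothetical stand-in for the real subdomain

    try:
        answers = dns.resolver.resolve(hostname, "A")
        print(hostname, "resolves to", [rr.to_text() for rr in answers])
    except dns.resolver.NXDOMAIN:
        # This was the situation described above: the sending server identified
        # itself with a name that simply didn't exist in DNS.
        print(hostname, "has no DNS record -- spam filters will hold that against it")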
Protection from this kind of blame-shifting and misdirected retaliation should be guaranteed by law. Until it is, bugs in critical and important infrastructure will go on unreported, and remain available for malicious actors to exploit.
Adobe had him arrested the day after he gave his talk.
Link to a Wired article here: https://www.google.com/amp/s/www.wired.com/2001/07/russian-a...
EDIT: I have a terrible memory-- thanks to the folks who replied to my comment with corrections.
In the old days, protesters used to physically go and picket in front of company offices. These days, protesters leave one-star reviews. I wonder which is more effective.
In both cases, it was fathers of children at the institution who noticed the bugs while legitimately using the system, and who were ignored when notifying the responsible party about it, until they "shouted so loudly" that they couldn't be ignored anymore, at which point they were reported to the police for hacking.
The links below are in Danish, but they can probably be translated if needed.
Also, if such behaviour is systemic, how should we bring about the paradigm shift in handling such events? Such incidents will happen more often across the world as e-governance becomes more predominant.
1 - https://thewire.in/119578/aadhaar-sting-uidai-files-fir-jour...
Unless the company concerned has a well documented and trusted bug bounty procedure, it can be very risky to report a bug in a system, if it involves any kind of hacking.
What happens is once the "bug" is reported, someone inside the company asks "How did this happen?". Now the person responsible has 2 options, admit it was their fault and the vulnerability exists and risk being accused of incompetence, or say that the system was hacked.
Human nature being what it is, one tends to claim the system was hacked, and the effects snowball, leading to the arrest of an 18-year-old who was just trying to help.
My advice: Don't report these types of bugs at all, or if you really feel you must, report anonymously.
The public procurement process for the current system, called RIGO, did indeed start in 2013, but the whole process is much, much older than that. A more-than-300-page feasibility study was published in 2011: https://www.bkk.hu/apps/docs/megvalosithatosagi_vizsgalat.pd... And a completely different system, called Elektra, was announced in 2004 with a 2006 deadline.
This whole clusterfuck with RIGO starting in less than a year was absolutely unnecessary since the 2011 study already suggested supporting contactless credit cards so once RIGO starts the only ones using this online ticket purchasing system will be those who have a credit card but not a contactless one. This is a (very) rapidly shrinking audience.
A rabbit was detained by the secret police. The interrogator asks him, "What are you?" The rabbit says, "A rabbit."
They torture, beat, and electrocute him for days.
Then, the interrogator asks him, "Who told you you're a rabbit?"
I'd really like to know which of these is the better solution.
It seems to me that if people go to the http address, they could be redirected to an attacker's address with a simple MITM attack. So there's an argument to be made for not using http at all, even for a legitimate redirect, because it can be so easily MITM'ed.
On the other hand, if the http address is left unused, then people who try it anyway will see it fail and be confused. For this solution to work, it seems the users have to be educated to always and only use the https address.
For these reasons, the whole separate http/https scheme seems broken by design.
What's the consensus from the security community as to the right setup here? Am I missing something, or is there a better way?
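For reference, the "legitimate redirect" half of this is tiny; a sketch like the following (Python, purely illustrative) is all it takes, though of course it does nothing about an attacker intercepting that first plaintext request:

    # Minimal illustrative HTTP->HTTPS redirector: every request on the plain-HTTP
    # port gets a 301 to the same path over https. This is the "legitimate
    # redirect" case only; it cannot protect the initial plaintext hop.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "example.com").split(":")[0]
            self.send_response(301)
            self.send_header("Location", "https://" + host + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()  # port 80 in practice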
They had a form where you could try the demo; it sent an SMS to verify and only allowed one query.
If you looked at the source of the page it had hidden fields to override the SMS verification and allow multiple queries.
I freaked out some friends for the day and nearly contacted a journalist but lost interest after some weeks.
I could have had my 15 minutes of fame or be on some list, or both.
It's alright, had some fun.
Or, the managers knew full well the system was shit and they had no time to fix it, but 80k/month is 80k/month.
edit: a few weeks ago, not this past summer that is still occurring
- The command being used (for example, "build", "restore")
- The ExitCode of the command
- For test projects, the test runner being used
- The timestamp of invocation
- The framework used
- Whether runtime IDs are present in the "runtimes" node
- The CLI version being used
Here is the telemetry code itself: https://github.com/dotnet/cli/blob/5a37290f24aba5d35f3f95830...
I think they should ask people first, like Yeoman does, but I don't think they deserve this much shit for such a small thing.
Nginx is widely respected in the market for handling high concurrency while exhibiting high performance and efficiency.
I don't even have to speak about the Git architecture. It speaks plainly for itself.
There's a series of books called The Architecture of Open Source Applications that does justice to this topic
None of this is arguing that one or the other style of architecture is "better" per se, but rather the architectures are different because they were in the end optimized for different kinds of development organizations.
Most business applications remain fundamentally a three-tiered architecture, with the interesting stuff today tending to happen in how you slice that up into microservices, how you manage the front end views (PHP and static web apps are pretty different evolutionary branches), and critically how you orchestrate the release and synchronization/discovery of all those microservices.
(None of which is directly an answer to your question, but is more meant to say that lots of the most interesting stuff is getting harder to spot in a conventional github repository because much of it is moving much closer to the ops side of devOps)
Spree has a clean API, clear models, front end and back end, extensions, and command line tools.
Especially take a look at the models:
Anyway, here are some projects which I can recommend for their source code:
* OpenBSD. Also the other BSDs. Plan9. And the BSD tools. Linux is a bit bloated but maybe it has to be. I don't recommend the GNU tools.
* WebKit. Also Chrome. Firefox not so much, although maybe it improved.
* Quake 1-3, as well as other earlier id games. Really elegant and clean. Also not that big in total. Doom 3 has become much bigger in comparison but again maybe it has to be.
* CPython. Also interesting for educational purposes.
* TensorFlow. Very much not Theano.
I really enjoy reading the source code of most projects which I used at some point. Some code is nicer, some not so nice, mostly judged by how easy it is to understand and how elegant it seems to be. In any case it really is rewarding to look at it as you will gain a much better understanding of the software and often you will also learn something new.
OpenERP, now Odoo, is written in Python.
OpenEMR is written in PHP. It dates from a while ago, but has been mostly updated to the latest PSR standards.
Might also try OrangeHCM, but not sure what those guys are doing these days.
Their argument that birds make maneuvers 3-4x more aggressive than needed for level flight, and therefore level flight may be within the reach of man, is an intriguing one. You could make parallel arguments about AI today, for example that driving a car requires only 1/3 to 1/4 of a human brain's power.
So, cranks were already a well-known commodity in the nineteenth century. Who would have guessed...
As a former researcher of alternative magnetic confinement schemes, I'm disappointed the latest research in FRCs and mirrors didn't make it into this talk. Viewers should take into account that this, like most talks, is pushing an agenda, in this case a new device called SPARC. It appears to also be a way of using the incredibly talented tokamak researchers at MIT now that Alcator C-Mod is not operating.
It baffles me that we just accept the kind of malice inflicted on people by programmers because "someone will always do it". As a profession / collection of skilled persons, we should really be better than that.
Obviously, one cannot see the future, nor would we want to be paralyzed by fear of doing anything. But there is a certain minimum requirement for collective responsibility which I really don't think we are meeting at the moment.
She possesses the power of foresight, which she uses to advise and guide the humans attempting to fight the Matrix
Mario Kart is a fun party game, not some ultimate test of skill.
People wear rings all the time; application is painless, they're easy to change once compromised or if there is a new model, and they provide the same features. Chips would have an advantage if they provided better IO, but currently they don't.
It's great to get yourself labeled a freak, though. I got mine done in a piercing shop, and when I walked in, the place was full of people with dozens of facial piercings, stretched earlobes, and full arm and neck tattoos. But as soon as I mentioned the chip implant and showed I was serious by producing the actual one I wanted put in, they all started murmuring and looking at me like I was sent back in time from the year 2210. The owner wasn't sure how to feel when I pulled out a laptop and an RFID reader and grinned like an idiot when I successfully logged on to the machine by waving my hand over the reader.
But what exactly are the advantages of this implant over NFC-cards or something like Apple Pay? That I cannot forget my card or my phone? I don't even know when this last happened to me...
Still crazy that anyone would opt-in to do this, but a misleading headline all the same.
I guarantee openssh, Firefox, LibreOffice, and probably a hundred other applications, are (orders of magnitude) more popular than the top applications on this list.
So, if this were titled, "Most open source software on github..." I wouldn't object. But, I have to completely reject the premise here, because I know that there's an entire iceberg of OSS software, including applications, that is completely excluded from the listing by virtue of either not being on github or being on github, but not using github as its primary method of distribution and promotion, and this data completely ignores everything below the surface.
Also, it's probably dangerous to begin to think of "Open Source Software" as only being "Software that has a public github repo".
2) Judging from the top-5 list in the post, between 1/4 and 1/3 of the projects have been miscategorized.
- https://github.com/chrislgarry/Apollo-11 -> should have been categorized as "Application", not "Documentation".
- https://github.com/tensorflow/tensorflow -> should have been "Library", not "Tool".
- Electron, Socket.io, Moment, lodash... are "Web libraries", not "Non-web libraries"
and probably more.
I hope the reviewers catch these errors before they publish this in a research journal.
Most software is libraries and frameworks -- you just don't get to see most/all of the proprietary stuff, since it's not on github or anywhere else.
It is worth noting that the second most popular "software tool" tucked in between oh-my-zsh and homebrew (both command-line tools/packages) is Tensorflow.
That has to say something about the current state of the industry, though admittedly, I am a little confused as to why it was classified as a "software tool" and not say, "a non-web library or framework."
What it really should be called is "refinement." The innovation ends up being incredibly crude but it gets the job done. How can we build on that, make it better and less coarse than it was? How can we make it more efficient?
I have recently come to realize that, at least in my world, source code older than five years is basically doomed. Developers simply refuse to work on it.
The code that makes it to five years is extraordinary as most of it "dies" before reaching the eighteen month mark.
As a result I have recently been shifting my view to support replace-ability vs maintainability whenever possible. I'm not totally sure how to achieve it, though. Most current trends seem to be towards increasing baggage. (docker)
Data lives on and on and on, however. Data is king. :)
We can use automation to gather data we've never had before. We can use this data to help prioritize maintenance tasks, and get them done faster with less interruption to service.
The thing is I get very little credit for fixing something that is broken, but creating something new generates accolades and the illusion of productivity...
https://goo.gl/maps/uFkLJoKU1DB2 and https://goo.gl/maps/767CYu5Mwd62 (it actually looks worse than this up close)
I'd be interested to find out what the track record is of maintenance of infrastructure by private vs public entities.
This submission starts with a description of her service experience 25 years ago. It's clearly an awful experience.
But it doesn't talk about the things that have changed.
People who report sexual abuse are much more likely to be believed. Things aren't good, but they're much better than they used to be.
There are early intervention in psychosis services in some parts of England. (Regional commissioning means they don't exist everywhere.) These work with people who have a first episode of psychosis.
Very recently there has been a lot of work around perinatal mental health (the Twitter user Rosey - @pndandme runs some Twitter chats and they'd have lots of info about how good / bad services are).
Importantly, mental health teams are multidisciplinary teams. That team would include social workers (who have some statutory duties), mental health nurses, occupational therapists, and a psychiatrist. They'd arrange access to other teams - psychologists, housing advice, debt and benefit advice, employment support, social activity, exercise support. (Not all of these would be "the NHS"; some of it would be charities or community interest companies or private companies. They should all be free at the point of delivery.)
MH professionals are much more comfortable with "breakout symptoms" - they know that antipsychotic medication has pretty devastating side effects, so they want the patient to be on the minimum needed dose. This might mean that people still have auditory hallucinations, but are given support to cope rather than being heavily medicated.
The article suggests that many psychiatrists are only there to prescribe meds. That is an important part of their job (they're the only ones who can prescribe meds), but there are plenty of doctors who fully accept the "bio psycho social" model, and who focus on the psychological and social factors.
The article makes it sound like none of this is happening.
Also, be careful with Luhrmann; there are several critiques of that report.
The design was inspired by Nick Mathewson's libottery: https://github.com/nmathewson/libottery
I think he misses the point in his criticism of getrandom(): it is intended to be the interface by which the libc PRNG gets its seed; userspace programs should just use the libc PRNG instead of going off to the kernel (i.e. arc4random()).
- make test : run the entire test suite on local environment
- make ci : run the whole test suite (using docker compose so this can easily be executed by any CI server without having to install anything other than docker and docker-compose) and generate code coverage report, use linter tools to check code standards
- make install-deps : installs dependencies for current project
- make update-deps : will check if there is a newer version of dependencies available and install it
- make fmt : formats the code (replace spaces with tabs or vice versa, remove extra whitespace from the beginning/end of files, etc.)
- make build : would compile and build a binary for the current platform; I would also define platform-specific sub-commands like make build-linux or make build-windows
We gradually swapped them out in favour of our own DAG-runner written in Rust, called Factotum:
http://www.oilshell.org/blog/ (Makefile not available)
and build a Python program into a single file (stripped-down Python interpreter + embedded bytecode):
Although generally I prefer shell to Make: I just use Make for the graph, while shell has most of the logic. And honestly, Make is pretty poor at specifying a build graph.
It was quite simple really, but really powerful to be able to tweak/replace a dataset, hit make, and have a fully updated version of my thesis ready to go.
Even though Make does not have built-in support for arithmetic (as far as I know), it's possible to implement it by way of string manipulation.
I don't recommend ever doing this in production code, but it was a fun challenge!
* a source code download,
* copying IDE project files not included in the source,
* creating build folders for multiple builds (debug/release/coverage/benchmark, clang & gcc),
* building and installing a specific branch,
* copying to a remote server for benchmark tests.
Point being that autoconf is often overkill for smaller C projects.
Make seems to be easier to install/get running than the myriad of non-packaged, GitHub-only projects I have found.
I use one to build my company's Debian Vagrant boxes: https://app.vagrantup.com/koalephant
I use one to build a PHP library into a .phar archive and upload it to BitBucket
My static-ish site generator can create a self-updating Makefile: https://news.ycombinator.com/item?id=14836706
I use them as a standard part of most project setup
It has much of the same functionality, but I already know (and love) ruby, whereas make comes with its own syntax that isn't useful anywhere else.
You can easily create workflows, and get parallelism and caching of intermediate results for free. Even if you're not using ruby and/or rails, it's almost no work to still throw together the data model and use it for data administration as well (although the file-based semantics unfortunately do not extend to the database, something I've been meaning to try to implement).
Lately, I've been using it for machine learning data pipelines: spidering, image resizing, backups, data cleanup etc.
Instead of bloated autotools I also call a config.sh from make to fill some config.inc or config.h values, which even works fine for cross-compiling.
I can't currently access the article at https://lil.law.harvard.edu/blog/2017/07/21/a-million-squand...
[Insert joke about irony here.]
Looking at the million dollar homepage, many of the links were never valid:
http://paid & reserved/
http:// paid and reserved - accent designer clothing/
http://reserved for edna moran/
http://paid & reserved for paul tarquinio/ (1200 pixels)
These links are all shown in plain red ("link to unreachable or entirely empty pages") in the "visualization of link rot," so it looks like the authors didn't account for invalid URLs.
> In a 2003 experiment, Fetterly et al. discovered that about one link out of every 200 disappeared each week from the Internet. McCown et al 2005 discovered that half of the URLs cited in D-Lib Magazine articles were no longer accessible 10 years after publication [the irony!], and other studies have shown link rot in academic literature to be even worse (Spinellis, 2003, Lawrence et al., 2001). Nelson and Allen (2002) examined link rot in digital libraries and found that about 3% of the objects were no longer accessible after one year. Bruce Schneier remarks that one friend experienced 50% linkrot in one of his pages over less than 9 years (not that the situation was any better in 1998), and that his own blog posts link to news articles that go dead in days[2]; Vitorio checks bookmarks from 1997, finding that hand-checking indicates a total link rot of 91% with only half of the dead available in sources like the Internet Archive; the Internet Archive itself has estimated the average lifespan of a Web page at 100 days. A Science study looked at articles in prestigious journals; they didn't use many Internet links, but when they did, 2 years later ~13% were dead[3]. The French company Linterweb studied external links on the French Wikipedia before setting up their cache of French external links, and found - back in 2008 - already 5% were dead. (The English Wikipedia has seen a 2010-2011 spike from a few thousand dead links to ~110,000 out of ~17.5m live links.) The dismal studies just go on and on and on (and on). Even in a highly stable, funded, curated environment, link rot happens anyway. For example, about 11% of Arab Spring-related tweets were gone within a year (even though Twitter is - currently - still around).
It's crazy how many copycats came out, very unoriginal thinking going on.
It hardly seems fair to blame a billboard being in disrepair if the company it advertised no longer exists.
Even though it's with a business we're not doing now, my business partner and I are on there.
Edit: don't think it deserves a downvote - is it not an interesting question? I bet there are loads of serial entrepreneurs on both
"Identifiers for the 21st century"https://doi.org/10.1371/journal.pbio.2001414
note/claimer/disclaimer: Although I am included as an author I do not write that well.
"Million Dollar Cat Billboard project sells 10 000 squares (places on a billboard) $100 dollars each to make worlds first ever cat billboard and put it up in 10 cities around the globe for a month. To proudly show your cat to the world you need to buy at least one square. But of course you can buy as many of them as you wish as long as they are available."
It's all in the marketing!
Also, I wonder how word got around to me about things like this in the days when MySpace and Yahoo were my internet.
1 million pixels for only a dollar each!
That guy made a nice bundle off the idea, it got picked up and hyped by the media so much I'm sure the companies that bought in got some ROI, or at least some publicity. Such was the extent of the dot com bubble that this sort of nonsense could happen and everyone cheered...
When you've got few apps to start with, obviously the best idea was to throw them all out twice and hope the developers were still interested. Not to mention the dev environment required Windows 8 - personally this is what discouraged me from even trying to develop on the platform; I wasn't going to give up my perfect Windows 7 installation for a toy OS that would allow me to create apps for a toy phone while getting in my way when I tried to do real work.
Add to that a shitty web browser and quite slow devices and obviously its failure must be the fault of the competition, I mean how can such a great product fail?
The only good thing I can remember about my Windows Phone is that it handled IMAP push notifications, something iOS is still lacking.
Also, I don't fully buy the argument that Windows Phone was unsuccessful because it was late. I think that doesn't matter that much - changing phones and even phone operating systems isn't such a big deal. After using iOS for about 8 years, I have no problem switching to something else if it proves to be better, or equal but cheaper. If Windows Phone had been better back then, more people would have switched to it after their next phone upgrade.
Thanks to Elop and Microsoft we never got to see that happen. It was killed before it was even born.
I bought the Nokia N9 (the only MeeGo phone released) a couple of months after its launch, knowing the OS was coming to its end. MeeGo was a very polished and well-made OS. It was smooth, very intuitive, and simply the best touchscreen smartphone experience I've ever had. That's why Elop and WP infuriate me so much. It was a missed opportunity. Nokia took the easy route of getting paid by Microsoft to use its OS, and that bit them in the ass.
And so I'm glad that trash WP and Nokia failed spectacularly
Microsoft has learned nothing from the failure of Windows Phone, and is now in the process of killing Windows. Yes, Windows 10 is pretty good, but as an app platform it is a failure. Even Microsoft products such as Teams and Azure Storage Explorer use Electron, not native Windows APIs. And why would any developer make native Windows apps? Ordinary users can't tell an app built using Electron from a native Windows app. Thanks to Metro and its bland flatness, Windows apps do not have a differentiated look & feel, so end users don't know to demand native Windows apps. So developers are better off using cross-platform technologies such as Electron. (Yes, Electron is bloated, but if your application is substantial this is not a deal breaker.)
- Microsoft didn't care about mobile, thinking Windows CE was fine (they had ~42% marketshare in 2007)
- Windows Phone 7 was great, but it was too late by then (2010... 3 years after the iPhone was released and 2 years after the first Android phone)
- There were two resets (7 => 8, 8 => 10) which screwed customers hard. With the 7 => 8 upgrade, not only were apps incompatible, the OS was incompatible with previous hardware.
- The App Store was mostly full of garbage apps (lots of fake apps - hard to find the genuine app)
- Carriers didn't do a very good job pushing Windows Phone (can you blame them? :P )
From my perspective (former WP user, 2011 - 2015), the biggest WTF to me was when Microsoft bought Nokia. That seems to be about the time they just completely gave up
The paid-for hype was obvious and hollow. Commentators sprung up in tech-related forums everywhere, singing the praises of the development experience of WP7 with personal testimony of how awesome it was, months before it was available to developers. Then scarcely a year later, when it was clear that WP7 was not setting the world on fire, exactly the same obvious, transparent marketing hype started being produced for WP8. When challenged about this, it was claimed that WP7 had only ever been meant as a transition phase, and 8 was where the future really was!
It was so blatant, and so pointless.
But they have made all the right moves over the past five years, and have lots of momentum, cash, and goodwill from the tech community. They will do some sort of big mobile push in the near future. Probably some kind of Surface Phone, which can double as a desktop.
I believe Microsoft still has a chance, but they need to 1) talk to Samsung and other big Android/smartphone manufacturers, and 2) make a killer feature.
iOS and Android are the future, but WP is vinyl.
By the time Windows entered the market, Android and iOS had the critical mass of developers and users downloading games and apps from their respective app stores, and Windows could not break into the network effect.
We would get paid for having our app on the store. It seems the person who reached out was a middleman and was more interested in making money for himself than in getting good apps built.
I knew there was no way they could recover from that situation.
Their Hail Mary was impressive and ahead of its time, but there was no room for a #3 in the market with Google giving away everything for free.
The site also features several other Guardian articles by the same author. Seems a little iffy?
That is: if we have a function on types, say, the function `f` defined by:
f(x) = Int -> x

then f is covariant: if x <: y, then f(x) <: f(y). Whereas for

f(x) = x -> Int

f is contravariant: if x <: y, then f(y) <: f(x).
dog -> dog
In other words, if my original worker only knows how to turn a Dog into a Dog, and that was good enough, the most useful substitute is someone who can take any Animal and turn it into any special kind of dog.
Or maybe I care about 2x4's, and my normal guy only knows how to turn Cherry into 2x4's and isn't trained on anything else, even though that suited my needs. The best sub is someone who can take lots of kinds of wood and turn it into different kinds of posts and planks; I'm just only taking advantage of his 2x4 skills.
Unfortunately a lot of programmers design subclasses by saying "hey look, I'm good at starting with a schnauzer" or "hey look, I'm good at starting with Brazilian cherry". These guys aren't helpful when they show up in your parameter list.
As a mathematician with no comp-sci type knowledge, my only understanding of inheritance is the "is a" rule. Using this, I realized that a subtype of the set of functions from Dog to Dog must be a set of functions such that each function could be treated as a function from Dog to Dog under an appropriate restriction. This would be the only way for such a set to satisfy what felt like the "is a" inheritance rule.
In other words, a set of functions from A to B where Dog is contained in A and B is contained in Dog would be a subtype of the set of functions from Dog to Dog. So Animals -> Greyhound works.
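The same point can be written down with Python's typing module and checked with mypy (a sketch, assuming the Animal/Dog/Greyhound classes from the discussion):

    # Sketch of the Dog -> Dog substitution argument in Python type hints.
    # Callable is contravariant in its argument and covariant in its return type,
    # so Callable[[Animal], Greyhound] works wherever Callable[[Dog], Dog] is expected.
    from typing import Callable

    class Animal: ...
    class Dog(Animal): ...
    class Greyhound(Dog): ...

    def kennel(worker: Callable[[Dog], Dog]) -> Dog:
        return worker(Dog())

    def groom_any_animal(a: Animal) -> Greyhound:
        return Greyhound()

    def groom_only_greyhounds(g: Greyhound) -> Dog:
        return g

    kennel(groom_any_animal)       # OK: accepts more (Animal), returns more specific (Greyhound)
    kennel(groom_only_greyhounds)  # rejected by mypy: it can't handle an arbitrary Dog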
f: X -> Y
f: U -> V
If u <: x then v=f(u) <: y=f(x)
There we physicists go, confusing things again.
Is it just a distinction of which direction the type hierarchy flows, and the consequences that must have with regard to functions in order for logical consistency to be maintained?
At least, I believe that the point of conceiving of 'covariance' and 'contravariance' is that we may have or not have either, in input or return types.
The submission presents one incarnation, a common one I believe, but nevertheless I think if the goal's to understand variance, the concept must be distinguished from implementation.
I feel that the examples in math are easier to understand than in programming. For example:
- The integral of a function is invariant to additive changes of variable: \int f(a+x) dx = \int f(x) dx
- The mean of a distribution is contravariant to additive changes of variable: \int f(a+x) x dx = -a + \int f(x) x dx
- The mean of a distribution is covariant to shifts of the domain (same formula, because f(x-a) is a shift of size "a")
- The variance of a distribution is invariant to additive changes of variable
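Those identities are easy to sanity-check numerically; a quick sketch with scipy, using a normal density just for concreteness:

    # Numeric check of the change-of-variable identities above, with a normal pdf.
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    mu, a = 5.0, 2.0
    f = lambda x: norm.pdf(x, loc=mu)

    total, _  = quad(lambda x: f(x + a),     -np.inf, np.inf)  # invariant: ~1.0
    mean_s, _ = quad(lambda x: f(x + a) * x, -np.inf, np.inf)  # contravariant: ~mu - a
    mean, _   = quad(lambda x: f(x) * x,     -np.inf, np.inf)  # ~mu

    print(total, mean_s, -a + mean)  # ~1.0, ~3.0, ~3.0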
Looking at the words superficially, their definitions are easily discerned:
Covariance: changing together, with similarity. Contravariance: changing in opposition to one another.
Try defining the word functional incorrectly during a technical interview and see what happens.
14 channels are defined in the 2.4 GHz band. For example, channel 6 is centered on 2437 MHz. Each channel is 20 MHz wide and divided into 52 "data" subcarriers, each occupying a different frequency and spaced out by 312.5 kHz (52 × 312.5 kHz is less than 20 MHz because there are "control" subcarriers and additional spacing.) So 52 different symbols can be sent in parallel at the same time, which is what we call OFDM https://en.wikipedia.org/wiki/Orthogonal_frequency-division_... (basically, I'm simplifying!)
Remember this is for just 1 channel. So with 14 channels each composed of 52 subcarriers, we could have 728 symbols transmitted at the same time. If they are 256-QAM symbols that's basically 728 × 8 = 5824 bits being transmitted at the same time in the air. And they will all be received and demodulated independently. This high level of parallelism of OFDM is how WiFi can achieve very high throughput.
Then, with wide channels of 40 MHz, which basically aggregate two 20 MHz channels, we get a few more data subcarriers because we don't need as many control subcarriers, so a few of them become used as data subcarriers. Hence a 40 MHz channel will have not 52 × 2 = 104 but actually 108 data subcarriers. And 802.11ac defines 80 MHz and 160 MHz channels with respectively 234 and 468 data subcarriers.
Let's calculate the maximum usable throughput of a single 802.11ac 160 MHz channel using 256-QAM modulation... It sends 468 symbols at the same time on 468 data subcarriers. Each symbol encodes 8 bits and takes in the best case 3.6us to be transmitted: 3.2us for the actual symbol + a short guard interval of 0.4us (the GI is normally 0.8us but can be a short GI of 0.4us if negotiated). The raw physical bitrate is:
1/3.6e-6 × 468 × 8 = 1.04 Gbit/s
However there is a mandatory error correction which is 5/6 in the best case so the actual usable bandwidth is:
1.04 × 5/6 = 866.67 Mbit/s
Seems about right.
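If you want to play with the numbers yourself, the whole calculation fits in a few lines (same figures as above):

    # Back-of-the-envelope 802.11ac PHY rate using the figures from this comment.
    subcarriers = 468              # data subcarriers in a 160 MHz channel
    bits_per_symbol = 8            # 256-QAM carries 8 bits per subcarrier symbol
    symbol_time = 3.2e-6 + 0.4e-6  # symbol duration + short guard interval (seconds)
    coding_rate = 5 / 6            # best-case forward error correction

    raw = subcarriers * bits_per_symbol / symbol_time
    usable = raw * coding_rate
    print(f"raw:    {raw / 1e9:.2f} Gbit/s")     # ~1.04 Gbit/s
    print(f"usable: {usable / 1e6:.2f} Mbit/s")  # ~866.67 Mbit/s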
From a front end perspective I think this is awesome. Not so sure about the content though.
From talking with him, the technology isn't really ready for prime time yet but it's getting pretty close. I think the key point is that efficiencies in small scale cells and larger scale manufacturing are still climbing (the same group has achieved greater than 5% in a cm^2 test cell iirc) and the printing is incredibly cheap and very amenable to fast scaling up.
It seems pretty obvious that you'd need more efficiency for it to be a viable rooftop solution, but the guy who set this up claimed that being able to just stick down some velcro and stick on the cells opened up different use cases, with cheap and lean installations supporting cheap cells.
All in all, if you look at how far the technology has come in the last 5 years alone, then it's a pretty exciting field to follow.
But can't banks just solve this, by financing panels upfront? There's quite some money to be made there, I'd guess. And the risk is limited.
edited to clarify 2/3%
So, if this can be combined with a paper-thin e-ink display (and, if needed, a flat sheet capacitor for power storage), would that be enough to make true paper-thin displays at reasonable price?
That being said, maybe 10 km worth of these can power 1000 homes.
It costs $10 per square meter.
Presumably if we get to a point where you can cheaply print 25+% efficient cells then we're "done" as it were on improving solar cells :-)
Of course, what we really want is a comparison in terms of cost per watt.
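A rough back-of-the-envelope using numbers mentioned elsewhere in this thread (the ~$10 per square meter price and the ~5% small-cell efficiency, plus the standard 1000 W/m^2 test insolation; all of these are assumptions, not measured module data):

    # Rough cost-per-watt estimate; every input is an approximation from the thread.
    cost_per_m2 = 10.0   # USD per square meter (quoted upthread)
    efficiency = 0.05    # ~5% achieved in small test cells (quoted upthread)
    insolation = 1000.0  # W/m^2, standard test condition

    watts_per_m2 = efficiency * insolation                       # ~50 W per square meter
    print(round(cost_per_m2 / watts_per_m2, 2), "USD per watt")  # ~0.2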
Maybe equally important to the cost of these panels is the ease and cost of installing them. These new printed panels are very flexible/lightweight and can be deployed easily and even temporarily.
The US has 100M homes. That would require 100,000 days, or 300 years...
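Spelling that arithmetic out (the per-day production rate is my assumption, back-solved from the 100,000-day figure):

    # The timeline arithmetic above, made explicit. The production rate is an
    # assumption inferred from the "100,000 days" figure, not a published number.
    homes_total = 100_000_000  # US homes
    homes_per_run = 1_000      # "10 km worth of these can power 1000 homes"
    runs_per_day = 1           # assumption: one such print run per day

    days = homes_total / (homes_per_run * runs_per_day)
    print(days, "days ~=", round(days / 365), "years")  # 100000 days ~= 274 years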