Hacker News with inline top comments (20 Sep 2017)
1
US solar plant costs fall another 30 per cent in just one year reneweconomy.com.au
157 points by doener  4 hours ago   59 comments top 6
1
ggm 2 hours ago 5 replies      
The non-decline in the associated costs for individual installs has to be thought about. If this is a true reflection of unavoidable costs, home installs are unlikely to break a cost barrier, and won't make substantive differences in centralized generation. If on the other hand they are mutable costs, and can be brought down, then microgrids and local power has more chance of becoming something of substance.

The bigger "win" in this is the transmission loss. Consumption close to generation has lower transmission loss, so it's innately more efficient in that one regard.

PHES and battery technology probably matter more now than PV as the cost problem in generation: we need time shifting for solar power to replace other forms of generation, to serve demand outside daylight hours.

2
redwood 2 hours ago 2 replies      
Flying recently from Phoenix to San Jose, I was amazed by the size of the solar arrays I saw in the desert. I wasn't sure that's what they were, but I found some of them on Google Maps satellite view afterward and confirmed it (cool times we live in). Good news is that there's so much more room to build more!!!

Edit here if anyone's curious: https://goo.gl/maps/SjzWw9b2dSH2

Edit 2 - as an aside, when I first saw them, I couldn't tell if it was solar or ag (e.g. here's what ag looks like - https://goo.gl/maps/AGvAxJ61BwB2 - if you zoom out they look very similar!)

3
mc32 3 hours ago 2 replies      
On the one hand it's great to get solar below cost _at someone else's_ expense (overproducing Chinese manufacturers); on the other hand I'd prefer that some of our own PV manufacturers remain afloat.

Ultimately the goal is to wean off of fossil fuels, but if all our R&D is unprofitable we may miss some breakthroughs.

4
wyattk 8 minutes ago 0 replies      
Many of the comments seem to be focusing on the initial install costs and missing a big point here. Solar can save a lot of money and even make a nontrivial amount of money for the owner.

As you would expect, there are a lot of factors that go into how much an array would save/produce (generation, storage, etc.), but a regulatory factor that changes everything is rate structures. Rate structures are far from standard: pretty much wherever you go, there's something different. They vary by state and even at the city level for municipally-owned utilities (about 15% of the US is served by these, including parts of the Bay Area, the LA area, Phoenix, Seattle, etc.).

For consuming energy, there is usually a flat rate or a time-of-use rate (many varieties), but more and more capacity fees and fixed charges are taking over. For producing energy, it gets much stranger. Many cities and states use versions of net metering [0], some will pay you the wholesale power rate and others the retail rate (retail is ~3x wholesale), some use a feed-in tariff [1], some factor in a more time-based rate (like time-of-use above), and there are others too. If you want to know more about rate structures in general, check this out [2].

[0] https://en.m.wikipedia.org/wiki/Net_metering
[1] https://en.m.wikipedia.org/wiki/Feed-in_tariff
[2] https://www.google.com/url?sa=t&source=web&rct=j&url=https:/...
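To make the retail-vs-wholesale gap concrete, here is a toy Python sketch. Every number in it is invented for illustration; only the ~3x retail/wholesale ratio comes from the comment above.

```python
# Toy comparison of how the same solar array pays back under different
# rate structures. All numbers are made up for illustration.

annual_generation_kwh = 8000       # hypothetical rooftop array output
self_consumed_kwh = 3000           # used directly in the home
exported_kwh = annual_generation_kwh - self_consumed_kwh

retail_rate = 0.15                 # $/kWh the utility charges you (assumed)
wholesale_rate = 0.05              # $/kWh, roughly retail / 3 per the comment

# Net metering at retail: exports offset consumption one-for-one.
net_metering_value = annual_generation_kwh * retail_rate

# Wholesale buyback: self-consumption saves retail, exports earn wholesale.
wholesale_value = (self_consumed_kwh * retail_rate
                   + exported_kwh * wholesale_rate)

print(f"net metering:      ${net_metering_value:,.0f}/yr")   # $1,200/yr
print(f"wholesale buyback: ${wholesale_value:,.0f}/yr")      # $700/yr
```

The gap compounds over a 30+ year array life, which is why the rate structure often matters more than the install cost.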

Rate structures are heavily regulated, for good reason. Their design is a very difficult task and is pretty murky. On one hand are the consumers and their desire to connect solar and other DERs [3] like storage to lower costs. On the other hand are utilities, usually not acting malevolently, wanting to maintain reliability and, at all costs, avoid the death spiral [4]: the feedback loop where more people connecting solar or even leaving the grid skyrockets costs and tanks reliability for everyone remaining. Sometimes the generators will desperately lobby against these structures. Depending on where you are, the utilities can be the generators too, which is another matter.

[3] http://www2.epri.com/Our-Work/Pages/Distributed-Electricity-...
[4] https://www.greentechmedia.com/articles/read/this-is-what-th...

Rate structures are arguably the largest factor in installing solar. Initial costs are important, but the rate structure affects returns over the array's 30+ year life. In some places, like North Carolina, it can lead to solar flourishing. In other places, hostile rate structures and other regulations can severely harm solar's adoption, like Florida, which should be the best place in the US for photovoltaics.

Complicating things further is the grid's ability to handle a lot of solar, let alone other DERs. The energy grid in the near future can be highly distributed, 100% renewable, and even more reliable than it is today, but there are some big system-level problems to solve before then (these rarely get any attention; most attention goes to node-level problems like sheer generation). I am fully engulfed in this field and am working on these things now. I thought I would present an important point and give y'all some information on a field that I find absolutely riveting. :)

5
bluedino 2 hours ago 3 replies      
Our local solar panel plant shut down a few months ago - Suniva. They just mysteriously closed up shop.
6
nwah1 2 hours ago 1 reply      
Solar panels are a semiconductor technology, and as such have been following Moore's Law for decades.

https://blogs.scientificamerican.com/guest-blog/smaller-chea...
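As a rough illustration of what a sustained decline compounds to: the rate below is an assumed 10%/yr, purely for the arithmetic (the article's 30% in one year is unusually steep and unlikely to be sustained).

```python
# Compounding cost decline, normalized to today's cost per watt.
cost = 1.00             # arbitrary units
annual_decline = 0.10   # assumed sustained rate, for illustration only

for year in range(0, 21, 5):
    print(f"year {year:2d}: {cost * (1 - annual_decline) ** year:.2f}")
# year  0: 1.00
# year  5: 0.59
# year 10: 0.35
# year 15: 0.21
# year 20: 0.12
```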

2
Swift 4.0 Released swift.org
115 points by runesoerensen  1 hour ago   23 comments top 7
1
swivelmaster 1 hour ago 0 replies      
"Swift 4 includes a faster, easier to use String implementation that retains Unicode correctness and adds support for creating, using and managing substrings."

This alone is grounds for opening and drinking very expensive champagne and/or wine.

2
sinhpham 1 hour ago 0 replies      
They implemented what looks like the Rust ownership model: SE-0176 Enforce Exclusive Access to Memory (https://github.com/apple/swift-evolution/blob/master/proposa...). I'm having a hard time understanding the proposal, though; can anyone shed some light on it?
3
valine 1 hour ago 2 replies      
The new JSON parser is really slick. https://developer.apple.com/documentation/swift/codable
4
dep_b 1 hour ago 0 replies      
Converting two existing projects was pretty painless, though the larger and older of the two had a significant number of automatically generated changes. Most of them were (Void) in blocks changed to (). Another nice extra is being able to mix 3.2 and 4.0 libraries.
5
mixmastamyk 1 hour ago 0 replies      
Ubuntu 16.10 is already out of support; they should update that. Artful (17.10) is out in a month or so.
6
ramenmeal 57 minutes ago 1 reply      
I see some mentions of server APIs on their website. With these libraries in place, how comparable is Swift to golang?
7
hasenj 1 hour ago 1 reply      
This is great news.

However, I wish the Mac release were available as standalone binaries rather than bundled with Xcode.

3
Clocks for Software Engineers zipcpu.com
261 points by mr_tyzic  7 hours ago   37 comments top 14
1
btown 5 hours ago 4 replies      
One of my favorite undergrad electrical engineering classes [0] took an innovative approach to introducing this. Instead of learning about clocks/pipelines and HDL at the same time, we only looked at the former. We created our own simulators for an ARM subset, fully in C, where there was only a single for/while loop allowed in the entire codebase, representing the clock ticks. Each pipeline stage, such as Instruction Fetch, would read from a globally instantiated struct representing one set of registers, and write to another one. If you wanted to write to the same place you read from, you could only do so once, and you'd better know exactly what you were doing.

Because we didn't need to learn a new language/IDE/environment at the same time that we learned a new paradigm, we were able to keep our feet on solid ground while working things out; we were familiar with the syntax, so as soon as we realized how to "wire something up," we could do so with minimal frustration and no need/ability to Google anything. Of course, it was left to a subsequent course to learn HDL and load it on real hardware, but for a theoretical basis, this was a perfect format. Much better than written tests!

[0] http://www.cs.princeton.edu/courses/archive/fall10/cos375/de... - see links under Design Project, specifically http://www.cs.princeton.edu/courses/archive/fall10/cos375/Cp...
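A minimal sketch of that simulator pattern, transposed from C into Python under the same one-loop rule. The stage names follow the comment; the toy "decode" and "ALU" bodies are invented stand-ins.

```python
from copy import deepcopy

def fetch(cur, nxt, imem):
    # Word-addressed toy instruction memory; pc steps by 4 bytes.
    nxt["if_id_instr"] = imem[(cur["pc"] // 4) % len(imem)]
    nxt["pc"] = cur["pc"] + 4

def decode(cur, nxt):
    nxt["id_ex_op"] = cur["if_id_instr"]       # stand-in for real decode logic

def execute(cur, nxt):
    nxt["result"] = len(str(cur["id_ex_op"]))  # stand-in for the ALU

state = {"pc": 0, "if_id_instr": None, "id_ex_op": None, "result": 0}
imem = ["addi", "lw", "sw", "beq"]

for tick in range(6):          # the single allowed loop: the clock
    nxt = deepcopy(state)      # stages may only write into `nxt`
    fetch(state, nxt, imem)    # every stage reads the same `state`,
    decode(state, nxt)         # so call order doesn't matter --
    execute(state, nxt)        # exactly like hardware within one tick
    state = nxt                # the "clock edge": registers latch
    print(tick, state)
```

Because all stages read the old state and write the new one, everything in a tick sees the same inputs, which is the whole point the course was making.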

2
teraflop 6 hours ago 3 replies      
I'm not surprised that software engineers find these concepts difficult to understand at first -- it's a very different way of thinking, and everyone has to start somewhere. But I do find it kind of odd that someone would jump straight into trying to use an HDL without already knowing what the underlying logic looks like. (My CS degree program included a bit of Verilog programming, but it only showed up after about half a semester of drawing gate diagrams, Karnaugh maps and state machines.)

Does this confusion typically happen to engineers who are trying to teach themselves hardware design, or is it just an indication of a terribly-designed curriculum?

3
AceJohnny2 6 hours ago 2 replies      
TL;DR:

> The reality is that no digital logic design can work without a clock. There is always some physical process creating the inputs. These inputs must all be valid at some start time; this time forms the first clock tick in their design. Likewise, the outputs are then required from those inputs some time later. The time when all the outputs are valid for a given set of inputs forms the next clock in a clockless design. Perhaps the first clock tick is when the last switch on their board is adjusted, and the last clock tick is when their eye reads the result. It doesn't matter: there is a clock.

Put another way, combinatorial systems (the AND/OR/etc[1] logic gates that form the hardware logic of the chip) have a physical propagation delay: the time it takes for the input signals at a given state to propagate through the logic and produce a stable output.

Do not use the output signal before it is stable. That way lies glitches and the death of your design.

Clocks are used to tell your logic: "NOW your inputs are valid".

The deeper your combinatorial logic (the more gates in a given signal path), the longer the propagation delay. And the maximum propagation delay across your entire chip[2] determines your minimum clock period (and thus your maximum clock speed).

There exist clockless designs, but they get exponentially more complicated as you add more signals and the logic gets deeper. In a way, clocks let you "compartmentalize" the logic, simplifying the design.

[1] What's the most widespread fundamental gate in the latest fab processes nowadays? Is it NAND?

[2] or at least clock domain
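To make the timing arithmetic above concrete, a back-of-envelope sketch; the gate counts and delays are invented round numbers, not process data.

```python
# The slowest (critical) path sets the clock.
gate_delay_ns = 0.1                               # assumed delay per gate
paths = {"alu": 28, "decode": 12, "bypass": 17}   # gates per signal path

critical_ns = max(paths.values()) * gate_delay_ns
setup_ns = 0.2                                    # flip-flop setup time, assumed

min_period_ns = critical_ns + setup_ns
print(f"min clock period: {min_period_ns:.1f} ns "
      f"-> max clock ~{1000 / min_period_ns:.0f} MHz")
# min clock period: 3.0 ns -> max clock ~333 MHz
```

Note that the 12- and 17-gate paths contribute nothing to the clock speed; that is why shortening the single deepest path (e.g. by pipelining it) is what buys frequency.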

4
alain94040 7 hours ago 0 replies      
This is such an important notion.

Another way I try to explain hardware design to people coming from a software background:

You get one choice to put down in hardware as many functions as you want. You cannot change any of them later. All you can do later is sequence them in whatever order you need to accomplish your goal.

If you think of it this way, you realize that the clock is critical (that's what makes sequencing possible), and re-use of fixed functions introduces you to hardware sharing, pipelining, etc.

But it's hard to grasp.
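One way to make the "fixed functions, free sequencing" idea concrete is a toy Python sketch, where the set of units is frozen and only the schedule varies. The unit names and the example computation are invented.

```python
UNITS = {                       # fixed at "fabrication" time
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "shl": lambda a, b: a << b,
}

# Computing (x + y) * 4 by sequencing (reusing) the fixed units:
schedule = [("add", "x", "y", "t0"),    # cycle 1
            ("shl", "t0", "two", "out") ]  # cycle 2

regs = {"x": 3, "y": 5, "two": 2}
for op, a, b, dst in schedule:   # one iteration per clock cycle
    regs[dst] = UNITS[op](regs[a], regs[b])
print(regs["out"])  # 32
```

You cannot add to UNITS after the fact; all the flexibility lives in the schedule, and the clock is what makes the schedule possible.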

5
martin1975 1 hour ago 0 replies      
Reading this would actually tremendously help software engineers improve their concurrent/parallel software design skills as well. I never had a particular desire to do hardware (my degree is in CS), but some of the best C/C++ programmers, the ones able to squeeze out every last ounce of performance, truly understood not just software languages but also computer architecture, and I might even go as far as to say physics. The LMAX software architecture is a product of this kind of hardware+software understanding. Awesome article.
6
jonnycomputer 3 hours ago 1 reply      
I liked the article, but I feel like an argument for why you need a clock was really never made.
7
DigitalJack 6 hours ago 3 replies      
"The reality is that no digital logic design can work 'without a clock'. "

This is not true.

"HDL based hardware loops are not like this at all. Instead, the HDL synthesis tool uses the loop description to make several copies of the logic all running in parallel."

This is not true as a general statement. There are for loops in HDLs that behave exactly like software loops. And there are generative for loops that make copies of logic.

Also, the "everything happens at once" claim is not true either. In fact, without the delay between two events happening, synchronous digital design would not work (specifically, flip-flops would not work).

8
amelius 6 hours ago 1 reply      
And here's "Clocks for Hardware Engineers": [1]

[1] http://lamport.azurewebsites.net/pubs/time-clocks.pdf

9
mzzter 7 hours ago 0 replies      
Learning to think in parallel, and understand and design for procedures that don't run sequentially, would be good practice for concurrent runtimes and distributed systems too. Not only for HDLs.
10
kbeckmann 6 hours ago 3 replies      
The zipcpu blog posts never cease to amaze me; the content is so good. As a sw developer who plays around in Verilog in my free time, the posts are extremely helpful to me. I just want to tip my hat to the author(s?), thanks!
11
PeterisP 7 hours ago 0 replies      
Figure 5 in that article pretty much summarizes the main point - if you showed that to the original (hypothetical?) student, it should be sufficient to make them understand the downsides of their design.
12
gertef 5 hours ago 0 replies      
Conceptually, this is the same idea as concurrent network programming with futures, yes?
13
Joking_Phantom 6 hours ago 0 replies      
When I took Berkeley's EECS151 class (Introduction to Digital Design and Integrated Circuits), the first lecture actually did not go over clocks. Instead, it went over the simple building blocks of circuits - inverters, logic gates, and finally combinational logic blocks made up of the previous two. These components alone do not need a clock to function, and their static functions are merely subject to physical limitations, such as the speed of electrons, which we package into something called propagation delay. It is entirely possible to build clockless circuits, otherwise known as asynchronous circuits.

From the perspective of an electrical engineer and computer scientist, asynchronous circuits theoretically can be faster and more efficient. Without the restraint of a clock slowing down an entire circuit for its slowest component, asynchronous circuits can instead operate as soon as data is available, while consuming less power to overhead functions such as generating the clock and powering components that are not changing state. However, asynchronous circuits are largely the plaything of researchers, and the vast majority of today's circuits are synchronous (clocked).

The reason why we use synchronous circuits, which may relate to the reason why many students learning circuits often try to make circuits without clocks, is because of abstraction. Clocked circuits can have individual components/stages developed and analyzed separately. You leave problems that do not pertain to the function of a circuit such as data availability and stability to the clock of the overall circuit (clk-to-q delay, hold delay, etc), and can focus on functionality within an individual stage. As well, components of a circuit can be analyzed by tools we've built to automate the difficult parts of circuit design, such as routing, power supply and heat dissipation, etc. This makes developing complex circuits with large teams of engineers "easier." The abstraction of synchronous circuits is one step above asynchronous circuits. Without a clock, asynchronous circuits can run into problems where outputs of components are actually wrong for a brief moment of time due to race conditions, a problem which synchronous circuit design stops by holding information between stages stable until everything is ready to go.

The article's point of hardware design beginning with the clock is useful when you are trying to teach software engineers, who are used to thinking in a synchronous, ordered manner, about practical hardware design which is done entirely with clocks. However, it is not the complete picture when trying to create understanding of electrical engineering from the ground up. Synchronous circuits are built from asynchronous circuits, which were built from our understanding of E&M physics. Synchronous circuits are then used to build our ASICs, FPGAs, and CPUs that power our routers and computers, which run instructions based on ISA's that we compile down to from higher order languages. It's hardly surprising that engineers who are learning hardware design build clockless circuits - they aren't wrong for designing something "simple" and correct, even if it isn't currently practical. They're just operating on the wrong level of abstraction, which they should have a cursory knowledge of so synchronous circuits make sense to them.

14
blackbear_ 6 hours ago 0 replies      
Immediately thought it was referring to this https://news.ycombinator.com/item?id=15282967
4
China Bans Bitcoin Executives from Leaving Country, Miners Preparing for Worst trustnodes.com
25 points by adamnemecek  2 hours ago   20 comments top 5
1
acjohnson55 4 minutes ago 0 replies      
What's interesting to me is that no amount of bad news appears to affect the price of BTC. I'm not a professional or amateur trader (I've got no position on BTC at all) but the price fluctuations really seem dominated by sentiment over any sort of fundamentals.
2
adamnemecek 28 minutes ago 1 reply      
Is it just me or has China been doing some crazy regulatory stuff recently? Idk if there's an overall mosaic I'm missing or if these are unrelated.
3
gggdvnkhmbgjvbn 25 minutes ago 3 replies      
All comments so far have been pro-bitcoin... does nobody think this could actually be a sensible decision?
4
eberkund 18 minutes ago 1 reply      
China is a closed economy; it makes sense that they would ban a technology which, outside of speculative investment, is largely used to facilitate money laundering.
5
mincon4747 27 minutes ago 3 replies      
China's fast descent into totalitarianism is fascinating and scary. One can easily replace 'bitcoin executives' with 'foreign executives' or 'foreign assets' and see where this is going. I wonder if those companies that choose to outsource all the jobs in their country to China realized what they've done.
5
Show HN: Redox Rust OS Release 0.3.3 github.com
14 points by jackpot51  24 minutes ago   1 comment top
1
jackpot51 22 minutes ago 0 replies      
I am the creator of Redox OS. It is a microkernel-based operating system mostly written in Rust.

Please ask any questions or make any comments you have about Redox!

7
GraphQL Patent Infringement Issues github.com
85 points by brodock  4 hours ago   5 comments top
1
chris_wot 1 hour ago 1 reply      
Even the companies who file the patents don't want the patents. It's getting kind of ridiculous now.
8
Learn from your attackers SSH HoneyPot robertputt.co.uk
103 points by robputt  7 hours ago   56 comments top 6
1
linsomniac 1 hour ago 0 replies      
Aside: I used to run a small ISP, 200-300 dedicated+virtual machines. We set up our router to alert us if outbound SSH connections from a host went above a certain threshold, which was a super reliable way of detecting if a host was compromised. I think we had a near 100% success rate, because once a host is compromised, they use it to start trying to compromise other hosts.
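A minimal sketch of that detection idea in Python, assuming flow records exported from the router as (src, dst, port) tuples; the threshold and record format are invented for illustration.

```python
from collections import Counter

THRESHOLD = 50  # assumed: outbound SSH attempts per host per interval

def flag_compromised(flow_records):
    """flow_records: iterable of (src_ip, dst_ip, dst_port) tuples."""
    counts = Counter(src for src, _dst, port in flow_records if port == 22)
    return [src for src, n in counts.items() if n > THRESHOLD]

# A host fanning out SSH connections to 200 distinct destinations:
flows = [("10.0.5.9", f"203.0.113.{i}", 22) for i in range(200)]
flows += [("10.0.5.10", "198.51.100.7", 443)]   # normal traffic, ignored
print(flag_compromised(flows))  # -> ['10.0.5.9']
```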

But, we also had every customer on a VLAN, limited to only being able to send traffic from their IPs, and also blocking incoming and outgoing bogon traffic.

Years ago I attended a presentation by Evi Nemeth (RIP) related to CAIDA, and one thing they found in auditing "backbone" traffic was that some huge percentage of it was bogon traffic (I don't recall the exact number, but let's say 10% +/- 6%). Nobody wanted to filter that traffic because the pipes were less expensive than routers that could filter packets at high pps rates.

2
otakucode 6 hours ago 8 replies      
If one were to run a honeypot like this and take every IP which connects and attempts a login and immediately ban it from your network, then if more than 1% of the IP range they are in has been banned, ban the entire range... what would the expected outcome be for a typical residential user?
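One way to read that policy as code, a Python sketch over IPv4 /24s with the 1% threshold as stated (all other details hypothetical):

```python
import ipaddress

banned_ips = set()
banned_nets = set()

def report_offender(ip_str):
    ip = ipaddress.ip_address(ip_str)
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    banned_ips.add(ip)                            # ban the individual IP
    in_net = sum(1 for b in banned_ips if b in net)
    if in_net / (net.num_addresses - 2) > 0.01:   # >1% of usable hosts
        banned_nets.add(net)                      # escalate to the range

for bad in ["192.0.2.10", "192.0.2.77", "192.0.2.200"]:
    report_offender(bad)
print(banned_nets)  # -> {IPv4Network('192.0.2.0/24')}
```

With a /24's 254 usable hosts, the range ban kicks in at the third offender, so a typical residential user sharing a dynamic pool with a few infected machines would likely get swept up quickly.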
3
knoxa2511 5 hours ago 0 replies      
Reminds me of this Fishing for Hackers post https://sysdig.com/blog/fishing-for-hackers/
4
Myrth 3 hours ago 1 reply      
I almost closed the page on mobile because I thought it was empty or broken...
5
X86BSD 5 hours ago 4 replies      
I've been using something similar for a while now. I use pam_jail on FreeBSD to drop the ankle biters using common ssh login attempts like test, ubuntu, oracle, etc. into a FreeBSD jail where I watch what they do and get a copy of all their tools. I rate limit the outgoing traffic from that jail to something painfully slow to prevent them from causing any major issues. But being able to fire up 'watch' on FreeBSD and snoop the tty they are on in the jail is awesome for forensics.

It's secure, they can't break out of the jail.

It's rate limited to prevent them causing much damage to anyone.

It's easy to observe every thing they type and do in the jail from the host.

9
Facebook Faces a New World as Officials Rein in a Wild Web nytimes.com
167 points by ALee  9 hours ago   69 comments top 15
1
AlexandrB 8 hours ago 7 replies      
I find that the headline and the article mischaracterize what is happening. The "Wild Web" was reined in long ago by commercial interests. A distributed web with many small nodes would still be hard to control and police effectively. However, now that the web has been centralized by the likes of Google, Facebook, and large media conglomerates, effective government censorship is once again possible.

This is like a wild meadow turning to a manicured lawn. The near-monoculture of the web will have a much harder time withstanding legal assault by state actors than a distributed web would have.

2
doktrin 7 hours ago 2 replies      
Facebook is most likely a net negative in this world. I don't know if it's always been true, but I think it is now. I personally didn't realize how strong my feelings were until one of their recruiters contacted me. I'm far from a 'values' driven employee, but this was by far the easiest refusal of my career.
3
amrrs 9 hours ago 2 replies      
>The diplomatic game that unfolded in Vietnam has become increasingly common for Facebook.

Yes, at least this should ring a bell for all those who still think they can write anything on FB about a government and get away with it. Facebook, while pushed and portrayed as your personal diary, is actually a digital repository accessible only to the most elite, like governments.

While an average FB user can easily shame anyone around him (like how frustrated boyfriends shame their ex-girlfriends), fellow average FBians can't do much about it. This reiterates the capitalist world we live in, where democracy is just a myth.

4
IBM 8 hours ago 5 replies      
> At a White House dinner in 2015, Mr. Zuckerberg had even asked the Chinese president, Xi Jinping, whether Mr. Xi might offer a Chinese name for his soon-to-be-born first child - usually a privilege reserved for older relatives, or sometimes a fortune teller. Mr. Xi declined, according to a person briefed on the matter.

Laughed out loud at this. I can understand wanting access to the market but this is just embarrassingly desperate.

5
l5870uoo9y 6 hours ago 0 replies      
It should be completely obvious that strategically important sectors such as defence can't trade with foreign countries without specific permission from the government. The tech sector is a sector of strategic importance, and it can't both serve the Chinese communist party and US democracy. The Chinese understand this.
6
danielrhodes 7 hours ago 0 replies      
There is now a history of American tech companies operating in China. The lesson is pretty clear: play politics if you want, but know that if you do (i.e. Google), you are going to lose access to the market. Thus, it comes down to a business decision and it becomes increasingly hard to argue on principles if things look so binary.
7
danjoc 6 hours ago 0 replies      
What a difference 20 years makes...

https://www.eff.org/cyberspace-independence

8
ameister14 7 hours ago 1 reply      
If we're going with the wild-west analogy, what's happening now is the transition from a Territory to a State. It was wild and ungoverned, then corporations moved in and created some order, and now governments see order and are moving in to take over management.
9
pnathan 4 hours ago 0 replies      
The Great Myth of cyberspace was that individuals in it or the servers that ran the software were not subject to the laws of the nations they existed within.

The cold reality is that this is not true. And, thus, in time, to exist within a repressive regime requires importing the repression within the software. This is the bargain Facebook wants to make.

10
beepboopbeep 7 hours ago 0 replies      
I think it's perfectly fair to scrutinize and regulate a company that has such an immensely pervasive presence in the everyday life of so many citizens. Yes, that applies to Google too. Why should I trust them to be responsible?
11
bukgoogle 7 hours ago 0 replies      
Facebook and "new world" just sounds scary.

I really do not want facebook's new world.

12
TCM 6 hours ago 0 replies      
I think it's more the opposite. Technology creates spheres of influence in the countries where it operates. Traditional governments attempt to rein it in (this is usually effective if the company has ad revenue or wants to follow local laws). But when you cut down one sphere, another grows to replace it.
13
Havoc 7 hours ago 0 replies      
Seems more like FB is reeling in official policy but ok...
14
faceboksukha 7 hours ago 0 replies      
Please people, try to avoid Facebook and their agenda as much as possible.
15
Top19 5 hours ago 0 replies      
> Facebook is racing to gain the advantage in Africa over rivals like Google and Chinese players including Tencent, in a 21st century version of the Scramble for Africa.

That is a really scary thing to have read. Perhaps the New York Times is out of line in using it, but if that metaphor is even 10% accurate that would be very bad.

To give some background, the "Scramble for Africa" is the only time I've ever read the words "was worse than the Holocaust" where the writer had a serious argument. This was in reference to the mass deaths in the Congo under King Leopold of Belgium, as documented in the book "King Leopold's Ghost".

I know a ton of people have died in history and there have been so many wars, but the Scramble for Africa was really really really bad.

Yeah, so I guess in conclusion: the NY Times shouldn't have used that phrase, Facebook sucks, and even if they (Facebook) mess up almost everywhere else, please please just don't let them mess up the African continent.

10
Show HN: Calcflow, an open-source complex maths visualization tool in VR github.com
47 points by ottomanbob  3 hours ago   10 comments top 6
1
j_s 3 hours ago 1 reply      
What purpose does the custom license serve? Of all the hills to die on for a project with source available on Github...

> This project is licensed under the NANOME VR PRODUCT SUITE

https://github.com/matryx/calcflow/blob/master/LICENSE.md

--

Digging a bit, it appears this is funded by an ICO, or at least created by a company currently running an ICO? Too bad that basically short-circuits to "smells fishy" to me right now; hopefully they can pull through and build a track record of credibility.

2
sus_007 1 hour ago 1 reply      
IMO, an AR implementation of this visualization would be more interactive than VR. Overall, great job.
3
imranq 1 hour ago 0 replies      
This is a great idea - it's like the medical demos they use AR/VR for, but for engineers.

Someone should make a VR version for mechanical motions like that How to Make a Car course that was posted here a while ago

4
tlarkworthy 2 hours ago 0 replies      
Wow, what a great idea. If you could interact with Jupyter in tandem, it might actually lead to more powerful data exploration.
5
kokwak 2 hours ago 0 replies      
FANTASTIC! Wish I could have had access to something like this in my undergraduate journey.
6
carapace 1 hour ago 0 replies      
The thing I most want from a system like this is low-latency update.
11
A smaller, cheaper RISC V board hackaday.com
133 points by MrsPeaches  8 hours ago   38 comments top 6
1
Joking_Phantom 5 hours ago 4 replies      
IMO, the first step for people interested in promoting RISC V should be to get it into the hands of universities' undergrads. Berkeley's EECS 151 lab final semester project was to implement a RISC V CPU at 50 MHz on an FPGA.

If EE/CS departments of colleges adopt RISC V hardware for teaching their students, providing cheap microcontrollers and boards to students at the start of their semester classes, those precocious little buggers are going to build Doom clones and help port their favorite flavor of linux onto them. When you've got a generation of top talent tinkering with an ISA that doesn't suck like x86, you're going to see adoption in actual industry.

2
spilk 1 hour ago 1 reply      
Slightly off topic, but are there more-or-less boilerplate RISC-V Verilog/VHDL designs available that can deploy to ~$100ish FPGA boards? I'd be super interested in messing with that if so. I took digital design and HDL related courses in university but haven't really had an opportunity to do anything productive with that yet.
3
phkahler 4 hours ago 0 replies      
Just saw this:

http://markets.businessinsider.com/news/stocks/SEGGER-Adds-S...

The ecosystem is starting to pull in industry players.

4
chubs 3 hours ago 2 replies      
I wonder if you can program this with Rust? There seems to be some support for an LLVM backend, which would suggest yes. However, the GCC linker is still required, and I'm unclear on whether that rules it out: https://riscv.org/software-tools/low-level-virtual-machine-l...
5
Dowwie 6 hours ago 2 replies      
Notice how they're selling through GroupGets?

Aside from GroupGets and MassDrop, who else is operating in the crowd-funded, discounted bulk-purchase collaborative.. consumption.. space?

6
problems 7 hours ago 5 replies      
Nice. That's a big improvement in price over the previous ones.

I'm still a little unclear on RISC-V's goals - are they looking at the microcontroller market, or are they looking more to offer an alternative to ARM and x86 CPUs?

In the microcontroller market there's a lot of competition right now, especially with devices like the ESP32 going for $8 with wifi and bluetooth.
