Thursday, September 30, 2021

EA promotes Laura Miele to COO, making her one of the most powerful women in gaming - The Verge

Electronic Arts is promoting chief studios officer Laura Miele to chief operating officer, the company announced Thursday. The change is a big promotion for Miele, who already had a significant leadership role at the company, overseeing 25 different studios. The new role will give Miele greater oversight over the company and arguably makes her the most powerful woman in gaming, an industry where there are few female executives, fewer in the C-suite, and where those C-suite execs are often in charge of HR or finance rather than the company’s products.

Ubisoft did make Virginie Haas its chief studios operating officer last August, following scandals over a toxic culture, including sexual harassment and misconduct that reached as high as the C-suite ranks.

Miele joined EA in 1996 and has served as chief studios officer since April 2018. The Verge spoke with Miele in July, where she discussed how the pandemic changed development at EA. Miele will move into the role over the next few months, according to an SEC filing (PDF).

EA also announced that chief financial officer Blake Jorgensen will be leaving the company. He’s expected to depart in 2022, and a search to replace him “will begin immediately.” Chris Bruzzo, who was previously the company’s executive vice president of marketing, commercial, and positive play, will become the company’s chief experience officer.


Amazon Basics ripped off accessories, now Amazon is coming for Fitbit, Ecobee, and more - The Verge

In the image above there are two wearables. One is Fitbit’s recently released Charge 5, a $179.95 fitness tracker designed to measure everything from your heart rate to your sleep and even your skin temperature. The other is Amazon’s new $79.99 Halo View fitness tracker, which Amazon says can measure everything from your heart rate to your sleep and skin temperature. Ten points if you can tell me which is which.

The Halo View was just one of a host of new devices announced by Amazon at its now annual fall hardware event this week. But while many of Amazon’s new products feature completely original designs and features, like its cute Astro “home robot” or Ring-branded home surveillance drone, there were a handful that bear a striking resemblance to preexisting products.

Take Amazon’s new $59.99 smart thermostat, which works with Amazon’s voice assistant Alexa, and promises to detect when you’re home and adjust the temperature accordingly. That’s very similar to what other Alexa-enabled thermostats like the $250 Ecobee smart thermostat offer, but at a fraction of the price. Not to mention Amazon’s design is also similar to a preexisting smart thermostat produced by a company called Tado (which itself retails for the equivalent of around $240).

Not every smart thermostat needs to look completely original or have a unique set of features (after all, there’s only so much a thermostat can do). But the announcement of Amazon’s own Smart Thermostat comes just months after The Wall Street Journal reported that Ecobee had been nervous about sharing additional data with Amazon, in part due to fears that this data could help it launch competing products, as well as concerns that it could harm consumer privacy. Ecobee was reportedly told that failing to provide this data, which would send information to Amazon about the device’s status even when a customer wasn’t using it, could risk the company losing its Alexa certification for future models or not being featured in Prime Day sales.

In response to The Verge’s inquiry, Amazon said it hadn’t used data from any other Alexa-connected thermostats to design its Smart Thermostat. It said the device had been co-created with Resideo, a company that also makes Honeywell Home thermostats, and that Ecobee continues to be one of its valued partners.

Meanwhile, Amazon’s new Halo membership features look like obvious competitors to MyFitnessPal. Halo Nutrition is designed to help users find recipes and cook food that fits their dietary needs, similar to the Meal Plans feature MyFitnessPal already includes as part of its premium plan. There’s also Halo Fitness’ guided workouts, similar to the self-guided routines from MyFitnessPal. But Amazon’s health subscription is a lot cheaper than MyFitnessPal’s premium tier. Amazon includes a free year of its Halo membership services with the purchase of its new Halo View fitness tracker, and it retails for $3.99 / month thereafter, which is less than half the price of MyFitnessPal’s $9.99 premium tier.

When we asked Amazon about these similarities, it said it had not copied other companies, and that its Halo service includes unique features not available with other fitness trackers.

Responding to questions from The Verge, Amazon said it’s “pioneered hundreds of features, products, and even entirely new categories” throughout its history. “Amazon’s ideas are our own,” the company said, citing products such as the Kindle, Amazon Echo, and Fire TV as prominent examples of its original inventions.

I’m not trying to claim that Amazon is breaking any rules with these products. There are only so many ways you can design a screen that straps to your wrist and shows you your heart rate — even if this one has clear Fitbit vibes — or a panel that attaches on your wall to control the temperature. Even complicated devices like smartphones have seen their designs gradually converge over the years, a process not helped by the fact that many manufacturers are using components supplied by the same small handful of companies.

But the similarities look cynical coming from Amazon, which has been criticized for ripping off the designs of products sold on its platform and then undercutting them on price. Earlier this year, bag and accessory manufacturer Peak Design drew attention to the startling similarity between its $99.95 Everyday Sling and Amazon Basics’ $32.99 Camera Bag, for example. Amazon’s version of the bag has since been discontinued, the company told The Verge.

Amazon’s cloning of the Peak Design bag wasn’t an isolated incident, either. In 2019, striking similarities were also pointed out between the $45 shoes produced by Amazon’s 206 Collective label and Allbirds’ $95 equivalent. The similarities prompted Allbirds CEO Joey Zwillinger to respond in a Medium post saying that he was “flattered at the similarities that your private label shoe shares with ours,” but politely asked that Amazon also “steal our approach to sustainability” and use similarly renewable materials. Amazon said that the shoe has also since been discontinued, but that it continues to offer similar styles. Amazon also said its original design did not infringe on Allbirds’ design, and that the aesthetic is common across the industry.

Or what about the Amazon Basics Laptop Stand that Bloomberg reported on in 2016, which was launched at around half the price of Rain Design’s then-bestselling $43 model? Harvey Tai, Rain Design’s general manager, said that the company’s sales had slipped since Amazon’s competing model appeared on the store, although he admitted that “there’s nothing we can do because they didn’t violate the patent.”

Copying and attempting to undercut dominant market players is nothing new. But Amazon is in a fairly unique position in that it’s not just competing with these products; in a lot of cases, it’s also selling them via its own platform. That theoretically gives it access to a goldmine of data which could be invaluable if it wanted to launch a competitor of its own.

Amazon was accused of doing exactly this in a Wall Street Journal investigation last year, which alleged that Amazon’s employees “have used data about independent sellers on the company’s platform to develop competing products.” The WSJ specifically cited an instance where an unnamed Amazon private-label employee accessed detailed sales data about a car-trunk organizer from a company called Fortem launched in 2016. In 2019, Amazon launched three similar competitors under its Amazon Basics label.

The same report also detailed an instance of employees accessing sales data for a popular office-chair seat cushion from Upper Echelon Products, before Amazon launched its own competitor.

Amazon tells The Verge that an internal investigation conducted following the publication of the WSJ’s report found no violations of its policies prohibiting the use of non-public individual seller data.

Whether or not Amazon’s employees are breaking its rules, regulators have taken notice. Last year the EU accused Amazon of using “non-public seller data” to inform Amazon’s own retail offers and business decisions. “Data on the activity of third party sellers should not be used to the benefit of Amazon when it acts as a competitor to these sellers,” the European Commission’s antitrust czar, Margrethe Vestager, said at the time. The Commission has yet to issue a final report or findings, and in a statement Amazon said it disagreed with the Commission’s accusations.

For its part, Amazon says it has a policy of preventing its employees from using “nonpublic, seller-specific data to determine which private label products to launch.” But the company’s founder and former CEO Jeff Bezos told lawmakers last year that he couldn’t guarantee the policy has never been violated, and sources interviewed by The Wall Street Journal said that employees found ways around these rules.

These accusations have so far centered around low-tech items like bags, shoes, and trunk organizers. But as Amazon has expanded into more areas of consumer tech, its designs are once again straying very close to the competition.

Given these concerns, it seems especially bizarre that Amazon was willing to reference the price of competing smart thermostats sold via its platform during the launch. Its smart thermostat is “less than half the average cost of a smart thermostat sold on Amazon.com,” the company’s senior vice president of devices and services, Dave Limp, said.

Once again, I need to stress that there’s nothing illegal (to my knowledge) about using public information like pricing in order to inform your own products. But taking a moment to specifically reference the pricing of competing devices sold on your own monolithic online store is an odd choice in the midst of all this scrutiny.

It will be impossible to know how close the functionality of each of Amazon’s new products is to that of their competitors until we’ve tried them for ourselves. But given Amazon’s size and market power, these kinds of awkward questions need to be asked. After all, Amazon is treading an awkward line between operating one of the largest sales platforms in the world and competing on it as an increasingly prolific consumer tech manufacturer. It’s a difficult balance, and regulators are watching.


Can you use MacBook Pro chargers for iPhone and iPad fast charging? - 9to5Mac

Recommendations to fast charge iPhone or iPad often include picking up the 20W power adapter from Apple or similar from a third party. But what if you already have a higher-powered USB-C charger from your MacBook Pro or MacBook Air? Follow along for which iPhones and iPads you can fast charge with Apple’s MacBook chargers or similar third-party chargers.

Update 9/30: With the iPhone 13 Pro Max being able to pull up to 27W of power, using 30W+ power adapters will give you the fastest charging times. It’s unclear for now if the whole iPhone 13 lineup can pull up to the 27W max but, as detailed below, it doesn’t hurt to use a higher-powered wall plug, as the iPhone is what determines the power it gets.

If you want something with more ports than your MacBook charger, two of the best options are Satechi’s compact 4-port 66W GaN USB-C Charger and Anker’s 36W dual-port USB-C charger.

Apple used to ship an 18W USB-C power adapter with the iPhone 11 Pro models and a 5W adapter with older iPhones. However, starting in fall 2020 with the iPhone 12/12 Pro launch, Apple stopped including a power adapter in the box with all new iPhones.

Fast charging offers around 50% battery in 30 minutes. But picking up a new USB-C to Lightning cable and 20W charging block from Apple costs $40 if you need both. Third-party options often cost less, but what about using something you already have?

The good news is that modern iPhones and iPads work with all of the Mac notebook USB-C chargers, even the 96W model that comes with the 16-inch MacBook Pro. While it may sound risky at first, it’s safe to use any of Apple’s USB-C chargers, as your iPhone or iPad is what determines the power it receives, not the charger. Apple even does its own testing with the whole range of its USB-C power adapters.
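As a rough illustration of why the higher-wattage charger is safe, here is a minimal sketch of that idea in Python. The function and the wattage figures are hypothetical simplifications, not Apple specifications; the point is only that, under USB Power Delivery, the power actually delivered is capped by whichever side supports less.

```python
# Hypothetical sketch of USB Power Delivery (USB-PD) negotiation:
# the device (iPhone/iPad) requests at most the power it can accept,
# and the charger supplies at most what it can source. Wattages are
# illustrative, not official Apple specifications.

def negotiated_watts(charger_max_w: float, device_max_w: float) -> float:
    """The delivered power is capped by the lower of the two limits."""
    return min(charger_max_w, device_max_w)

# A 96W MacBook Pro charger driving an iPhone that accepts up to 27W:
assert negotiated_watts(96, 27) == 27
# A 20W charger can't deliver that same device's full 27W:
assert negotiated_watts(20, 27) == 20
```

This is why a 96W MacBook Pro brick never forces 96W into an iPhone: the phone only ever draws the profile it requested.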

Fast charge iPhone and iPad with MacBook chargers?

Note: depending on the current charge level of your battery, your device will pull different amounts of power. For example, a battery at 10% will draw more power than one at 80%.

Apple says the following iOS devices are compatible with the 18W, 20W, 29W, 30W, 61W, 87W, and 96W adapters for fast charging:

  • iPhone 8/8 Plus and later
  • iPad Pro 12.9-inch (1st generation and later)
  • iPad Pro 11-inch (1st generation and later)
  • iPad Pro 10.5-inch
  • iPad Air 3rd generation and later
  • iPad mini 5th generation and later

Apple notes you can use its own USB-C to Lightning cable, and that “a comparable third-party USB-C power adapter that supports USB Power Delivery (USB-PD)” will also work, as will third-party cables like Anker’s Powerline series.

If you’re looking for a more flexible USB-C charger or want an extra, Anker’s 36W dual-port USB-C charger and Satechi’s 4-port 66W GaN USB-C Charger are great choices to fast charge iPhones and iPads simultaneously.


Google tells EU court it’s the #1 search query on Bing - Ars Technica

Let's see, you landed on my "Google Ads" space, and with three houses, that will be $1,400. (Image: Ron Amadeo / Hasbro)

Google is in the middle of one of its many battles with EU antitrust regulators—this time it's hoping to overturn the record $5 billion fine the European Commission levied against it in 2018. The fine was for unfairly pushing Google search on phones running Android software, and Google's appeal argument is that search bundling isn't the reason it is dominating the search market—Google Search is just so darn good.

Bloomberg reports on Google's latest line of arguments, with Alphabet lawyer Alfonso Lamadrid telling the court, “People use Google because they choose to, not because they are forced to. Google’s market share in general search is consistent with consumer surveys showing that 95% of users prefer Google to rival search engines.”

Lamadrid then went on to drop an incredible burn on the #2 search engine, Microsoft's Bing: “We have submitted evidence showing that the most common search query on Bing is, by far, 'Google.'"

Worldwide, Statcounter has Google's search engine market share at 92 percent, while Bing is a distant, distant second at 2.48 percent. Bing is the default search engine on most Microsoft products, like the Edge browser and Windows, so quite a few people end up there as the path of least resistance. Despite being the default, Google argues that people can't leave Bing fast enough and that they do a navigational query for "Google" to break free of Microsoft's ecosystem.

Google's argument that defaults don't matter runs counter to the company's other operations. Google pays Apple billions of dollars every year to remain the default search on iOS, which is an awfully generous thing to do if search defaults don't matter. Current estimates put Google's payments to Apple at $15 billion per year. Google also pays around $400 million a year to Chrome rival Mozilla to remain the default search on Firefox.


An Interview with Intel Lab's Mike Davies: The Next Generation of Neuromorphic Research - AnandTech

As part of the launch of the new Loihi 2 chip, built on a pre-production version of the Intel 4 process node, the Intel Labs team behind Intel’s neuromorphic efforts reached out for a chance to speak to Mike Davies, the director of the project. It is perhaps no shock that Intel’s neuromorphic efforts have been on my radar for a number of years: as a new paradigm of computing compared to the traditional von Neumann architecture, one that is meant to mimic brains and take advantage of such designs, if it works well then it has the potential to shake up specific areas of the industry, as well as Intel’s bottom line. Also, given that we’ve never really covered neuromorphic computing in any serious detail here on AnandTech, it would be a great opportunity to get details on this area of research, as well as the newest hardware, direct from the source.

Mike Davies currently sits as Director of Intel’s Neuromorphic Computing Lab, a position held since 2017, as well as having been a principal engineer on the same project. Mike joined Intel in 2011 as part of the acquisition of Fulcrum Microsystems, where he had been in IC development for 11 years. Fulcrum’s focus was on asynchronous network switch design, and after Intel made the acquisition, that technology eventually made its way into Intel’s networking division, and so the asynchronous compute team pivoted to neuromorphic designs. Mike has been the face of Intel’s neuromorphic efforts, demonstrating the technology and the extent of the research and collaborations with industry partners and academic institutions at industry events.


Mike Davies
Director, Intel Labs

Dr. Ian Cutress
AnandTech

Ian Cutress: Can you describe what Neuromorphic Computing is, and what it means for Intel?

Mike Davies: Neuromorphic computing is a rethinking of computer architecture, inspired by the principles of brains. It is really informed at a very low level by our understanding of neuroscience, and it leads us to an architecture that looks dramatically different from even the latest AI accelerators or deep learning accelerators.

It is a fully integrated memory and compute model, so you have computing elements sitting very close to the storage state elements that correspond to the neural state and the synaptic state that represents the network that you're computing. It’s not [a traditional] kind of streaming data model always executing through off-chip memory - the data is staying locally, not moving around, until there's something important to be computed. [At that point] the local circuit activates and sends an event-based message, or a spike, to all the other neurons that are paying attention to it.

Probably the most fundamental difference to conventional architectures is that the computing process is kind of an emergent phenomenon. All of these neurons can be configured, and they operate as a dynamic system, which means that they evolve over time – and you may not know the precise sequence of instructions or states that they step through to arrive at the solution as you do in a conventional model. It's a dynamic process. You proceed through some collective interaction, and then settle into some new equilibrium state, which is the solution that you're looking for.
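To make the event-driven idea concrete, here is a toy leaky integrate-and-fire neuron in plain Python. This is a generic textbook model with arbitrary parameters, not Intel's Loihi neuron: state stays local, and the neuron only emits an event (a spike) when its potential crosses a threshold.

```python
# A minimal leaky integrate-and-fire (LIF) neuron - a toy illustration
# of the event-driven model described above, not Loihi's implementation.
# The neuron integrates input, leaks over time, and fires only when
# its membrane potential crosses a threshold.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    v = 0.0          # membrane potential (local state)
    spikes = []
    for t, i in enumerate(inputs):
        v = v * leak + i      # leak, then integrate the incoming current
        if v >= threshold:    # threshold crossing: emit an event (spike)
            spikes.append(t)
            v = 0.0           # reset after firing
    return spikes

# Sparse communication: the neuron stays silent until input accumulates.
print(lif_run([0.5, 0.5, 0.0, 0.0, 0.9]))  # prints [4]
```

Nothing is transmitted on the silent steps, which is the sparse, event-based communication pattern Davies contrasts with always-streaming architectures.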

So in some ways it has parallels to quantum computing which is also computing with physical interactions between its elements. But here we are dealing with digital circuits, still designed in a pretty traditional way with traditional process technology, but the way we've constructed those circuits, and the architecture overall, is very different from conventional processors.

As far as Intel's outlook, we're hoping that through this research programme, we can uncover a new technology that augments our portfolio of current processors, tools, techniques, and technologies that we have available to us to go and address a wide range of different workloads. This is for applications where we want to deploy really adaptive and intelligent behavior. You can think of anything that moves, or anything that's out in the real world, faces power constraints and latency constraints, and whatever compute is there has to deal with the unpredictability and the variability of the real world. [The compute] has to be able to make those adjustments, and respond to data in real time, in a very fast but low power mode of operation.

IC: Neuromorphic computing has been part of Intel Labs for almost a decade now, and it remains that way even with the introduction of Loihi 2, with external collaborations involving research institutions and universities. Is the roadmap defining the path to commercialization, or is it the direction and learnings from the collaborations that are defining the roadmap?

MD: It's an iterative process, so it's a little bit of both!

But first, I need to correct something - the acquisition I was a part of with Intel, 10 years ago, actually had nothing to do with neuromorphic computing at all. That was actually about Ethernet switches of all things! So our background was coming from the standpoint of moving data around in switches, and that's gone on to be commercialized technology inside other business groups at Intel. But we forked off and used the same kind of fundamental asynchronous design style that we had in those chips, and then we applied it to this new domain. That started about six years ago or so.

But in any case, what you're describing [on roadmaps] is really a little bit of both. We don't have a defined roadmap, given that this is about as basic of research as Intel engages in. This means that we have a kind of vision for where we want to end up – we want to bring some differentiating technologies to this domain.

So in this asynchronous design methodology, we did the best we could at Intel in developing an architecture for a chip with the best methods that we had available. But that was about as far as we could take it, as just one company operating in isolation. So that's why we released Loihi out to an ecosystem, and it's been steadily growing. We're seeing where this architecture performs really well on real workloads with collaborators, and where it doesn't perform well. There have been surprises in both of those categories! So based on what we learn, we're advancing the architecture, and that is what has led us to this next generation.

So while we're also looking for possible near term applications, which may be specializations of this general purpose design that we're developing, long term we might be able to incorporate designs into our mainstream products hidden away, in ways that maybe a user or a programmer wouldn't have to worry that they are present in the chip.

IC: Are you expecting institutions with Loihi v1 installed to move to Loihi v2, or does v2 expand the scope of potential relationships?

MD: In pretty much all respects, Loihi 2 is superior to Loihi 1. I expect that pretty quickly these groups are going to transition to Loihi 2 as soon as we have the systems and the materials available. Just like with Loihi 1, we're starting at the small scale: single-chip and dual-chip systems. We built a 768-chip system with Loihi 1, and the Loihi 2 version of that will come around in due course.

IC: Loihi 2 is the first processor publicly confirmed for Intel's first EUV process node, Intel 4. Are there any inherent advantages to the Loihi design that makes it beneficial from a process node optimization point of view?

MD: Neuromorphic Computing, more so than pretty much any other types of computer architecture, really needs Moore's law. We need tiny transistors, and we need tiny storage elements to represent all the neural and the synaptic states. This is really one of the most critical aspects of the commercial economic viability of this technology. So for that reason, we always want to be on the very bleeding edge of Moore's law to get the greatest capacity in the network, in a single chip, and not have to go to 768 chips to support a modest size workload. So that's why, fundamentally, we're at the leading edge of the process technology.

EUV simplifies the design rules, which actually is really great for us because we've been able to iteratively advance the design. We’ve been able to quickly iterate test chips and as the process has been evolving, we've been able to evolve the design and loop feedback from the silicon teams, so it's been great for that.

IC: You say pre-production of Intel 4 is used - how much silicon is there in the lab versus simulation?

MD: We have chips in the lab! In fact, as of September 30th, they'll be available for our ecosystem partners to actually kick the tires and start using them. But as always, it's the software that's really the slower part to come together. So that being said, we’re not at the final version. This process (Intel 4) is still in development, so we aren't really seeing products. Loihi 2 is a research chip, so there's a different standard of quality and reliability and all these factors that go into releasing products. But it certainly means that the process is healthy enough that we can deploy chips and put them on subsystem boards, and remotely access them, measure their performance, and make them available for people to use. My team has been using these for quite some time, and now we're just flipping the switch and saying our external users can start to use them. But we have a ways to go, and we have more versions of Loihi 2 in the lab - it's an iterative process, and it continues even with this release.

IC: So there won't specifically be one Loihi 2 design? There may be varying themes and features for different steppings?

MD: For sure. We've frozen the architecture in a sense, and we have most of the capabilities all implemented and done. But yes, we're not completely done with the final version that we can deploy with all the final properties we want.

IC: I think the two big specifications that most of our readers will be interested in are the die size, going down from 60 mm² in Loihi 1 to 31 mm² in Loihi 2, and the neuron count, which increases from 130,000 to a million. What else does Loihi 2 bring to the table?

MD: So the biggest change is a huge amount of programmability that we've added to the chip. We were kind of surprised by the applications and the algorithms that started getting developed and quantified with Loihi: we found that the more complex the neuron model got, the more application value we could measure. There had been a school of thought that the particular neural characteristics of the neuron model don't matter that much, and that what matters more is the parallel assembly of all these neurons, and then that emergent behavior I was describing earlier.

Since then, we've found that the fixed function elements in Loihi have proved to be a limitation for supporting a broader range of applications or different types of algorithms. Some of these get pretty technical but as an example, one neuron model that we wanted to support (but couldn't) with Loihi is an oscillatory neuron model. When you kick it with one of these events or spikes, it doesn't just decay away like normal, but it actually oscillates, kind of like a pendulum. This is thought in neuroscience to have some connection to the way that we have brain rhythms. But in the neuromorphic community, and even in neuroscience, it's not been too well understood exactly how you can computationally use these kind of exotic oscillating neuron models, especially when adding extra little nonlinear mathematical terms which some people study.

So we were exploring that direction, and we found that actually there are great benefits, and we can practically construct neural networks with these interesting new bio-inspired neuron models. They can effectively solve the same kinds of problems [we’ve been working on], but they can shrink the size of the networks and the number of parameters needed to solve them. They're just the better model for the particular task that you want to solve. It's those kinds of things where, as we saw more and more examples, we realized that it is not a matter of just tweaking the base behavior in Loihi - we really had to go and put in a more general-purpose compute capability, almost like an instruction set and a little microcode executor, that implements individual neurons in a much more flexible way.
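As a rough sketch of the oscillatory behavior Davies describes, the snippet below models a resonate-and-fire-style neuron state in Python: a damped 2D oscillator, represented as a complex number, that rings like a pendulum after an input kick rather than decaying monotonically. The model and its parameters are illustrative assumptions, not Loihi 2's actual neuron microcode.

```python
# A toy "resonate-and-fire" neuron state - one hedged reading of the
# oscillatory neuron model discussed above. An input kick makes the
# state ring like a pendulum instead of simply decaying. Parameters
# are made up for illustration.

import cmath

def resonator_trace(kicks, freq=0.5, damping=0.05, steps=40):
    """Return the real part of the neuron state over time."""
    z = 0j
    # Each step: rotate (oscillate), shrink (decay), add any input kick.
    decay_rot = cmath.exp(complex(-damping, freq))
    trace = []
    for t in range(steps):
        z = z * decay_rot + kicks.get(t, 0.0)
        trace.append(z.real)
    return trace

trace = resonator_trace({0: 1.0})
# The response oscillates (changes sign) instead of decaying monotonically:
assert any(x < 0 for x in trace) and any(x > 0 for x in trace)
```

A spiking rule could then be layered on top (fire when the state crosses a phase or amplitude condition), which is where the extra nonlinear terms Davies mentions would come in.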

So that's been the big change under the hood that we've implemented. We've done that very carefully to not deviate from the basic principles of neuromorphic architectures. It's not a von Neumann processor or something - there's still this great deal of parallelism and locality in the memory, and now we have these opcodes that can get executed so we don't compromise on the energy efficiency as we go to these more complex neuron models.

IC: So is every neuron equal, and can do the same work, or is this functionality split to a small sub-set per core?

MD: All neurons are equal. In Loihi v1, we had one very configurable neuron model - each individual neuron could kind of specify different parameters to be customized to that particular part of the network, and there were some constraints on how diverse you could configure it. The same idea applies, but you can define a couple different [schema], and different neurons can reference and use those different styles in different parts of the network.

IC: One of the big things about Loihi v1 was that it was a single shiny chip which could act on its own, or in Pohoiki Springs there would be 768 chips all in a box. Can you give examples of what sort of workloads run on that single chip, versus the bigger systems? And does that change with Loihi 2?

MD: Fundamentally the kinds of workloads don't necessarily change - that's one of the interesting aspects of neuromorphic architecture. It's similar enough to the brain such that with more and more brain matter, the particular types of functions and features that are supported at these different scales don't change that much. For example, one workload we demonstrated is a similarity search function, such as an image database. You might think of it as giving it an example image and querying to find the closest match; in the large system, we can scale up and support the largest possible database of images. But on a single chip, you can perform the same thing, just with a much smaller database. And so if you're deploying that in an edge device, or some kind of mobile drone, you may be limited in a single-chip form factor in the range of different objects that could be detected. If you're doing something that's more data center oriented, you would have a much richer space of possibility there.
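The similarity-search workload Davies describes can be sketched in a few lines of conventional Python. The neuromorphic version distributes this across chips, but the function being computed is the same: find the stored vector closest to a query. The example database and vectors are made up.

```python
# A conventional sketch of the similarity-search workload described
# above: given a query vector, return the closest match in a database.
# A single chip handles a small database; the 768-chip system scales
# the same function to a much larger one. Data here is illustrative.

def nearest(query, database):
    """Return the key of the database vector closest to the query."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda k: dist2(query, database[k]))

db = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.3, 0.1],
    "car": [0.0, 0.2, 0.9],
}
print(nearest([0.85, 0.15, 0.05], db))  # prints cat
```

On neuromorphic hardware the distance comparison happens in parallel across neurons rather than in a sequential loop, but the input/output contract is the same.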

But this is one area we've improved a lot – in Loihi 1, the bandwidth between the chips proved to be a bottleneck. So we did get congestion, despite this highly sparse style of communication. We're usually not transmitting, and then we only transmit infrequently when there's information to be processed. But the bandwidth offered by the chip-to-chip links in Loihi was so much lower than what we have inside the chip that inevitably it started becoming a bottleneck in that 768-chip system for a lot of workloads. So we've boosted that in Loihi 2 by over 60 times, actually, if you consider all the different factors of the raw circuit speeds, and the compression features we've added now to reduce the need for bandwidth and to reduce redundancy in that traffic. We've also added a third dimension, so that now we can scale not just planar networks, 2D meshes of chips; we can actually increase the radix and scale into 3D.

IC: With Loihi 2, you're moving some connectivity to Ethernet. Does that simplify some aspects because there's already deep ecosystem based around Ethernet?

MD: The Ethernet is to address another limitation, of a different kind, that we see with neuromorphic technology: it's actually hard to integrate it into conventional architectures. In Loihi 1, we did a very purist asynchronous interconnect - one that allows us to scale up to these big system sizes and natively carries asynchronous spikes from chip to chip. But of course at some point you want to interface this to conventional processors, with conventional data formats, and so that's the motivation to put in a standard protocol that allows us to stream standard data formats. We have some accelerated spike encoding processes on the chip so that, as we get real world data streams, we can now convert them in a faster, more efficient way. So Ethernet is more for integration into conventional systems.

IC: Spiking neural networks are all about instantaneous flashes of data or instructions through the synapses. Can you give us an indication what percentage of neurons and synapses are active at any one instant with a typical workflow? How should we think about that in relation to TDP?

MD: There is a dynamic range of power. Loihi, in a real world workload on a human timescale, would typically operate around 100 milliwatts. If you're computing something that's more abstract computationally, where you don't have to slow it down to human scales, say solving optimization problems, then it's different. One demonstration we have is with the German railway network: we took an optimization workload and mapped it onto Loihi – for that you just want an answer as fast as possible, or maybe you have a batched-up collection of problems to solve. In that case, the power can peak above one watt or so in a single Loihi chip. Loihi 2 will be similar, but we've put so many performance improvements into the design that we're reaching upwards of 10 times faster for some workloads. So we could operate Loihi 2 at a fairly high power level, but it's not that much when we need it for real time/human timescale kinds of workloads.

IC: In previous discussions about neuromorphic computing, one of the limitations isn't necessarily the compute from the neuromorphic processor, but finding sensors that can relay data in a spiking neural network format, such as video cameras. To what level is the Intel Neuromorphic team working on that front?

MD: So yes, there’s a definite need to, in some cases, rethink sensing all the way to the sensors themselves. We've seen that new vision sensors, these emerging event cameras, are fantastic for directly producing spikes that speak the language of Loihi and other neuromorphic chips. We are certainly collaborating with some of the companies developing those sensors. There's also a big space of interesting possibility there for a really tight coupling between the neuromorphic chips and the sensors themselves.

Generally though, what matters more than just the format of the spikes is that the basis for the data stream has to be a temporal one, rather than static snapshots. That's the problem with a conventional camera for neuromorphic interfacing: we need more of an evolving temporal signal. So audio waveforms, for example, are great for processing.

In that case, we can look at bio-inspired approaches. For audio, this is an example where, with the more generalized kind of neuron models in Loihi, we can model the cochlea (ear). In the cochlea, there is a biological structure that converts waveforms into spikes, performing a spectral transform that produces spikes for different frequencies. That's the kind of thing where, for the sensor part of it, we can still use a standard microphone, but we're going to change the way that we convert these signal streams, which are fundamentally time varying, into these discrete spike outputs.
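A rough NumPy sketch of that cochlea-style encoding: frame the waveform, take a spectral transform, and emit a spike in whichever frequency bands dominate each frame. The band count, frame size, and relative threshold are illustrative choices, not parameters of Loihi or of any real cochlear model.

```python
import numpy as np

def cochlea_spikes(waveform, n_bands=16, frame=256, threshold=0.5):
    """Turn a 1-D waveform into per-band spike trains: True wherever a
    band's energy dominates that frame (a 'spike')."""
    n_frames = len(waveform) // frame
    spikes = np.zeros((n_frames, n_bands), dtype=bool)
    for i in range(n_frames):
        chunk = waveform[i * frame:(i + 1) * frame]
        spectrum = np.abs(np.fft.rfft(chunk))       # spectral transform
        bands = np.array_split(spectrum, n_bands)   # coarse frequency bands
        energy = np.array([b.mean() for b in bands])
        spikes[i] = energy > threshold * energy.max()
    return spikes

# A pure 1 kHz tone sampled at 48 kHz dominates only the lowest band,
# so the spike raster is extremely sparse.
t = np.arange(48_000) / 48_000.0
s = cochlea_spikes(np.sin(2 * np.pi * 1_000.0 * t))
```

The microphone stays a standard microphone, as Davies says; only the conversion from waveform to discrete spike events changes.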

But yeah, sensors are a very important part of it. Tactile sensors are another example where we're collaborating with people producing these new types of tactile sensors, which you clearly want to be event based. You don't want to read out all of the tactile sensors in a single synchronous time snapshot - you want to know when you've hit something and respond immediately. So here's another example where the bio-inspired approach to tactile sensing is really good for a neuromorphic interface.

IC: So would it be fair to say that neuromorphic is perhaps best for interrupt based sensing, rather than polling based?

MD: In a very conventional computing mindset, absolutely! That's exactly it.
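That interrupt-style framing can be sketched in a few lines: a delta encoder that emits an event only when the input moves past a threshold, as opposed to polling, which reads every sample whether or not anything changed. The signal and threshold below are made up for illustration.

```python
import numpy as np

def delta_events(signal, threshold=0.1):
    """Emit (index, polarity) events only when the signal moves more than
    `threshold` away from the level at the last event - the interrupt-style
    readout discussed above, rather than sampling every time step."""
    events = []
    level = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - level) >= threshold:
            events.append((i, 1 if x > level else -1))
            level = x
    return events

# A mostly flat signal with one step yields a single event,
# not one reading per time step.
sig = np.concatenate([np.zeros(100), np.ones(100)])
evts = delta_events(sig)
```

Two hundred samples collapse to one event here - the same principle behind event cameras and the event-based tactile sensors mentioned above.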

IC: How close is Loihi 2 to a 'biological model'?

MD: I think our guiding approach is to understand the principles that come from the study of neuroscience, but not to copy feature by feature. So we've added a bit of programmability into our neuron models, for example - biology doesn't have programmable neurons. But the reason we've done that is so that we can support the diversity of neuron models that we find in the brain. It's no coincidence, and not just a quirk of evolution, that we have thousands of different unique neuron types in the brain. It means that one size does not fit all. So we can try to design a chip that has 1000 different hard-coded circuits, each one trying to mimic exactly a particular neuron - or we can say we have one general type, but with programmability. Ultimately we need diversity - that's the lesson that comes from evolution - but let's give our chip the feature set that lets us cover a range of neuron models.

IC: Is that kind of like mixing an FPGA with your model?

MD: Yeah! Actually, in many ways that is the closest parallel to a neuromorphic architecture.

IC: One of the applications of Loihi has been optimization problems - sudoku, train scheduling, puzzles. Could it also be applied to competitive applications, such as chess or Go? How would the neuromorphic approach differ from 'more traditional' machine learning?

MD: That’s a really interesting direction for research that we haven't gone deeply into yet. If you look at the best performing, adversarial type of reinforcement-based learning approaches that have proven so successful there, the key is to be able to run many, many, many different trials, vastly accelerated compared to what a human brain could process. The algorithm then learns from all of that. This is a domain where it starts being a little distant from what we're focused on in neuromorphic computing, because we're often looking at human timescales, by and large, and processing data streams that are arriving in real time, adapting to them in the way that our brain adapts.

So if we're trying to learn in a superhuman way, such as finding all kinds of correlations in the game of Go that human brains struggle to see, I could see neuromorphic models being good for that. But we're going to have to go work on that acceleration aspect, and speed them up by vast numbers. But I think there's definitely a future direction there - I think this is something that eventually we will get to, particularly deploying evolutionary approaches, where we can use vast parallelism, similar to how nature evolves different networks in a kind of distributed adversarial game, to evolve the best solution. We can absolutely apply those same techniques, neuromorphically, and that would be a guiding motivation to build really big neuromorphic systems in the future - not to achieve human brain scales, but to go well beyond human brain scale, to evolve the best performing agent.

IC: In normal computing, we have the concept of IPC - instructions per clock. What's the equivalent metric in Neuromorphic computing, and how does Loihi 2 compare to Loihi 1?

MD: That’s a great question, and it gets into some nuances of this field. There are metrics that we can look at: things like the number of synaptic operations that can be processed per unit of time, synaptic operations per second per watt, energy per synaptic operation, neuron updates per time step or per unit of time, and the number of neurons that can be updated. In all of those metrics, we've improved Loihi 2 generally by at least a factor of two. As I was saying earlier, it's uniformly better by a big step over Loihi 1.
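To make those metrics concrete, here is the arithmetic with made-up placeholder counts - these are not measured Loihi figures; the only real anchor is the roughly 100 mW real-time operating point Davies quoted earlier.

```python
# Placeholder counts for one measurement window -- illustrative only.
synaptic_ops = 2_000_000   # synaptic operations processed in the window
window_s = 0.010           # 10 ms measurement window
power_w = 0.100            # ~100 mW, the real-time figure quoted above

ops_per_s = synaptic_ops / window_s       # synaptic operations per second
ops_per_s_per_w = ops_per_s / power_w     # the per-watt efficiency metric
```

The nuance in the next answer is that what counts as one "synaptic operation" (with or without delays, graded strengths, plastic weights) changes these numbers entirely, which is why the team is wary of optimizing for any single figure.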

Now on the other hand, we tend to not really emphasize (at least in our research programme) those particular metrics, because once you start fixating on specific ops and trying to optimize for them, you're basically claiming we already know what the field wants and optimizing for that. But in the neuromorphic domain, there's just no clarity yet on exactly what is needed. For a deep learning accelerator, you want to crank out the greatest number of operations per second, right? But in the neuromorphic world, take something as simple as a synaptic operation: should that operation support a propagation delay, which is another parameter? Should it allow the weight it applies to multiply with a strength that comes along with that spike event? Should the weight evolve in response? Should it change for learning purposes? These are all questions that we're looking at. So before we really fixate on a particular number, we want to figure out what the right operations are.

So as I say, we've certainly improved Loihi 2 over Loihi 1 by large measures. But I think energy is an example of one metric that we haven't aggressively optimized. Instead, we've chosen to augment with programmability and speed, because generally what we found with Loihi is that we got huge energy gains purely from the sparsity of the activity and the architectural aspects of the design. At this point, we don't need to take a 1000x improvement and make it 2000x: for this stage of development, 1000x is good enough if we can focus on other benefits. We want to balance the benefits a little bit more towards versatility.

IC: One of the announcements today is on software - you said in our briefing earlier today that there is no sort of universal collaborative framework for neuromorphic computing, and that everybody is kind of doing their own homespun things. Today Intel is introducing a new Lava framework, because traditional TensorFlow/PyTorch or that sort of machine learning doesn't necessarily translate to the neuromorphic world. How is Intel approaching industry collaboration for that standard? Also, will it become part of Intel's oneAPI?

MD: So there are components of Lava we might incorporate into oneAPI, but really Lava, the software framework that we're releasing, is the beginning of an open source project. It's not the release of some finished product that we're sharing with our partners - we've set up a basic architecture, and we've contributed some software assets that we developed over the Loihi generation. But really, we see this as building on the learnings of that previous generation to try to provide a collaborative path forward and address the software challenges that still exist and are unsolved. Some of these are very deep research problems. But we need to get more people working together on a common codebase, because until we get that, progress is going to be slow. That's somewhat inevitable - you have to have different groups building on each other's work, extending it, enhancing it, and polishing it to the point that non-specialists can come in and take some or all of these best methods. They may have no clue what magic neuroscientist ideas have been optimized inside - just understandable libraries, wrapped up to the point that they can be applied. So we're not at that stage yet, and it won't be an Intel product - it's going to be an open source Lava project that Intel contributes to.

IC: Speaking on the angle of getting people involved - I know Loihi 2 is an early announcement right now. But what scope is there for Loihi 2 to be on a USB stick, and get into the hands of non-traditional researchers for homebrew use cases?

MD: There's no plan at this point, but we're looking at possibilities for scaling out the availability of Loihi 2 beyond where we are with Loihi 1. But we're taking it step by step, because right now we're only unveiling the first cloud systems that people can start to access. We'll gauge the response and the interest in Lava, and how that lowers the barriers to entry for using the technology. One aspect of Lava that I didn't mention is that people can start using it on their CPU - so they can start developing models, and it will run incredibly slowly compared to what the neuromorphic chip can accelerate, but at least if we get more people using it, and this nice dynamic of building and polishing the software occurs, then that will create a motivating case to make the hardware more widely available. I certainly hope we get to that point.

IC: If there's one main takeaway about neuromorphic computing that people should have after reading and listening to this interview, what should it be?

MD: The future is bright in this field. I'm really very excited by the results we had with that first generation, and Loihi 2 addresses very specific pain points which should just allow it to scale even better. We’ve seen some really impactful application demonstrations that were not possible with that first generation. So stay tuned – there are really fun times to come.

Many thanks to Mike Davies and his team for their time.

Article From & Read More ( An Interview with Intel Lab's Mike Davies: The Next Generation of Neuromorphic Research - AnandTech )
https://ift.tt/3zX1LD2
Technology

Fairphone’s latest sustainable smartphone comes with a five-year warranty - The Verge

Fairphone, the manufacturer focused on making easy-to-repair smartphones out of ethically sourced materials, just took the wraps off its fourth-generation handset. The Fairphone 4 uses a modular design that’s similar to the company’s previous phones, only now with more powerful internals, a five-year warranty, and a promise of two major Android updates and software support until the end of 2025. Prices start at €579 / £499 for the phone, which will ship on October 25th.

I’ve been using the Fairphone 4 for a couple of days as my primary phone, and while I’m not ready to give a final verdict just yet, it feels like a big step forward compared to the dated designs and low-power components found in the company’s previous phones. Stay tuned for my full review.

Fairphone’s ambition is to produce a more ethical alternative to modern smartphones. That means making a device that’s ethically sourced using sustainable materials before providing the software support and warranty to make it usable for as long as possible. Although Fairphone is only guaranteeing software support until the end of 2025, it has ambitions to extend this as far as 2027. In an ideal world, Fairphone would also like to eventually release 2024’s Android 15 as an update to the phone.

Normally, the specs of Fairphone’s devices are secondary to its ethical considerations, but unlike its previous phones, the Fairphone 4 is competitive with other mid-range Android handsets. The 5G handset is powered by Qualcomm’s Snapdragon 750G processor, paired with either 6 or 8GB of RAM and 128 or 256GB of internal storage, expandable via microSD. It runs on a 3,905mAh removable battery, and the display is a 6.3-inch 1080p LCD panel.

There are two rear cameras — a 48-megapixel main camera and a 48-megapixel ultrawide — and a single 25-megapixel selfie camera. The main rear camera is equipped with optical image stabilization and can record at up to 4K / 30fps.

A notable downside compared to previous Fairphones is that the Fairphone 4 no longer includes a 3.5mm headphone jack, a choice that feels at odds with the company’s otherwise customer-first approach. Fairphone tells me it made this decision in order to be able to offer an official IP rating for dust and water resistance, which was missing from the company’s previous phones. It’s only IP54, which means it’s protected from light splashes rather than full submersion, but that’s impressive in light of its removable rear cover and modular design.

Regarding its modularity, Fairphone is selling eight repair modules for the phone, which include replacement displays, batteries, back covers, USB-C ports, loudspeakers, earpieces, rear cameras, and selfie cameras. All of these are easily removable using a standard Phillips-head screwdriver, which means customers should be able to carry out a lot of repairs themselves. But, if you need to turn to a professional, Fairphone says its spare parts are readily available for local repair shops to buy and use themselves.

Fairphone’s previous two phones are the only devices to have received perfect repairability scores from iFixit, and the company tells me it believes the Fairphone 4 is even more repairable.

The hope is for these spare parts to be available until at least 2027. Fairphone has a good track record with previous devices, telling me it still has parts in stock for the six-year-old Fairphone 2, two years after the last handset was sold. But product manager Miquel Ballester concedes that the company has run out of certain parts for that model.

So too does Fairphone have a solid record on the software side of providing major Android updates for its phones… eventually. Earlier this year, the company officially released its Android 9 update for the Fairphone 2, a device that originally launched with Android 5. It may have come almost three years after Android 9’s original release, but it means that the phone continues to run an officially supported version of Google’s operating system. It bodes well for Fairphone’s support aspirations for the Fairphone 4, although it will have to contend with the fact that Qualcomm only officially supports its chipsets for three major OS updates and four years of security updates, Ars Technica reports.

In terms of materials, the Fairphone 4 is made using Fairtrade-certified gold; responsibly sourced aluminum and tungsten; and recycled tin, rare earth minerals, and plastic (including its rear panel, which is 100 percent recycled polycarbonate). The company has various initiatives to improve the working conditions of miners and factory workers involved in the supply chains for its devices. Fairphone also claims that the Fairphone 4 is the “first electronic waste neutral handset” because it’ll recycle one phone or an equal amount of e-waste for each device sold.

The Fairphone 4 is available to preorder today in Europe and should ship starting October 25th. The model with 6GB of RAM and 128GB of storage costs €579 / £499, while the step-up model with 8GB of RAM and 256GB of storage retails for €649 / £569. Unfortunately, there’s no sign of a US release: Fairphone says it’s interested but that it’s focusing on Europe for the time being.

Article From & Read More ( Fairphone’s latest sustainable smartphone comes with a five-year warranty - The Verge )
https://ift.tt/3mfA17I
Technology

Wednesday, September 29, 2021

Doctor uses iPhone 13 Pro’s Macro camera to check patients’ eyes - 9to5Mac

One of the new features of the iPhone 13 Pro is the addition of a new Macro mode for capturing very close-up photos and videos with the camera. While most users have been using the new mode to capture details of nature, Doctor Tommy Korn has discovered that the iPhone 13 Pro’s Macro camera can also be useful for eye treatment.

In a LinkedIn post, the ophthalmologist shared the story about how he has been using his new iPhone 13 Pro Max to check a patient’s eye with the new camera. Thanks to the Macro mode, Korn can take extremely detailed photos of the eyes, which lets him observe and record important details about patients’ health.

The doctor shows the case of a patient who had a cornea transplant and now needs to check constantly whether the abrasion is healing.

Been using the iPhone 13 Pro Max for MACRO eye photos this week. Impressed. Will innovate patient eye care & telemedicine. forward to seeing where it goes… Photos are from healing a resolving abrasion in a cornea transplant. Permission was obtained to use photos. PS: this “Pro camera” includes a telephone app too!

Korn and optometrist Jeffrey Lewis both argue that this feature should be quite useful in pushing telemedicine forward.

Dovetails with the overall move toward virtual, slowly overcoming imaging barriers. Yet another way to impress, manage, nurture long-term relationships with our patients.

Despite adding the new camera mode, Apple has not added a new lens specifically for Macro shots. Instead, the iPhone 13 Pro and iPhone 13 Pro Max have an upgraded ultra-wide lens with a larger f/1.8 aperture and 120-degree field of view that is capable of capturing Macro images at a distance of 2 centimeters.

You can learn more about taking macro photos and videos with iPhone 13 Pro with our guide here on 9to5Mac.

Check out 9to5Mac on YouTube for more Apple news:

Article From & Read More ( Doctor uses iPhone 13 Pro’s Macro camera to check patients’ eyes - 9to5Mac )
https://ift.tt/39NqpeK
Technology

Amazon Astro home robot: How to preorder the $1,000 Alexa bot and everything else announced - CNET

Amazon announced several new additions during Tuesday's product launch, including this new robot, Astro. 

Amazon/Screenshot by James Martin/CNET

Amazon unveiled a new line of smart home and security devices at its fall invite-only product launch this week (see below). But the big reveal was a $1,000 smart robot called Astro, an Alexa assistant-meets-security-camera on wheels that can move through your home to check on your loved ones, make calls, play music and more. But with this bot's feature set come more privacy concerns.

Amazon says that Astro will have "out of bounds zones" and the ability to disable mics and speakers. All in all, Astro is just one of the many ways robots are coming to life. We'll tell you how you can preorder the smart bot and when it will ship below. 

As well as Astro, Amazon introduced the new Echo Show 15 and an updated Halo View fitness band with a screen. A year after it was first announced, the flying Ring drone Always Home camera is up for preorder. (Note that the Ring line has been subject to criticism about privacy concerns.) And a truckload of new features and accessories are coming, too. Hey Disney is bringing interactive Disney characters and commands to life in Echo devices (here's more on Echo wake words) and Alexa Together is a new subscription service for caregivers and their loved ones.

There's a lot to preorder and unpack from yesterday's event. We'll tell you what's available now, how to request the special invitation-only devices and when the new line of products will be available. This story was recently updated. 

Astro is a new robot that brings AI to your home. You can sign up to request an invite today and Astro will ship later this year. It's an adorable robot that follows voice commands and keeps an eye on your home with its periscope camera. Astro can show a live view via the mobile app, so you can check on your home when you're away. Astro works with Alexa Together and Ring Protect Pro, and comes with a six-month free trial of the latter. Anticipating criticism, Amazon offers some privacy features with Astro:

  • No-go zones 
  • Do not disturb features
  • Alexa's standard privacy features

Read more: Amazon's Astro: New details on price, privacy, battery, specs and more

Amazon partnered with the Honeywell Home thermostat team to create an Energy Star-certified smart thermostat that competes with Nest. Most customers may be able to get the thermostat for free after utility rebates. It's an Alexa-enabled thermostat that supports routines and automatically adjusts temperatures. It's available for preorder now and will ship starting Nov. 4. A few of the top features you can look forward to include:

  • Control your thermostat using the Alexa app or voice commands
  • An energy dashboard to break down your usage on your Echo device or the Alexa app
  • Thermostat Hunches, which automatically adjusts the temperature

Amazon's new Echo Show is bigger than previous models. It now has a 15.6-inch display and comes in only one color: black. However, you can mount it on the wall or place it on a stand. The newest model adds facial recognition for personalized alerts and more when the hub recognizes your face, tightening the competition with Apple. It also comes with personalized to-do lists, like Google's Hub Max. But the Echo Show 15 comes with a few highly anticipated features. 

  • Custom sounds that allow Alexa to listen to specific noises in your home (available in 2022)
  • Visual ID to give you more customized calendars and reminders
  • Customizable Alexa widgets 

You can sign up to receive an email when the Echo Show 15 is available for preorder.

The Amazon Glow is a brand-new kid-friendly smart device (yes, it's different from the Echo Glow night light). Kids can use the video screen to chat with long-distance family and friends. It includes a silicone mat to read, play and draw with loved ones. You'll also get a one-year subscription to Amazon Kids Plus for access to digital books, games and more. The Amazon Glow comes with a two-year worry-free guarantee if you happen to break it. You can request an invite to the Glow program today.

  • Games and activities from Mattel, Disney, Nickelodeon and Sesame Street
  • Pre-approved contacts using the Amazon Glow app 
  • Camera with privacy shutter 

The Halo View is a new addition to Amazon's Halo family. The latest Amazon fitness tracker has a few new features, including an AMOLED color display. Halo View users can also look forward to Halo Fitness and Nutrition services that will help with exercise and healthy eating. The service benefits Halo users in several ways, including movement tracking, emotional tone analysis, and camera-based body analysis. And it still works with the original Halo band. But these new additions might make it worth upgrading from Amazon's original Halo. 

  • 7-day battery life 
  • Water resistant down to 50 meters 
  • Skin temperature tracking 

Article From & Read More ( Amazon Astro home robot: How to preorder the $1,000 Alexa bot and everything else announced - CNET )
https://ift.tt/2WnbRPQ
Technology
