Categories
Energy

Continuous growth and the rebound effect

The economic benefits of more efficient energy and resource use often translate into higher energy and resource demand. This so-called "rebound effect" turns efficiency into only a temporary gain. An article by Gianfranco Bologna.

Read the article in the digital edition of the magazine QualEnergia

Despite the vast body of scientific literature on the subject, the effects of human impact on natural systems – comparable to those of the great geophysical forces that have shaped our planet over 4.6 billion years, to the point that the scientific community has been asked to recognise a new geological epoch, the Anthropocene – still seem largely unknown to the political world.

Even today the debate is about the need for continuous growth of the world economy, often labelled Green Growth, stressing – in the best cases – the importance of decoupling economic growth from the intensity of energy and resource use.
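To make the terms concrete (a standard definition, not taken from the article): if R is resource or energy use and GDP is economic output, then resource intensity is

    I = R / GDP

Relative decoupling means I falls while R may still rise; absolute decoupling means R itself falls even as GDP grows. The argument that follows is that efficiency gains tend to feed back into further growth, which makes absolute decoupling the harder target.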

The underlying question is: what will such a growth rate do to the foundations on which the entire economy rests, namely living ecosystems and the natural resources that make them up? What emerges from all the decoupling efforts made so far is fairly clear: they generate productivity gains and drive further growth.

The economic gains from more efficient energy or resource use are normally used by companies to expand their activities and by households to expand their consumption. The result is that, over time, demand for energy and resources increases.

For someone who buys an energy-efficient car, for example, experience shows that lower fuel costs translate into more driving. This phenomenon, known as the rebound effect, means that from an environmental point of view efficiency is a double-edged sword.
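As a purely illustrative calculation (the figures are ours, not the article's), the rebound effect is commonly quantified as the share of the expected saving that is lost to extra consumption:

    rebound = 1 - (actual saving / expected saving)

If a car that is 20% more fuel-efficient ends up cutting fuel use by only 12%, the rebound is 1 - 0.12/0.20 = 40%: almost half of the theoretical saving has been driven away.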

Put simply, and in the best cases, changes in consumption patterns appear – in the light of many analyses carried out in different countries – to have only a temporary effect.

This does not mean we should focus any less on choosing energy-efficient appliances, improving the energy efficiency of our homes, eating less meat and buying lower-consumption cars. From the standpoint of human wellbeing, greater energy and carbon efficiency is always a good thing.

The rebound effect should therefore not discourage our societies from striving for maximum efficiency in the use of energy and resources. There are many reasons to support policies that encourage it: if nothing else, they buy us time by delaying environmental impacts.

To act in a more systemic way, however, we also need other important policies that significantly reduce our pressure on resources, such as taxing the use of the resources themselves.

The article was published in issue 2/2016 of the bimonthly magazine QualEnergia under the title "Antropocene al via".

Author: QualEnergia.it – The sustainable energy portal analysing markets and scenarios

Categories
TvTech

MSI GT72VR 6RD Dominator with GeForce GTX 1060

After introducing the GeForce GTX 1080, 1070 and 1060 graphics cards for desktop systems, NVIDIA is bringing the corresponding versions for notebooks to market. The specifications remain unchanged, ensuring extremely high performance. We tested the MSI GT72VR 6RD Dominator notebook, equipped with a GeForce GTX 1060.

Tags: GeForce, MSI, NVIDIA, Pascal, Review

Author: TVtech – Video and web TV on technology, IT and the ICT world – Latest videos

Categories
Hardware, Software

Microsoft: Office 365 gets smarter with the acquisition of Genee

Microsoft has confirmed an agreement to acquire Genee, a startup that developed a service for automatic appointment scheduling. The commercial terms of the deal have not been disclosed. Microsoft confirmed that the resources acquired through Genee will be used to strengthen the Office 365 offering.


Genee can be described as a virtual assistant for managing appointments and meetings. Its distinctive trait is the ability to schedule meetings automatically, without the user having to consult the calendar, using artificial-intelligence algorithms that analyse the text of the messages sent to the meeting's invitees. Microsoft illustrates how Genee works – handling not only scheduling but also rescheduling when an appointment conflicts with other calendar entries – with a telling example:

Suppose you want to meet a prospective client, Diana, for coffee. Simply send Diana an email and copy in Genee, as if it were a personal assistant. Genee will understand that you want to "find time to meet Diana for coffee next week" and will simplify the process by emailing her directly with suitable options based on your calendar and preferences. Genee will also send the meeting invitation on your behalf, saving you time.

Genee works through its iOS app as well as via SMS, Facebook Messenger, Twitter direct messages and Skype chat. Microsoft's stated goal is to integrate the service into Office 365 – the most natural home would be Outlook – but firmer details are still lacking. What is certain is that the Genee service will be shut down as of 1 September. Calendar entries created automatically by Genee will remain in place, but the digital assistant will not create any further appointments.

Microsoft's acquisition of Genee is part of a series of acquisitions by the Redmond company aimed at strengthening the resources behind its personal productivity apps and services. Examples include the recent acquisition of LinkedIn, as well as those of Wunderlist, Sunrise Calendar and Acompli, the latter announced over the course of 2015.

Author: Hardware Upgrade news

Categories
Economy

Greenback down as the market doubts the Fed will raise rates soon


In overnight trading the New Zealand dollar posted the best performance, rising 0.78% against the greenback to 0.7325, its highest level since 10 August. For the fourth time since mid-August, the currency pair tried – and failed – to break through the resistance area between 0.7335 and 0.7350. Over the past week, in fact, renewed expectations of a Fed rate hike had kept the kiwi from appreciating further.

In Japan, the yen extended its gains against the dollar: the USD/JPY pair gave up 0.30%, falling to 100.08. The pair has been testing the support area at 100 for seven days, as the market expects the BoJ will not allow the yen to strengthen further. On the data front, the flash estimate of Japan's manufacturing PMI for August rose to 49.6 points from 49.3, against 51.7 a year earlier. Despite the positive trend in place since May, this is the sixth consecutive month of contraction. The market is already starting to price in further monetary easing from the BoJ, as signalled by the sharp rise in the risk-reversal index: the one-month 25-delta risk reversal for USD/JPY has jumped to -0.98% from roughly -2% last week. One-month ATM implied volatility has reached 14.40%, compared with a reading below 10% in mid-August.
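For readers less familiar with options jargon (the decomposition below is the standard definition, not part of the original note), the 25-delta risk reversal is the difference between the implied volatilities of the out-of-the-money call and put:

    RR(25-delta) = sigma_call(25-delta) - sigma_put(25-delta)

A negative value – here -0.98% – means USD/JPY puts, i.e. protection against a stronger yen, still trade richer than calls, but markedly less so than last week's roughly -2%.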

Today traders will be watching consumer confidence in Denmark; the manufacturing, services and composite PMIs in France, Germany and the eurozone; the interest-rate decision in Turkey (the market expects a 25-basis-point cut in the benchmark rate, to 8.50%); and, in the US, the manufacturing PMI, the Richmond manufacturing index and new home sales.

Follow Swissquote's upcoming online seminars or download the MetaTrader 4 demo platform

Author: Finanza.com Blog Network Posts

Categories
Hardware, Software

Unpacking AMD’s Zen Benchmark: Is Zen actually 2% Faster than Broadwell?

At a satellite event to Intel's Developer Conference last week, AMD held an evening soiree with invited analysts and press to talk about their new upcoming Zen microarchitecture. This was intended to be a preview of tomorrow's Hot Chips presentation, and we've already covered the juicier parts of the presentation in terms of microarchitecture announcements, as well as diving deep into the server-based Naples implementation and what the motherboards told us about memory and IO support.

You can read both here:

AMD Zen Microarchitecture: Dual Schedulers, Micro-op Cache and Memory Hierarchy Revealed
AMD Server CPUs and Motherboard Analysis

There was one other element to the presentation that requires further discussion and analysis, if only to clean up some of the misinformation already present online and to label what was shown with a long list of potential caveats which most people seem to have passed by almost entirely. As part of the show, AMD compared the performance of their Zen engineering sample to an Intel Broadwell-E processor. 

In this test, they told the audience that each system was running eight cores, sixteen threads, with all cores set to 3 GHz (implying no turbo). Memory arrangements were not disclosed, nor storage: we were told to assume comparable setups.

We were too busy trying to decipher what was on screen (and take note of the results) to actually photograph the benchmark as it happened (there are videos online), but the benchmark they showed was Blender, an open source rendering engine, with a custom multithreaded workload. The test was to render a mockup of a Zen-based desktop CPU, with an effective workload of around 50 seconds for these chips. I've seen numerous reports about this result saying the difference was 1 or 2 seconds, but rarely any mention of the benchmark length, which is just as important. The overall results were:

  Blender Time to Render (seconds)
  Intel Broadwell-E (Core i7-6900K, 8C/16T, 3 GHz all-core): 49.05
  AMD Zen (Engineering Sample, 8C/16T, 3 GHz all-core): 48.07 (-0.98 sec, 1.998% faster)

All things being equal (we'll get to that in a second), this would suggest that an 8-core AMD part has a ~2% advantage over Broadwell-E at the same clock speed. That said, there are a lot of unverifiable parts to the claim, which makes analysing such a result difficult. I want to go through each of them one by one to ensure everyone understands what was presented.
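For completeness, the headline number follows directly from the two render times above (the times are AMD's; the arithmetic is ours):

    (49.05 - 48.07) / 49.05 ≈ 0.020

In other words, the Zen sample finished the render about 2% faster than the Core i7-6900K at the same core count and clock, which is where the "2% faster than Broadwell" framing comes from.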

I'll preface this analysis with two things though: one is that AMD was very careful in what they said at the event, and only said as much as they needed to. This is why the string of caveats for this benchmark test is fairly long. But also, AMD has to set expectations here: if they choose an environment and test that represents the peak, or relies on something special, users will feel burned again after Bulldozer. AMD has to temper those expectations but still present a methodology that works in their favour. By leaving so many cards on the table, this can be either a help or a hindrance.

But given the measured and calm, professional nature of the event, as opposed to the wild and wacky AMD events of the past, it was clear (either by design or chance) that the words used said only as much as they needed to. Along with the microarchitecture discussions, it was designed to provide a good stepping stone on to the Hot Chips presentation a few days later.

So, caveats. Or disclaimers not readily provided. Let’s start at the top. 

1) The Results Are Not Externally Verifiable At This Time, As Expected

We were told the setups of the systems being used, but were unable to confirm the results ourselves. This is typically the case with a high-level, early look at performance, and other companies do this all the time.

That being said, it would reflect badly on such reports if a chasm later turns out to exist between pre-launch and launch data, so understanding this caveat when reporting the numbers is fundamental. The basis of publishing scientific papers is repeatability and verification – while this wasn't a scientific presentation, it is important to keep that in the back of your mind when you hear any benchmark numbers (AnandTech included – our numbers are designed to be verifiable and we want to have a history of honesty with our readers, especially when it comes to custom software/workloads we cannot disclose).
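As a rough idea of what independent verification would look like once hardware is in hand, here is a minimal sketch that times a headless Blender render from the command line; the scene file name is a placeholder, since AMD's actual workload and Blender build have not been released.

    import statistics
    import subprocess
    import time

    # Placeholder for AMD's undisclosed Blender workload.
    SCENE = "zen_cpu_mockup.blend"

    def render_once() -> float:
        """Render frame 1 of the scene headlessly and return the wall-clock time."""
        start = time.perf_counter()
        # -b runs Blender without a UI, -f 1 renders frame 1 of the scene.
        subprocess.run(["blender", "-b", SCENE, "-f", "1"],
                       check=True, stdout=subprocess.DEVNULL)
        return time.perf_counter() - start

    times = [render_once() for _ in range(5)]
    print(f"median render time over {len(times)} runs: {statistics.median(times):.2f} s")

Repeating the run and reporting a median is exactly the kind of detail a one-off stage demo cannot show.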

2) No Memory or TDP Numbers Were Provided

We were able to determine that the AMD-based systems were running 2×8 GB of DDR4-2400, although we did not get a look at Intel's memory arrangement. Similarly, due to the ES nature of the CPU, TDP numbers were also not shared; however, we did see all the AMD systems use either the AMD Wraith cooler (which is rated at 125W) or the new near-silent cooler (95W). That gives a rough ceiling for peak power consumption, although some of AMD's current competitive parts actually use a cooler designed for the TDP bracket above (e.g. the A10-7860K at 65W uses the 95W cooler, the A10-7890K at 95W uses the 125W cooler).

3) Blender Is an Open Source Platform

One of the issues with open source is that the code can be edited by anyone. Any user can adjust the code to optimize for their system and run a test to get a result. That being the case, it can be difficult to determine exactly which code base was used for this test, and practically impossible to know how the Blender build in question was compiled.

Even in the base code, there could be CPU-vendor-specific optimizations in either the code or the compiler that influence how the workload interacts with the cache hierarchy. It also doesn't help that Blender has elements in the code called 'AMD', which relate to a set of internal rendering features unrelated to the company. Going down the path of optimizing for specific CPU microarchitectures leads on to another, more philosophical issue…

4) Did It Actually Measure IPC? (The Philosophical Debate) 

In the purest sense, measuring how many instructions per clock a design can sustain on a given instruction stream is a way to gauge its efficiency. However, most highly optimized code bases are not general-purpose: if the software detects a particular microarchitecture, it can rearrange threads and loops to take advantage of that design. The main question is how IPC should be measured: identical code bases are easier to reason about but are often not realistic compiler targets, while highly optimized code shows the best of what a processor can do but limits any IPC conclusion to that one benchmark. With the results we saw, if a difference of about one second in just under fifty translates into a 2% difference, is it accurate to call that a 2% IPC increase, or does it depend on how optimized (or not) the code is? Optimizing code, or profiling compilers for specific code targets, is nothing new. In the holistic view, most analysts use SPEC benchmarks for this, as they are well-known code structures, even though most of them are compiler targets – and while SPEC is not particularly representative of real-world workloads, it does give an indication of performance for unknown architectures and microarchitectures.
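To pin down what is being inferred here (our framing, not AMD's), IPC can be written as

    IPC = instructions executed / (clock frequency × run time)

With both chips locked at 3 GHz and – presumably – executing roughly the same instruction stream, the IPC ratio collapses to the inverse ratio of render times, 49.05 / 48.07 ≈ 1.02. That is exactly why a ~2% time saving gets read as a ~2% IPC advantage, and also why any vendor-specific code path that changes the instruction stream would break that equivalence.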

5) The Workload Is Custom 

One of the benefits of software like SPEC, or canned benchmarks like Cinebench, is that anyone (with a license) can pick up the workload and run with it. Those workloads are typically well known, and we can get performance numbers out that have known qualities in their interpretation. With a custom workload, that is not always the case. It comes down to experience – an unknown workload can lean heavily on certain branches of code in ways that are invisible when you only see the results. This is why rendering one scene in a film can take a vastly different time from another, and why the results for a 'benchmark' can differ significantly depending on the architecture (one scene leans on lighting, another on tessellation, and so on). Using known or standard workloads over long periods of time can offer insights into the results, whereas new workloads cannot, especially with so few results on offer.

6) It Is Only One Benchmark

There is a reason AMD showed only this benchmark – it is either a best-case scenario, or they are pitching expectations exactly where they want them to sit. By using a custom workload on open source software, the result is very specific and cannot be extrapolated in any meaningful way. This is why a typical benchmark suite offers 10-20 tests with different workloads, and even enterprise-standard suites like SPEC come with over a dozen tests in play, to cover the single-thread, multi-thread, large-cache, memory or pixel-pushing bottlenecks that may occur. Single benchmarks on their own are very limited in scope as a result.

7) There’s Plenty about the Microarchitecture and Chip We Don’t Know Yet, e.g. Uncore

One of the more difficult elements of a processor is managing cross-core communication, as well as cross-core cache snooping. The problem scales steeply with core count, as the number of possible core-to-core connections grows rapidly. Intel has historically used a ring interconnect between cores for this, with their large multi-core chips using a dual ring bus with communication points between the two rings. We suspect AMD is also using a ring bus in a similar fashion, but this has not been discussed at this time. There's also the interconnect fabric between the cores and other parts of the chip, such as the Northbridge/Southbridge or the memory controllers. Depending on the test, core-to-core communication and the interconnect can have different performance effects on the CPU.

8) Clock Speeds Are Not Final, Efficiency Not Known

Performance of a CPU is typically limited by power draw – there is no point gaining a small amount of performance for a large increase in power such that efficiency decreases. AMD has stated that power consumption and efficiency were a premier goal as this microarchitecture was developed.

At the demonstration, we were told that the frequency of the engineering samples was set at 3 GHz for all-core operation. We were told explicitly that these are not the final clock speeds, but at the very least it puts a lower bound on the highest-end processor. In reality, 3 GHz could be a walk in the park for the final retail units, depending on how much difference there is between the chips on display and what people will be able to buy. We are under the impression that the CPUs will have turbo modes involved, and those could be staggered based on the number of cores in use.

But this is why I said that 3 GHz is the lower bound for the high-end processor. We know from these results (assuming point 1 in this list holds) that the best processor from AMD will do at least 3 GHz. There's no indication of power, and thus no indication of efficiency either, which is another important metric left in the ether.

9) We Will Have to Wait to Test

Everyone wants the next technology yesterday, so the ‘gimme gimme gimme’ feeling of new stuff is always there. AMD has already stated that general availability for Zen and Summit Ridge will be Q1, which puts the launch at four months away at a minimum. At this stage of the game, while AMD is trying to be competitive with Intel, they don’t want to generate too much hype and give the game away in case it goes incredibly pear-shaped. There’s the added element of the hardware and software being finalized or updated.

Since I've been reviewing, no CPU manufacturer has handed out review units four months before launch (in all honesty, we're lucky to get a launch-day sample a week in advance these days). In fact, we'd have to go back to Nehalem and Conroe to find something that was sampled early; however, Conroe just passed its 10th birthday, and in that case Intel knew they were on to a clear winner rather than just 'meeting expectations'. Also, sampling a great product early means users will wait for it to come out, which results in lost revenue (the Osborne effect) unless you have zero stock and/or an uncompetitive product that no one is buying. In this decade, no x86 CPU manufacturer has offered samples this far out. I'd be more than happy for that to change and would encourage companies to do so, but I understand the reasons why.

Some Final Words

Much in the same way that taking an IQ test mainly tells you how good you are at IQ tests – though it typically also gives some indication of how good or bad you are at other things (most well-engineered IQ tests involve a lot of spatial reasoning, for example) – a CPU's performance in a Blender test is, strictly speaking, only a measure of how good it is at that Blender test. But given what we know about the Zen microarchitecture, it is probably also good at other things. Just how good, by what metric and to what extent, is almost impossible to say.

AMD has given a glimpse of performance, and they've only said as much as they needed to in order to get the message across. However, it has been up to the media to understand the reasons why and to explain what those caveats are.

Author: AnandTech