Thursday, September 15, 2011

Microsoft Unveils a First Version of Windows 8

The next version of Windows, numbered 8, was unveiled yesterday at the Build developer conference. It stands out for its dual interface: one traditional, similar to the Windows 7 desktop, for classic applications on desktop computers, and the other strongly inspired by mobile devices, and thus by Windows Phone 7, intended mainly for tablets. The new Windows system will accept input from touch surfaces, styluses and gesture recognition, as on the Kinect device.
On Tuesday Microsoft touted the versatility of its next operating system, Windows 8, which will be able to run computers as well as tablets, in order to compete better with Apple's iPad. "Windows 8 works beautifully across a whole range of devices, from tablets to 10-inch (25 cm diagonal) laptops and up to all-in-one computers with 27-inch high-definition screens," boasted the president of Microsoft's Windows division, Steven Sinofsky, during the Build conference, which is aimed at developers.
Among other things, the system is intended to let several applications work together and to synchronize folders across multiple devices. As an illustration, Steven Sinofsky had his photo taken by a camera mounted on a computer, and the shot appeared on a tablet.
A still-experimental version
The Redmond giant (Washington State, in the northwestern United States) gave prototype tablets to the 5,000 conference attendees so that they can start working with the software. Developers not attending the event can download a preliminary version of Windows 8, available since last night.
Steven Sinofsky stressed that this is an experimental version, not a finished program meant to be released as-is to the general public, and he declined to give a launch date. "We are guided by quality, not by the calendar, and for now we are focusing on applications," he said.

Sunday, September 11, 2011

Calypto Design Systems Acquires Mentor Catapult C Synthesis Tool



SANTA CLARA, Calif. August 26, 2011 – Calypto Design Systems today announced it has acquired Catapult C Synthesis from Mentor Graphics Corporation (NASDAQ: MENT). The merger of two market-leading electronic system level (ESL) products, Catapult C Synthesis and Calypto SLEC System-HLS verification tool, will create a better integrated ESL hardware realization flow, and enhance the company’s partnership with Mentor Graphics, a leader in ESL technology. Terms of the transaction were not disclosed.
“ESL synthesis offers our design community the next great leap in productivity. Much like the move to RTL years ago, the move to higher levels of abstraction based on C and SystemC offers the promise of better quality of results in a shorter amount of time. By combining the market-leading products in C synthesis, sequential verification, and power optimization within Calypto, we will be the only company capable of delivering a fully integrated flow, and delivering on that promise of ESL,” said Doug Aitelli, Chief Executive Officer of Calypto Design Systems. “In addition, we remain fully committed to our existing high-level synthesis partnerships and to industry-wide interoperability.”
“This is a great deal for Calypto,” said Gary Smith, Chief Analyst at GSEDA. “They are clearly one of the companies on the rise in ESL, and this gives them the chance to offer a compelling power-optimized C to RTL flow if they can integrate all the pieces.”
ESL methods allow designers to work at a higher level of abstraction, greatly reducing errors and allowing greater optimization of integrated circuits (IC) in key attributes like speed and power. To adopt ESL methods, designers need to have confidence that tools, as they translate from the higher level of abstraction to lower levels, don’t introduce errors. Typically, designers have used extensive RTL verification to ensure that no errors have been introduced.
SLEC System-HLS uniquely addresses this challenge with C-to-RTL formal equivalence checking, using patented sequential analysis technology to create an easy-to-use synthesis and verification flow. Designers can perform comprehensive functional verification using SLEC System-HLS to formally verify equivalence between SystemC ESL models and RTL implementations. This leads to speedups of up to 100x in RTL verification, as it removes the need for significant, time-consuming RTL simulation to validate that the RTL matches the C or SystemC source. Tight integration between Calypto's SLEC System-HLS and Catapult C Synthesis will give designers confidence that the IC they designed in C or SystemC is the IC that is being delivered in RTL.
Additionally, the PowerPro SoC Power Reduction Platform performs RTL-level power optimization. Added to the Catapult C Synthesis and SLEC System-HLS hardware realization flow, it allows designers to go swiftly from C and SystemC designs to power-optimized RTL.
“We remain deeply committed to ESL. We view this transaction as an innovative way to accelerate adoption of ESL methodologies, to strengthen our partnership with Calypto, and as one that complements our continued investment in ESL virtual prototyping environments led by our Vista product,” said Brian Derrick, vice president of marketing at Mentor Graphics. “Calypto’s Sequential Logic Equivalency Checker is a critical and unique technology for enabling the adoption of ESL. Its combination with the market-leading Catapult C Synthesis product and the PowerPro SoC Power Reduction Platform should give designers the confidence to adopt ESL methods and enjoy the significant benefits that designing at higher levels of abstraction brings.”
Current customers of the Mentor Graphics Catapult C Synthesis tool will continue to be supported by Mentor Graphics. Moving forward, any new customer sales and support will be supplied by Calypto.

About Calypto Design Systems
Calypto Design Systems, Inc. empowers designers to create high‐quality, low-power electronic systems by providing best‐in‐class power optimization and functional verification software, based on its patented Sequential Analysis Technology. Calypto, whose customers include Fortune 500 companies worldwide, is a member of the Cadence Connections program, the IEEE‐SA, Synopsys SystemVerilog Catalyst Program, the Mentor Graphics OpenDoor program, Si2, ARM Connected Community and is an active participant in the Power Forward Initiative. Calypto has offices in Europe, India, Japan and North America. More information can be found at: www.calypto.com.

Calypto Joins ARM Connected Community

SANTA CLARA, Calif., – July 20, 2011 -- Calypto® Design Systems, Inc., the leader in Sequential Analysis Technology, today announced it is a new member in the ARM® Connected Community, the industry’s largest ecosystem of ARM technology-based products and services. As part of the ARM Connected Community, Calypto gains access to a full range of resources to help it market and deploy innovative design platforms that enable developers to get ARM Powered® products to market faster.
Calypto’s SLEC® (Sequential Logic Equivalence Checking) and PowerPro® platforms are used by seven out of the top ten semiconductor companies and most leading consumer electronics companies. Calypto’s products enable electronic designers, including ARM customers, to dramatically improve design quality and reduce power consumption of their system-on-chip (SOC) devices.
“Our products help engineers improve the quality of their ARM hardening flow in two ways,” said Doug Aitelli, Chief Executive Officer at Calypto. “PowerPro reduces the power of the ARM processor and surrounding SOC, and SLEC provides a comprehensive formal verification of the RTL to make sure that no functional errors were introduced during the ARM hardening process. This verification eliminates the need to redesign testbenches and rerun exhaustive simulations, enabling ARM customers to tapeout SOCs with ARM intellectual property faster. As a member of the ARM Connected Community, we now have the opportunity to extend our reach and add value to more of ARM’s customers.”
 “The Connected Community is all about companies working together to provide the most complete solutions in the shortest possible time. By joining the Community, which now comprises more than 850 companies, Calypto increases the large portfolio of skills, products and services that are centered around the ARM architecture, and currently available to developers worldwide,” said Lori Kate Smith, Senior Manager Community Programs for ARM.
Calypto Verification and Power Consumption Benefits for ARM Connected Community
PowerPro CG is used to help ARM Connected Community members reduce the power consumption of their ARM processors or the surrounding SOC design. Using Calypto's patented Sequential Analysis Technology, PowerPro CG (Clock Gating) analyzes the design intent of the ARM processor and derives areas where additional clock gating can be implemented or improved. PowerPro can then be used to automatically or manually reduce the power consumption, and SLEC formally verifies the result. In addition, PowerPro MG (Memory Gating) reduces power in the memory sections of a design, creating controllers to shut off the memories for longer periods of time. This saves dynamic power, through gating of the memory enable, or leakage power, through activation of light sleep mode.
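For readers less familiar with the underlying idea, here is a minimal, hypothetical RTL fragment (module and signal names are invented; this is neither PowerPro input nor output) showing the kind of enable condition a sequential clock-gating tool looks for: the accumulator below only needs a clock edge when valid is high, so the enable can be hoisted into an integrated clock-gating cell, and an equivalence checker such as SLEC can then confirm that the gated design still matches the original.

// Hypothetical example: a register that only updates when 'valid' is asserted.
// A sequential clock-gating tool can turn this enable into a gated clock so the
// flop bank receives no clock edges (and burns no clock-tree power) while idle.
module accum_stage (
  input  logic        clk,
  input  logic        rst_n,
  input  logic        valid,
  input  logic [31:0] din,
  output logic [31:0] acc
);
  always_ff @(posedge clk or negedge rst_n) begin
    if (!rst_n)
      acc <= '0;
    else if (valid)           // enable condition a clock-gating tool can exploit
      acc <= acc + din;
  end
endmodule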
About the ARM Connected Community
The ARM Connected Community is a global network of companies aligned to provide a complete solution, from design to manufacture and end use, for products based on the ARM architecture. ARM offers a variety of resources to Community members, including promotional programs and peer-networking opportunities that enable a variety of ARM Partners to come together to provide end-to-end customer solutions. Visitors to the ARM Connected Community have the ability to contact members directly through the website.
For more information about the ARM Connected Community, please visit http://cc.arm.com.
About Calypto
Calypto Design Systems, Inc. empowers designers to create high‐quality, low-power electronic systems by providing best‐in‐class power optimization and functional verification software, based on its patented Sequential Analysis Technology. Calypto, whose customers include Fortune 500 companies worldwide, is a member of the Cadence Connections program, the IEEE‐SA, Synopsys SystemVerilog Catalyst Program, the Mentor Graphics OpenDoor program, Si2 and is an active participant in the Power Forward Initiative.
Calypto has offices in Europe, India, Japan and North America.
More information can be found at: www.calypto.com.

TLM 2.0, UVM 1.0 and Functional Verification


The DVCon 2011 conference was held this week, and the Accellera Universal Verification Methodology (UVM) 1.0 release is breaking records in terms of interest and attendance. UVM 1.0 is a big deal! The core functionality is solid and ready for deployment. Accellera held a full-day tutorial on UVM 1.0 on Monday, and during a panel discussion on Tuesday afternoon, AMD and Intel announced that they are in the process of adopting it.

TLM 1.0 ports were heavily used in OVM and in UVM 1.0 EA (Early Adopter). The UVM 1.0 release adds a partial SystemVerilog implementation of the Open SystemC Initiative (OSCI) TLM 2.0 capabilities. At DVCon, John Aynsley, author of the TLM 2.0 spec, gave a great introduction to TLM 1.0 and TLM 2.0 concepts and capabilities (one of the best I have seen so far for TLM). He then moved on to the UVM TLM implementation, in terms of both TLM 1.0 and TLM 2.0, covering the benefits and contrasting it with the OSCI SystemC capabilities. His slide is shown below:

 

The TLM 2.0 standard was created for modeling memory-mapped buses in SystemC. Most of the DVCon discussion was devoted to the concepts of TLM 2.0 and its rich (or complex) set of capabilities: sockets and interfaces, blocking and non-blocking transports, the generic payload, hierarchical connection, temporal decoupling and more were covered. The main questions asked were: How much of this is relevant to functional verification and, specifically, to UVM environments? What do I need to do differently in a UVM verification environment to leverage the TLM 2.0 potential?

Let's start by focusing on the agents that reside within an interface UVC. As you can see below, monitors contain analysis ports. The monitor does interface-level coverage and checking, and distributes events and monitored information to the sequencer, scoreboard, and other components. Nothing here differs between UVM and OVM for this kind of distributed one-to-many communication. While this is trivial, it brings us to Guideline #1: In the monitor, keep using the analysis port.
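To make Guideline #1 concrete, here is a minimal SystemVerilog sketch (the class and field names are hypothetical, not taken from any particular UVC) of a monitor that broadcasts observed transactions through a uvm_analysis_port; the scoreboard, coverage collectors and any other subscribers connect to this port without the monitor ever knowing about them:

import uvm_pkg::*;
`include "uvm_macros.svh"

class bus_item extends uvm_sequence_item;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  `uvm_object_utils_begin(bus_item)
    `uvm_field_int(addr, UVM_ALL_ON)
    `uvm_field_int(data, UVM_ALL_ON)
  `uvm_object_utils_end
  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

class bus_monitor extends uvm_monitor;
  `uvm_component_utils(bus_monitor)

  // One-to-many broadcast point: checking, coverage and the scoreboard all
  // subscribe here; the monitor does not know or care who is listening.
  uvm_analysis_port #(bus_item) item_collected_port;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    item_collected_port = new("item_collected_port", this);
  endfunction

  task run_phase(uvm_phase phase);
    bus_item tr;
    forever begin
      #10; // placeholder for sampling one transfer from the virtual interface
      tr = bus_item::type_id::create("tr");
      // ... fill tr.addr / tr.data from the sampled pins (omitted) ...
      item_collected_port.write(tr); // broadcast to all connected subscribers
    end
  endtask
endclass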

 


Another communication channel is needed between the sequencer that creates transactions and the driver that sends them to the Device Under Test (DUT). What we have in UVM (introduced in OVM) is a producer/consumer port (uvm_seq_item_pull_port) that provides the needed API and hides the actual channels (TLM or otherwise) behind the implementation. I know there was not always agreement on this among all vendors, but Cadence has consistently recommended that users use this abstract layer, as opposed to the direct TLM ports. TLM 2.0 sockets do not solve all the communication requirements between the sequencer and the driver (for example, the try_next_item semantics are hard to resolve in either TLM 1.0 or TLM 2.0).

Also, as was mentioned in the Accellera tutorial, multi-language support is not yet solved in UVM 1.0 -- for now, this is a vendor-specific implementation. This is a great time to reiterate our existing recommendation, Guideline #2: For sequencer-driver communication, use the abstract producer/consumer ports in your code and avoid using the TLM connections directly. This will keep your code forward compatible with whatever existing or future solutions the implementation uses (we might need extensions to facilitate cross-language communication). Usage of the high-level functions also allows us, the library developers, to add more functionality to the get_next_item() and item_done() calls.
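As a minimal sketch of Guideline #2 (pin-level driving is omitted, and it reuses the hypothetical bus_item from the monitor sketch above), the driver pulls transactions through the built-in seq_item_port, a uvm_seq_item_pull_port, instead of instantiating TLM 1.0 or TLM 2.0 channels itself:

class bus_driver extends uvm_driver #(bus_item);
  `uvm_component_utils(bus_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      // Abstract producer/consumer API: the library can change the underlying
      // channel (TLM 1.0, TLM 2.0 or anything else) without breaking this code.
      seq_item_port.get_next_item(req); // blocking pull from the sequencer
      #10;                              // placeholder for driving req onto the DUT pins
      seq_item_port.item_done();        // hand completion back to the running sequence
    end
  endtask
endclass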

Another communication layer you may need is for stimulus protocol layering. There are multiple ways to implement layering, but Guideline #2 is valid for this use case as well, where one downstream component needs to pull items from a different component. If you stick with the abstract API of the producer/consumer port, your environment stays safe while we take the liberty of improving the communication facilities underneath.

Let's review the other benefits of TLM 2.0 and the value they can provide to the verification environment. Again, I include John Aynsley's slide covering the benefits of TLM 2.0 below, along with my analysis of each potential benefit.

 

Let’s review the “value” of these benefits in the context of verification:
  • Isn't TLM 2.0's pass-by-reference capability faster than TLM 1.0, and isn't that critical for speed? Indeed, pass by reference is critical for speed and memory usage, but the TLM 1.0 implementation in UVM does not copy by value, so no speed advantage is expected from adopting TLM 2.0.
  • What about TLM 2.0 support for timing and phases? TLM 2.0 allows defining the transaction status and phases as part of the transaction (note that this is unrelated to UVM phases). This might be a consideration for UVM, but I would argue that, in a verification context, timing and status matter more for the analysis ports and monitors, as this is the channel used for such introspection. This could be considered in the next version of the UVM library as part of replacing the underlying implementation of the producer/consumer ports. In general, timing annotation in TLM 2.0 is complex, especially as it relates to "temporal decoupling," and is too difficult to use for the little return on investment it offers.
  • A well-defined completion model? We need to think of a use case for this… When we listed all the communication use cases for verification, we could not map this one to a mainstream functional verification need.
  • What about the generic payload (GP)? The generic payload is a standard abstract data type that includes the typical attributes of memory-mapped buses, such as command, address, data, byte enables and more. An array of extensions is available to enhance this layer with protocol-specific attributes (for example, an AXI transaction defines attributes such as cacheability and privilege that are not part of the generic payload definition). The generic payload can be used to create protocol-independent sequences that can be layered on top of any bus. It is also useful when communicating with a very abstract model, early in the design before the actual protocols have been decided upon, and it should be united at some point with the register operations. The generic payload does not replace the existing protocol-specific sequencer. It also does not lend itself nicely to sequences and randomization, as it is hard to constrain the extensions that are stored as array items. To put things in the right perspective, we find the generic payload a good addition to UVM. We used it as part of the Cadence ESL solution and will be happy to share more of our recommendations on the correct usage of the generic payload class. Guideline #3: Check if and how usage of the GP can help your specific verification challenges (a minimal sketch follows this list).
  • What about the multi-language potential of TLM 2.0? OSCI TLM 2.0, as specified, is a C++ standard. Portions of it cannot be implemented in SystemVerilog, nor does it enable or simplify multi-language communication (in fact, passing by reference makes it more challenging to support than TLM 1.0). However, what we hear from users is that communicating with high-level models that use TLM 2.0 interfaces is the main requirement, and that does involve multi-language support. As officially stated multiple times in the Accellera tutorial, multi-language transaction-level communication support is not part of the standard library and was left for individual vendors to support. This will be tricky for users who would like to keep their testbench vendor-independent. Guideline #4: Remember that the current UVM TLM 2.0 multi-language support is not part of the standard library and may lock you to a specific vendor and implementation.
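To illustrate Guideline #3, here is a minimal sketch (the helper function name and field values are invented, and it assumes uvm_pkg is imported as in the earlier sketches) of how a protocol-independent write could be described with the uvm_tlm_generic_payload class added in UVM 1.0; a layering sequence would later translate such an item onto a specific bus protocol:

// Build a generic, protocol-independent write transaction.
function automatic uvm_tlm_generic_payload make_write_gp(bit [63:0]    addr,
                                                         byte unsigned data[]);
  uvm_tlm_generic_payload gp = new("gp");
  gp.set_command(UVM_TLM_WRITE_COMMAND);   // generic write, no bus protocol implied
  gp.set_address(addr);
  gp.set_data(data);                       // payload bytes
  gp.set_data_length(data.size());
  gp.set_streaming_width(data.size());
  gp.set_byte_enable_length(0);            // no per-byte enables in this example
  gp.set_response_status(UVM_TLM_INCOMPLETE_RESPONSE);
  return gp;
endfunction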

To address this main TLM 2.0 requirement, Cadence is working within the IEEE 1800 committee to propose extending the DPI to handle passing objects between different object-oriented languages. Requirements such as passing items by reference or querying hierarchy, and others that are not part of TLM 2.0, will be standardized as language features and will hopefully be supported by all vendors. Cadence is working with multiple users who are asking for this solution. If you wish to support this effort, follow Guideline #5: Join a standardization body or encourage your vendor to support standard multi-language communication :-)
Summary of recommendations regarding TLM2.0 and verification:

Guideline #1:  In the monitor, keep using the analysis port.
Guideline #2: Use the abstract producer/consumer ports in your code and avoid using the TLM connections directly.
Guideline #3: Check if and how usage of GP can help your specific verification challenges.
Guideline #4: Remember that the current UVM TLM 2.0 multi-language support is not part of the standard library and may lock you to a specific vendor and implementation.
Guideline #5: Join a standardization body or encourage your vendor to support standard multi-language communication.

In summary, if you find the TLM 2.0 extensions to UVM to be complex, don't worry, you don't really need to bother with them.  You will probably find the TLM 1.0 communication more than sufficient for most of your testbench development needs.  You might find the Generic Payload useful for abstract modeling of transactions, and you can easily adopt GP without worrying about the rest of the TLM 2.0 complexity.  The main requirement for verifying/integrating SystemC TLM 2.0 models with a SystemVerilog testbench is not yet part of the UVM standard, so we invite you to join the effort to standardize a solution for this problem.

Synopsys Introduces Virtualizer Next-Generation Virtual Prototyping Solution

Highlights:
  • Accelerates software development schedules by up to nine months and delivers up to 5X increase in design productivity compared to traditional methods
  • Leverages proven virtual prototyping technologies deployed at more than 50 leading semiconductor and electronic systems companies
  • Fast and accurate simulation with comprehensive system visibility and control delivers near real-time software execution with unparalleled debug and analysis efficiency
  • Integral part of the industry’s most comprehensive solution of tools, models and services for early software development, hardware/software integration, and system validation
  • Enables efficient software-driven verification by linking to Synopsys’ HAPS® FPGA-based prototyping systems and VCS® functional verification solution, as well as other environments
MOUNTAIN VIEW, Calif., July 19, 2011 -- Synopsys, Inc. (Nasdaq: SNPS), a world leader in software and IP for semiconductor design, verification and manufacturing, today announced the availability of Synopsys’ Virtualizer tool set as part of its next-generation virtual prototyping solution. Virtualizer addresses the increasing development challenges associated with software-rich semiconductor and electronic products by enabling companies to accelerate both the development of virtual prototypes and the deployment of these prototypes to software teams throughout the design chain. Prototypes created with Virtualizer allow engineers to accelerate software development schedules by up to nine months, and deliver up to a 5X productivity boost over traditional approaches to teams performing software development, hardware/software integration, system-on-chip (SoC) verification and system validation. 

“As designs increase in complexity and software content to meet the demand for smart devices, companies need to reduce the risk of embedded software project delays and improve developer productivity,” says Steve Balacco, director, embedded software and tools practice, VDC Research. “Synopsys delivers a virtual prototyping solution that directly addresses the debug and analysis needs of embedded software developers in semiconductor and electronic products companies, while bridging the gap with hardware development flows.”

Virtualizer leverages proven technologies from Synopsys’ acquisitions of Virtio, VaST and CoWare as well as expertise gained from deployments at more than 50 leading semiconductor and electronic systems companies. For developers creating a virtual prototype, Virtualizer’s graphical design entry, software debug, and analysis components combined with Synopsys’ broad portfolio of system-level models deliver the fastest time to prototype availability. For software engineers using a virtual prototype of their system to create, integrate, and verify software, Virtualizer Development Kits (VDKs) offer a cost-effective development platform capable of executing unmodified production code at near real-time speed. VDKs provide fast and accurate virtual prototype simulation combined with unmatched multicore-aware software debug and analysis capabilities, concurrent hardware/software analysis, and synchronized debugging with third-party software debuggers and integrated development environments (IDEs). Open and standards-based, Virtualizer supports key industry standards such as OSCI TLM-2.0 and SystemC™ and runs on both Windows and Linux operating systems. 

“Companies deploying virtual prototypes need to easily integrate with existing software development tools,” said Norbert Weiss, international sales and marketing manager at Lauterbach. “The integration of Lauterbach’s TRACE32® with Synopsys’ Virtualizer enables development teams to start software development earlier in a more productive way, as well as expand these benefits from semiconductor to electronic systems companies.”

Virtualizer’s broad set of integration capabilities enables development teams to be more efficient and increase the degree of concurrent engineering in their product development process. Combined with FPGA-based prototyping such as Synopsys’ HAPS systems, Virtualizer facilitates faster SoC validation and software bring up at near real-time performance. Connecting Virtualizer with RTL simulators such as Synopsys’ VCS and emulation platforms such as Eve’s Zebu enables the use of embedded software in hardware verification environments.  Software developers can integrate prototypes based on Virtualizer with their existing debuggers and IDEs, retaining their existing software tool investment. Virtualizer also gives electronic product developers the ability to conduct system validation by networking multiple virtual prototypes together with physical system simulation, testbenches and virtual I/Os. With this broad range of integration capabilities, Virtualizer is uniquely positioned to support the entire electronic supply chain by accelerating development at all stages of the product design cycle.

“With growing hardware complexity, it is critical for verification engineers to start their work as early as possible, and exercise the design with as much real system software as possible,” says Lauro Rizzatti, general manager and marketing vice president of EVE. “The customer-proven integration of Synopsys’ leading virtual prototyping solution and Eve Zebu’s fast emulation platform enables true software-driven scenarios, extending verification coverage and confidence and reducing verification schedules by up to six months.”

“We are focused on helping our customers address their top system-level design challenges: starting software development earlier, accelerating hardware/software integration, and performing full system validation and testing,” says John Koeter, vice president of marketing for IP & systems at Synopsys. “Synopsys’ complete virtual prototyping solution – which includes Virtualizer, an extensive model library, services and support – enables our customers to start software design up to 12 months before first silicon is available. In addition, VDKs cost- effectively enable the development and integration of software throughout the design chain, from IP to SoCs to full systems. At a time of exploding software content at all levels of electronics, Virtualizer enables semiconductor and systems companies to start their software tasks earlier and avoid the risk of surprises late in the development cycle.” 
Availability
Virtualizer is available immediately. Virtualizer Development Kits (VDKs), which incorporate a subset of Virtualizer features specifically targeted for end use cases by software and hardware developers, are also available immediately. For more information, please visit: http://www.synopsys.com/Virtualizer.
About Synopsys
Synopsys, Inc. (Nasdaq:SNPS) is a world leader in electronic design automation (EDA), supplying the global electronics market with the software, intellectual property (IP) and services used in semiconductor design, verification and manufacturing. Synopsys’ comprehensive, integrated portfolio of implementation, verification, IP, manufacturing and field-programmable gate array (FPGA) solutions helps address the key challenges designers and manufacturers face today, such as power and yield management, system-to-silicon verification and time-to-results. These technology-leading solutions help give Synopsys customers a competitive edge in bringing the best products to market quickly while reducing costs and schedule risk. Synopsys is headquartered in Mountain View, California, and has approximately 70 offices located throughout North America, Europe, Japan, Asia and India. Visit Synopsys online at http://www.synopsys.com.
# # #
Synopsys, VCS and HAPS are registered trademarks of Synopsys, Inc. All other trademarks or registered trademarks mentioned in this release are the intellectual property of their respective owners.
The complete destruction of the consumer PC market in the US and Europe is well within Apple's grasp and will begin to unfold next summer. There is nothing that Intel, Microsoft or the retail channels can do to hold back the tsunami that was first set in motion with the iPad last year and comes to completion with the introduction of one more mobile product and the full launch of the iCloud service for all. The dollars left on the table to defend against the onslaught are insufficient to put up a fight. Collapse is at hand.

In the military realm there are plenty of examples of wars that continue seemingly forever, with no side able to gain the upper hand, and then, in just a matter of months, a sudden collapse due to a lack of fighting men, shortages of food and armament, and finally a realization that there is no home front left to defend. The American Civil War showed no clear winner until the late summer and fall of 1864, when General Sherman marched on Atlanta and then to Savannah, presenting the city to Lincoln as a Christmas present. Along the way he destroyed everything along a 50-mile-wide by 250-mile-long path. Farms, crops, railroads, plantations and the railroad distribution channel were taken out, leaving the Confederate soldier with nothing to carry on the fight. Suddenly, a collapse.

The full range of Apple products, in combination with the ever-expanding number of Apple stores (there are 339 worldwide, with 56 more coming online), is sucking the oxygen out of retailers like Best Buy, which wonder what they will sell under their big roofs. PCs themselves were never a big profit center for retailers. It was the accessories that offered huge profit margins: things like mice, carrying cases, earphones and especially service agreements were the money winners. But with customers now going to Apple Stores or Apple online, the accessory business moves to Apple. Without this business, retailers cut back on the number of PCs they have on display, which leads to a distribution problem for PC makers and a natural decline in PC sales.

But this is the least of the worries for PC makers and companies like Microsoft, Google and Intel, because in one year Apple will split the MAC Air line to introduce a new product I will call the MAC Air – iCloud. Using the same skins as the MAC Air, the new mobile product will come with an A6 processor and an integrated 3G or 4G wireless solution for access to iCloud from anywhere. The underlying hardware will be similar to what is in an iPad, but with perhaps a little more DRAM to handle productivity apps that are delivered to the device from the remote, x86-based iCloud servers. The MAC Air iCloud is a rendering machine for Office apps that don't run on the A6 processor. All other iOS apps will be available to the mobile device.

There are many rumors that Apple will switch the MAC Air to an A6 next year, but this will not be the case. Apple is not ready to take this step yet. It will open the MAC Air business for bid between AMD and Intel, with the bet that the processor cost will decline from $220 today to $75 by the middle of next year. Look for Intel to hold the business but give Apple the price break it needs to drop the entry-level MAC Air to $799 from $999 today. The volume will increase dramatically, killing off what is left of the $500-$1,000 PC market.

The MAC Air – iCloud will come out at a $399 price for consumers who agree to pay $25 per month for at least 24 months to gain access to the Office apps and store data files in the cloud. Like the iPhone service plans, this one will pull in many customers who might previously have chosen a PC. If you look at the Best Buy data on their web site, the high-volume runners all sell for less than $449, and the majority sell for $349 - $449. This is the sweet spot of what is left of the consumer PC market. This is why Apple will stick the MAC Air iCloud right in between, at $399.

For Microsoft and Intel, this is the end of their consumer market and of the business model that sucks out the majority of the dollars. Microsoft still demands $30 per PC in O/S royalties, and Intel and AMD look to get at least $50-$60 per PC for the processor. By moving customers over to iCloud, Apple gets to reap the margins of the entry-level consumer who wants to join a better ecosphere, one that is more secure, offers better mobility and, in the end, provides much better hands-on customer support.

Apple will coddle users even more with the iCloud-based mobile devices. The retailers don't stand a chance, which means the PC makers will see distribution channels shrivel. Microsoft, Google, Intel and the rest of the PC supply chain have to think about how to change their business models to get as close as a handshake away from their customers.

HP Will Farm Out Server Business to Intel


In a Washington Post column this past Sunday, Barry Ritholtz, a Wall Street money manager who writes the blog The Big Picture, recounts the destruction that Apple has inflicted on a wide swath of technology companies (see And then there were none). He calls it “creative destruction writ large.” Ritholtz, though, is only accounting for what has occurred to date. I would contend that we are about to start round two, and the changes coming will be just as significant. If I were to guess, HP will soon decide to farm out its server business to Intel, and Intel will soon realize that it needs to step up to the plate, for a number of reasons.

When HP hired Leo Apotheker, the ex-CEO of software giant SAP, the Board of Directors (which includes Marc Andreessen and Ray Lane, formerly of Oracle) implicitly fired a flare gun signaling that the company was in distress and was going to make radical changes as it reoriented itself into the software sphere of the likes of Oracle and IBM. To do this, it had to follow in IBM's footsteps by first stripping out PCs. IBM, however, sold its PC group to Lenovo back in 2004, before the last downturn. Unfortunately for HP, it will get much less for its PC business than it paid for Compaq.

The next step for HP is risky but necessary. It needs to consolidate server hardware development under Intel. Itanium-based servers, selling at a run rate of $500M a quarter at HP, are now less than 5% of the overall server market, compared to IBM Power and Oracle SPARC, which together account for nearly 30% of server dollars. Intel and AMD x86 servers make up the rest (see the chart below). In addition, IBM's mainframe and Power server businesses are growing while HP's Itanium business is down 10% year over year.


Oracle's acquisition of Sun always intrigued me: was it meant as a short-term effort to force HP to retreat on Itanium, or as a much longer-term strategy of giving away hardware with every software sale? When Oracle picked up Sun, Sun still held a solid #2 position in the RISC world, next to IBM. By taking on Sun, Oracle guaranteed SPARC's survival and at the same time put a damper on HP growing more share. New SPARC processors were not falling behind Itanium, as Intel scaled back on timely deliveries of new cores at new process nodes. More importantly, the acquisition was a signal to ISVs (Independent Software Vendors) not to waste their time porting apps to yet another platform, namely Itanium. Oracle made sure that HP was seen as an orphaned child when it announced earlier this year that it was withdrawing support for Itanium.

There is only one architecture at this moment that can challenge SPARC and Power, and it is x86. It is in HP's interest to consolidate on x86 and reduce its hardware R&D budget. If needed, a software translator can be written to get any remaining Itanium apps running on x86. Since the latest Xeon processors are three process nodes ahead of Itanium, there should be little performance difference. But what about Intel: does it want to be the box builder for HP?

I would contend that Intel has to get into the box business and is already headed there. The chief issue holding it back is the reaction from HP, Dell and IBM. None of them is generating great margins on x86 servers. With regard to Dell, Intel could buy it off with a processor discount on the standard PC business, especially since Dell will now be the largest-volume PC maker. IBM is trickier.

But why does Intel want to go into the server systems business? The answer is severalfold. From a business perspective, Intel needs more silicon dollars as well as sheet-metal dollars. Intel sees another $20-$30B opportunity in ramping up, and it will need it to counteract any flatness or drop in the processor business on the client side. Earlier this year, Intel bought Fulcrum; if Intel builds the boxes for the data center, it has the potential to eat away at Broadcom's $1B switch chip business.

A more interesting angle is the data center power consumption problem. Servers consume 90% of the power in a data center. It used to be that processors accounted for the majority of that power, but with the performance gap between processors and DRAM growing and the rise of virtualization, it is now a processor and memory problem. Intel is working on platform solutions to minimize power, but it expects to get paid for its inventions.

Intel has started to increase prices on server processors based on reducing a data center's power bill. Over the course of the next few years it will let processor prices creep up, even with the looming threat of ARM. This is a new value proposition that can be taken one step further: if Intel builds the entire data center box with processors, memory, networking and eventually storage (starting with SSDs), then it can maximize the value proposition to data centers, which may not have alternative suppliers.

In some ways Intel is at risk if it just delivers silicon without building the whole data center rack. There are plenty of design groups at places like Google, Facebook and others who understand the tradeoffs of power and performance and would like to keep cranking out new systems based on the best available technology. By putting its big foot down, Intel could eliminate these design groups and make it more difficult for a new processor entrant (AMD- or ARM-based) to enter the game.

What changes to expect in Verification IP landscape after Synopsys acquisition of nSys?

Even if the acquisition of nSys by Synopsys will not have a major impact on Synopsys' balance sheet, it is something of an earthquake in the verification market landscape. After the Denali acquisition by Cadence in 2010, nSys was most probably the market leader in verification IP among the independent VIP providers (excluding Cadence). The company's VIP portfolio bears comparison with Cadence's, as nSys supports the PCI family, communication standards, storage interfaces, USB, DDR3/DDR2 memory controllers, MIPI, AMBA and miscellaneous other interfaces. Because nSys is privately owned, we don't know the company's revenue, nor whether it is profitable. Was this acquisition an “asset sale,” just an opportunistic deal closed by Synopsys at low cost, with the side effect of competing directly with Cadence in a market where Cadence has invested heavily during the last three years? Or is the goal to consolidate Synopsys' position in the interface IP market, where the company is the dominant provider, present and leading in every segment (DDRn, PCIe, USB, SATA, HDMI, MIPI…), by adding “independent” VIP to the current offering?



Synopsys was offering “bundled” VIP, which is not the best way to establish the product's value, as Design IP customers expect to get bundled VIP almost for free. If Synopsys' acquisition of nSys marks a real strategy inflection, another side effect will be that the “Yalta description” (Cadence dominant in VIP, Synopsys in the IP market) will no longer be accurate!

Only the Synopsys and nSys management teams know the answer. Today we can only evaluate the impact of this acquisition on the day-to-day life of SoC design teams whose SoC integrates an interface IP... which happens in most cases.

An interesting testimonial on nSys web site: "We had debated using bundled VIP solutions, which were available with PCIe IP, but after evaluating the nSys Verification IP for PCIe, we dropped the idea. We were impressed by the level of maturity of the PCIe nVS. We also realized that the PCIe nVS provided us with the ability to do independent verification of the IP that could not have been achieved with the bundled models. The nSys solution has helped our engineering team increase productivity too." From Manoj Agarwal, Manager ASIC, EFI.

The important word in this testimonial is “independent”. We have expressed this concern in the past, saying:

“A common-sense remark about the BFM and the IP: when you select a VIP provider to verify an external IP, it is better to make sure that the design teams for the BFM and for the IP are different and independent (high-level specification and architecture done by two different people). This is to avoid a “common mode failure,” a principle well known in aeronautics, for example.”

A SoC project manager will now have the option of buying an “independent” VIP to verify the interface function… from the very vendor selling the IP. He can still buy it from the main competitor (Cadence) or from one of the remaining VIP providers (Avery, ExpertIO, PerfectVIP, Sibridge Technology, SmartDV Technology), but the one-stop-shop argument (buy the controller, the PHY and the verification IP together) will be reinforced, especially because the VIP now comes from what was a genuinely independent source.

Is the Synopsys acquisition of nSys an opportunistic asset sale? Honestly, I don't know, but it is certainly a stone thrown into Cadence's garden (the company bought Denali in 2010, and products from Yogitech SpA, IntelliProp Inc. and HDL Design House in October 2008, to consolidate its VIP portfolio) and a threat for the remaining VIP providers. Is it good news for Synopsys customers? Yes, because the “one-stop shop” will ease the procurement and technical support process. Synopsys customers should just make sure to buy at the right price… market consolidation can make life easier… and prices higher!


2.5D and 3D designs

Going up! Power and performance issues, along with manufacturing yield issues, limit how much bigger chips can get in two dimensions. That, and the fact that you can't manufacture two different processes on the same wafer, mean that we are going up into the third dimension.

The simplest way is what is called package-in-package where, typically, the cache memory is put into the same package as the microprocessor (or the SoC containing it) and bonded using traditional bonding technologies. For example, Apple's A5 chip contains an SoC (manufactured by Samsung) and memory chips (from Elpida and other suppliers). For chips where both layouts are under the control of the same design team, microbumps can also be used as a bonding technique, flipping the top chip over so that the bumps align with equivalent landing pads on the lower chip, completing all the interconnectivity.


The next technique, already in production at some companies like Xilinx, is to use a silicon interposer. This is (usually) a large silicon "circuit board" with perhaps four layers of metal, built in a non-leading-edge process and usually also containing a lot of decoupling capacitors. The other die are microbumped and flipped over onto the interposer, and the interposer is connected to the package using through-silicon vias (TSVs). Note that this approach does not require TSVs on the active die, avoiding a lot of complications.

It will be several years before we see true 3D stacks with TSVs through active die and more than two layers of silicon. This requires a lot of changes to the EDA flow, a lot of changes to the assembly flow, and the exclusion areas around TSVs (where no active circuitry can be placed) may be prohibitive, forcing the TSVs to the periphery of the die and thus significantly lowering the number of possible connections between die.

But all of these approaches create new problems in verifying power, signal and reliability integrity. Solving them requires a new verification methodology that provides accurate modeling and simulation across the whole system: all the die, interposers, package and perhaps even the board.

TSVs and interposer design can cause inter-die noise and other reliability issues. As noted above, the interposer usually contains decaps, so the power supply integrity analysis needs to take these into account. In fact, it is not possible to analyze a die in isolation, since the power distribution is on the interposer.

One approach, if all the die layout data (including the interposer) is available, is to do concurrent simulation. Typically some of the die may come from an IP or memory vendor, and in this case a model-based analysis can be used, with CPMs (chip power models) standing in for the detailed data that is unavailable.


One challenge that going up into the third dimension creates is thermally induced failures. Obviously, heat generated in the middle of the stack has a harder time getting out than in a traditional two-dimensional chip design. The solution is to create a chip thermal model (CTM) for each die, which must include temperature-dependent power modeling (leakage is very dependent on temperature), metal density and self-heating power. By handing all these models to a chip-package-system thermal/stress simulation tool for power-thermal co-analysis, the power and temperature distribution can be calculated.

A final problem is signal integrity. The wide I/O interface (maybe thousands of connections) between the die and the interposer can cause significant jitter due to simultaneous switching. Any SSO (simultaneously switching outputs) solution needs to consider the drivers and receivers on the different die as well as the layout of the buses on the interposer. Despite the interposer being passive (no transistors), its design still requires a comprehensive CPS (chip-package-system) methodology.

Going up into the 3rd dimension is an opportunity to get lower power, higher performance and smaller physical size (compared to multiple chips on a board). But it brings with it new verification challenges in power, thermal and signal integrity to ensure that it is all going to perform as expected.

Duolog Technologies is First IP Integration Company to Join TSMC Reference Flow 12.0

Dublin, Ireland – August 9th 2011 - Duolog Technologies, the award-winning developer of IP and SoC integration products, today announced that its Socrates integration applications will be available to TSMC customers as part of the TSMC Reference Flow 12.0, the foundry’s latest design reference flow for its advanced process technology. 

“Duolog provides designers with an efficient way to manage the complexity and performance of highly integrated SoCs, and is a valuable part of the TSMC Reference Flow,” said Suk Lee, Director of Design Infrastructure Marketing, TSMC.
The Socrates tool suite enables rapid IP integration by creating integration-ready IP and quickly assembling subsystems and SoCs. All Socrates tools work with open standards including UVM, TLM2.0 and IP-XACT to reduce the integration process from months to just minutes.
“We are delighted that our Socrates tool suite is included in the TSMC Reference Flow,” commented Brian Clinton, VP of Worldwide Product Support, Duolog Technologies. “Socrates addresses the many challenges of IP integration and reuse. Working with TSMC will enable our mutual customers to conquer these ever increasing challenges.”

About Duolog Technologies: Duolog Technologies is a leading developer of EDA tools that address the increasingly complex challenges of IP integration.  We enable our customers to deliver integrated systems more quickly and cost effectively than their competitors.  Our innovative products and solutions allow for maximum productivity and control throughout the entire SoC lifecycle.

TSMC 28nm and 20nm Update!

First, congratulations to Samsung on their first 20nm test chip press release. Some will say it is a foundry rookie mistake since real foundries do not discuss test chip information openly. I like it because it tells us that Samsung is 6-9 months BEHIND the number one foundry in the world on the 20nm (gate-last HKMG) process node. Samsung gave up on gate-first HKMG? ;-)


Unfortunately, the latest news out of TSMC corporate is that 28nm revenues will be 1% of total revenues in 2011 versus the forecasted 2%. Xbit Labs did a nice article here. The official word is that:

"The delay of the 28nm ramp up is not due to a quality issue, we have very good tape-outs. The delay of ramp up is mainly because of softening economy for our customers. So, customers delayed the tape-outs. The 28nm revenue contribution in Q4 2011 will be roughly about 1% of total wafer revenue," said Lora Ho, senior vice president and chief financial officer of TSMC.

TSMC's competitors, on the other hand, are whispering that there is a 28nm yield problem, using the past 40nm yield ramping issues as a reference point. Rather than speculate and pull things out of my arse, I asked people who actually have 28nm silicon how it is going. Unanimously, the answer was, "TSMC 28nm yield is very good!" Altera and Xilinx are already shipping 28nm parts. The other markets I know with TSMC 28nm silicon are microprocessors, GPUs, and MCUs.

"We are far better prepared for 28nm than we were for 40nm. Because we took it so much more seriously. We were successful on so many different nodes for so long that we all collectively, as an industry, forgot how hard it is. So, one of the things that we did this time around was to set up an entire organization that is dedicated to advanced nodes. We have had many, many test chips run on 28nm, we have working silicon," said Jen-Hsun Huang, chief executive officer of Nvidia.

It is easy to blame the economy for reduced forecasts after what we went through in 2009 and the current debt problems being over reported around the world. The recent US debt debacle is an embarrassment to every citizen of the United States who votes. Next election I will not vote for ANY politician currently in office, but I digress….

So the question is: Why do you think TSMC is REALLY reporting lower 28nm revenues for 2011?

Consider this: TSMC is the first-source winner for the 28nm process node, without a doubt. All of the top fabless semiconductor companies will use TSMC for 28nm, including Apple, AMD, Nvidia, Altera, Xilinx, Qualcomm, Broadcom, TI, LSI, Marvell, Mediatek, etc……. These companies represent 80%+ of the SoC silicon shipped in a year (my guess).

One of the lessons semiconductor executives learned at 40nm is that silicon shortages delay new product deliveries, which cause billions of dollars in lost stock valuation, which gets you fired. Bottom line is semiconductor executives will be much more cautious in launching 28nm products until there is excess capacity, which will be mid 2012 at the earliest.

Other relevant 2011 semiconductor business data points:
  1. The Android tablet market is DOA (iPad2 rules!)
  2. The PC market is dying (Smartphone and tablets, Duh)
  3. Mobile phones are sitting on the shelf (Are we all waiting for the iPhone5?)
  4. Anybody buying a new car this year? Not me.
  5. Debt, debt, unemployment, debt, debt, debt…….
Not all bad news though, last Friday was the 30th anniversary of the day I met my wife and here is how great of a husband I am: First I went with my wife to her morning exercise class. 30+ women and myself dancing and shaking whatever we got. It was a very humbling experience, believe me! Next was a picnic on Mt Diablo recreating one of our first dates, then dinner and an open air concert at Blackhawk Plaza. Life as it should be!

Accellera Approves UVM 1.0 – Bold Step Forward for Functional Verification


There's big news today (Feb. 18) in the functional verification world. As noted in various tweets from standards group members, the Accellera standards organization board has unanimously approved the Universal Verification Methodology (UVM) 1.0 as an industry standard for verification interoperability. Accellera is also offering a UVM tutorial at DVCon in San Jose, Calif. Monday, Feb. 28.
As has been noted in a number of Cadence Community blogs, as well as ongoing press coverage, UVM 1.0 will have a profound impact on functional verification. It establishes a standard, backed by all major EDA vendors, for verification IP (VIP) and testbench interoperability. Just having the SystemVerilog language as a standard is not enough. There's also a need for a standard methodology so that VIP and testbenches can be reusable and interoperable in different simulation environments.
In an earlier blog post Stan Krolikoski, group director for standards at Cadence, said that UVM is "the largest EDA standard/reference implementation effort since the original OSCI SystemC simulator was developed in the early 2000s." It's proceeded quickly, starting with a December 2009 vote to make the Open Verification Methodology (OVM) developed by Cadence and Mentor Graphics the basis of UVM. Cadence has been actively involved in this standards effort and literally "wrote the book" by publishing A Practical Guide to Adopting the Universal Verification Methodology (UVM) by Sharon Rosenberg and Kathleen Meade last year.
This morning, Stan said that "UVM represents a coming together of users and vendors of verification tools/environments to solve a major industry problem -- the existence and popularity of two incompatible verification methodologies, OVM and VMM.  Rather than accept this state of confusion, the VIP Technical Subcommittee came together and produced a single verification methodology, UVM.  With the approval of the UVM 1.0 standard and the subsequent release of the accompanying reference implementation and user's guide, the community will be able to develop verification IP that is interoperable across simulators and verification environments.  This is a seminal example of industry leaders working as a team to harness the power of standardization for the good of both users and vendors."
What's New In UVM 1.0
The UVM 1.0 EA (Early Adopter) release was approved in May 2010. Since it was nearly identical to OVM 2.1.1, it was production ready -- but many design teams were hesitant to adopt it because of the "Early Adopter" label. UVM 1.0 adds a few features, including:
  • A run-time phasing feature that will allow UVM verification components to control aspects of the simulation cycle such as reset, configuration, execution, and shutdown (see the sketch after this list).
  • A register package that will provide a connection between the description of registers and the verification environment to allow control, randomization, and coverage.
  • Support for a subset of Open SystemC Initiative (OSCI) transaction-level modeling TLM 2.0 communication within SystemVerilog.
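To make the run-time phasing item concrete, here is a minimal, hypothetical sketch of a UVM 1.0 component that holds the testbench in the new reset and main phases by raising and dropping objections. The component name, messages, and delays are invented for illustration; a real environment would drive an actual DUT interface rather than simply waiting.

  // Minimal sketch (hypothetical names and delays) of UVM 1.0 run-time phasing.
  // Each run-time phase ends only after every raised objection has been dropped.
  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class tb_resetter extends uvm_component;
    `uvm_component_utils(tb_resetter)

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    // reset_phase: keep the environment in reset until this objection drops
    task reset_phase(uvm_phase phase);
      phase.raise_objection(this, "applying reset");
      `uvm_info("RESET", "holding reset for 100 ns", UVM_LOW)
      #100ns;
      phase.drop_objection(this, "reset complete");
    endtask

    // main_phase: the bulk of the stimulus runs here
    task main_phase(uvm_phase phase);
      phase.raise_objection(this, "main stimulus");
      `uvm_info("MAIN", "running main-phase traffic", UVM_LOW)
      #1us;
      phase.drop_objection(this, "main stimulus done");
    endtask
  endclass

The same objection mechanism extends to the configuration and shutdown phases, which is what lets components coordinate events such as reset or power-down without hand-coded synchronization.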
In short, the time to adopt UVM 1.0 is now. And the work isn't done yet. As noted in previous blog posts, Cadence would like to see the UVM standard extended to provide support for other design/verification languages such as e and SystemC. At DVCon, a Cadence paper will present UVM-MS, a methodology that brings metric-driven verification to mixed-signal design.

UVM-MS – Metric-Driven Verification for Analog IP and Mixed-Signal SoCs


Metric-driven verification and constrained-random stimulus generation have greatly eased digital functional verification, but have rarely been applied to analog IP or mixed-signal SoCs. That may change with a proposed methodology called Universal Verification Methodology-Mixed Signal (UVM-MS), which will be described in a DVCon paper March 1 presented by Cadence and LSI Corp.
UVM 1.0 is an emerging verification interoperability standard that is soon to be released by Accellera. The Cadence/LSI work on mixed-signal metric-driven verification started with the Open Verification Methodology (OVM), which was the basis of UVM. An SoC Realization track paper at CDNLive! Silicon Valley last year, OVM-Based Verification of Analog IP and Mixed-Signal SoCs, offered a preview of what will be presented this year at DVCon.
The DVCon paper, titled UVM-MS: Metrics Driven Verification of Mixed Signal Designs, is authored by Neyaz Khan and Yaron Kashai of Cadence and Hao Fang of LSI Corp.  I recently talked to Neyaz and Yaron to learn more.
Metric-Driven for Analog
Metric-driven verification (MDV) provides a systematic approach to verification that captures intent with an executable verification plan (vPlan), automatically generates test stimulus, and tracks the progress of verification through coverage metrics. This makes it possible to determine when high-quality verification closure has been achieved. An MDV diagram is shown below.

[Figure: metric-driven verification (MDV) flow diagram]
So why apply this approach to analog/mixed-signal? Neyaz noted that the analog content of SoCs is growing, and that the quality of verification has become a concern, especially at the interfaces between analog and digital. "There has been a lot of progress in the past 10-15 years in applying MDV to the digital side, but nothing like that has happened on the analog side," he noted.
The UVM-MS methodology primarily concerns functional coverage, which can provide measurements of parameters such as frequency and gain. For example, you may be looking at a variable gain amplifier and trying to verify that the output gain matches the spec.   After generating stimulus to test the amplifier by sweeping the input frequency over the allowed range, you can then use functional coverage metrics along with automated checkers to make sure that the desired range of the gain has been completely tested.
To allow analog functional coverage, the methodology uses the e language to create "signal ports" that sample analog parameters. Due to some limitations in SystemVerilog, the UVM-MS methodology is currently based on e, but the long-term goal is to work with SystemVerilog, Neyaz said. While Cadence offered MDV well before UVM, the new methodology leverages UVM because of its broad vendor acceptance as an industry standard.
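As a rough illustration of the coverage idea (not the actual UVM-MS code, which uses e-language signal ports), the hypothetical SystemVerilog fragment below quantizes a measured amplifier gain into whole decibels so that an ordinary covergroup can bin it; the class name and bin ranges are invented for the example.

  // Conceptual sketch only: quantize an analog measurement (gain in dB) to an
  // integer so standard functional coverage machinery can bin it.
  class vga_gain_monitor;

    covergroup gain_cg with function sample(int gain_db);
      coverpoint gain_db {
        bins below_spec = {[0:9]};     // hypothetical below-spec region
        bins in_spec    = {[10:20]};   // hypothetical required gain range
        bins above_spec = {[21:40]};   // hypothetical above-spec region
      }
    endgroup

    function new();
      gain_cg = new();
    endfunction

    // Called by the testbench whenever a new gain measurement is available
    function void record_gain(real gain_db);
      gain_cg.sample($rtoi(gain_db));  // truncate to whole dB before sampling
    endfunction
  endclass

Sweeping the input frequency and recording the measured gain this way lets the usual coverage reports answer whether the whole specified gain range has actually been exercised.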
Supporting Existing Styles
Analog simulation is traditionally interactive and slow, while digital is batch and fast - how can designers bridge that gap? The UVM-MS methodology supports all analog modeling styles including Spice, Verilog-AMS, and real number models, allowing a speed-versus-accuracy tradeoff.  Low-level analog operations are facilitated by a Verilog-AMS layer that runs underneath the hardware verification language code.  A library of components supports commonly used interfaces between analog signals and testbench functions.
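For readers unfamiliar with real number modeling, here is a hedged, hypothetical sketch of what such a model can look like: a variable-gain amplifier reduced to a single real-valued equation so it simulates at digital speed. The module, parameter, and port names are invented for the example.

  // Hypothetical real-number model: an event-driven, real-valued stand-in for
  // a Spice or Verilog-AMS view of a variable-gain amplifier.
  module vga_rnm #(parameter real GAIN_DB = 12.0)
    (input  real vin,
     output real vout);
    // Convert the dB gain to a linear factor and scale the input
    always_comb vout = vin * (10.0 ** (GAIN_DB / 20.0));
  endmodule

Swapping such a model in for the transistor-level view is what provides the speed-versus-accuracy tradeoff described above.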
The overall verification methodology, Yaron said, "is pretty much the same as doing MDV with digital, but certain things need to be done to facilitate the integration with analog. The model has to be tied to the testbench in a certain way. One has to structure the verification environment slightly differently for analog design. You need to instantiate some of these [digital] blocks to talk to analog ports and drive some analog signals."
Neyaz noted that the methodology does require teams to invest more effort in analog IP verification. "Keeping in mind that the IP is designed once and used in multiple chips, having high quality IP is a very good investment," he said. The paper details how a current design from LSI Corp. was used for proof of concept.
The DVCon paper is part of session 3.4, which starts at 8:30 am Tuesday March 1 at the DoubleTree Hotel in San Jose, Calif. Conference registration is available at the DVCon web site.

Accellera Adopts OVM 2.1.1 for its Universal Verification Methodology (UVM)

This was a productive week for Accellera. After months of discussions, the Accellera Verification IP Technical Subcommittee (VIP-TSC) voted to adopt OVM 2.1.1 as the base of its verification methodology. Accellera’s OVM version will be called UVM.
In adopting OVM 2.1.1, Accellera signaled it will make further changes. The VIP-TSC has approved changes to (1) modify file names that have OVM in them to UVM, (2) modify any function calls and element names from “ovm” to “uvm,” (3) make possible changes to the “end-of-test” and “callback” code found in OVM 2.1.1, and (4) add a “message catching” feature.
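As a rough before-and-after view of changes (1) and (2), a hypothetical OVM component would be renamed along these lines; the class itself is invented for the example.

  // Before: OVM 2.1.1 style (library file names such as ovm_pkg.sv also change)
  class my_env extends ovm_env;
    `ovm_component_utils(my_env)
    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction
  endclass

  // After: UVM style -- base classes, macros, and file names move from "ovm" to "uvm"
  class my_env extends uvm_env;
    `uvm_component_utils(my_env)
    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction
  endclass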
This is good news for Accellera and great news for those who have adopted OVM and use OVM today. I believe OVM users will find it easier to interoperate with this third methodology. As a strong supporter of Accellera standards, we will keep you updated on Accellera UVM as developments merit. Users should expect solid industry support in all their popular tools as the Big-3 EDA suppliers have all voiced public support for this standards project.

Motivation for the UVM

In the beginning, there was SystemVerilog, and it was good. Through it some testbenches were made; without it other testbenches were made. In SystemVerilog was light, but also darkness in the form of a set of missing features that had to be implemented as a library on top of verification languages by each user, and also in the form of a lack of interoperability of language features between simulators.
To address the missing features there came a verification methodology sent from Synopsys; its name was VMM. The URM from Cadence and the AVM from Mentor came also and later merged to form the OVM, so that through them all engineers might believe in verification libraries.  The libraries did not completely address users' concerns, but they did serve to confirm that those concerns were valid and worthy of consideration.
The libraries were the solution, and though the solutions were made through them the verification community did not recognize them as the solution because interoperability had not been solved.
Cool Verification 1:1-15 … ;-)
Ahem… As many of you are aware, the Accellera Verification IP Technical Subcommittee (VIP TSC) is currently working on creating a unified universal verification methodology (UVM) that will be supported by the big three EDA vendors.  Ostensibly the library is being created so that users don't have to make a (potentially limiting) choice between the OVM and VMM, but can instead use a library that is considered an industry standard. Sounds good, right? I'm going to make the potentially controversial claim that very few semiconductor companies actually care about using an industry standard methodology.

Purchasing managers have largely bought into the fallacy that by using SystemVerilog they can easily switch vendors should they choose to do so. But language support between the vendors is still different enough to make it challenging to maintain code for more than one simulator. Vendors have been resistant to publicly sharing the level of support their simulators have for the SystemVerilog language, and no independent comparison is possible due to onerous licensing restrictions.
The VIP TSC will be meeting face to face the second week of March to work out the requirements for the UVM development effort. If my suspicions prove accurate, a lot of time and energy will be spent debating the fine points of verification methodology architecture, but the end result may be no closer to helping the average verification engineer solve the most pressing issues they face. But as usual, I am happy to be proven wrong. Is the UVM's purpose to finally solve the SystemVerilog cross-methodology interoperability problem once and for all, or is it a red herring to distract us from the still elusive goal of a vendor-independent tool flow?

Scalability Made OVM The Ideal Choice For UVM


The popularity of OVM that made it the ideal choice for Accellera's UVM is rooted in its uniquely scalable architecture.  Today's announcement by Mitsubishi Electric and the OVM Advanced Topics tutorial at DVCon are examples of scalability beyond the common SystemVerilog testbench.
For some verification teams, jumping head-first into the maw of object-oriented programming is daunting.  Object-oriented programming does require a series of fundamental shifts in thinking, including writing code for objects that will come in and out of existence during simulation, the ability to override types to localize VIP rather than rewriting it, constrained-random inputs, and more.  For engineers who have built directed testbenches for years, acquiring all of this knowledge and applying it while maintaining schedule and quality commitments is a daunting task.  Mitsubishi found that the module-based overlay provided by Cadence enables the full OVM methodology while creating a static structure that makes implementing the first object-oriented testbench easier.
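To give a flavor of the "override types to localize VIP" point, here is a minimal hypothetical sketch using the OVM factory: a project-specific driver is substituted for the VIP's stock driver from the test, without editing the VIP source. All class names are invented for the example.

  // Hypothetical sketch: localizing VIP behavior with an OVM factory override.
  import ovm_pkg::*;
  `include "ovm_macros.svh"

  // Stock driver as shipped with the (imaginary) VIP
  class vip_driver extends ovm_driver;
    `ovm_component_utils(vip_driver)
    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction
  endclass

  // Project-specific tweak: extend the stock driver rather than rewriting it
  class my_vip_driver extends vip_driver;
    `ovm_component_utils(my_vip_driver)
    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction
  endclass

  class my_test extends ovm_test;
    `ovm_component_utils(my_test)
    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction
    function void build();
      super.build();
      // Every vip_driver created through the factory now becomes my_vip_driver
      vip_driver::type_id::set_type_override(my_vip_driver::get_type());
    endfunction
  endclass

This is the kind of object-oriented leverage the paragraph above alludes to: the VIP stays untouched, and the localization lives entirely in the test layer.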
At the other end of the scalability spectrum are the OVM Advanced Topics.  Among these are multi-language support for e and SystemC, low-power verification using OVM sequences to set up power states in the DUT, acceleration for verification performance, and ABV both to utilize assertions with OVM and to include formal analysis in an overall metric-driven verification methodology. At DVCon on Tuesday, February 23rd, Cadence verification experts will lead an interactive tutorial on these advanced topics, so be sure to register early!
So with the UVM coming soon, how will OVM and future UVM users access all of this advanced technology? First, all of the advanced topics work with the existing OVM 2.1 release so we expect that they will work with UVM 1.0 as it will be based on OVM.  Second and more importantly, many of these are available to users right now in the OVM World contributions area.  If any of these are interesting to you and you want them to be included in the UVM 1.x releases, please contact your Accellera rep or join the VIP TSC and make your wishes known!

Accellera Works Toward a Unified Verification Methodology (UVM)

Accellera believes that the release of its UVM document is only a couple of months away. Can this really be true?
Design verification, in all of its various stages, continues to be a costly and challenging task, in spite of efforts dating back to the 1980s to provide engineers with better design tools. A quarter of a century after the introduction of Verilog and VHDL, languages meant to simplify the description of a design and thus ease the verification burden, costs continue to rise. Silicon respins due to design errors not only have not diminished in number, they have actually increased. This is an indication that complexity has grown more than the ability of verification tools to detect errors.
EDA tool developers have known for some time that not only do tools need to be improved, but design methods also need to become more verification aware. So, in the usual highly competitive manner the EDA industry loves, a couple of years ago Cadence and Mentor agreed to develop together a verification methodology for SystemVerilog and called it OVM, which stands for Open Verification Methodology. Of course the word "open" in EDA does not mean freely available, and thus Synopsys, who felt excluded from such "openness", insisted on VMM, which stands for Verification Methodology Manual. VMM development started as early as 2005, when Synopsys partnered with ARM on the project. So one can safely state that OVM was a response to the lack of "openness" on the part of Synopsys toward allowing the other big two in the game without a price (and why should they?). How provincial EDA must seem to our major users! As you can imagine, EDA customers were thrilled at the prospect of another war similar to the Verilog/VHDL wars of twenty or so years ago.
Accellera, the industry standards organization that has demonstrated a willingness to go where no one has gone before, acknowledged the problem and started a Technical Subcommittee with the aim of developing a unified methodology, called UVM, for Unified Verification Methodology.
As the year 2009 was coming to a close, the Technical Subcommittee (TS), at the prodding of "the elephant in the room" that answers to the name of Intel, reached a breakthrough, or so it seems, when Cadence, Mentor, and Synopsys agreed to begin a focused technical work project aimed at unifying the two methodologies. Some of the marketing representatives at Synopsys and Cadence have told me they expect the release of UVM in March this year. As should be expected, the unrestrained joy in the streets of silicon designers and providers should be tempered by appropriate skepticism.
The Details Make Things Difficult
As with every engineering task, the details are getting in the way of rapid progress. To begin with, the agreement was reached by a majority vote, meaning that not everyone was in favor of the motion as written. The problem, according to Mentor, is in the timing. The present version of OVM is 2.0.3. It seems that OVM 2.1 was only a week or so away from being released. Yet, according to Synopsys, it is OVM 2.0.3 that will be used as the basis for the TS work. Mentor is pointing out the obvious: why disregard the work done to develop the 2.1 version?
Synopsys says that "We need to start with something and build on it. As a base OVM 2.0.3 will serve to create more features and improve users productivity". So, what is new in OVM 2.1 is not discarded a priori, it is just up for review. But when you ask Cadence which version of OVM will be used to create UVM, they say OVM 2.1, because it is released and it does not make sense to go back. Tom Anderson, Product Marketing Director for Verification Software at Cadence expects that OVM world, the association of OVM users, will continue to exist and that it will concern itself with both OVM and UVM topics.
One of the aspects of this work that holds a positive, yet unexpected, outcome is that UVM will need to fully harmonize with the TLM specification from OSCI. Although there are many common members of the two organizations, there has never been a formal, structured cooperation method between Accellera and OSCI. The development of UVM might give both Accellera and OSCI a real opportunity to put together a structure aimed at harmonizing each other's work, to the significant benefit of the EDA industry.
From a political point of view, the person that should be the principal architect of UVM is also a significant obstacle to speedy progress. Janick Bergeron, the architect of VMM, is a well known SystemVerilog and verification expert, and happens to be a Synopsys employee. Since the aim is to take OVM as the base and enrich it with VMM features, Cadence and Mentor expect that the chief architect will be someone who is intimately familiar with OVM, and that person is not Janick.
The final hurdle I am aware of is the matter of the license. OVM is distributed as open source under an Apache-style license. Accellera has never developed a standard that is open source; it kind of runs against the idea of "standard". There seems to be significant belief within the interested parties that such issues are manageable. To begin with, the matter of what is standard in an open source environment can be easily handled. What Accellera will release is the standard. People are free to add to it but not modify it under the open source environment. The additions will then be considered for inclusion in a later version of the standard. This seems to be a minor problem.
What Will Make the Job Easier
In most cases, like when you need to develop a language or a format, the standard needs to be firmly adhered to and no changes can be tolerated. But OVM, VMM, and thus UVM are libraries, so it is possible to have a standard base library alongside not only additional capabilities contributed by the open source community, but also OVM and VMM features that do not survive the merger: features deprecated (made obsolete) by the standard yet still used by customers.
Cadence, Mentor, and Synopsys all have major customers that, just like Intel, need a unified methodology. Here money talks loudly, and thus the desire of the three vendors to satisfy their customers is great. It should be noted that the vote to start UVM came at the end of a fiscal quarter, a time when license renewals are discussed with great attention.
Finally no one has suggested forming yet another consortium to get the job done. Accellera has, with the exception of the IEEE, the best experience in EDA in developing lasting standards that span the gamut of all EDA market segments. The job will get done, the standard will be robust, and if not by March, it will be here by June, in time for DAC.

UVM: Collaboration for the Right Reasons

Congratulations to Accellera’s Verification IP Technical Subcommittee (VIP-TSC) for reaching yet another milestone on its journey to achieve harmony among verification standards. The near-unanimous desire and commitment to create a Universal Verification Methodology is an indication of the still growing need for collaboration among verification engineers, verification IP vendors, service providers, and tool suppliers – and their faith in Accellera to do so as an open standards organization.
Now it’s time for the group to start working on their “long term” standard. Their efforts will produce a common base class library that can be used in simulators from multiple design automation tool vendors. The common base class library will foster a broad (universal) verification methodology to benefit verification engineers and developers of verification IP.
The VIP-TSC will provide the industry with an effective verification standard. Hmm. Maybe they will call it the Universal Verification Methodology (UVM).
According to the status report from the VIP-TSC, the next phase of their work is indeed called the Universal Verification Methodology (UVM)!!
The VIP-TSC working group that will now tackle UVM appears to be focused on a critical aspect of standardization – delivering not only a specification but also a usable reference implementation. In the short-term phase of their work, they created an interoperability guide, and now they will work on providing a single UVM library that will reflect the best of VMM and OVM. This is what I like about an industry collaboration that’s focused as much on deployment of a standard as it is on the creation of it.
This open, inclusive, and timely standard is coming to life with support from a wide-ranging verification community. Synopsys strongly endorses this UVM effort under Accellera. I encourage the committee to ensure that UVM not only meets immediate requirements but also builds the foundation of an industry-wide verification methodology for years to come.
Overall, big kudos to the working group for their focus on the long-term goals, their dedication, and their hard work. It's a great way to start 2010!

Behind Accellera’s Vote For OVM-Based Standardization


As noted in a recent Cadence blog by Tom Anderson, the Accellera Verification IP (VIP) Technical Subcommittee has voted to make the Open Verification Methodology (OVM) the basis of its upcoming “Universal Verification Methodology” (UVM) standard. Here are some thoughts about what this means, why it’s important, and what questions will need to be answered as the UVM standard unfolds.
First, why is a methodology needed? Because the SystemVerilog language description alone does not tell you how to build testbenches or verification IP. Thus, early SystemVerilog users developed in-house methodologies. Synopsys then launched the Verification Methodology Manual (VMM), and Cadence and Mentor Graphics collaborated to produce OVM, which is now available from the very active OVM World web site.
With two different methodologies in the marketplace, many users were faced with having to juggle VMM and OVM VIP and/or testbenches in the same simulation environment. Amid widespread agreement that standardization was needed, the Accellera VIP committee was formed. It was launched with two goals:
  • Interoperability between VMM and OVM, which is provided by last year’s release of a Recommended Practices interoperability guide. It shows how VMM testbenches can work with OVM VIP, and vice versa.
  • Progress towards a single SystemVerilog methodology standard with a common base class library, with eventual IEEE standardization. This is currently referred to as “UVM.” (A minimal skeleton built on such a base class library follows this list.)
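As a hedged illustration of what a common base class library buys in practice, here is a minimal hypothetical skeleton written against today's OVM library, the agreed basis for UVM: the test and environment use only base-library classes and the factory, which is what allows the same code to compile and run on any simulator that supports the standard library. All names are invented for the example.

  // Hypothetical skeleton: everything the test touches comes from the common
  // base class library, so nothing in it is tied to any one simulator.
  import ovm_pkg::*;
  `include "ovm_macros.svh"

  class skeleton_env extends ovm_env;
    `ovm_component_utils(skeleton_env)
    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction
  endclass

  class skeleton_test extends ovm_test;
    `ovm_component_utils(skeleton_test)
    skeleton_env env;
    function new(string name, ovm_component parent);
      super.new(name, parent);
    endfunction
    function void build();
      super.build();
      env = skeleton_env::type_id::create("env", this);  // factory-based creation
    endfunction
  endclass

  module tb_top;
    initial run_test("skeleton_test");  // selects the test by its registered name
  endmodule

A UVM version of the same skeleton would differ mainly in the prefix of the base class and macro names.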
That said, the development of UVM is just beginning and there are many questions to be answered over the coming weeks and months. For example:
  • Will UVM be a superset of OVM, including all OVM capabilities?
  • OVM World participants have made some good contributions to OVM. Will Accellera include some of these in UVM?
  • Will UVM offer seamless backwards compatibility with existing OVM VIP?
  • How can users migrate VMM testbenches or VIP to the new standard?
  • While Accellera is only looking at SystemVerilog, OVM can work with the e testbench language and SystemC models. Will UVM ultimately support multiple languages?
As noted in a recent blog, true VIP interoperability goes far beyond standard methodologies and class libraries. But a standard is essential for interoperability to be possible. With its latest decision to move forward based on OVM, the Accellera VIP subcommittee is making great progress towards solving the VIP interoperability challenge for SystemVerilog users.