Category Archives: Hardware

AMD Launches 5nm ASIC-based Media Accelerator Card

SANTA CLARA, CA, Apr 10, 2023 – AMD announced the AMD Alveo MA35D media accelerator featuring two 5nm, ASIC-based video processing units (VPUs) supporting the AV1 compression standard and purpose-built to power a new era of live interactive streaming services at scale. With live content accounting for over 70% of the global video market1, a new class of low-latency, high-volume interactive streaming applications is emerging, including watch parties, live shopping, online auctions, and social streaming.

The Alveo MA35D media accelerator delivers the high channel density (up to 32x 1080p60 streams per card), power efficiency, and ultra-low-latency performance critical to reining in the skyrocketing infrastructure costs of scaling such compute-intensive content delivery. Compared to the previous-generation Alveo U30 media accelerator, the Alveo MA35D delivers up to 4x higher channel density2, up to 4x lower latency in 4K3, and 1.8x greater compression efficiency4 at the same VMAF score – a common video quality metric.

“We worked closely with our customers and partners to understand not just their technical requirements, but their infrastructure challenges in deploying high-volume, interactive streaming services profitably,” said Dan Gibbons, general manager of AECG Data Center Group, AMD. “We developed the Alveo MA35D with an ASIC architecture tailored to meet the bespoke needs of these providers to reduce both capital and operating expenses for delivering immersive experiences to their users and content creators at scale.”

Purpose-Built Video Processing Unit

The Alveo MA35D utilizes a purpose-built VPU to accelerate the entire video pipeline. By performing all video processing functions on the VPU, data movement between the CPU and accelerator is minimized, reducing overall latency and maximizing channel density with up to 32x 1080p60, 8x 4Kp60, or 4x 8Kp30 streams per card. The platform provides ultra-low latency support for the mainstream H.264 and H.265 codecs and features next-generation AV1 transcoder engines delivering up to a 52% reduction in bitrate for bandwidth savings versus a comparable software implementation5.

“AMD’s announcement of the new Alveo MA35D add-in card is an exciting advancement of video acceleration for data centers and is an important step in building out a fully-fledged ecosystem to support royalty-free, high-definition video devices, products, and services,” said Matt Frost, Alliance for Open Media Chair. “Live streaming providers are looking for higher density, lower power, lower latency AV1 solutions and by addressing these, Alliance members such as AMD are helping facilitate AV1 deployment and overall adoption.”

AI-Enabled, Intelligent Video Pipeline

The accelerator features an integrated AI processor and dedicated video quality engines designed to improve the quality of experience at reduced bandwidth. The AI processor evaluates content frame by frame and dynamically adjusts encoder settings to improve perceived visual quality while minimizing bitrate. Optimization techniques include region-of-interest (ROI) encoding to preserve text and face detail, artifact detection to correct scenes with high levels of motion and complexity, and content-aware encoding that provides predictive insights for bitrate optimization.

Cost-Effectively Scale Interactive Media

Scaling high-volume streaming services requires maximizing the number of channels per server while minimizing power and bandwidth per stream. With up to 32x 1080p60 streams per card at 1 watt per stream6, a 1U rack server equipped with 8 cards delivers up to 256 channels, maximizing the number of streams per server, rack, or data center.
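A quick back-of-the-envelope check of the density and power figures above, written as a short Python sketch. The ~35W typical card power comes from footnote 6 below; everything else is straight multiplication, and actual power will vary by workload.

```python
# Back-of-the-envelope density/power math for the Alveo MA35D figures above.
# Assumptions: 32x 1080p60 streams per card, 8 cards per 1U server, and a
# typical card power of ~35 W (footnote 6). Actual power varies by workload.

STREAMS_PER_CARD = 32
CARDS_PER_1U_SERVER = 8
TYPICAL_CARD_POWER_W = 35.0

streams_per_server = STREAMS_PER_CARD * CARDS_PER_1U_SERVER                # 256 channels
watts_per_stream = TYPICAL_CARD_POWER_W / STREAMS_PER_CARD                 # ~1.1 W per stream
accelerator_watts_per_server = TYPICAL_CARD_POWER_W * CARDS_PER_1U_SERVER  # ~280 W

print(f"Streams per 1U server: {streams_per_server}")
print(f"Typical power per stream: {watts_per_stream:.2f} W")
print(f"Typical accelerator power per 1U server: {accelerator_watts_per_server:.0f} W")
```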

Software Dev Kit and Product Availability

The platform is accessible with the AMD Media Acceleration software development kit (SDK), supporting the widely used FFmpeg and Gstreamer video frameworks for ease of development.
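Because the SDK plugs into FFmpeg, integrating the accelerator typically means running a familiar FFmpeg command line. The sketch below is illustrative only: it drives FFmpeg from Python and uses the open-source libsvtav1 software encoder as a stand-in, since the MA35D's hardware encoder and device options are defined by the AMD Media Acceleration SDK and are not reproduced here.

```python
# Illustrative FFmpeg invocation for a 1080p60 AV1 transcode, driven from Python.
# The libsvtav1 software encoder is used as a stand-in; with the AMD Media
# Acceleration SDK installed, the Alveo MA35D's hardware codecs are exposed
# through FFmpeg with SDK-specific encoder/device flags not shown here.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "input_1080p60.mp4",   # source file or stream
    "-c:v", "libsvtav1",         # AV1 encode (software stand-in)
    "-b:v", "2M",                # target video bitrate
    "-g", "120",                 # keyframe interval (2 seconds at 60 fps)
    "-c:a", "copy",              # pass audio through unchanged
    "output_av1.mp4",
]
subprocess.run(cmd, check=True)
```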

Alveo MA35D media accelerators are sampling now with production shipments expected in Q3. To accelerate development, an Early Access Program is available to qualified customers with comprehensive documentation and software tools for architectural exploration.

About AMD

For more than 50 years AMD has driven innovation in high-performance computing, graphics, and visualization technologies. Billions of people, leading Fortune 500 businesses, and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work, and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible.

For more information, visit the AMD website.

1. Source: Bluewave Consulting and Research, March 2022 
2. In published specifications, the Alveo MA35D supports up to 32 1080p60 streams, while the Alveo U30 supports up to 8. Channel density ratios remain the same regardless of resolution. ALV-002
3. In published specifications, the Alveo MA35D delivers 4X lower latency at 8ms vs. Alveo U30 delivering 4K H.264 at 32ms, based on lowest latency capability of each platform. ALV-005
4. Based on testing by AMD Labs in April 2023, using the VMAF scores of an Alveo MA35D AV1 encode compared to an Alveo U30 H.264 encode across (13) publicly available video files at various resolutions and bitrates. Actual results may vary. ALV-009
5. Based on testing by AMD Labs in March 2023, using the VMAF scores of Alveo MA35D H.264 encode, H.265 encode, and AV1 encode compared to the VMAF score of an open source x264 very fast SW model across (13) publicly available video files at various resolutions and bitrates. Actual results may vary. ALV-006
6. Typical power for 8 4K streams or 32 1080p60 streams estimated at 35W, based on preliminary testing and subject to change. 50W Total Thermal Design Power (TDP)

AMD Names Jack Huynh Sr VP, GM of Computing, Graphics

SANTA CLARA, CA, Apr 6, 2023 – AMD announced that Jack Huynh has been named senior vice president and general manager of computing and graphics following the retirement from AMD of Rick Bergman, currently the executive vice president of Computing and Graphics. Bergman will remain at AMD through the second quarter to ensure a smooth transition. Huynh has been at AMD for more than 24 years and was most recently responsible for leading all aspects of the company’s semi-custom business. He will report to AMD Chair and CEO Dr. Lisa Su.

“Under Jack’s leadership, AMD has strengthened our position as the leading provider of custom solutions for gaming,” said Dr. Su. “We see strong long-term growth opportunities for our Computing and Graphics business as we bring our high-performance CPU and GPU IP together with our leadership software capabilities to create differentiated solutions across our foundational gaming franchise and a broader set of markets. As we welcome Jack in his expanded role, I also want to personally thank Rick for his many contributions and dedication to our business throughout his years with AMD.”

Huynh has served in a variety of leadership roles at AMD, most recently as the senior vice president and general manager for the AMD Semi-Custom business group, leading strategy, business management, and engineering execution for high-performance custom solutions. Prior to that, Huynh served as corporate vice president and general manager, where he led end-to-end business execution of mobility solutions for the AMD Client PC business group.

About AMD

For more than 50 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. Billions of people, leading Fortune 500 businesses and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible.

For more information, visit the AMD website.

Canon Unveils imageFORMULA DR-M1060II Office Document Scanner

MELVILLE, NY, Mar 30, 2023 – For those who need to scan large amounts of different-sized documents easily, Canon USA Inc., a leader in digital imaging solutions, has introduced the imageFORMULA DR-M1060II Office Document Scanner. This scanner is designed to help businesses stay organized, reduce paper accumulation, digitize useful information, and help streamline workflows in multiple industries, including federal, state, and local government, financial services, insurance, healthcare, education, and manufacturing.

Canon imageFORMULA DR-M1060II – Front view

“Canon is excited to introduce the imageFORMULA DR-M1060II, expanding our range of solutions for our customers,” says Shuji “Steve” Suda, vice president and general manager of Canon USA, Inc. “This scanner can produce high-quality images and is compact enough to fit neatly on a desk or tabletop.”

The imageFORMULA DR-M1060II is designed to handle letter-, legal-, and ledger-sized paper and can scan both sides at once. It includes a dual feeding path. The unique default “U-turn” paper path lets users load and then remove paper at the front of the scanner and gives the unit a compact design capable of fitting flat against a wall, while the user-selectable straight-through path handles thicker, fragile, or rigid documents such as a driver’s license or newspaper. The feeder design allows a range of different document sizes to be scanned in a single batch while minimizing the chance of a feeding error. You can digitize plastic cards, envelopes, postcards, artwork, industrial drawings, posters, maps, and more. The scanner is compatible with Windows and can save physical documents to Microsoft Word, Excel, and PowerPoint formats, as well as PDF, for easy editing and searchability.

Canon imageFORMULA DR-M1060II – with output

The imageFORMULA DR-M1060II scanner’s automatic document feeder can hold up to 80 sheets, helping to reduce the need to manually feed items one by one. The latest Canon CaptureOnTouch software is included and designed to provide advanced functionality and an improved user experience, such as barcode and metadata recognition, batch separation, adding pages via drag-and-drop, thumbnail viewing, and more. The imageFORMULA DR-M1060II scanner is ergonomically designed for comfortable feeding at the front of the machine – there is no need to stand up or reach out to set documents in the scanner.

This new scanner includes a three-year limited warranty with advanced exchange service. Extended warranty options are also available through Canon. These, combined with Canon’s US-based technical support, can help maximize uptime throughout the product life.

“The Canon imageFORMULA DR-M1060II supports a range of scanning capabilities that can help organizations improve productivity in the office,” says Lee Davis, senior analyst of Software/Scanners, Keypoint Intelligence.

Availability

The Canon imageFORMULA DR-M1060II Office Document Scanner is now available for purchase via select Canon partners for an MSRP of $1,995.00.

About Canon USA, Inc.

Canon USA, Inc., is a leading provider of consumer, business-to-business, and industrial digital imaging solutions to the United States and to Latin America and the Caribbean markets. With approximately $30.3 billion in global revenue, its parent company, Canon Inc., has, as of 2022, ranked in the top five overall in US patents granted for 37 consecutive years. Canon USA is dedicated to its Kyosei philosophy of social and environmental responsibility.

To learn more about Canon, visit www.usa.canon.com.

OnlineMetals.com Adds TCI Precision’s Ready-to-Ship Blanks

GARDENA, CA, Mar 29, 2023 – TCI Precision Metals announces that its Ready-to-Ship, Machine-Ready Blanks are now available to order directly through OnlineMetals.com, the world’s leading e-commerce metal and plastics supplier. The move gives OnlineMetals’ customers the option to go beyond saw-cut materials and order pre-machined blanks made to close-tolerance specifications.

Precision blanks eliminate the need for in-house sawing, grinding, flattening, squaring operations, and outside processing. Blanks are consistent, part-to-part, which reduces setup time, and in the case of flat blanks, the production process alleviates residual stress in the material which results in reduced part movement during finish machining.

“Ready-to-Ship Blanks help shops shorten setup time, reduce scrap, and increase overall throughput by up to 25% by eliminating material prep. Blanks arrive machine-ready for production and are ideal for short-run production machining, tooling, and prototype applications,” said Ben Belzer, president and CEO of TCI Precision Metals.

Each blank is deburred, cleaned, and individually packaged to avoid damage during shipping. Ready-to-Ship Blanks arrive square, flat, and parallel within ± .002” of specified dimensions.

“At Online Metals we specialize in cut-to-size, small to medium quantity orders, shipped direct to any location. Ready-to-Ship Blanks provide that extra processing and value that lets customers order materials that arrive ready to go directly from receiving to machining. Customers can use the time they previously spent in setup and prep for more productive use of CNC machining centers,” said Matt Holzhauer, marketplace manager at OnlineMetals.com.

About TCI Precision Metals

Founded in 1956, TCI Precision Metals is a family-owned manufacturer producing precision Machine-Ready Blanks from aluminum, stainless steel, and other alloys. The company also provides sawing, grinding, milling, and finishing operations on customer-supplied materials.

For more information, visit https://tciprecision.com/.

About OnlineMetals.com

OnlineMetals.com is the world’s leading eCommerce metal and plastics supplier, specializing in cut-to-size, small to medium quantity orders, shipped direct to any location. Online Metals was a garage start-up, founded in Seattle in 1998. The company has grown over the years and expanded to six facilities across the US, offering over 55,000 products.

For more information, visit www.onlinemetals.com.

Kingston FURY Adds White Heat Spreaders to DDR5

FOUNTAIN VALLEY, CA, Mar 28, 2023 – Kingston FURY, the high-performance division of Kingston Technology Company, Inc., a world leader in memory products and technology solutions, today announced the addition of white heat spreaders to its award-winning line of Kingston FURY DDR5 memory modules, providing more options for those looking to build a system that stands above the rest, in and out of the game.

Kingston FURY DDR5 White Heat Spreader

Kingston FURY Beast DDR5 offers the superior speed and low-latency solutions to take your experience to the next level of performance, now with a low-profile heat spreader design in white that combines outstanding cooling functionality and bold styling. Go for a simple and easy upgrade with Plug N Play at 4800MT/s1, or select an Intel XMP 3.0 or AMD EXPO certified kit. Kingston FURY Beast DDR5 RGB lets users personalize even further with the Kingston FURY CTRL RGB tool by selecting or customizing one of the 18 built-in, vibrant, and stunning RGB lighting2 effects, all kept smooth and in unison with Kingston’s patented Infrared Sync Technology. The Kingston FURY Beast DDR5 line hits speeds up to 6000MT/s and is available in 8GB, 16GB, and 32GB single modules and kits of 2 up to 64GB, with kits of 4 coming next month.

For system builders and DIY PC enthusiasts who want to maximize the performance of their next-gen DDR5 platforms and complement the look of the latest PC builds, Kingston FURY Renegade DDR5 enhances your system with aggressive low latencies and extreme speeds up to 7200MT/s. Whether creating content, multi-tasking or pushing the limits in-game, users can tap into the extreme overclocking potential of DDR5 in style. Again, with FURY CTRL users can select from 18 customizable lighting effects to highlight the sleek aluminum heat spreaders of the Kingston FURY Renegade DDR5 RGB, available now in white & silver. It is available in single-module capacities of 16GB and 32GB and in dual-channel kits of 2 with capacities up to 64GB.

“We’re pleased to expand the look of our Kingston FURY DDR5 lineup with the addition of white heat spreaders,” said Kristy Ernt, DRAM business manager, Kingston. “As creativity flows and gaming evolves, we want to empower our users to choose the modules that best fit their individual style.”

The Kingston FURY DDR5 line of memory is 100% tested at speed and backed by a limited lifetime warranty and legendary Kingston reliability.

Kingston FURY Beast DDR5 Features and Specifications

  • Greater starting speed performance: With an aggressive starting speed at 4800MT/s, DDR5 is 50% faster than DDR4.
  • Improved stability for overclocking: On-die ECC (ODECC) helps maintain data integrity to sustain the ultimate performance while you push the limits!
  • Increased efficiency: Boosted by double the banks and burst length and two independent 32-bit subchannels, DDR5’s exceptional handling of data shines with the latest games, programs and demanding applications.
  • Intel XMP 3.0 certified: Advanced pre-optimized timings, speeds, and voltages for overclocking performance, plus the ability to save new user-customizable profiles utilizing a programmable PMIC.
  • AMD EXPO certified: AMD’s Extended Profiles for Overclocking
  • Qualified by the world’s leading motherboard manufacturers3: Tested and approved so you can build and upgrade with confidence on your preferred motherboard.
  • Low-profile heat spreader: Newly designed heat spreaders in black or white combine bold styling and outstanding cooling functionality.
  • Plug N Play at 4800MT/s: Kingston FURY Beast DDR5 will auto-overclock itself to the highest listed speed allowed by the system BIOS.

RGB:

  • Enhanced lighting with new heat spreader design: Game in style by customizing the black or white heat spreaders with the smooth, stunning range of 18 RGB lighting2 effects using Kingston FURY CTRL or the motherboard manufacturer’s software.
  • Patented Kingston FURY Infrared Sync Technology: Vibrant RGB effects light in unison with Kingston’s patented Infrared Sync Technology.
  • Capacities:
    Singles: 8GB, 16GB, 32GB
    Kits of 2: 16GB, 32GB, 64GB
  • Speeds4: 4800MT/s, 5200MT/s, 5600MT/s, 6000MT/s
  • Latencies: CL36, CL38, CL40
  • Voltage: 1.1V, 1.25V, 1.35V
  • Operating Temperature: 0 °C to 85 °C
  • Non-RGB Dimensions: 133.35 mm x 34.9 mm x 6.62 mm
  • RGB Dimensions: 133.35 mm x 42.23 mm x 7.11 mm

Kingston FURY Renegade DDR5 Features and Specifications

  • Engineered to Maximize Performance: With speeds up to 7200MT/s, Kingston FURY Renegade DDR5 RGB features premium components hand-tuned by engineers, rigorously tested for compatibility across the industry’s leading motherboards, and backed by 100% factory testing at speed for a hassle-free overclock experience.
  • Tap Into Extreme Overclocking Potential: DDR5 ushers in a whole new era of memory technology to make extreme overclocking a more stable option than previous generations. On-die ECC delivers more reliable DRAM components, an on-board PMIC balances power when and where it’s needed, and two independent 32-bit subchannels provide dramatic increases in data efficiency for multi-core processors.
  • Intel XMP 3.0 Certified: Intel Extreme Memory Profile technology makes overclocking a breeze with advanced pre-optimized factory timings, speeds and voltages for overclocking performance. Renegade DDR5 RGB features a programmable PMIC for XMP 3.0, supporting up to two customizable profiles to optimize your own unique timings, speeds, and voltages saved directly to the DIMM.
  • Qualified by the World’s Leading Motherboard Manufacturers: Tested and trusted for your preferred motherboard so you can build with confidence.
  • Aggressive Aluminum Heat Spreader Design: Newly designed black & silver and white & silver aluminum heat spreaders with black PCB keep your rig running – and looking – cool to complement the look of your latest PC build.

RGB:

  • Dynamic RGB lighting: Bring your system to life with 18 preset RGB lighting effects utilizing Kingston FURY CTRL or the motherboard manufacturer’s RGB software with the sleek newly designed black & silver or white & silver aluminum heat spreaders with black PCB.
  • Kingston FURY Infrared Sync Technology: Ensure your RGB lighting effects stay in perfect lock-step with Kingston’s patented Infrared Sync Technology.
  • Capacities:
    Singles: 16GB, 32GB
    Kits of 2: 32GB, 64GB
  • Speeds: 6000MT/s, 6400MT/s, 6800MT/s, 7200MT/s
  • Latencies: CL32, CL36, CL38
  • Voltage: 1.35V, 1.4V, 1.45V
  • Operating Temperature: 0 °C to 85 °C
  • Non-RGB Dimensions: 133.35mm x 39.2mm x 7.65mm
  • RGB Dimensions: 133.35mm x 44mm x 7.66mm

About Kingston Technology, Inc.

From big data, to laptops and PCs, to IoT-based devices like smart and wearable technology, to design-in and contract manufacturing, Kingston helps deliver the solutions used to live, work and play. The world’s largest PC makers and cloud-hosting companies depend on Kingston for their manufacturing needs, and our passion fuels the technology the world uses every day.

To learn more about how Kingston Is With You, visit Kingston.com.

1. Kingston FURY Plug N Play memory will run in DDR5 systems up to the speed allowed by the manufacturer’s system BIOS. PnP cannot increase the system memory speed faster than is allowed by the manufacturer’s BIOS. Kingston FURY Plug N Play DDR5 products support XMP 3.0 specifications so overclocking can also be achieved by enabling the built-in XMP Profile.

2. Lighting customizable with Kingston FURY CTRL software or with motherboard RGB control software. RGB customization support through third-party software may vary.

3. Featured on the Qualified Vendor Lists (QVL) of the world’s leading motherboard manufacturers.

4. MT/s denotes megatransfers (million transfers) per second and represents the effective data rate (speed) of DDR (Double Data Rate) SDRAM memory in computing. A DDR SDRAM memory module transfers data on both the rising and falling edge of every clock cycle, so its MT/s rating is twice its memory clock frequency.
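The footnote above translates directly into simple arithmetic. As a minimal sketch of the standard DDR math (not a Kingston-specific formula), the memory clock is half the MT/s rating, and peak bandwidth for a standard 64-bit module is the MT/s rating times 8 bytes per transfer:

```python
# Standard DDR arithmetic for the speeds quoted in the spec lists above.
# A DDR module transfers data on both clock edges, so MT/s = 2 x clock (MHz),
# and a 64-bit DIMM moves 8 bytes per transfer, so peak bandwidth = MT/s x 8 B.

def ddr_stats(mts: int) -> tuple[float, float]:
    clock_mhz = mts / 2             # underlying memory clock in MHz
    peak_gb_per_s = mts * 8 / 1000  # peak bandwidth per module in GB/s
    return clock_mhz, peak_gb_per_s

for speed in (4800, 6000, 7200):    # Beast base speed, Beast max, Renegade max
    clock, bandwidth = ddr_stats(speed)
    print(f"DDR5-{speed}: {clock:.0f} MHz clock, ~{bandwidth:.1f} GB/s per module")
```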

ORIGIN PC Launches New 40 Series Laptops

MIAMI, FL, Mar 24, 2023 – ORIGIN PC, a leader in custom high-performance systems, today launched their latest 40 Series Laptops. On the gaming side, the EON16-S and EON17-X push laptop gaming to new heights. For creators and professionals, the NS-16 and NS-17 offer desktop-like performance on a workstation they can bring anywhere. Featuring the new NVIDIA GeForce RTX 40 Series mobile graphics cards, users can experience firsthand what the next generation of technology holds through a customizable ORIGIN PC.

EON16-S

Meet the future of portable performance with the EON17-X and NS-17, ORIGIN PC’s strongest laptops ever designed. Compared to its predecessor, the new EON17-X is 13% lighter and 42% thinner. In spite of its thin 0.98-inch form factor, expect nothing less than record-breaking strength from the upgraded system. Truly the pinnacle of gaming speed and power, it can be customized with up to the flagship NVIDIA GeForce RTX 4090 graphics card as well as a powerful Intel Core i9-13900HX processor. Both laptops offer a 17.3” display with the option of either a 240Hz QHD panel or a 144Hz UHD panel, based on personal preference and needs.

EON17-X

Professionals can access a top-of-the-line workstation in the NS-17, enabling them to finish the job under any circumstance. Those working from home, or in professions that require frequent travel, can make the most of a high-end system they can bring anywhere. Get up to 24TB of storage, allowing gamers and professionals to save whatever files, games, and software they desire. For performance, customize with up to 64GB of DDR5 memory to ensure there are no slowdowns even when running resource-intensive programs or games at the same time. In addition, connect up to 4 additional displays, allowing for multi-tasking and the ability to simultaneously see several games or projects. For even more connectivity, both the EON17-X and NS-17 include support for USB Type-A, Thunderbolt 4, HDMI, Mini DisplayPort, Ethernet 2.5, Wi-Fi 6, and more.

The EON17-X and NS-17 feature:

  • Intel Core i9-13900HX 24-core processor
  • Up to NVIDIA GeForce RTX 4090 16GB Graphics Card, Tuned for Max Total GPU Power up to 175W
  • 17.3” 240Hz 2560x1440p Display or 144Hz 3840x2160p Display
  • Up to 24TB Storage
  • Up to 64GB DDR5 Memory
  • Wi-Fi 6 compatibility
  • Support for up to 4 additional displays
  • 2x USB 3.2 Gen 1 Type-A, 2x Thunderbolt 4, 1x HDMI Port, 1x Mini DisplayPort, 1x Ethernet 2.5 Port, 1x Mic Jack, 1x Audio Jack, Kensington Lock
  • ORIGIN PC Lifetime 24/7 US-based Support

Focusing on portability, the new EON16-S and NS-16 laptops offer quick 40 Series speeds, providing incredible performance in a smaller laptop that weighs only 5.95 pounds and measures just 0.78 inches thin. The new 16” laptops come equipped with 13th Gen Intel Core i9-13900H processors and up to an NVIDIA GeForce RTX 4070 graphics card. Designed with vapor chamber cooling, the two systems offer lower temperatures compared to standard heat pipe cooling, allowing hardware to continue operating efficiently even after extended use. In terms of their displays, both the EON16-S and NS-16 come with 16” 240Hz QHD+ panels. Whether you need to catch every frame or want high visual quality, you get the best of both worlds. For those who want to expand their view, connect up to 3 additional monitors.

For storage, the 16” laptops can be customized with up to 16TB of storage and 64GB of DDR5 memory. Again, expect the same versatile connectivity with support for USB Type-A, Thunderbolt 4, HDMI, Mini DisplayPort, Ethernet 2.5, Wi-Fi 6, and more.

The EON16-S and NS-16 feature:

  • Intel Core i9-13900H 14-core processor
  • Up to NVIDIA GeForce RTX 4070 8GB Graphics Card, Tuned for Max Total GPU Power up to 140W
  • 16” 240Hz 2560x1600p Display
  • Up to 16TB Storage
  • Up to 64GB DDR5 Memory
  • Wi-Fi 6 compatibility
  • Support for up to 3 additional displays
  • 2x USB 3.2 Gen 1 Type-A, 1x Thunderbolt 4, 1x HDMI Port, 1x Mini DisplayPort, 1x Ethernet 2.5 Port, 1x Mic Jack, 1x Audio Jack, Kensington Lock
  • ORIGIN PC Lifetime 24/7 US-based Support

Dedicated to offering as many customization options as possible, ORIGIN PC also lets these latest laptops be personalized with the choice of an HD UV-printed panel or laser etching on top of the hardware options. Dive into improved speeds and portability with ORIGIN PC’s latest laptops powered by NVIDIA GeForce RTX 40 Series graphics and Intel Core 13th Gen processors.

Availability

ORIGIN PC EON16-S, EON17-X, NS-16, and NS-17 laptops are available now.

Learn more about ORIGIN PC’s EON16-S here: https://www.originpc.com/gaming/laptops/eon16-s/

Learn more about ORIGIN PC’s EON17-X here: https://www.originpc.com/gaming/laptops/eon17-x-v2/

Learn more about ORIGIN PC’s NS-16 here: https://www.originpc.com/workstation/laptops/ns-16/

Learn more about ORIGIN PC’s NS-17 here: https://www.originpc.com/workstation/laptops/ns-17-v2/

About ORIGIN 

ORIGIN PC builds custom, high-performance desktops, workstations, and laptops for hardware enthusiasts, digital/graphics artists, professionals, government agencies, and gamers. ORIGIN PCs are hand built, tested, and serviced by knowledgeable gaming enthusiasts, industry veterans, and award-winning system integrators. Every ORIGIN PC comes with free lifetime 24/7 support based in the United States. The ORIGIN PC staff is composed of award-winning enthusiasts, experienced in the gaming and PC markets, who want to share their passion with others. ORIGIN PC is located in Miami, FL and ships worldwide.

For more information, please visit www.ORIGINPC.com.

About CORSAIR

CORSAIR is a leading global developer and manufacturer of high-performance gear and technology for gamers, content creators, and PC enthusiasts. From award-winning PC components and peripherals, to premium streaming equipment, smart ambient lighting, and esports coaching services, CORSAIR delivers a full ecosystem of products that work together to enable everyone, from casual gamers to committed professionals, to perform at their very best.

AWS, NVIDIA Announce Multi-Part Collaboration

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, and NVIDIA announced a multi-part collaboration focused on building out the world’s most scalable, on-demand artificial intelligence (AI) infrastructure optimized for training increasingly complex large language models (LLMs) and developing generative AI applications.

The joint work features next-generation Amazon Elastic Compute Cloud (Amazon EC2) P5 instances powered by NVIDIA H100 Tensor Core GPUs and AWS’s state-of-the-art networking and scalability that will deliver up to 20 exaFLOPS of compute performance for building and training the largest deep learning models. P5 instances will be the first GPU-based instances to take advantage of AWS’s second-generation Elastic Fabric Adapter (EFA) networking, which provides 3,200 Gbps of low-latency, high-bandwidth networking throughput, enabling customers to scale up to 20,000 H100 GPUs in EC2 UltraClusters for on-demand access to supercomputer-class performance for AI.

“AWS and NVIDIA have collaborated for more than 12 years to deliver large-scale, cost-effective GPU-based solutions on demand for various applications such as AI/ML, graphics, gaming, and HPC,” said Adam Selipsky, CEO at AWS. “AWS has unmatched experience delivering GPU-based instances that have pushed the scalability envelope with each successive generation, with many customers scaling machine learning training workloads to more than 10,000 GPUs today. With second-generation EFA, customers will be able to scale their P5 instances to over 20,000 NVIDIA H100 GPUs, bringing supercomputer capabilities on demand to customers ranging from startups to large enterprises.”

“Accelerated computing and AI have arrived, and just in time. Accelerated computing provides step-function speed-ups while driving down cost and power as enterprises strive to do more with less. Generative AI has awakened companies to reimagine their products and business models and to be the disruptor and not the disrupted,” said Jensen Huang, founder and CEO of NVIDIA. “AWS is a long-time partner and was the first cloud service provider to offer NVIDIA GPUs. We are thrilled to combine our expertise, scale, and reach to help customers harness accelerated computing and generative AI to engage the enormous opportunities ahead.”

New Supercomputing Clusters

New P5 instances build on more than a decade of collaboration between AWS and NVIDIA delivering AI and HPC infrastructure, and on four previous collaborations across P2, P3, P3dn, and P4d(e) instances. P5 instances are the fifth generation of AWS offerings powered by NVIDIA GPUs and come almost 13 years after AWS’s initial deployment of NVIDIA GPUs, beginning with CG1 instances.

P5 instances are ideal for training and running inference for increasingly complex LLMs and computer vision models behind the most-demanding and compute-intensive generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more.

Specifically built for both enterprises and startups racing to bring AI-fueled innovation to market in a scalable and secure way, P5 instances feature eight NVIDIA H100 GPUs capable of 16 petaFLOPs of mixed-precision performance, 640 GB of high-bandwidth memory, and 3,200 Gbps networking connectivity (8x more than the previous generation) in a single EC2 instance. The increased performance of P5 instances accelerates the time-to-train machine learning (ML) models by up to 6x (reducing training time from days to hours), and the additional GPU memory helps customers train larger, more complex models. P5 instances are expected to lower the cost to train ML models by up to 40% over the previous generation, providing customers greater efficiency over less flexible cloud offerings or expensive on-premises systems.

Amazon EC2 P5 instances are deployed in hyperscale clusters called EC2 UltraClusters that comprise the highest-performance compute, networking, and storage in the cloud. Each EC2 UltraCluster is one of the most powerful supercomputers in the world, enabling customers to run their most complex multi-node ML training and distributed HPC workloads.

They feature petabit-scale non-blocking networking powered by AWS EFA, a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communication at scale on AWS. EFA’s custom-built operating system (OS) bypass hardware interface and integration with NVIDIA GPUDirect RDMA enhance the performance of inter-instance communications by lowering latency and increasing bandwidth utilization, which is critical to scaling training of deep learning models across hundreds of P5 nodes. With P5 instances and EFA, ML applications can use the NVIDIA Collective Communications Library (NCCL) to scale up to 20,000 H100 GPUs. As a result, customers get the application performance of on-premises HPC clusters with the on-demand elasticity and flexibility of AWS.

On top of these cutting-edge computing capabilities, customers can use the industry’s broadest and deepest portfolio of services such as Amazon S3 for object storage, Amazon FSx for high-performance file systems, and Amazon SageMaker for building, training, and deploying deep learning applications. P5 instances will be available in the coming weeks in limited preview. To request access, visit https://pages.awscloud.com/EC2-P5-Interest.html.
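As a rough illustration of what it means for ML applications to use NCCL to scale across GPUs and instances, the sketch below shows generic PyTorch distributed data-parallel training with the NCCL backend. This is not an AWS- or P5-specific API; on EC2 the same script would typically be launched across nodes with a tool such as torchrun (for example, torchrun --nproc_per_node=8 train.py on an 8-GPU instance), with EFA handling the inter-node transport underneath NCCL.

```python
# Minimal sketch of NCCL-backed distributed data-parallel training in PyTorch.
# Generic example, not AWS-specific; the launcher (e.g. torchrun) sets the
# RANK, LOCAL_RANK, and WORLD_SIZE environment variables for each process.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")                 # NCCL handles GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)    # placeholder model
    model = DDP(model, device_ids=[local_rank])              # gradients all-reduced via NCCL

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    batch = torch.randn(32, 1024, device=local_rank)         # placeholder batch
    loss = model(batch).square().mean()
    loss.backward()                                          # NCCL all-reduce runs here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```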

With the new EC2 P5 instances, customers like Anthropic, Cohere, Hugging Face, Pinterest, and Stability AI will be able to build and train the largest ML models at scale. The collaboration through additional generations of EC2 instances will help startups, enterprises, and researchers seamlessly scale to meet their ML needs.

Anthropic builds reliable, interpretable, and steerable AI systems that will have many opportunities to create value commercially and for public benefit. “At Anthropic, we are working to build reliable, interpretable, and steerable AI systems. While the large, general AI systems of today can have significant benefits, they can also be unpredictable, unreliable, and opaque. Our goal is to make progress on these issues and deploy systems that people find useful,” said Tom Brown, co-founder of Anthropic. “Our organization is one of the few in the world that is building foundational models in deep learning research. These models are highly complex, and to develop and train these cutting-edge models, we need to distribute them efficiently across large clusters of GPUs. We are using Amazon EC2 P4 instances extensively today, and we are excited about the upcoming launch of P5 instances. We expect them to deliver substantial price-performance benefits over P4d instances, and they’ll be available at the massive scale required for building next-generation large language models and related products.”

Cohere, a leading pioneer in language AI, empowers every developer and enterprise to build incredible products with world-leading natural language processing (NLP) technology while keeping their data private and secure. “Cohere leads the charge in helping every enterprise harness the power of language AI to explore, generate, search for, and act upon information in a natural and intuitive manner, deploying across multiple cloud platforms in the data environment that works best for each customer,” said Aidan Gomez, CEO at Cohere. “NVIDIA H100-powered Amazon EC2 P5 instances will unleash the ability of businesses to create, grow, and scale faster with its computing power combined with Cohere’s state-of-the-art LLM and generative AI capabilities.”

Hugging Face is on a mission to democratize good machine learning. “As the fastest growing open source community for machine learning, we now provide over 150,000 pre-trained models and 25,000 datasets on our platform for NLP, computer vision, biology, reinforcement learning, and more,” said Julien Chaumond, CTO and co-founder at Hugging Face. “With significant advances in large language models and generative AI, we’re working with AWS to build and contribute the open source models of tomorrow. We’re looking forward to using Amazon EC2 P5 instances via Amazon SageMaker at scale in UltraClusters with EFA to accelerate the delivery of new foundation AI models for everyone.”

Today, more than 450 million people around the world use Pinterest as a visual inspiration platform to shop for products personalized to their taste, find ideas to do offline, and discover the most inspiring creators. “We use deep learning extensively across our platform for use-cases such as labeling and categorizing billions of photos that are uploaded to our platform, and visual search that provides our users the ability to go from inspiration to action,” said David Chaiken, chief architect at Pinterest. “We have built and deployed these use-cases by leveraging AWS GPU instances such as P3 and the latest P4d instances. We are looking forward to using Amazon EC2 P5 instances featuring H100 GPUs, EFA and Ultraclusters to accelerate our product development and bring new Empathetic AI-based experiences to our customers.”

As the leader in multimodal, open-source AI model development and deployment, Stability AI collaborates with public- and private-sector partners to bring this next-generation infrastructure to a global audience. “At Stability AI, our goal is to maximize the accessibility of modern AI to inspire global creativity and innovation,” said Emad Mostaque, CEO of Stability AI. “We initially partnered with AWS in 2021 to build Stable Diffusion, a latent text-to-image diffusion model, using Amazon EC2 P4d instances that we employed at scale to accelerate model training time from months to weeks. As we work on our next generation of open-source generative AI models and expand into new modalities, we are excited to use Amazon EC2 P5 instances in second-generation EC2 UltraClusters. We expect P5 instances will further improve our model training time by up to 4x, enabling us to deliver breakthrough AI more quickly and at a lower cost.”

New Server Designs for Scalable, Efficient AI

Leading up to the release of H100, NVIDIA and AWS engineering teams with expertise in thermal, electrical, and mechanical fields have collaborated to design servers to harness GPUs to deliver AI at scale, with a focus on energy efficiency in AWS infrastructure. GPUs are typically 20x more energy efficient than CPUs for certain AI workloads, with the H100 up to 300x more efficient for LLMs than CPUs.

The joint work has included developing a system thermal design, integrated security and system management, security with the AWS Nitro hardware accelerated hypervisor, and NVIDIA GPUDirect optimizations for AWS custom-EFA network fabric.

Building on AWS and NVIDIA’s work focused on server optimization, the companies have begun collaborating on future server designs to increase the scaling efficiency with subsequent-generation system designs, cooling technologies, and network scalability.

About NVIDIA

Since its founding in 1993, NVIDIA has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.

For more information, visit www.nvidia.com.

About Amazon Web Services

Since 2006, Amazon Web Services has been the world’s most comprehensive and broadly adopted cloud. AWS has been continually expanding its services to support virtually any workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 99 Availability Zones within 31 geographic regions, with announced plans for 15 more Availability Zones and five more AWS Regions in Canada, Israel, Malaysia, New Zealand, and Thailand. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—trust AWS to power their infrastructure, become more agile, and lower costs.

To learn more about AWS, visit aws.amazon.com.

NVIDIA Announces Six RTX Ada Lovelace Architecture GPUs

NVIDIA announced six new NVIDIA RTX Ada Lovelace architecture GPUs for laptops and desktops, which enable creators, engineers and data scientists to meet the demands of the new era of AI, design and the metaverse.

Using the new NVIDIA RTX GPUs with NVIDIA Omniverse, a platform for building and operating metaverse applications, designers can simulate a concept before making it a reality, planners can visualize an entire factory before it is built and engineers can evaluate their designs in real time.

The NVIDIA RTX 5000, RTX 4000, RTX 3500, RTX 3000 and RTX 2000 Ada Generation laptop GPUs deliver breakthrough performance and up to 2x the efficiency of the previous generation to tackle the most demanding workflows. For the desktop, the NVIDIA RTX 4000 Small Form Factor (SFF) Ada Generation GPU features new RT Cores, Tensor Cores and CUDA cores with 20GB of graphics memory to deliver incredible performance in a compact card.

The latest NVIDIA RTX Ada Generation GPUs provide the accelerated computing power required for today’s highly collaborative content-creation, design and AI workflows. A new generation of desktop workstations that combine high-end NVIDIA GPUs and smart networking with the latest Intel CPUs can drive innovation for the next wave of product and building designs, AI-augmented applications and industrial metaverse content.

“Running data-intensive applications like generative AI and real-time digital twins in the metaverse requires advanced computing power,” said Bob Pette, vice president of professional visualization at NVIDIA. “These new NVIDIA RTX GPUs provide the horsepower needed for creators, designers and engineers to accomplish their work from wherever they’re needed.”

NVIDIA RTX Laptops Deliver Creative Power to Professionals Anywhere

NVIDIA’s new laptop GPUs deliver up to double the performance and power efficiency over the previous generation for mobile workstations.

The new GPUs include the latest generations of NVIDIA Max-Q and RTX technologies for optimal energy efficiency and photorealistic graphics, and are backed by NVIDIA Studio technologies for creators. Products with NVIDIA RTX GPUs benefit from RTX optimizations in over 110 creative apps, NVIDIA RTX Enterprise Drivers for the highest levels of stability and performance in creative apps, and exclusive AI-powered NVIDIA tools: Omniverse, Canvas and Broadcast.

Professionals using these laptop GPUs can run advanced technologies like DLSS 3 to increase frame rates by up to 4x compared to the previous generation, and NVIDIA Omniverse Enterprise for real-time collaboration and simulation.

NVIDIA RTX 4000 SFF Enables Enhanced Performance, Productivity

The NVIDIA RTX 4000 SFF GPU offers a new level of performance and efficiency for mini-desktops, powering artists, designers and engineers who prefer small workstations.

By delivering unprecedented rendering and visualization performance to compact workstations, the RTX 4000 SFF GPU enables users to enjoy a fluid experience in computer-aided design, graphic design, data analysis, AI applications and software development. Additionally, systems integrators developing specialized solutions — for example, in healthcare or large-scale displays — can benefit from the card’s combination of performance and compact size.

“The versatile NVIDIA RTX 4000 SFF Ada Generation GPU offers Genetec users performance increases of up to 80% and empowers them to decode, view and analyze more video streams,” said John Burger, product line manager for video appliances at Genetec. “As camera resolutions continue to increase and require additional resources to be decoded, the NVIDIA RTX 4000 SFF offers an ideal solution in a compact form factor for Genetec and its partners.”

Next-Generation RTX Technology

The new RTX desktop and laptop GPUs feature the Ada architecture’s latest technologies, including:

  • CUDA cores: Up to 2x the single-precision floating point throughput of the previous generation.
  • Third-generation RT Cores: Up to 2x the throughput of the previous generation, with the ability to concurrently run ray tracing with either shading or denoising capabilities.
  • Fourth-generation Tensor Cores: Up to 2x faster AI training performance than the previous generation, with expanded support for the FP8 data format.
  • DLSS 3: New levels of realism and interactivity for real-time graphics by multiplying performance with AI.
  • Greater GPU memory:
    1. The RTX 4000 SFF provides 20GB of memory with greater bandwidth than the previous generation. The GPU can transfer data to and from its memory more quickly, resulting in improved graphics, compute and rendering performance.
    2. The new NVIDIA RTX Ada Generation Laptop GPUs provide up to 16GB of graphics memory to handle the largest models, scenes, assemblies and advanced multi-application workflows.
  • Extended-reality capabilities: The RTX 4000 SFF and new NVIDIA RTX laptop GPUs provide support for high-resolution augmented-reality and virtual-reality devices, and deliver the high-performance graphics required for experiencing stunning AR, VR and mixed-reality content.

Availability

Next-generation desktop workstations featuring NVIDIA RTX GPUs will be available starting this month from global workstation manufacturing partners including BOXX, HP Inc., and Lenovo.

The new NVIDIA RTX laptop GPUs will be available starting this month in mobile workstations from global workstation manufacturer partners. The new NVIDIA RTX 4000 SFF GPU will be available from global distribution partners such as Leadtek, PNY and Ryoyo Electro starting in April at an estimated price of $1,250 and from global workstation manufacturers later this year.

To learn more about NVIDIA RTX, watch NVIDIA founder and CEO Jensen Huang’s GTC 2023 keynote. Register free for GTC to attend sessions with NVIDIA and industry leaders.

About NVIDIA

Since its founding in 1993, NVIDIA has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.

For more information, visit www.nvidia.com.

NVIDIA Expands Omniverse Cloud for Industrial Digitalization

NVIDIA announced that NVIDIA Omniverse Cloud, a platform-as-a-service that enables companies to unify digitalization across their core product and business processes, is now available to select enterprises.

NVIDIA has selected Microsoft Azure as the first cloud service provider for Omniverse Cloud, giving enterprises access to the full-stack suite of Omniverse software applications and NVIDIA OVX infrastructure, with the scale and security of Azure cloud services.

The new subscription offering for Omniverse Cloud on Azure makes it easy for automotive teams — from design and engineering to smart factory to marketing — to digitalize their workflows, whether connecting 3D design tools to accelerate vehicle development, building digital twins of automotive factories or running closed-loop simulations to test vehicle performance.

“Every manufactured object, from massive physical facilities to handheld consumer goods, will someday have a digital twin, created to build, operate and optimize the object,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA Omniverse Cloud is the digital-to-physical operating system for industrial digitalization, arriving just in time for the trillions of dollars of new EV, battery and chip factories that are being built.”

Omniverse Cloud Delivers Ultimate Flexibility and Scalability

Through Omniverse Cloud, NVIDIA and Microsoft provide customers a full-stack cloud environment and platform capabilities to design, develop, deploy and manage industrial metaverse applications. Omniverse Cloud also connects with the products that customers use from NVIDIA’s partner ecosystem.

Powered by NVIDIA OVX computing systems, Omniverse Cloud enables enterprise developers to customize foundation applications that are included with the platform-as-a-service:

  1. Omniverse USD Composer (formerly Omniverse Create) – to assemble applications based on the Universal Scene Description (USD) framework, compose industrial virtual worlds and create digital twins (see the brief USD scripting sketch after this list).
  2. Omniverse USD-GDN Publisher – to publish interactive USD applications such as product configurators to the NVIDIA Graphics Delivery Network, enabling streaming of advanced 3D experiences to any device, anywhere.
  3. NVIDIA Isaac Sim – to train and simulate AI-based robots.
  4. NVIDIA DRIVE Sim – to test and validate autonomous vehicles.
  5. Omniverse Replicator – to generate 3D synthetic data to accelerate the training and accuracy of computer vision AI networks.
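For context on the USD framework these foundation applications build on, here is a minimal, illustrative example using Pixar's open-source pxr Python bindings (installable as the usd-core package). It shows plain USD authoring only, not an Omniverse Cloud-specific API, and the file and prim names are placeholders.

```python
# Minimal Universal Scene Description (USD) authoring example using Pixar's
# open-source pxr bindings. Illustrative of the USD framework only; this is
# not an Omniverse Cloud API, and the file and prim names are placeholders.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_cell.usda")     # create a new USD layer on disk
world = UsdGeom.Xform.Define(stage, "/World")        # root transform prim
robot = UsdGeom.Cube.Define(stage, "/World/Robot")   # placeholder geometry for a digital twin
robot.GetSizeAttr().Set(2.0)                         # author an attribute on the prim
stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()                          # write factory_cell.usda
```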

Automotive Makers Adopting Omniverse to Achieve Digitalization

Omniverse Cloud builds on the success of and experience with early Omniverse Enterprise customers, including BMW Group, Geely Lotus and Jaguar Land Rover.

BMW Group, which was the first carmaker to adopt Omniverse to build a fully digitalized smart factory, today announced that it will launch the current Omniverse Enterprise platform across its production network worldwide.

“NVIDIA Omniverse has given us an unprecedented ability to design, build and test complex manufacturing systems, which means we can plan and optimize a next-generation factory completely virtually before we build it in the physical world,” said Milan Nedeljković, board member for production at BMW AG. “This will save us time and resources, increase our sustainability efforts and improve operational efficiencies.”

Geely Lotus is adopting Omniverse Enterprise to build digital twins of factories to optimize manufacturing processes.

Jaguar Land Rover is using Omniverse to generate synthetic data to train AI models, as well as validate perception and control algorithms through real-world driving scenarios. The vehicle maker has integrated Omniverse with its state-of-the-art vehicle dynamics models, virtual electronic control units, virtual automotive networks and cloud infrastructure, enabling teams to rapidly iterate software concepts.

Availability

Omniverse Cloud, powered by NVIDIA OVX computing systems, will be available starting with Microsoft Azure in the second half of the year.

Omniverse Cloud-based services will also be available from a network of leading service providers including WPP, the world’s largest marketing and communications company, which is building services to deliver sustainable and automated content supply chains for major brands worldwide.

To learn more about NVIDIA Omniverse Cloud, watch the GTC keynote. Register free for GTC to attend Omniverse sessions with NVIDIA and industry leaders.

About NVIDIA

Since its founding in 1993, NVIDIA has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.

For more information, visit www.nvidia.com.

Oracle Cloud Infrastructure Selects NVIDIA BlueField-3 DPU

NVIDIA announced that Oracle Cloud Infrastructure (OCI) has selected the NVIDIA BlueField-3 DPU as the latest addition to its networking stack, offering OCI customers a powerful new option for offloading data center tasks from CPUs.

BlueField-3 is NVIDIA’s third-generation data processing unit that enables enterprises to build software-defined, hardware-accelerated IT infrastructures from cloud to data center to edge. It improves performance, efficiency and security in data centers by offloading, accelerating and isolating infrastructure workloads, thus freeing expensive CPU cores to run business applications.

OCI offers a wide range of cloud infrastructure and platform services to its customers to build and run applications and services in the cloud or on premises. By utilizing BlueField-3, OCI is extending its long-established approach of offloading data center infrastructure tasks from CPUs.

“The age of AI demands cloud data center infrastructures to support extraordinary computing requirements,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA’s BlueField-3 DPU enables this advance, transforming traditional cloud computing environments into accelerated, energy-efficient and secure infrastructure to process the demanding workloads of generative AI.”

“Oracle Cloud Infrastructure offers enterprise customers nearly unparalleled accessibility to AI and scientific computing infrastructure with the power to transform industries,” said Clay Magouyrk, executive vice president of Oracle Cloud Infrastructure. “NVIDIA BlueField-3 DPUs are a key component of our strategy to provide state-of-the-art, sustainable cloud infrastructure with extreme performance.”

BlueField-3 Boosts Data Center Performance, Efficiency and Security

BlueField-3 is the foundation of the data center control plane that delivers cloud and AI services. Tests show power reductions of up to 24% on servers using NVIDIA BlueField DPUs compared to servers without DPUs.

The DPUs support Ethernet and InfiniBand connectivity at up to 400 gigabits per second and provide 4x more compute power, up to 4x faster crypto acceleration, 2x faster storage processing and 4x more memory bandwidth compared to the previous generation of BlueField.

BlueField also delivers full backward-compatibility through the NVIDIA DOCA software framework. DOCA equips developers with advanced, zero-trust security capabilities, including the ability to create metered cloud services that control resource access, validate each application and user, isolate potentially compromised machines and help protect data from breaches and theft.

Watch Huang discuss the NVIDIA BlueField-3 DPU in his GTC keynote.

About NVIDIA

Since its founding in 1993, NVIDIA has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.

For more information, visit www.nvidia.com.