Channel: GitHub – ETCentric

Google Key Transparency Project to Boost Messaging Security

To improve encryption, Google has launched an open source project, Key Transparency, a follow-up to its Certificate Transparency effort; both focus on verifying the authenticity of the person or server a user believes they are connecting to. Keybase, a collection of verified users and their “cryptographic credentials,” is one solution, but Google wants contacts to be verified systematically, and in a privacy-protected way, by having the address “double-check” itself.

TechCrunch reports that Google collaborated with the CONIKS team, Open Whisper Systems and the security team at Yahoo on Key Transparency, which relies on a large-scale database of accounts (and their public keys). The encoded system, which is “obscure to an attacker but verifiable by users,” is “efficient, auditable, highly scalable … and potentially integrated into credential-tracking services like Keybase or into secure communications.”

An overview of the complicated technical method is available on GitHub.

InfoSecurity Magazine reports that the “new toolkit for encryption key transparency designed to help developers improve messaging security” is a “generic, secure way to discover a recipient’s public keys for addressing messages correctly.” PGP and other current systems “require users to manually verify recipients’ accounts in-person.”

Google security/privacy engineers Ryan Hurst and Gary Belvin report that, “One of our goals with Key Transparency was to simplify this process and create infrastructure that allows making it usable by non-experts.”

“Users should be able to see all the keys that have been attached to an account, while making any attempt to tamper with the record publicly visible,” they said. “This also ensures that senders will always use the same keys that account owners are verifying.”
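The core idea can be sketched with a toy example. Key Transparency’s actual design uses a sparse Merkle tree with signed map heads, but the following Python sketch (with invented names, not Google’s API) shows the basic property being described: a sender accepts a recipient’s key only if the (account, key) binding appears in a published, append-only log whose head anyone can recompute, so a substituted key is either absent or leaves visible evidence.

```python
import hashlib
from dataclasses import dataclass

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

@dataclass
class LogEntry:
    account: str
    public_key: bytes

class ToyTransparencyLog:
    """Append-only log: each head hash commits to every earlier entry."""
    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32

    def append(self, entry: LogEntry) -> None:
        self.entries.append(entry)
        self.head = h(self.head + h(entry.account.encode() + entry.public_key))

def verify_binding(entries, published_head, account, claimed_key) -> bool:
    """Recompute the head from the published entries and confirm that the
    claimed (account, key) binding really appears in the log."""
    head, seen = b"\x00" * 32, False
    for e in entries:
        head = h(head + h(e.account.encode() + e.public_key))
        seen = seen or (e.account == account and e.public_key == claimed_key)
    return seen and head == published_head

log = ToyTransparencyLog()
log.append(LogEntry("alice@example.com", b"alice-public-key"))
assert verify_binding(log.entries, log.head, "alice@example.com", b"alice-public-key")
```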

Many cryptographers have welcomed the new initiative, but Venafi chief cybersecurity strategist Kevin Bocek points out that, “its success will depend on developer interest.”

“Building a database of public keys not linked to digital certificates has been attempted before with PGP and never gained widespread adoption,” he adds.


Microsoft Camera Rig Gives HoloLens Developers Video Hack

Microsoft has come up with a new camera rig that allows HoloLens mixed reality app makers to capture video from a HoloLens and make it easier to show a person interacting with that app, something Microsoft dubs “spectator view.” The details of the hardware-software combo were published as open source on the HoloLens’ GitHub page. The HoloLens headset is wireless, which lets the user move around the room freely, and is based on four cameras, lightly tinted lenses and a holographic processing unit.

The Verge calls the new camera set-up, “a small step, but it’s one headed in the right direction for head-mounted displays,” that comes at a time when VR/AR companies are “running into ethical boundaries as well, sometimes relying on special effects rather than ‘real’ video to get their points across.”

Showing off apps has become a challenge for HoloLens app developers, who contend with the small field of view that makes CGI images look tiny. Up until now, they have relied on the HoloLens’ built-in camera, which records video from the headset wearer’s point of view; the downsides are that the video quality is low and the outward-facing camera “can’t provide visuals of the person actually wearing the HoloLens.”

“Mixed reality capture is kind of the equivalent of the camera on your cell phone,” said Microsoft head of HoloLens business strategy Ben Reed. “It’s handy, it’s convenient, it does a good job of what it’s meant to do, but it’s not designed to be broadcast quality, what you’d see on TV. We realized we needed another way to show other people what the wearer is seeing.”

Microsoft’s own solution is an in-house rig that uses a RED Dragon camera, way out of the price range of most app developers.

This new “spectator view” solution simply requires any camera with an HDMI output. The user mounts the HoloLens to the camera with a custom-made mount, stabilizes the rig on a tripod, and wirelessly connects the headset to a PC running Unity to share positioning data. The camera’s video then travels to the same PC over an HDMI cable, so the PC receives both positioning data and video. Unity takes care of processing the data, and the result is “video of people walking around the room wearing HoloLens headsets” as well as “the apps and the digital objects and the games they’re playing around them.”
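Conceptually, the PC-side step is a compositing job: render the holograms from the external camera’s point of view (using the shared positioning data) and blend them over the HDMI video. The sketch below illustrates that blend in Python with NumPy; the capture and render functions are stand-ins, since the real pipeline runs inside Unity.

```python
import numpy as np

def composite_spectator_frame(camera_rgb: np.ndarray, holograms_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend the rendered hologram layer (RGBA, values 0-1) over the
    camera frame (RGB, values 0-1), which is what the compositor conceptually does."""
    alpha = holograms_rgba[..., 3:4]
    return holograms_rgba[..., :3] * alpha + camera_rgb * (1.0 - alpha)

# Hypothetical stand-ins for the real capture card and Unity renderer.
def grab_hdmi_frame() -> np.ndarray:
    return np.random.rand(720, 1280, 3)             # camera video via HDMI capture

def render_holograms(pose) -> np.ndarray:
    layer = np.zeros((720, 1280, 4))
    layer[200:400, 500:700] = [1.0, 0.5, 0.0, 0.8]  # a placeholder "hologram"
    return layer

frame = composite_spectator_frame(grab_hdmi_frame(), render_holograms(pose=None))
```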

Reed notes that HoloLens customers have been requesting this new tool, although “it’s still unclear exactly how many HoloLens users are out there in the world,” with Microsoft saying only that sales are “in thousands.” Although the new rig may not impact that many people, says The Verge, in the “rapidly advancing world of augmented reality, virtual reality, and mixed reality, it at least offers a hack for making videos that feel, well, real.”

The Microsoft developer site provides extensive details regarding the spectator view approach.

Amazon Promotes Alexa With SDK, Revenue for Developers

Amazon is going full bore promoting its virtual assistant Alexa. In an effort to make it available on more devices, the company has debuted the Alexa Voice Service Device SDK toolset, which lets developers integrate a fully functional version of Alexa into their devices, offering speech recognition and all the other Alexa capabilities such as notifications, weather reports, streaming media and thousands of voice apps. Amazon is providing additional incentive to developers by paying those whose voice apps demonstrate customer engagement.

TechCrunch reports that the SDK was “previously available in an invite-only developer preview period,” during which “over 50 commercial device makers have been working to add Alexa to their products.” For example, Technicolor added Alexa to its Home Networking Gateway and Extender, and Huawei’s Mate 9 smartphone includes Alexa as an option.

The new SDK, which is available through a free, open source license on GitHub, fits into Amazon’s strategy to “bring Alexa to as many devices as possible.” Other tools include “hardware development kits, APIs, and documentation on how to create Alexa-enabled products.” Some of those products are the Ecobee4 smart thermostat, the Triby Internet radio, and speakers, alarm clocks and smart watches.
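The AVS Device SDK itself is a C++ library, so the snippet below is only a hypothetical Python pseudo-client that shows the general shape of such an integration (stream audio up, receive directives back, dispatch them to handlers on the device); none of the names are Amazon’s actual API.

```python
from typing import Callable, Dict

class PseudoVoiceClient:
    """Illustrative only: routes directives (e.g. 'Speak', 'Play') received from
    a cloud voice service to handlers registered by the device maker."""
    def __init__(self):
        self.handlers: Dict[str, Callable[[dict], None]] = {}

    def on_directive(self, name: str, handler: Callable[[dict], None]) -> None:
        self.handlers[name] = handler

    def dispatch(self, directive: dict) -> None:
        # A real client would also stream microphone audio to the cloud; here we
        # only route directives that have already been received.
        handler = self.handlers.get(directive["name"])
        if handler:
            handler(directive["payload"])

client = PseudoVoiceClient()
client.on_directive("Speak", lambda p: print("TTS:", p["text"]))
client.on_directive("Play", lambda p: print("Stream:", p["url"]))
client.dispatch({"name": "Speak", "payload": {"text": "Here is today's weather."}})
```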

VentureBeat states that, “Amazon appears to be the first of the major tech companies with AI assistants and third-party integrations — like Google, Samsung, Apple, and Microsoft — with a program to compensate developers based on engagement created by their voice app.” To measure engagement, Amazon will look at “minutes of usage, new customers, customer ratings, and return visitors.”

Developers in the U.S., U.K. and Germany are eligible, and “developers with a skill active in all three countries will receive separate payments based on engagement in each country.” Developers have already begun making games for Alexa, but Amazon developer evangelist Paul Cutsinger said that “those working in education, food and drink, music, health and fitness, productivity, and lifestyle” are now eligible as well.

The Verge reports that Amazon is also outfitting Arizona State University’s student engineering dorms with 1,600 Echo Dots, in a “program that encourages engineering students to practice voice user interface development skills on consumer hardware.” Students moving into the work/live Tooker House can “opt in to the program and receive an Echo Dot for their dorm room.”

ASU engineering students can also enroll in “one of three upcoming fall courses that teach concepts like voice user interface development, which includes Alexa skills.” By putting the Alexa Skills Kit into students’ hands, Amazon is clearly hopeful that ASU’s engineering students will also build skills, “which can ideally be incorporated into student project programs, or solve needs in the local community.”

Google Debuts Software Tools for AR App, Web Developers

Google just released ARCore, software to enable developers to more easily create augmented reality apps. The company took its first step into augmented reality in 2014, when it introduced Tango, its 3D mapping system. But it had a hard time getting Android phone makers to make the necessary hardware upgrades to foster widespread AR adoption. Google now hopes that, rather than expensive hardware upgrades, developers will be more enticed by its software solution for allowing apps and sites to track physical objects and overlay them with virtual images.

Bloomberg reports that, “ARCore will be available for developers to preview on Tuesday with Google’s own Pixel phones and Samsung Electronics’s S8 smartphone.” The software will be fully launched, with more Android devices, this winter.

“We have a path to getting this on north of 100 million phones very quickly,” said Google VR/AR head Clay Bavor. Rival Apple is making its own AR software, ARKit, available “on about half a billion iPhones and iPads later this year.” Google “imagines a broad array of applications for its AR technology,” and “has also hinted at commerce uses,” with trials using Tango for BMW and Gap virtual showrooms.

Although “Tango’s specialized cameras and depth sensors give it more capabilities than ARCore and Apple’s ARKit,” Google says ARCore offers features, including “light detection and the ability to place and manipulate virtual objects easily on real surfaces,” that come close to matching Tango’s abilities. The key challenge is to “convince Android software creators and device makers to adopt its software.”

Whereas Apple can “easily roll out software updates for a billion-plus devices,” Google has to deal with numerous Android and smartphone partners. Still, Android engineering vice president Dave Burke is confident that, “rivals like Facebook and Snap will adopt the tool for their Android apps, along with other companies that haven’t tinkered with the nascent tech yet.”

Google’s blog states the company is “targeting 100 million devices at the end of the preview,” and “working with manufacturers like Samsung, Huawei, LG, Asus and others to make this possible.” ARCore “works with Java/OpenGL, Unity and Unreal and focuses on three things” — motion tracking, environmental understanding and light estimation.
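Light estimation is the easiest of the three to show in isolation: the session reports an ambient intensity for each camera frame, and the renderer scales virtual objects’ shading so they sit naturally in the scene. The sketch below is only a conceptual Python illustration of that step; ARCore’s real interfaces are Java/OpenGL, Unity and Unreal.

```python
import numpy as np

def apply_light_estimate(albedo: np.ndarray, ambient_intensity: float) -> np.ndarray:
    """Scale a virtual object's base color by the estimated ambient light so it
    matches the camera image (roughly 0.0 = dark room, 1.0 = neutral lighting)."""
    return np.clip(albedo * ambient_intensity, 0.0, 1.0)

virtual_object_color = np.array([0.9, 0.2, 0.2])   # bright red albedo
shaded = apply_light_estimate(virtual_object_color, ambient_intensity=0.4)  # dim room
```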

The company has also invested in apps and services to support AR creation. “We built Blocks and Tilt Brush to make it easy for anyone to quickly create great 3D content for use in AR apps … [and] we’re also working on Visual Positioning Service (VPS), a service which will enable world scale AR experiences well beyond a tabletop.” Google will also release “prototype browsers for web developers so they can start experimenting with AR.” Google encouraged feedback via GitHub.

VR Industry Forum Draft Guidelines Push for Open Ecosystem

Over the weekend, the Virtual Reality Industry Forum (VRIF) released its draft VR and 360 video production and distribution guidelines at IBC 2017 in Amsterdam. The draft document begins with an intro section suggesting best practices for VR/360 production, including experiences with three degrees of freedom (3DOF). It then makes specific recommendations for the technical aspects of visual and audio VR/360 content production, media and presentation profiles, and content security. VRIF aims to release the full guidelines, with an emphasis on an open ecosystem, at CES 2018 in January.

“They will be supported by interoperability test streams that enable rapid and independent development and deployments of VR services based on VRIF’s guidelines,” notes the press release.

“What is so unique about these guidelines is they take the interests of all ecosystem participants into account and focus on important, but often overlooked factors,” said Paul Higgs of Huawei, chair of the Guidelines Working Group and board member of VRIF. “The VR industry is starting to move away from proprietary systems and toward large scale solutions, and the Guidelines facilitate that transition.”

“The purpose of presenting a draft of the guidelines at IBC is to give the public a chance to review them and identify any issues, so that we can incorporate as much relevant information as possible,” he added.

VRIF has grown to 40 members, including Dolby, DTS, Huawei, Intel, Nokia, Qualcomm, Sony Pictures, Technicolor and Verizon, among others.

Other standardization efforts related to VR and immersive experiences are underway through Khronos Group’s VR Standard Initiative, the IEEE Digital Senses Initiative, SMPTE, and the Industry of VR Alliance.

VRIF is seeking feedback on the draft standards via its website and GitHub by October 31, 2017.

Tech Firms Sign a Cybersecurity Pledge to Protect Customers

Led by tech titans Facebook and Microsoft, more than 30 tech companies have signed a Cybersecurity Tech Accord as part of their efforts to protect customers from cyberattacks and “the misuse of their technology.” According to the agreement, tech companies pledge not to assist governments that initiate attacks against “innocent civilians and enterprises.” Among the signatories are companies that power Internet technology and information infrastructure, including Cisco, Cloudflare, Dell, HP, LinkedIn, Nielsen, Nokia, Oracle, Symantec and VMware.

Additional signatories of the Cybersecurity Tech Accord include ABB, ARM, Avast, Bitdefender, BT, CA Technologies, DataStax, DocuSign, Fastly, FireEye, F-Secure, GitHub, Guardtime, HPE, Intuit, Juniper Networks, RSA, SAP, Stripe, Telefónica, Tenable and Trend Micro.

Amazon, Apple, Google and Twitter have not signed, but the Accord “remains open to consideration of new private sector signatories, large or small and regardless of sector.”

“The devastating attacks from the past year demonstrate that cybersecurity is not just about what any single company can do but also about what we can all do together,” said Microsoft president Brad Smith. “This tech sector accord will help us take a principled path towards more effective steps to work together and defend customers around the world.”

According to the press release, “The companies will do more to empower developers and the people and businesses that use their technology, helping them improve their capacity for protecting themselves. This may include joint work on new security practices and new features the companies can deploy in their individual products and services.”

The Accord’s tenets “commit the companies to providing stronger defenses against cyberattacks while also helping to ‘empower’ developers, customers, and businesses to protect themselves,” explains VentureBeat. “The most interesting vow within the pledge, however, is the ‘no offense’ clause, which states: The companies will not help governments launch cyberattacks and will protect against tampering or exploitation of their products and services through every stage of technology development, design and distribution.”

“The impetus for the effort came largely from Mr. Smith, who has been arguing for several years that the world needs a ‘digital Geneva Convention’ that sets norms of behavior for cyberspace just as the Geneva Conventions set rules for the conduct of war in the physical world,” The New York Times reports. “Although there was some progress in setting basic norms of behavior in cyberspace through a United Nations-organized group of experts several years ago, the movement has since faltered.”

Tech Accord participants plan to meet during the RSA Conference in San Francisco this week.

Facebook Suspends Quiz App Linked to Cambridge University

Following the Cambridge Analytica debacle, Facebook is scrutinizing another quiz app, myPersonality, created by University of Cambridge academics. According to New Scientist, the myPersonality app collected data from six million people, about 40 percent of whom agreed to share their Facebook information. The app creator countered that Facebook had known about myPersonality for years. But the app is also being investigated by Britain’s Information Commissioner’s Office over whether the data was properly anonymized.

Business Insider reports that, according to New Scientist, Facebook “secured the information of about 3 million user profiles.” This app actually has a connection to Cambridge Analytica; Aleksandr Kogan, who harvested that data, was a myPersonality project collaborator until 2014.

The University of Cambridge website says that the university’s Psychometrics Centre deputy director David Stillwell created myPersonality in 2007. It adds that Cambridge academics also shared the data “with registered academic collaborators around the world … resulting in over 45 scientific publications in peer-reviewed journals.”

Facebook VP of product partnerships Ime Archibong said, “If myPersonality refuses to cooperate or fails our audit, we will ban it.” Facebook earlier suspended 200 apps “and investigated thousands of others in case they misused people’s data.”

In the U.K., the Information Commissioner’s Office is looking into a New Scientist claim that, “a username and password to access some of the data were shared by a lecturer on GitHub.” Stillwell created the app before he joined Cambridge, said a spokesperson who added that the app did not go through the university’s “ethical approval processes.”

The New York Times reports that, “the Justice Department and the FBI are investigating Cambridge Analytica, the now-defunct political data firm, and have sought to question former employees and banks that handled its business.” But, it adds, “prosecutors provided few other details, and the inquiry appears to be in its early stages, with investigators seeking an overview of the company and its business practices.”

Justice Department assistant chief of its securities and financial fraud division Brian Kidd is one of the prosecutors on the case. Kidd, with another prosecutor and an FBI agent, interviewed former Cambridge Analytica employee Christopher Wylie.

Microsoft Is Acquiring GitHub in Stock Deal Worth $7.5 Billion

Microsoft confirmed that it is purchasing GitHub in an all-stock deal valued at $7.5 billion. Acquiring GitHub — a service used by startups and major names such as Microsoft and Google to store code and collaborate, and an essential tool for 28 million developers — is a logical move for the Washington-based tech giant. With CEO Satya Nadella at the helm, Microsoft has been increasing its efforts to serve software developers through cloud services. With GitHub in its arsenal, “Microsoft would be rolling up a crucial part of the ecosystem,” notes Recode.

“The era of the intelligent cloud and intelligent edge is upon us,” wrote Nadella on the Microsoft Blog. “Computing is becoming embedded in the world, with every part of our daily life and work and every aspect of our society and economy being transformed by digital technology.”

“Developers are the builders of this new era, writing the world’s code,” he adds. “And GitHub is their home.”

Nadella explains that every industry is being impacted by technology and developer workflows will increasingly influence business practices “from marketing, sales and service, to IT and HR.” He also identifies Microsoft’s commitment to open source in this process.

Nadella listed three specific opportunities for Microsoft and GitHub moving forward:

  • First, we will empower developers at every stage of the development lifecycle — from ideation to collaboration to deployment to the cloud.
  • Second, we will accelerate enterprise developers’ use of GitHub, with our direct sales and partner channels and access to Microsoft’s global cloud infrastructure and services.
  • Finally, we will bring Microsoft’s developer tools and services to new audiences.

“Microsoft is now one of the biggest contributors to GitHub, and as Nadella moves the company away from complete dependence on the Windows operating system to more in-house development on Linux, the company needs new ways to connect with the broader developer community,” reports Bloomberg. “GitHub preferred selling the company to going public and chose Microsoft partially because it was impressed by Nadella.”

The deal is expected to close later this year. According to Nadella, “GitHub will be led by CEO Nat Friedman, an open source veteran and founder of Xamarin, who will continue to report to Microsoft Cloud + AI Group executive vice president Scott Guthrie; GitHub CEO and co-founder Chris Wanstrath will be a technical fellow at Microsoft.”


The Reel Thing: Academy Debuts Digital Source Master Specs

At The Reel Thing conference in Hollywood, the Academy’s Science and Technology Council managing director Andy Maltz and Dr. Wolfgang Ruppel of Germany’s RheinMain University of Applied Sciences introduced the specifications of the Academy Digital Source Master, built on a suite of SMPTE standards. Maltz described the background that led to the Digital Source Master. “The Digital Dilemma, published in 2007, identified open source software and digital file format standardization as key components to the solution,” he said.

Maltz explained the rationale for the creation of the Digital Source Master. “The ‘finished movie’ was never defined,” he said. “As a result there is a large variety of digital image files and metadata that gets delivered from final mastering. This makes reformatting for distribution and preparation for long-term archiving very labor intensive and error prone.” Ruppel revealed that the specifications for the Academy Digital Source Master just went live on the ACESCentral site.

The Academy Digital Source Master includes the already-established ACES2065-1 (Academy Color Encoding Specification), which defines a common color encoding method, as well as a specialization of IMF defined in Application #5 (SMPTE ST 2067-50) that adds ACES2065-1 images; the Academy Digital Source Master specification adds Look Modification Transforms (LMTs) to the package.

An LMT is an optional element used in combination with an ACES Output Transform to establish a creative “look.” The Digital Source Master package may also contain “sidecar assets” with composition-related metadata, as specified in SMPTE ST 2067-9. Ruppel noted that IMF #5 provides for human-readable metadata, and that the open-source IMF Tool will be ACES-ready in Q4 2018 and is already on GitHub.

“With the publication of the Academy DSM spec, and the recent launch of the Academy Software Foundation, two major steps toward solving the Digital Dilemma are now realized,” said Maltz. He and Ruppel added that the Academy Sci-Tech Council will sponsor a plugfest in October at the Pickford Center, where vendors will have the chance to test interoperability.

“ADSM is the solution for delivery and archiving of ACES master file sets and a future-proof data structure, based on industry requirements of all major Hollywood studios,” Maltz concluded. “Open Source software (with both the IMF Tool and C++ libraries) enables sustainable archiving and broad access.”

To review and comment (for free) on the Academy DSM spec, visit ACESCentral. For more information on the Academy Software Foundation (and to sign up to participate — also for free), visit here.

Google Opens Titan Security Key Availability to All Consumers

At its Cloud Next 2018 conference, Google debuted the Titan Security Key, its version of a FIDO (Fast Identity Online) physical device to authenticate logins over Bluetooth. Now, only a few weeks after the announcement, Google has made it available for purchase at $50 in its Google Play Store. Google Cloud enterprise customers have been able to access the Titan Security Key for the past two months. The product comes with a USB key, a Bluetooth Low Energy key, and an adapter for devices with USB Type-C ports.

VentureBeat reports that the Titan Security Key’s price is “roughly equivalent to the price of a Yubikey, the current FIDO standard-bearer.” But Google product management director for information security Sam Srinivas emphasized, “it’s not meant to compete with other FIDO keys on the market … [but] rather is for customers who want security keys and trust Google.”

Google has been working with Yubico, NXP, Dropbox, Facebook, GitHub, Salesforce, Stripe and Twitter among others since 2014 to develop the nonprofit FIDO Alliance standards and protocols, “including the new Worldwide Web Consortium’s Web Authentication API.” When a user registers a FIDO device “with an online service, it creates a key pair: an on-device, offline private key, and an online public key.” The device prompts the user for a PIN code, password, fingerprint or voice to “prove possession” of the private key.
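The key-pair mechanics described above can be shown with any signature scheme. The sketch below uses Ed25519 from the Python “cryptography” package to illustrate registration and login; the real FIDO/CTAP protocol additionally binds the origin, attests the device and keeps signature counters, all omitted here.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the security key generates a key pair, keeps the private key
# on the device, and hands only the public key to the online service.
device_key = Ed25519PrivateKey.generate()
stored_public_key = device_key.public_key()

# Login: the service sends a fresh random challenge, the device signs it, and
# the service checks the signature against the stored public key.
challenge = os.urandom(32)
assertion = device_key.sign(challenge)
stored_public_key.verify(assertion, challenge)   # raises InvalidSignature if forged
```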

The FIDO Alliance mission is to make it easier to “log into apps, websites, and services securely, and to reduce the amount of work required for developers,” and Google reports that FIDO keys have prevented phishing attempts on its 100,000+ employees. Google product manager Christiaan Brand noted that the FIDO keys are an improvement over SMS-based systems because the latter are “too confusing.”

“[And for that reason,] even if they wildly improve security above baseline, they can be phished,” he added. He’s right, since “it’s relatively trivial for hackers to impersonate someone and convince a cell phone provider to redirect their text messages to another number … [and] fooling someone into giving up their password isn’t much harder.”

Google Prompt sends two-factor login prompts to Android phones or, with iOS, to Google Search, and also is “one of several that offers token-based authentication (via the Google Authenticator app), which generates unique, offline passcodes — hashes — every few seconds.” But Srinivas stressed that, “there’s no substitute for a physical key,” which would stop even a hacker who’s stolen the password and two-factor code.
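Those “unique, offline passcodes” are time-based one-time passwords (TOTP, RFC 6238): an HMAC over the current 30-second time step, truncated to six digits. A minimal Python implementation of the standard algorithm (not Google’s code) looks like this:

```python
import base64
import hmac
import struct
import time
from hashlib import sha1

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # current time step
    digest = hmac.new(key, struct.pack(">Q", counter), sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # changes every 30 seconds
```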

Google’s Advanced Protection Program, which is aimed at protecting “high-profile targets against hacking,” requires a physical key. Srinivas said that, with regard to the Titan Security Key, the company will “run awareness campaigns targeted at politicians, business executives, and other people who it thinks need security the most.”

IBM Is Buying Red Hat, Aims to Be Top Hybrid Cloud Provider

IBM and open-source software provider Red Hat announced that they have reached an acquisition agreement. Marking what will be the third-largest tech acquisition in U.S. history, IBM will purchase all issued and outstanding common shares of Red Hat in a deal valued at approximately $34 billion. Red Hat is the largest distributor of open-source operating system Linux. The deal reflects IBM’s ambitions for a piece of the fast-growing cloud computing market. “The acquisition of Red Hat is a game-changer,” said Ginni Rometty, IBM chair, president and chief exec. “It changes everything about the cloud market.”

“IBM will become the world’s #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses,” she added. “Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs. The next 80 percent is about unlocking real business value and driving growth. This is the next chapter of the cloud. It requires shifting business applications to hybrid cloud, extracting more data and optimizing every part of the business, from supply chains to sales.”

According to the press release, IBM plans to provide an “open approach to cloud, featuring unprecedented security and portability … across multiple public and private clouds, all with consistent cloud management.”

Additionally, the company will “maintain Red Hat’s open source innovation legacy, scaling its vast technology portfolio and empowering its widespread developer community,” and Red Hat will “operate as a distinct unit within IBM’s Hybrid Cloud team.”

According to Recode, the deal “comes at an acquisitive time in the enterprise space: Microsoft made a splash with its $7.5 billion purchase of Github earlier this year; Amazon and Google are also striving to gain the edge in cloud computing. Despite its pre-Web 1.0 dominance, IBM has struggled for relevance in this age, and has seen its share price fall by 30 percent over the last five years. It is clearly betting that a big acquisition can change that.”

North Carolina-based Red Hat was founded 25 years ago and is currently the largest distributor of the popular Linux operating system.

“With the deal for Red Hat, IBM is trying to position itself as a kind of corporate ‘Switzerland’ in cloud computing — a trusted partner of businesses that are moving to the cloud, but are leery of becoming dependent on one major cloud supplier,” notes The New York Times. “In the cloud model, software developers write applications that run on remote data centers. The advantage can be lower costs and faster development of new business software.”

“Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise,” said Jim Whitehurst, president and CEO of Red Hat. “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation.”

“IBM may have found something more elementary than ‘Watson’ to save its flagging business,” suggests TechCrunch. “Though the acquisition of Red Hat is by no means a guaranteed victory” for IBM, it could prove to “be a better bet for ‘Big Blue’ than an artificial intelligence program that was always more hype than reality.”

Facebook Introduces Open-Source Image Processing Library

Facebook unveiled Spectrum, an open-source image processing library to help improve the quality and reliability of images uploaded through its own apps. Spectrum, which Facebook first showed publicly and launched in beta in November, is now on GitHub, available to the developer community. As higher quality cameras on smartphones have become a key selling point, consumers are dealing with larger image files, which can be a stumbling block since they eat up more device memory and more network bandwidth.

VentureBeat reports that this problem is why “apps such as WhatsApp and Facebook compress images.” The trade-off of compression, however, is image quality. “What was a 3MB picture at 2980 x 2384 pixel resolution could be roughly a fifth that size when displayed in the app, which translates to reduced clarity.”

Spectrum — defined as a “client-side image transcoding library for both Android and iOS apps” — also reduces file size (which means faster uploads and less data consumption) but, via a “declarative” API, makes it easier to control image quality without the app developer needing to write additional code. “In short, rather than telling an app step by step how an image should be transcoded, Spectrum allows developers to stipulate what they want done — and Spectrum takes care of the orchestration.”
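The “declarative” contrast can be made concrete with a toy analogue in Python using Pillow: the caller states the desired outcome and a single function decides the decode, resize and re-encode steps. This is only an illustration of the idea; Spectrum’s actual APIs are native Android and iOS libraries.

```python
from dataclasses import dataclass
from PIL import Image   # Pillow, used here only as a stand-in encoder/decoder

@dataclass
class TranscodeRequest:               # the caller declares what it wants...
    max_edge: int = 2048
    fmt: str = "JPEG"
    quality: int = 80

def transcode(src_path: str, dst_path: str, req: TranscodeRequest) -> None:
    """...and the library decides how: decode, shrink to fit, re-encode."""
    img = Image.open(src_path)
    img.thumbnail((req.max_edge, req.max_edge))   # preserves aspect ratio, only shrinks
    img.save(dst_path, format=req.fmt, quality=req.quality)

# Example usage (assumes a local photo.jpg):
# transcode("photo.jpg", "upload.jpg", TranscodeRequest(max_edge=1280, quality=75))
```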

Spectrum “prefers a lossless approach when cropping and rotating JPEG images,” said Facebook, but when resizing, it “optimizes the interplay between decoder sampling and pixel-perfect resizing.” Spectrum also “integrates with native image compression libraries, including MozJpeg, a JPEG encoder launched by Mozilla’s research team … which can reduce a file size by 10-15 percent in preparation for upload.”

This integration lets Spectrum control advanced parameters including chroma subsampling, “which is a compression practice that attributes less resolution to an image’s color in favor of luminance data.” For images requiring more defined colors (especially those involving illustrations or sharp images), Spectrum “intervenes.”
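Chroma subsampling itself is simple to picture: luma keeps full resolution while the two color planes keep only every second row and column (4:2:0). A rough NumPy sketch of the sampling step, unrelated to Spectrum’s own code:

```python
import numpy as np

def subsample_420(ycbcr: np.ndarray):
    """ycbcr: H x W x 3 image. Returns full-resolution luma plus the two chroma
    planes sampled at half resolution in both directions."""
    y = ycbcr[..., 0]
    cb = ycbcr[::2, ::2, 1]
    cr = ycbcr[::2, ::2, 2]
    return y, cb, cr
```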

According to Facebook, “the consistent API makes these features accessible to developers who are not image experts.” Facebook reported that it’s been in development on Spectrum for 18 months, “gradually incorporat[ing] its own apps.” During its beta, Facebook “gathered input” and incorporated fixes and “support for less common” chroma subsampling in JPEG files.

Dropbox, Google and Sony Debut Tech at Sundance Festival

At the Sundance Film Festival, tech companies now pitch new tools to the M&E industry. This year, Dropbox is offering a time-based commenting feature for video files, and Google and Sony are open-sourcing a tool that will simplify cloud rendering. Dropbox’s new feature will aid audio and video review by adding time-based commenting. Google, in partnership with Sony Pictures Imageworks, will introduce OpenCue, which breaks down rendering steps and then schedules and manages the job across rendering farms.

Variety reports, with Dropbox’s new time-based commenting feature, “anyone working on a project [will be able] to leave comments at a specific location within a video, making it easier to directly pinpoint to issues within a media file.”

“Instead of commenting ‘There’s a popping noise on the soundtrack about a minute in,’ reviewers can place a comment at the 0:51 mark that says, ‘Remove popping noise’,” explained the company’s blog post. Users with Dropbox Professional, Business Advanced, Enterprise or Education accounts will be able to avail themselves of the new feature.

OpenCue, the tool launched by Google and Sony Pictures Imageworks, “helps studios to manage their rendering cues.” Code-named Cue when it was internal at Sony, the original project “has been used to render hundreds of movies across 150,000 cores, housed both in Sony’s own data center as well as in the Google Cloud.” OpenCue can also be used for on-premise rendering. As members of the Academy Software Foundation, Google and Sony open-sourced the project, which is available on Sony’s GitHub pages and was released under the Apache license.
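The scheduling idea (break a shot’s frame range into chunks, then hand chunks to whichever render hosts are free, on premises or in the cloud) can be sketched generically; the names below are invented for illustration and are not OpenCue’s actual Python API.

```python
from dataclasses import dataclass

@dataclass
class RenderTask:
    shot: str
    start_frame: int
    end_frame: int

def split_job(shot: str, first: int, last: int, chunk: int = 10) -> list:
    """Chop a frame range into fixed-size chunks a dispatcher can assign to hosts."""
    return [RenderTask(shot, f, min(f + chunk - 1, last))
            for f in range(first, last + 1, chunk)]

tasks = split_job("seq010_shot020", first=1001, last=1100, chunk=25)
# A dispatcher would now assign each task to an idle render node, track progress,
# and retry any chunk that fails before the shot is marked complete.
```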

TechCrunch adds that Google’s effort to “bring the Hollywood studios to its Cloud Platform” most recently included “the launch of its Los Angeles cloud region last year, as well as its acquisition of the Zync cloud renderer back in 2014.” At Imageworks, it says, “Cue 3 was actually [the company’s] internal queuing system, which is at least 15 years old.” Google worked with Imageworks to “open-source the system” and both companies scaled it to 150,000 cores.

“As content production continues to accelerate across the globe, visual effects studios are increasingly turning towards the cloud to keep up with demand for high-quality content,” wrote Google product manager Todd Prives. “The scalability and the security that the cloud offers provides (sic) studios with the tools needed to adapt to today’s fast-paced, global production schedules.” Previously, Sony “open-sourced and contributed to tools like OpenColorIO and Alembic.”

At Sundance, Variety noted, “more than two-thirds of the films premiering … this year were made with the help of Dropbox,” and Google Cloud has been used to render parts of “The Jungle Book” with 14,000 cores and more than 360,000 hours of computing.

Password-Free Logins Getting Closer to Becoming a Reality

WebAuthn, with the approval of the World Wide Web Consortium (W3C) and the FIDO Alliance, just became an official web standard for password-free logins. After W3C and the FIDO Alliance first introduced it in November 2015, WebAuthn gained the support of many W3C contributors including Airbnb, Alibaba, Apple, Google, IBM, Intel, Microsoft, Mozilla, PayPal, SoftBank, Tencent and Yubico. With WebAuthn, which is supported by Android and Windows 10, users can log in via biometrics, mobile devices or FIDO security keys.

VentureBeat reports that browsers “Google Chrome, Mozilla Firefox, and Microsoft Edge all added support last year … [and] Apple has supported WebAuthn in preview versions of Safari since December.”

“Now is the time for web services and businesses to adopt WebAuthn to move beyond vulnerable passwords and help web users improve the security of their online experiences,” said W3C chief executive Jeff Jaffe. “W3C’s Recommendation establishes web-wide interoperability guidance, setting consistent expectations for web users and the sites they visit.”

W3C is in the process of adopting WebAuthn on its own site; Dropbox, Facebook, GitHub, Salesforce, Stripe, and Twitter have already adopted it.

The FIDO Alliance, with its FIDO2 specifications, doesn’t want to stop at obsoleting passwords for websites, but wants to “kill the password everywhere, a goal it has been working on for years and will likely still be working on for years to come.” FIDO2, which is a core component of WebAuthn, “is a standard that supports public key cryptography and multifactor authentication — specifically, the Universal Authentication Framework (UAF) and Universal Second Factor (U2F) protocols.” The FIDO Alliance also offers “testing tools and a certification program.”

It addresses “traditional authentication issues in four ways.” With security, “FIDO2 cryptographic login credentials are unique across every website; biometrics or other secrets like passwords never leave the user’s device and are never stored on a server … [which] eliminates the risks of phishing, all forms of password theft, and replay attacks.”

It offers convenience, as users can log in “with simple methods such as fingerprint readers, cameras, FIDO security keys, or their personal mobile device,” and privacy “because FIDO keys are unique for each Internet site … [and] cannot be used to track users across sites.” Last, scalability is supported because websites can “enable FIDO2 via an API call across all supported browsers and platforms on billions of devices consumers use every day.”
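The per-site key property is the part worth seeing in code: the authenticator mints a separate key pair for every relying party, so one site’s stored public key says nothing about the same user elsewhere. Below is a toy Python sketch using Ed25519; real WebAuthn assertions also bind the origin and challenge into the signed data, which is omitted here.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class ToyAuthenticator:
    """One fresh key pair per relying party; private keys never leave the device."""
    def __init__(self):
        self._keys = {}

    def register(self, rp_id: str):
        self._keys[rp_id] = Ed25519PrivateKey.generate()
        return self._keys[rp_id].public_key()    # the only thing the site stores

    def sign(self, rp_id: str, challenge: bytes) -> bytes:
        return self._keys[rp_id].sign(challenge)

auth = ToyAuthenticator()
site_a_key = auth.register("example.com")
site_b_key = auth.register("shop.example")       # unrelated key: no cross-site tracking

challenge = os.urandom(32)
site_a_key.verify(auth.sign("example.com", challenge), challenge)
```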

The creation of WebAuthn as a standard, said FIDO Alliance executive director Brett McDowell, is a milestone. “We’re moving into the next phase of our shared mission to deliver simpler, stronger authentication to everyone using the Internet today, and for years to come,” he added.

Fyusion Demos Photoreal 3D Imaging Tech at SIGGRAPH

Fyusion, a computer vision/machine learning company, is demonstrating a new 3D imaging technology this week at SIGGRAPH 2019. The technology, aimed at providing digital marketers with photoreal images of products and scenes, uses light field technology to attain greater realism. The company has raised $70 million, including $3 million from Japan’s Itochu trading company and a “strategic investment” from Cox Automotive. The software is already being used for commercial purposes in automotive, retail and fashion industries.

VentureBeat reports that the San Francisco-based company’s earlier focus on converting smartphone pictures into 3D holographic images helped it “gain more than 150 million monthly active users.” This more recent business, however, is likely to be more lucrative. Cox Automotive, a digital wholesale market for used vehicles, is using Fyusion software to “display 3D images of cars on its websites,” and Itochu is using it “to show images of models wearing outfits on its brands’ retail sites.”

Among its strengths, Fyusion’s technology can handle “fine-grained textures like grass and foliage, transparent surfaces, and reflections” — features already available to professionals but not to “the masses.”

“Fyusion’s technology is groundbreaking because it is low cost and produces the highest quality results,” said a company statement. “It’s also conceptually simple.” The technology relies on a deep network “to promote each source view to a layered representation of the scene, advancing recent work on the multiplane image (MPI) representation … [and] then synthesizes novel views by blending renderings from adjacent layered representations.”
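At render time, the layered representation that quote describes reduces to ordinary back-to-front alpha compositing of the depth planes. A NumPy sketch of that final blending step (the learned network that produces the multiplane layers is not shown):

```python
import numpy as np

def composite_mpi(layers_rgba: np.ndarray) -> np.ndarray:
    """layers_rgba: D x H x W x 4 multiplane image, ordered back to front.
    Each plane is alpha-composited over everything behind it."""
    out = np.zeros(layers_rgba.shape[1:3] + (3,))
    for layer in layers_rgba:
        alpha = layer[..., 3:4]
        out = layer[..., :3] * alpha + out * (1.0 - alpha)
    return out

novel_view = composite_mpi(np.random.rand(8, 90, 160, 4))   # 8 toy depth planes
```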

The result, according to Fyusion, “is a 4,000 times decrease in the number of images needed to produce a 3D image, making it easy for anyone to create high-quality 3D images using only a smartphone.”

“These new advancements are a big step for light field research and as they continue to get incorporated into our products will give us a big new competitive advantage,” said Fyusion chief executive Radu Rusu. Fyusion technology can be found on GitHub, where it “is available for testing.”

Google Open-Sources Technology For Real-Time Captions

Google is looking to help developers create real-time captioning for long-form conversations in multiple languages. The company recently open-sourced the speech engine used for Live Transcribe, its Android speech-to-text transcription app designed for those who are deaf or hard of hearing, and posted the source code on GitHub. Live Transcribe, launched in February, uses machine learning algorithms to convert audio into captions, transcribing speech in more than 70 languages and dialects in real time.

“Unlike Android’s upcoming Live Caption feature, Live Transcribe is a full-screen experience, uses your smartphone’s microphone (or an external microphone), and relies on the Google Cloud Speech API,” reports VentureBeat.

Live Transcribe allows users to type responses back on the screen. It is available on 1.8 billion Android devices. (Live Caption will be exclusive to select Android Q devices.)

According to the Google Open Source Blog, “relying on the cloud introduces several complications — most notably robustness to ever-changing network connections, data costs, and latency. Today, we are sharing our transcription engine with the world so that developers everywhere can build applications with robust transcription.” (The source code is available on GitHub.)

Google’s speech engine closes and restarts to accommodate pauses and silence. It also “buffers audio locally and then sends it upon reconnection,” notes VB. Google evaluated audio codecs such as FLAC, AMR-WB and Opus, which all had different pros and cons based on different conditions. For example: “To reduce latency even further than the Cloud Speech API already does, Live Transcribe uses a custom Opus encoder. The encoder increases bitrate just enough so that ‘latency is visually indistinguishable to sending uncompressed audio.’”
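The buffer-and-resend behavior is easy to sketch: audio chunks queue locally and flush, in order, whenever the connection is up, so nothing recorded during a drop is lost. The client below is hypothetical and not Google’s implementation.

```python
from collections import deque

class BufferedStreamer:
    """Hold unsent audio locally and flush it in order once the network returns."""
    def __init__(self, send_chunk):
        self.pending = deque()
        self.send_chunk = send_chunk          # e.g. a call into a speech API client

    def push(self, chunk: bytes, connected: bool) -> None:
        self.pending.append(chunk)
        if connected:
            while self.pending:
                self.send_chunk(self.pending.popleft())

sent = []
streamer = BufferedStreamer(sent.append)
streamer.push(b"chunk-1", connected=False)    # network drop: chunk is held locally
streamer.push(b"chunk-2", connected=True)     # reconnect: both chunks flush in order
assert sent == [b"chunk-1", b"chunk-2"]
```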

“Opus, AMR-WB, and FLAC encoding can be easily enabled and configured,” explains VB. The Live Transcribe speech engine also “contains a text formatting library for visualizing ASR confidence, speaker ID, and more.”

Google Open-Sources Real-Time Gesture Recognition Tech

Google relied on computer vision and machine learning to research a better way to perceive hand shapes and motions in real-time, for use in gesture control systems, sign language recognition and augmented reality. The result is the ability to infer up to 21 3D points of a hand (or hands) on a mobile phone from a single frame. Google, which demonstrated the technique at the 2019 Conference on Computer Vision and Pattern Recognition, also put the source code and a complete use case scenario on GitHub.

According to VentureBeat, Google also implemented its new technique “in MediaPipe, a cross-platform framework for building multimodal applied machine learning pipelines to process perceptual data of different modalities (such as video and audio).”

“The ability to perceive the shape and motion of hands can be a vital component in improving the user experience across a variety of technological domains and platforms,” wrote research engineers Valentin Bazarevsky and Fan Zhang in a Google AI Blog post. “We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.”

The new technique is made up of three AI models that work together: BlazePalm, a palm detector that analyzes a frame and returns a hand bounding box; “a hand landmark model that looks at the cropped image region defined by the palm detector and returns 3D hand points; and a gesture recognizer that classifies the previously-computed point configuration into a set of gestures.”

Among the challenges, BlazePalm, “has to contend with a lack of features while spotting occluded and self-occluded hands.” Google researchers “trained a palm detector instead of a hand detector” to overcome this problem, “since estimating bounding boxes of objects like fists tends to be easier than detecting hands and fingers.”

After the palm is detected, “the hand landmark model takes over, performing localization of 21 3D hand-knuckle coordinates inside the detected hand regions,” a task that took “30,000 real-world images manually annotated with coordinates, plus high-quality synthetic hand model rendered over various backgrounds and mapped to the corresponding coordinates.” Finally, the gesture recognition system determines “the state of each finger from joint angles and maps the set of finger states to predefined gestures.”
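That last stage is straightforward to illustrate: compute each finger’s joint angle from the 3D landmarks, call the finger extended if the joint is nearly straight, and look the resulting tuple of finger states up in a table of predefined gestures. The thresholds and table below are made up for illustration and are not Google’s model.

```python
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle in degrees at joint b, formed by 3D landmarks a-b-c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def finger_extended(mcp, pip, tip, threshold_deg: float = 160.0) -> bool:
    """A finger counts as extended if its middle joint is close to straight."""
    return joint_angle(mcp, pip, tip) > threshold_deg

GESTURES = {                                   # (index, middle, ring, pinky)
    (True, False, False, False): "pointing",
    (True, True, False, False): "victory",
    (False, False, False, False): "fist",
    (True, True, True, True): "open palm",
}

def classify(finger_states: tuple) -> str:
    return GESTURES.get(finger_states, "unknown")

print(classify((True, False, False, False)))   # "pointing"
```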

According to Bazarevsky and Zhang, the system can recognize counting gestures from multiple cultures (e.g., American, European and Chinese) and various hand signs, including a closed fist, ‘OK,’ ‘rock,’ and ‘Spiderman.’

Bazarevsky, Zhang and their team “plan to extend the technology with more robust and stable tracking, and to enlarge the number of gestures it can reliably detect and support dynamic gestures unfolding in time.”

For more information on MediaPipe, visit the GitHub post.
