Voting Software

There are lots of very smart people doing fascinating work on cryptographic voting protocols. We should be funding and encouraging them, and doing all our elections with paper ballots until everyone currently working in that field has retired.
13 public comments
caffeinatedhominid
1 day ago
Yep.
tante
5 days ago
xkcd on voting software is spot-on
Oldenburg/Germany
wmorrell
6 days ago
Hazmat suit, too. Just to be safe.
rjstegbauer
7 days ago
Amen!! Paper... paper... paper. It's simple. It's trivial to recount. Everyone already knows how to use it. It's cheap. It's verifiable. Just... use... paper.
ianso
7 days ago
Yes!
Brussels
ChrisDL
7 days ago
accurate.
New York
reconbot
7 days ago
Legitimately share this comic with anyone who represents you in government.
New York City
cheerfulscreech
7 days ago
Truth.
jth
8 days ago
XKCD Nails Secure Electronic Voting.
Saint Paul, MN, USA
skorgu
8 days ago
100% accurate.
jsled
8 days ago
endorsed; co-signed; it. me. &c.
South Burlington, Vermont
srsly
8 days ago
Seconding this policy ^^

ReportingObserver: know your code health

TL;DR

There's a new observer in town! ReportingObserver is a new API that lets you know when your site uses a deprecated API or runs into a browser intervention:

const observer = new ReportingObserver((reports, observer) => {
  for (const report of reports) {
    console.log(report.type, report.url, report.body);
  }
}, {buffered: true});

observer.observe();

The callback can be used to send reports to a backend or analytics provider for further analysis.

Why is that useful? Until now, deprecation and intervention warnings were only available in the DevTools as console messages. Interventions in particular are only triggered by various real-world constraints like device and network conditions. Thus, you may never even see these messages when developing/testing a site locally. ReportingObserver provides the solution to this problem. When users experience potential issues in the wild, we can be notified about them.

ReportingObserver has shipped in Chrome 69 and is being considered by other browsers.

Introduction

A while back, I wrote a blog post ("Observing your web app") because I found it fascinating how many APIs there are for monitoring the "stuff" that happens in a web app. For example, there are APIs that can observe information about the DOM: ResizeObserver, IntersectionObserver, MutationObserver. There are APIs for capturing performance measurements: PerformanceObserver. Other APIs like window.onerror and window.onunhandledrejection even let us know when something goes wrong.

However, there are other types of warnings which are not captured by these existing APIs. When your site uses a deprecated API or runs up against a browser intervention, DevTools is first to tell you about them:

Browser-initiated deprecation and intervention warnings in the DevTools console.

One would naturally think window.onerror captures these warnings. It does not! That's because window.onerror does not fire for warnings generated directly by the user agent itself. It fires for runtime errors (JS exceptions and syntax errors) caused by executing your code.
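
A minimal sketch of the difference, using a deprecated synchronous XHR (the same trigger as the demo report shown below) to produce a browser warning:

window.onerror = (message) => {
  // Fires for runtime errors raised by your own code...
  console.log('runtime error:', message);
};

const observer = new ReportingObserver((reports) => {
  // ...while browser-issued warnings land here instead.
  for (const report of reports) {
    console.log(report.type, report.body.message);
  }
});
observer.observe();

setTimeout(() => { throw new Error('boom'); }); // → window.onerror

const xhr = new XMLHttpRequest();
xhr.open('GET', '/', false); // synchronous XHR → 'deprecation' report
xhr.send();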

ReportingObserver picks up the slack. It provides a programmatic way to be notified about browser-issued warnings such as deprecations and interventions. You can use it as a reporting tool and lose less sleep wondering if users are hitting unexpected issues on your live site.

ReportingObserver is part of a larger spec, the Reporting API, which provides a common way to send these different reports to a backend. The Reporting API is basically a generic framework to specify a set of server endpoints to report issues to.
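
For context, a server opts into that framework by declaring endpoints in a Report-To response header; the group name, max_age, and URL below are placeholder values:

Report-To: {"group": "default",
            "max_age": 10886400,
            "endpoints": [{"url": "https://example.com/reports"}]}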

The API

The API is not unlike the other "observer" APIs such as IntersectionObserver and ResizeObserver. You give it a callback; it gives you information. The information that the callback receives is a list of issues that the page caused:

const observer = new ReportingObserver((reports, observer) => {
  for (const report of reports) {
    // → report.id === 'XMLHttpRequestSynchronousInNonWorkerOutsideBeforeUnload'
    // → report.type === 'deprecation'
    // → report.url === 'https://reporting-observer-api-demo.glitch.me'
    // → report.body.message === 'Synchronous XMLHttpRequest is deprecated...'
    // → report.body.lineNumber === 11
    // → report.body.columnNumber === 22
    // → report.body.sourceFile === 'https://reporting-observer-api-demo.glitch.me'
    // → report.body.anticipatedRemoval === <JS_DATE_STR> or null
  }
});

observer.observe();

Filtered reports

Reports can be pre-filtered so that you only observe certain report types:

const observer = new ReportingObserver((reports, observer) => {
  ...
}, {types: ['deprecation']});

Right now, there are two report types: 'deprecation' and 'intervention'.

Buffered reports

The buffered: true option is really useful when you want to see the reports that were generated before the observer was created:

const observer = new ReportingObserver((reports, observer) => {
  ...
}, {types: ['intervention'], buffered: true});

It is great for situations like lazy-loading a library that uses a ReportingObserver. The observer gets added late but you don't miss out on anything that happened earlier in the page load.
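
For example, a hypothetical monitoring module could be pulled in after the page goes idle and still receive everything (the path and function name here are made up):

requestIdleCallback(async () => {
  const {setupReporting} = await import('./reporting.js');
  setupReporting(); // creates a ReportingObserver with {buffered: true}
});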

Stop observing

Yep! It's got a disconnect method:

observer.disconnect(); // Stop the observer from collecting reports.
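
There's also a takeRecords() method, which returns (and clears) any reports queued but not yet delivered to the callback. One use, sketched here with a placeholder /reports endpoint, is flushing before the page is hidden:

window.addEventListener('pagehide', () => {
  const queued = observer.takeRecords(); // drain pending reports
  if (queued.length > 0) {
    navigator.sendBeacon('/reports', JSON.stringify(queued.map((r) => r.body)));
  }
  observer.disconnect();
});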

Examples

Example - report browser interventions to an analytics provider:

const observer = new ReportingObserver((reports, observer) => {
  for (const report of reports) {
    sendReportToAnalytics(JSON.stringify(report.body));
  }
}, {types: ['intervention'], buffered: true});

observer.observe();

Example - be notified when APIs are going to be removed:

const observer = new ReportingObserver((reports, observer) => {
  for (const report of reports) {
    if (report.type === 'deprecation') {
      sendToBackend(`Using a deprecated API in ${report.body.sourceFile} which will be
                     removed on ${report.body.anticipatedRemoval}. Info: ${report.body.message}`);
    }
  }
});

observer.observe();

Conclusion

ReportingObserver gives us an additional way to discover and monitor potential issues in your web app. It's even a useful tool for understanding the health of your code base (or lack thereof). Send reports to a backend, know about the real-world issues users are hitting on your site, update code, profit!
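
As a sketch of that pipeline (the /reports endpoint is an assumption; swap in your analytics provider):

const observer = new ReportingObserver((reports) => {
  const payload = JSON.stringify(reports.map((report) => ({
    type: report.type,
    url: report.url,
    body: report.body,
  })));
  // keepalive lets the request finish even if the user navigates away.
  fetch('/reports', {method: 'POST', body: payload, keepalive: true});
}, {buffered: true});

observer.observe();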

Future work

In the future, my hope is that ReportingObserver becomes the de facto API for catching all types of issues in JS. Imagine one API to catch everything that goes wrong in your app.

I'm also excited about tools integrating ReportingObserver into their workflows. Lighthouse is an example of a tool that already flags browser deprecations when you run its "Avoids deprecated APIs" audit:

The Lighthouse audit for using deprecated APIs could use ReportingObserver.

Lighthouse currently uses the DevTools protocol to scrape console messages and report these issues to developers. Instead, it might be interesting to switch to ReportingObserver for its well-structured deprecation reports and additional metadata like the anticipatedRemoval date.




A Stealthy Harvard Startup Wants To Reverse Aging in Dogs, and Humans Could Be Next

The idea is simple, if you ask biologist George Church. He wants to live to 130 in the body of a 22-year-old. From a report: The world's most influential synthetic biologist is behind a new company that plans to rejuvenate dogs using gene therapy. If it works, he plans to try the same approach in people, and he might be one of the first volunteers. The stealth startup Rejuvenate Bio, cofounded by George Church of Harvard Medical School, thinks dogs aren't just man's best friend but also the best way to bring age-defeating treatments to market. The company, which has carried out preliminary tests on beagles, claims it will make animals "younger" by adding new DNA instructions to their bodies. Its age-reversal plans build on tantalizing clues seen in simple organisms like worms and flies. Tweaking their genes can increase their life spans by double or better. Other research has shown that giving old mice blood transfusions from young ones can restore some biomarkers to youthful levels. "We have already done a bunch of trials in mice and we are doing some in dogs, and then we'll move on to humans," Church told the podcaster Rob Reid earlier this year. The company's efforts to keep its activities out of the press make it unclear how many dogs it has treated so far. In a document provided by a West Coast veterinarian, dated last June, Rejuvenate said its gene therapy had been tested on four beagles with Tufts Veterinary School in Boston. It is unclear whether wider tests are under way. However, from public documents, a patent application filed by Harvard, interviews with investors and dog breeders, and public comments made by the founders, MIT Technology Review assembled a portrait of a life-extension startup pursuing a longevity long shot through the $72-billion-a-year US pet industry. "Dogs are a market in and of themselves," Church said during an event in Boston last week. "It's not just a big organism close to humans. It's something people will pay for, and the FDA process is much faster. We'll do dog trials, and that'll be a product, and that'll pay for scaling up in human trials."

Read more of this story at Slashdot.


Chromebooks are ready for your next coding project


This year we’re making it possible for you to code on Chromebooks. Whether it’s building an app or writing a quick script, Chromebooks will be ready for your next coding project.

Last year we announced a new generation of Chromebooks that were designed to work with your favorite apps from the Google Play store, helping to bring accessible computing to millions of people around the world. But it’s not just about access to technology, it’s also about access to the tools that create it. And that’s why we’re equipping developers with more tools on Chromebooks.

A terminal running on a Pixelbook.

Support for Linux will enable you to create, test and run Android and web apps for phones, tablets and laptops all on one Chromebook. Run popular editors, code in your favorite language and launch projects to Google Cloud from the command line. Everything works directly on a Chromebook.

Linux runs inside a virtual machine that was designed from scratch for Chromebooks. That means it starts in seconds and integrates completely with Chromebook features. Linux apps can start with a click of an icon, windows can be moved around, and files can be opened directly from apps.

A preview of the new tool will be released on Google Pixelbook soon. Remember to tune in to Google I/O to learn more about Linux on Chromebooks, as well as more exciting announcements.

PhaChayFy
95 days ago
When?!?
Australia

Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone



A long-standing goal of human-computer interaction has been to enable people to have a natural conversation with computers, as they would with each other. In recent years, we have witnessed a revolution in the ability of computers to understand and to generate natural speech, especially with the application of deep neural networks (e.g., Google voice search, WaveNet). Still, even with today’s state of the art systems, it is often frustrating having to talk to stilted computerized voices that don't understand natural language. In particular, automated phone systems are still struggling to recognize simple words and commands. They don’t engage in a conversation flow and force the caller to adjust to the system instead of the system adjusting to the caller.

Today we announce Google Duplex, a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.

One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.

Here are examples of Duplex making phone calls (using different voices):
Audio sample: Duplex scheduling a hair salon appointment.
Audio sample: Duplex calling a restaurant.

While sounding natural, these and other examples are conversations between a fully automatic computer system and real businesses.

The Google Duplex technology is built to sound natural, to make the conversation experience comfortable. It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.

Conducting Natural Conversations
There are several challenges in conducting natural conversations: natural language is hard to understand, natural behavior is tricky to model, latency expectations require fast processing, and generating natural sounding speech, with the appropriate intonations, is difficult.

When people talk to each other, they use more complex sentences than when talking to computers. They often correct themselves mid-sentence, are more verbose than necessary, or omit words and rely on context instead; they also express a wide range of intents, sometimes in the same sentence, e.g., “So umm Tuesday through Thursday we are open 11 to 2, and then reopen 4 to 9, and then Friday, Saturday, Sunday we... or Friday, Saturday we're open 11 to 9 and then Sunday we're open 1 to 9.”
Audio sample: an example of a complex statement.

In natural spontaneous speech people talk faster and less clearly than they do when they speak to a machine, so speech recognition is harder and we see higher word error rates. The problem is aggravated during phone calls, which often have loud background noises and sound quality issues.

In longer conversations, the same sentence can have very different meanings depending on context. For example, when booking reservations “Ok for 4” can mean the time of the reservation or the number of people. Often the relevant context might be several sentences back, a problem that gets compounded by the increased word error rate in phone calls.

Deciding what to say is a function of both the task and the state of the conversation. In addition, there are some common practices in natural conversations — implicit protocols that include elaborations (“for next Friday” “for when?” “for Friday next week, the 18th.”), syncs (“can you hear me?”), interruptions (“the number is 212-” “sorry can you start over?”), and pauses (“can you hold? [pause] thank you!” different meaning for a pause of 1 second vs 2 minutes).

Enter Duplex
Google Duplex’s conversations sound natural thanks to advances in understanding, interacting, timing, and speaking.

At the core of Duplex is a recurrent neural network (RNN) designed to cope with these challenges, built using TensorFlow Extended (TFX). To obtain its high precision, we trained Duplex’s RNN on a corpus of anonymized phone conversation data. The network uses the output of Google’s automatic speech recognition (ASR) technology, as well as features from the audio, the history of the conversation, the parameters of the conversation (e.g. the desired service for an appointment, or the current time of day) and more. We trained our understanding model separately for each task, but leveraged the shared corpus across tasks. Finally, we used hyperparameter optimization from TFX to further improve the model.
Diagram: incoming sound is processed through an ASR system, which produces text that is analyzed with context data and other inputs to produce a response text that is read aloud through the TTS system.
Audio sample: Duplex handling interruptions.
Audio sample: Duplex elaborating.
Audio sample: Duplex responding to a sync.

Sounding Natural
We use a combination of a concatenative text to speech (TTS) engine and a synthesis TTS engine (using Tacotron and WaveNet) to control intonation depending on the circumstance.

The system also sounds more natural thanks to the incorporation of speech disfluencies (e.g. “hmm”s and “uh”s). These are added when combining widely differing sound units in the concatenative TTS or adding synthetic waits, which allows the system to signal in a natural way that it is still processing. (This is what people often do when they are gathering their thoughts.) In user studies, we found that conversations using these disfluencies sound more familiar and natural.

Also, it’s important for latency to match people’s expectations. For example, after people say something simple, e.g., “hello?”, they expect an instant response, and are more sensitive to latency. When we detect that low latency is required, we use faster, low-confidence models (e.g. speech recognition or endpointing). In extreme cases, we don’t even wait for our RNN, and instead use faster approximations (usually coupled with more hesitant responses, as a person would do if they didn’t fully understand their counterpart). This allows us to have less than 100ms of response latency in these situations. Interestingly, in some situations, we found it was actually helpful to introduce more latency to make the conversation feel more natural — for example, when replying to a really complex sentence.

System Operation
The Google Duplex system is capable of carrying out sophisticated conversations and it completes the majority of its tasks fully autonomously, without human involvement. The system has a self-monitoring capability, which allows it to recognize the tasks it cannot complete autonomously (e.g., scheduling an unusually complex appointment). In these cases, it signals to a human operator, who can complete the task.

To train the system in a new domain, we use real-time supervised training. This is comparable to the training practices of many disciplines, where an instructor supervises a student as they are doing their job, providing guidance as needed, and making sure that the task is performed at the instructor’s level of quality. In the Duplex system, experienced operators act as the instructors. By monitoring the system as it makes phone calls in a new domain, they can affect the behavior of the system in real time as needed. This continues until the system performs at the desired quality level, at which point the supervision stops and the system can make calls autonomously.

Benefits for Businesses and Users
Businesses that rely on appointment bookings supported by Duplex, and are not yet powered by online systems, can benefit from Duplex by allowing customers to book through the Google Assistant without having to change any day-to-day practices or train employees. Using Duplex could also reduce no-shows to appointments by reminding customers about their upcoming appointments in a way that allows easy cancellation or rescheduling.
Audio sample: Duplex calling a restaurant.

In another example, customers often call businesses to inquire about information that is not available online, such as hours of operation during a holiday. Duplex can call the business to ask about its open hours and make the information available online with Google, reducing the number of such calls businesses receive while making the information more accessible to everyone. Businesses can operate as they always have; there's no learning curve or changes to make to benefit from this technology.
Audio sample: Duplex asking for holiday hours.

For users, Google Duplex is making supported tasks easier. Instead of making a phone call, the user simply interacts with the Google Assistant, and the call happens completely in the background without any user involvement.
Illustration: a user asks the Google Assistant for an appointment, which the Assistant then schedules by having Duplex call the business.
Another benefit for users is that Duplex enables delegated communication with service providers in an asynchronous way, e.g., requesting reservations during off-hours, or with limited connectivity. It can also help address accessibility and language barriers, e.g., allowing hearing-impaired users, or users who don’t speak the local language, to carry out tasks over the phone.

This summer, we’ll start testing the Duplex technology within the Google Assistant, to help users make restaurant reservations, schedule hair salon appointments, and get holiday hours over the phone.
Photo: Yaniv Leviathan, Google Duplex lead, and Matan Kalman, engineering manager on the project, enjoying a meal booked through a call from Duplex.
Audio sample: Duplex calling to book the above meal.

Allowing people to interact with technology as naturally as they interact with each other has been a long-standing promise. Google Duplex takes a step in this direction, making interaction with technology via natural conversation a reality in specific scenarios. We hope that these technology advances will ultimately contribute to a meaningful improvement in people's experience in day-to-day interactions with computers.