
Google Releases TensorFlow.Text Library for Natural Language Processing

MMS Founder
MMS RSS

Google released TensorFlow.Text (TF.Text), a new text-processing library for their TensorFlow deep-learning platform. The library allows several common text pre-processing activities, such as tokenization, to be handled by the TensorFlow graph computation system, improving consistency and portability of deep-learning models for natural-language processing (NLP).

In a recent blog post, software engineer and TF.Text team lead Robbie Neale gave a high-level overview of the contents of the new release, focusing on the new tools for tokenizing text strings. The library also includes tools for pattern matching, n-gram creation, Unicode normalization, and sequence constraints. The code is designed to operate on RaggedTensors: variable-length tensors that are better suited for processing textual sequences. A key benefit of the library, according to Neale, is that these pre-processing steps are now first-class citizens of the TensorFlow compute graph, which gives them all the advantages of that system. In particular, according to the documentation, “[y]ou do not need to worry about tokenization in training being different than the tokenization at inference….”

Because deep-learning algorithms require all data to be represented as lists of numbers (a.k.a. tensors), the first step in any natural-language processing task is to convert text data to numeric data. Typically this is done in pre-processing scripts before handing the result to the deep-learning framework. The most common operation is tokenization: breaking the text into its individual words. Each unique word is given a numeric ID; often this is simply its index in a list of all known words. The result is a sequence of numbers which can be input to a neural network.
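The word-to-ID mapping described above can be sketched in a few lines of plain Python (a simplified illustration of the idea, not the TF.Text API; the corpus and sentences are invented):

```python
def build_vocab(sentences):
    """Assign each unique word an integer ID based on first appearance."""
    vocab = {}
    for sentence in sentences:
        for word in sentence.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def tokenize(sentence, vocab):
    """Convert a sentence into a sequence of word IDs."""
    return [vocab[word] for word in sentence.lower().split()]

corpus = ["the cat sat", "the dog sat down"]
vocab = build_vocab(corpus)           # {'the': 0, 'cat': 1, 'sat': 2, 'dog': 3, 'down': 4}
ids = tokenize("the dog sat", vocab)  # [0, 3, 2]
```

The resulting list of integers is what actually feeds the neural network; libraries like TF.Text move this step into the compute graph itself so it cannot drift between training and inference.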

However, even though tokenization is not strictly part of the neural network model, it is a necessary component of the full NLP “pipeline.” Any code that uses a trained neural network for NLP inference must replicate the tokenization and other pre-processing tasks that were included in the training system. Now that TF.Text allows these pre-processing tasks to be represented as operations in TensorFlow, the full pipeline can be saved as part of the model and consistently reproduced at inference time with no extra steps.

A Twitter user pointed out that Keras, the high-level deep-learning API that runs on top of TensorFlow, already has text pre-processing functionality. Neale replied:

Keras has a subset, but not the breadth of TF.Text. We are actively talking with them to fill in gaps we believe language engineers want, but are not provided in the core Keras API, and I wouldn’t be surprised if additional Keras layers are provided by TF.Text in the future.

The TensorFlow.Text source code and a tutorial notebook are available on GitHub.

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.


Amazon EventBridge – Event-Driven AWS Integration for SaaS Applications Now Generally Available

MMS Founder
MMS RSS

At the recent AWS Summit Event in New York, Amazon announced the general availability of Amazon EventBridge, a serverless event bus that allows AWS, Software-as-a-Service (SaaS), and custom applications to communicate with each other using events. 

With EventBridge customers can build and manage event-driven solutions by controlling the event ingestion, delivery, security, authorization, and error-handling in a central place. Furthermore, the service doesn’t require the customer to manage any infrastructure or scaling, and they only pay for the events their applications consume. 

Amazon EventBridge shares the same event processing model that forms the basis for CloudWatch Events – and as Jeff Barr explains in a blog post about Amazon EventBridge, everything a customer knows about CloudWatch still applies with one addition:

In addition to the existing default event bus that accepts events from AWS services, and calls to PutEvents and from other authorized accounts, each partner application that you subscribe to will also create an event source that you can then associate with an event bus in your AWS account. You can select any of your event buses, create EventBridge Rules, and select Targets to invoke when an incoming event matches a rule.


Source: https://aws.amazon.com/eventbridge/

Through the AWS Management Console, Command Line Interface (CLI), or SDK, customers can get started with EventBridge by creating a new event bus and receiving events from SaaS applications. They can then create a rule to match events from a list of AWS services or SaaS applications and set up targets for their events.
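As a rough illustration of what such a rule looks like, the sketch below builds an EventBridge event pattern as JSON. The rule name, pattern values, and Lambda target are hypothetical, and the boto3 calls are shown only as comments because they require an AWS account and credentials:

```python
import json

# Hypothetical event pattern: match EC2 instance state-change events
# where the instance has stopped. The values are illustrative only.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["stopped"]},
}

# With the AWS SDK for Python (boto3), the pattern would be attached to
# a rule and given a target roughly like this:
#
#   events = boto3.client("events")
#   events.put_rule(Name="on-instance-stopped",
#                   EventPattern=json.dumps(event_pattern))
#   events.put_targets(Rule="on-instance-stopped",
#                      Targets=[{"Id": "1", "Arn": lambda_arn}])

pattern_json = json.dumps(event_pattern)
```

When an incoming event matches the pattern, EventBridge invokes the configured targets, such as a Lambda function.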

Many companies that have gone all-in on the cloud do not necessarily use the operational services offered by their cloud vendor. With EventBridge, Amazon offers these customers an opportunity to integrate their operations with third-party services like Zendesk, PagerDuty, and SignalFx. 


Source: https://aws.amazon.com/blogs/aws/amazon-eventbridge-event-driven-aws-integration-for-your-saas-applications/

Adrian McDermott, president of products at Zendesk, told InfoQ:

Businesses want to manage customer data in a way that works for them. However, developers often have their hands tied with legacy CRM solutions and cannot adequately manage events’ customer data. With the Zendesk Events Connector for Amazon EventBridge, businesses can easily stream data and events into popular AWS services such as S3, Redshift, Kinesis, and Sagemaker. This allows businesses to build on top of their customer data for the next wave of connected customer experiences. 

Furthermore, more integrations with EventBridge are coming. On Reddit, in a thread about the Amazon EventBridge release, a respondent stated:

We’re working with a lot of other SaaS providers helping them to build integrations, and you’ll see new integrations show up in the console as we roll them out.

Also, with EventBridge, companies can automate specific tasks, such as having AWS Lambda respond to particular events to restart a virtual machine, cleanse marketing data, or run particular business logic. This type of automation can enable companies to standardize their operations on AWS, which is what Amazon hopes they will do. Amazon has set up a dedicated partner program for this purpose to encourage more SaaS providers to add integrations for their offerings. 

Finally, Amazon is not the only public cloud provider with an event service: Microsoft has had a comparable offering, Azure Event Grid, available for quite some time. Event Grid enables developers to manage events in a unified way in Azure, but it doesn't natively support SaaS integrations the way EventBridge does.
Currently, Amazon EventBridge is available in the following regions:

  • Americas – US East (Ohio and N. Virginia), US West (Oregon and N. California), Canada (Central), and South America (Sao Paulo)
  • Europe – Stockholm, Paris, Ireland, Frankfurt, and London
  • Asia Pacific – Mumbai, Tokyo, Hong Kong, Seoul, Singapore, and Sydney

Furthermore, Amazon will soon add support for more regions, including China and Osaka. 

Lastly, Amazon will charge customers based on the number of events published to the event buses in their account, billed at $1 per million events. Note that Amazon will not charge for events published by AWS services. For pricing details, see the pricing page.


The Three Key Dimensions of Front-End Frameworks, by Evan You, Vue.js Creator

MMS Founder
MMS RSS

Evan You, creator of the Vue.js front-end framework, recently talked at JS Conf Asia 2019 about seeking balance in framework design. According to You, frameworks should be distinguished by the design tradeoffs they offer on scope, render mechanism, and state mechanism, rather than by popularity- and community-based metrics. Rather than a binary (good vs. bad) evaluation, frameworks are best placed along a continuum of tradeoffs.

The abundance of front-end frameworks has given birth to a number of comparison studies aimed at helping developers and architects pick the most appropriate framework for the needs of their projects. While a large set of criteria can be used for the comparison, popularity- and community-based metrics (GitHub stars, npm downloads, Stack Overflow questions) are not, according to You, the most decisive factors that should drive framework adoption. You contends that the tradeoffs originating from a framework’s design are most enlightening, and he groups the key tradeoffs along the three dimensions of scope, rendering, and state management.

Scope refers to what the framework tries to do for its users.

So-called Cathedral frameworks such as Angular or Aurelia favor a top-down design, in which most of the conceivable issues developers will run into have already been factored into the framework design. Features like validation, animation, internationalization, or accessibility are baked into the framework, so developers do not have to implement and integrate their own solution.

On the other end of the scope dimension are so-called Bazaar frameworks, with a focused scope. React for instance sees itself as focusing on the view layer of front-end applications, with other specialized libraries coming together to handle additional concerns such as data fetching (relay), or side-effects (redux-saga).

You places the Vue.js framework somewhat in the middle (Progressive framework). On the one hand, Vue largely focuses on the view layer. On the other hand, Vue’s layered design allows for opt-in, official, documented solutions for common problems.

Developers need to understand the pros and cons of each approach.

The Cathedral approach may be productive in the short-term with its optimized, built-in abstractions covering most common problems. The associated ecosystem of user-defined libraries may as a result be more consistent and coherent. On the down side, that ecosystem is smaller and harder to grow. The built-in abstractions may have a large scope and require substantial, specialized learning and training.

The Bazaar approach has fewer concepts to get started with. Its greater flexibility allows for a large ecosystem from which users can build arbitrarily complex systems. For the framework team, the smaller maintenance surface means they can more easily explore new ideas (React introducing Hooks) and optimize their area of focus (React Concurrent Mode). However, users need to spend significant time picking and integrating a plethora of extra libraries with widely varying levels of maintenance and documentation.

The Progressive approach takes pros and cons from both sides. Developers for instance often celebrate how easy it is to get started with Vue, and the documentation of officially supported libraries. However, Progressive frameworks share the same maintenance burden as Cathedral frameworks, while not being able to exhibit an ecosystem as rich as Bazaar frameworks.

The second dimension of analysis is the view rendering mechanism. This involves how the UI structure is expressed, and how the UI is rendered.

On the one hand, in JSX- or TSX-based frameworks (such as React, Stencil or Dojo), developers can use the full power of the Turing-complete underlying language to express arbitrarily complex logic. These frameworks usually see the view as data (virtual DOM), with large customization opportunities in user land, including using view data for rendering to alternative targets (rendering into a terminal, or to PDF). On the negative side, the fully dynamic nature of render functions makes them hard to optimize by default. Such frameworks, like React, may include escape hatches that enable developers to optimize rendering themselves (React’s shouldComponentUpdate and useMemo). Optimizing rendering in these frameworks involves a significant learning phase, as developers may have to deeply acquaint themselves with implementation details of the framework.

On the other hand, template-based frameworks such as Vue or Svelte bind the expressiveness of the rendering function to the capabilities of the template language. Templates are harder to customize beyond the possibilities already considered by the template language. The rigid structure, however, enables optimized-by-default rendering of the UI. In Svelte, for instance, the template <h1>Hello {name}</h1> compiles to the following code, which runs when name changes, surgically updating only the part of the DOM that needs to change:

p(changed, ctx) {
  if (changed.name) {
    set_data(t1, ctx.name);
  }
}

Such by-default optimizations are difficult to achieve for frameworks that use a virtual DOM and compute the DOM operations to perform on update by keeping track of, and diffing, two versions of the virtual DOM, thus exhibiting a higher memory and CPU usage profile.
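The diffing work just described can be illustrated with a naive sketch, using tuples as stand-in virtual-DOM nodes; real implementations add keys and heuristics to avoid walking the whole tree on every update:

```python
def diff(old, new, path="root"):
    """Compare two virtual-DOM trees and return a list of patches.
    A node is a (tag, text, children) tuple; a naive sketch of the idea."""
    if old is None:
        return [("create", path, new)]
    if new is None:
        return [("remove", path)]
    if old[0] != new[0]:               # different tag: replace whole subtree
        return [("replace", path, new)]
    patches = []
    if old[1] != new[1]:               # same tag, text changed in place
        patches.append(("set_text", path, new[1]))
    old_kids, new_kids = old[2], new[2]
    for i in range(max(len(old_kids), len(new_kids))):
        o = old_kids[i] if i < len(old_kids) else None
        n = new_kids[i] if i < len(new_kids) else None
        patches.extend(diff(o, n, f"{path}/{i}"))
    return patches

old = ("h1", "Hello Alice", [])
new = ("h1", "Hello Bob", [])
patches = diff(old, new)
# patches == [("set_text", "root", "Hello Bob")]
```

Both versions of the tree must be kept in memory and compared on every state change, which is the memory and CPU overhead the compiled-template approach avoids.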

You notes that Vue strikes a middle ground: it is template-based by default, with by-default rendering optimizations, while still letting developers fall back to full-fledged JavaScript render functions when the need arises.

As a third dimension on which to evaluate frameworks, You mentions the mechanism by which the framework manages state, in particular the reactivity and data-synchronization abilities it offers.

You, in his presentation, doubts the existence of a universally best framework:

The framework landscape is like a multi-dimensional space with multiple ever-moving entities, each seeking its own balance point.
(…) Where is the perfect balance point? Is a single perfect balance point even optimal for JS devs as a whole?
(…) There isn’t a single Good vs. Bad spectrum for frameworks.

Developers need to understand front-end framework design tradeoffs and how those tradeoffs fit the requirements of their projects.


Practical Domain-Driven Design with Events and Microservices – Indu Alagarsamy at QCon New York

MMS Founder
MMS RSS

Domain-driven design (DDD) concepts like bounded contexts, combined with messaging technologies, can be used to build reliable systems that scale as the business changes. Indu Alagarsamy recently spoke at the QCon New York 2019 conference about using well-defined bounded contexts and events together to develop autonomous microservices that are flexible enough to adapt to business changes.

Alagarsamy said that when you use messaging technology to communicate between clean, well-defined bounded contexts, you remove temporal coupling. Bounded contexts provide clarity, and the models in each bounded context are logically consistent and can evolve freely.

She discussed an example use case of an e-commerce application where the product is one of the core entities. The product entity is context-dependent and means different things to different domain teams:

  • Sales: It’s a thing that has a description, images, and a price
  • Inventory: It’s a thing that is either available or not
  • Shipping: It’s a thing that has weight and dimensions that needs to be packaged

Unified models are hard, and finding the right boundaries between business domains can be a challenge. She suggested that domain models can be split according to teams and departments to help better organize the model. They can also be split according to the business processes in the system.

Alagarsamy discussed how bounded contexts communicate with each other, using the example of an Airline application.

She talked about two important components of event-driven microservices: commands and events. Events are useful as the communication mechanism between different bounded contexts; they help minimize temporal coupling between bounded contexts. Commands are useful as a communication style within a single bounded context. Writing code becomes much simpler once you model the events and commands.

During the requirements-gathering phase, look for the keyword “when” in the description of business requirements; it usually indicates a business event. She discussed some examples of events in an airline application.

WHEN an aircraft type is changed:

  • Passenger gets notified with a new booking proposal
  • Passenger can cancel flight
  • Passenger can accept the proposed booking

She also talked about how events and messages are associated with business processes in an application. A business process can be triggered by an event from a different bounded context, and multiple messages can take part in a business process. When designing messages, make them immutable, i.e., get rid of public setters; instead, set the properties at construction time of the domain class.
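Immutable messages of this kind can be sketched in Python with a frozen dataclass; the event name and fields below are invented for illustration:

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class AircraftTypeChanged:
    """Event raised when a flight's aircraft type changes.
    frozen=True removes setters: all fields are fixed at construction."""
    flight_number: str
    old_aircraft: str
    new_aircraft: str

event = AircraftTypeChanged("UA100", "B737", "A320")

try:
    event.new_aircraft = "B777"   # any mutation attempt is rejected
    mutated = True
except FrozenInstanceError:
    mutated = False
# mutated == False: the event cannot be changed after construction
```

Because consumers in other bounded contexts may hold the same message, immutability guarantees they all see an identical, unchanging record of what happened.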

The Saga pattern can be used to manage multiple messages that take part in the same transaction. The pattern allows you to take compensating actions in case any of the steps in the business process fail.
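The compensation idea behind the Saga pattern can be sketched as follows; the booking steps and their compensating actions are hypothetical:

```python
def run_saga(steps):
    """Execute (action, compensation) pairs in order; if any action fails,
    run the compensations for the completed steps in reverse order."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for comp in reversed(done):
                comp()
            return False
    return True

log = []

def reserve_seat(): log.append("seat reserved")
def release_seat(): log.append("seat released")      # compensation
def charge_card():  raise RuntimeError("payment declined")
def refund_card():  log.append("payment refunded")   # compensation

# Payment fails, so the earlier seat reservation is compensated (released).
ok = run_saga([(reserve_seat, release_seat), (charge_card, refund_card)])
# ok == False; log == ["seat reserved", "seat released"]
```

In a real messaging system each step and compensation would be a message handler rather than a local function call, but the bookkeeping is the same.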

Event storming is another critical step in the overall requirements-analysis effort. It's a collaborative technique used by development teams with business stakeholders to explore complex business domains. Domain modeling using event-storming techniques helps identify when an event occurs and what actions the application should take. This can be used to align the commands with the events.

It’s also important to use proper naming conventions for the various elements in the domain model. Give your command handlers proper names and verify the names during code reviews and peer reviews. Check for language and naming of events, classes, and handlers.

Alagarsamy concluded the presentation by saying that models are not perfect, and that teams should follow these practices to keep their domain models in good alignment with business objectives and requirements:

  • Talk to domain experts. Event storm with them.
  • Evolve and refactor with an obsession for domain language.
  • Strive for Autonomy. Use events to communicate between bounded contexts.

If you are interested in domain-driven design topics, check out the original book by Eric Evans or the recent book Domain-Driven Design: The First 15 Years, which includes essays from DDD community leaders, including Indu Alagarsamy.


Thinking about Moving Up to Automated Machine Learning (AML)

MMS Founder
MMS RSS

Summary: Are you wondering about moving up to Automated Machine Learning (AML)?  Here are some considerations to help guide you.

 

Are you wondering about moving up to Automated Machine Learning (AML)?  Or perhaps you've already made the decision but are wondering about the capabilities of individual platforms, their strengths and limitations, and how to choose.  Here are some considerations to help guide you.

What’s Your Motivation?

This is intended to be a little broader than business case and requirements.  Chances are your broader motives fall into one or more of the following buckets, and can certainly involve several at the same time.

  1. Efficiency

So far, the greatest motivation behind AML adoption has come from companies that are already deploying large numbers of ML models.  If you're creating and managing dozens or even hundreds of models, as is frequently the case in insurance, banking, and ecommerce, then the ability to create more models and keep them refreshed is an obvious need. 

Cost savings are a top motivation, as fewer data scientists can now do the work of many.  Speed, that is, time to benefit, is also greatly improved, especially in the model refresh-and-deploy cycle.

  2. Broader Participation

Be aware that many of the up-and-coming AML platforms differentiate themselves based on audience.  Those that appeal to your existing data science team offer easier and more complete access to choices in data prep, feature selection, model selection, and model tuning with their hyperparameters.

The larger emerging camp seeks to make the process much easier for less experienced modelers.  On the one hand, these can be your first-year data science hires, who will rely more on the automated features than the more experienced team members do.  On the other hand, there are platforms so completely automated that they encourage LOB managers, analysts, and other citizen data scientists to participate directly in model building.

Having more people directly participating in model building can seem like a very desirable objective.  Be sure you have sufficient controls to prevent putting models into operation that haven’t been fully vetted by your experienced data scientists.  It’s still possible for the operator of a fully automated tool to create a model that’s not sufficiently accurate, won’t generalize, or worse, predicts exactly the wrong thing.

  3. Just Getting Started

If you’re just getting started on your digital journey and don’t yet have a dedicated data scientist or two, you might be tempted to sign up for an AML and give your LOB Managers and analysts enough training to get started.  Don’t go there.

As in the last section, it’s still possible for an inexperienced modeler to create a model that will leave you worse off than having no model at all.  You’re going to need some quality control before you turn new models loose on your customers or processes.

 

How Much Does Accuracy Count?

In machine learning there is always a practical tradeoff between model accuracy and time to develop.  Your data scientists will no doubt be happy to continue to deliver increasing incremental gains in model accuracy for days or weeks. 

Still, it’s important to understand the tradeoff between model accuracy and revenue or margin.  It’s not unusual for small gains in accuracy to create proportionately much larger gains in campaign results.

Your data science team lead no doubt understands this and has already put some controls in place.  The real issue is whether the automated output of the AML platform meets your minimum requirements.

Determining this will require some benchmarking during the selection process so that you have side-by-side comparisons.  Almost all AML platforms use multiple algorithms, and teams of algorithms run in competition with one another to select the winners. 

Accuracy within the AML may be less than optimal if the number of candidate algorithms is restricted to just a few.  It's just as likely, however, that any shortfall in accuracy occurred in the automated data prep, cleansing, feature engineering, or feature selection.  You'll need experienced members of your data science team to help you evaluate this issue.

 

Basic Feature Set

At this stage in market maturity, any AML you consider should offer all of the following automated capabilities:

Data Blending:  The combination of data from different sources into a single file.  This still requires the operator to specify things like inner or outer joins of data sets.  The most advanced platforms may also be able to detect whether the data from two different sources with the same name (e.g. ‘sales’) has the same meaning.  At this point however it’s best to have either really robust data governance (and not many do) or to have modelers sufficiently intimate with the data that they can detect this sort of mismatch.

Data Prep and Cleansing:  In this category is automated correction of data in incompatible formats (dates, values with embedded commas, etc.)  Most AML data prep platforms do a good job at this.  Cleansing is more complex.  It involves for example the identification of outliers and how they are to be treated, the correction of badly skewed distributions, the conversion of categoricals into independent features, or even the compression of data ranges (typically -1 to 1) to create data sets as required for some specific types of algorithms like neural nets.

Feature Engineering:  In concept, feature engineering is simple: for example, converting related variables into ratios (e.g. debt to income) or dates into the number of days since other events occurred (age of the account, days since last purchase, etc.).  In automated form, this frequently requires the AML to create all possible combinations of these artificial features, without regard for whether they are logical, and then let the algorithms figure out which are predictive (typically only a small fraction).  Depending on how this is handled in the AML, this can add a very large amount of compute overhead.  You'll want to examine whether this step creates any unforeseen requirements in time or compute cost.
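The exhaustive ratio generation described above can be sketched as follows; the column names are invented, and real AML platforms apply many more transform types:

```python
from itertools import permutations

def ratio_features(row):
    """Generate every pairwise ratio of a record's numeric fields,
    skipping divisions by zero. An AML platform would generate these
    blindly and let the algorithms keep only the predictive ones."""
    features = {}
    for a, b in permutations(row, 2):
        if row[b] != 0:
            features[f"{a}_per_{b}"] = row[a] / row[b]
    return features

applicant = {"debt": 20000, "income": 80000, "accounts": 4}
feats = ratio_features(applicant)
# feats["debt_per_income"] == 0.25
```

With n numeric fields this already yields n*(n-1) candidate features, which is why the compute overhead mentioned above grows so quickly.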

Feature Selection and Modeling:  These are traditionally thought of separately, but I've combined them here as AML platforms might.  In traditional modeling, feature selection can be a separate step that precedes model creation, making the modeling process more accurate and efficient.  However, it's also possible to have the models consider all possible features and automatically eliminate those that are least predictive. 

Automated modeling typically involves running parallel contests on the data with different algorithms.  During the contests, the AML should also vary the hyperparameters of the different models to attempt to achieve an optimal result.  How feature selection, modeling, and hyperparameter tuning are handled by the platform will require your detailed attention during trials.
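The contest loop at the heart of automated modeling can be sketched as follows; the two candidate "algorithms" are toy stand-ins, not real learners:

```python
import statistics

def automl_contest(train, valid, models):
    """Fit each candidate model on the training data, score it on the
    validation data (mean squared error), and return the winner.
    This is the core loop of automated model selection."""
    def mse(predict, data):
        return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

    best_name, best_score = None, float("inf")
    for name, fit in models.items():
        predict = fit(train)           # "training" returns a predict function
        score = mse(predict, valid)
        if score < best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Two toy "algorithms": predict the training mean of y, or always predict 0.
models = {
    "mean": lambda train: (lambda x, m=statistics.mean(y for _, y in train): m),
    "zero": lambda train: (lambda x: 0.0),
}

train = [(1, 2.0), (2, 2.0), (3, 2.0)]
valid = [(4, 2.0), (5, 2.0)]
winner, score = automl_contest(train, valid, models)
# winner == "mean"; score == 0.0
```

A real AML platform runs the same loop over dozens of algorithm and hyperparameter combinations, often in parallel, which is where much of the compute cost comes from.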

Model Deployment:  Your AML should be able to automatically generate production code in your choice of language compatible with your operating systems (typically Python, C++, Java, or other popular production languages). 

Model Management and Refresh:  The first time you deploy a model in your operating systems, you will need to define exactly where it goes.  Thereafter, a complete AML should be able to monitor the model, determine when a refresh is appropriate, and, with minimal human intervention, refresh the model and automatically redeploy it.  There are human quality-control verifications in this process, but once the model has been developed, refresh and redeploy should require only a small fraction of the labor of the original development.

 

Some Advanced Considerations

Automation of the Entire Process:  In a fully automated system, particularly one focused on maintaining and refreshing existing models, it's important that the entire process can be programmatically defined.  In this way, the entire process from data capture through deployment, including all the customized steps in between, can be captured and repeated, making the end-to-end process truly automated.

Data Types: Depending on your business you may have a variety of data inputs that may have special needs including unstructured or semi-structured text or image data, or streaming data.  A few AML platforms can handle these more advanced requirements.  A few AML platforms already have the ability to create deep learning CNN and RNN models though this type of modeling is not yet common in business.

Prepackaged Automation Libraries:  During initial model development your data science team will have identified specific steps in the process that need particular attention.  These might include data prep, feature selection, or hyperparameter optimization.  Ideally your AML platform will include libraries or APIs of callable solutions that can shortcut data scientist labor on these tasks.

Training Data Requirements:  Some algorithms that might be considered during the competition for best model may be particularly data hungry.  You will want to understand the tradeoffs between including these algorithm types against the availability or cost of acquisition of sufficient training data.

On-Premise Solution:  Some AML platforms that are particularly compute-intensive (as many are) are optimized for SaaS cloud delivery.  If your business requires an on-prem or private-cloud solution for data security, you'll need to identify the cost and complexity of this option.

While AMLs are positioned for their simplicity, there are many factors to consider before jumping in.  You'll want help from your data science pros in selecting the right one.

 

 

Other articles by Bill Vorhies

 

About the author:  Bill is Contributing Editor for Data Science Central.  Bill is also President & Chief Data Scientist at Data-Magnum and has practiced as a data scientist since 2001.  His articles have been read more than 2 million times.

He can be reached at:

Bill@DataScienceCentral.com or Bill@Data-Magnum.com

 


Microsoft, Salesforce and the Ethereum Foundation Join Open-source Hyperledger Blockchain Project

MMS Founder
MMS RSS

In a recent press release, Hyperledger, an open-source blockchain and distributed ledger project, announced eight new members have joined their consortium including Microsoft, Salesforce and the Ethereum Foundation. These organizations join established members like Airbus, Cisco, IBM and Intel.

The Hyperledger project is a multi-stakeholder initiative, hosted by the Linux Foundation, which focuses on building blockchain frameworks and tools for enterprises. The existing frameworks include capabilities for smart-contract machine development (Hyperledger Burrow), decentralized identity (Hyperledger Indy), and permissioned/permissionless support for the Ethereum Virtual Machine (EVM) (Hyperledger Sawtooth). From a tooling perspective, Hyperledger supports infrastructure for peer-to-peer interactions (Hyperledger Aries), performance benchmarking (Hyperledger Caliper), and shared cryptographic libraries (Hyperledger Ursa). 

Microsoft's involvement in blockchain goes back several years, as the company has been building out capabilities in Azure for organizations requiring blockchain-as-a-service. These investments include Project Bletchley, R3/Corda/Quorum protocol support, and the Microsoft-Truffle partnership, to name a few. 

In addition, Microsoft has been focused on contributing to open standards and specifications by being a founding member of both the Enterprise Ethereum Alliance (EEA) and the Token Taxonomy Initiative (TTI). With Microsoft’s current involvement in collaborating on open standards, joining the Hyperledger project seemed like a logical next step. Marley Gray, principal program manager on the Azure Blockchain team, explains:

We’re excited to join the Hyperledger community and look forward to rolling up our sleeves and being an active contributor to both discussions and project code. We believe that developing standards and open specifications, as well as collaborating on implementations of them, is critical to removing customer blockers and accelerating blockchain as a mainstream technology. Through our work related to the EEA and TTI, we have identified several opportunities for Microsoft to lean in and contribute code in project areas such as tokens, ledger integration, and developer experience.

Salesforce is relatively new to the blockchain space, having recently introduced its low-code blockchain platform for CRM. The offering was built on Hyperledger Sawtooth and customized for the Salesforce Lightning platform. The goal of Salesforce Blockchain is to:

Lower the barrier for creating trusted partner networks by enabling companies to easily bring together authenticated, distributed data and CRM processes.

The motivation for Salesforce to join the Hyperledger project includes tapping into the broader blockchain community. Adam Caplan, SVP of Emerging Technology at Salesforce, explains:

Blockchain is quickly becoming a foundational technology for organizations to deliver a truly connected customer experience and Hyperledger has created a great blockchain community that we’re excited to learn from and be a part of.

The Ethereum Foundation and Hyperledger have often been seen as competitors. However, this does not seem to be the case moving forward. In a recent tweet, the Ethereum Foundation Twitter account shared its support:

The Ethereum Foundation is proud to lend our support to the efforts of both the @EntEthAlliance and @Hyperledger through our membership today. Together, we’ll continue to drive forward #Ethereum’s progress and adoption.

For additional information about Hyperledger’s open source frameworks and tools, please visit their GitHub repository. 

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.


Presentation: Panel Debate – Is There a Difference Between Agile and Business Agility?

MMS Founder
MMS RSS

Bio

Dean Latchana supports organizations to succeed in a dynamic market by helping them to develop awareness and ways of working that are contextual to their needs.

About the conference

Many thanks for attending Aginext 2019, it has been amazing! We are now processing all your feedback and preparing the 2020 edition of Aginext, on 19-20 March 2020. We will have a new website in a few months, but for now we have made Blind Tickets available on Eventbrite, so you can book today for Aginext 2020.


How to learn the maths of Data Science using your high school maths knowledge – Gradient Descent

MMS Founder
MMS RSS

This post is part of my forthcoming book on the Mathematical foundations of Data Science.

In the previous blog, we saw how you could use basic high school maths to learn about the workings of data science and artificial intelligence.

In this post, we extend that idea to learn about Gradient descent.

We have seen that a regression problem can be modelled as a set of equations which we need to solve. In this section, we discuss the ways to solve these equations. Again, we start with ideas you learnt in school and expand them into more complex techniques.

In simple linear regression, we aim to find a relationship between two continuous variables: the independent variable and the dependent (response) variable. The relationship between these two variables is not deterministic but statistical. In a deterministic relationship, one variable can be directly determined from the other; for example, converting temperature from Celsius to Fahrenheit is a deterministic process. In contrast, a statistical relationship does not have an exact solution; consider, for example, the relationship between height and weight. In this case, we try to find the best-fitting line which models the relationship. To find the best-fitting line, we aim to reduce the total prediction error over all the points. Assuming a linear relation exists, the equation can be represented as y = mx + c for an input variable x and a response variable y. For such an equation, the values of m and c are chosen to represent the line which minimises the error, defined as the sum of squared differences between the predicted and the actual values of y.

Image source and section adapted from Sebastian Raschka

This idea is depicted above.

More generally, for multiple linear regression, we have

y = m1x1 + m2x2 + m3x3

When expressed in matrix form, this is represented as

            y = X . m

where X is the input data, m is a vector of coefficients, and y is a vector of output variables for each row in X. To find the vector m, we need to solve this equation.

There are various ways to solve a set of linear equations.

 

The closed form approach using the Matrix inverse

For smaller datasets, this equation can be solved by computing the matrix inverse. Typically, this is the way you learned to solve a set of linear equations in high school.

Source: Matrix Inverse, Maths is Fun

 

This closed-form solution is preferred for relatively small datasets, where the inverse of the matrix can be computed at a reasonable computational cost. It does not work for large datasets, or where the matrix inverse does not exist.
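As a concrete sketch of the closed-form approach, here is a NumPy example on a small toy dataset (the data and coefficient values are hypothetical, chosen only for illustration):

```python
import numpy as np

# Toy dataset: y = 2*x1 + 3*x2 plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, 3.0]) + rng.normal(scale=0.1, size=100)

# Closed-form solution via the normal equation: m = (X^T X)^(-1) X^T y
m = np.linalg.inv(X.T @ X) @ X.T @ y
print(m)  # close to [2.0, 3.0]
```

In practice, np.linalg.lstsq is usually preferred over an explicit inverse, since it also copes with matrices whose inverse does not exist.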

Instead of computing the matrix inverse, there is another way to solve a set of linear equations. Recall that, in the above diagram, the overall problem is to reduce the value of the loss function. The cost function J(w), the sum of squared errors (SSE), can be written as:

J(w) = (1/2) Σ (y(i) - ŷ(i))²

where ŷ(i) is the value predicted for the i-th sample.

Hence, to solve these equations in a statistical manner (as opposed to a deterministic one), we need to find the minima of the above loss function. The minima of the loss function can be computed using an algorithm called Gradient Descent.

The Gradient descent approach to solve linear equations

Typically, in two dimensions, the Gradient descent algorithm is depicted as a hiker (the weight coefficient) who wants to climb down a mountain (cost function) into a valley (representing the cost minimum), and each step is determined by the steepness of the slope (gradient). Considering a cost function with only a single weight coefficient, we can illustrate this concept as follows:

 

Using the Gradient Descent optimization algorithm, the weights are updated incrementally after each epoch (= pass over the training dataset). The magnitude and direction of the weight update are computed by taking a step in the opposite direction of the cost gradient. In three dimensions, solving this equation involves computing the minima of the loss function expressed in terms of the slope and the intercept, as below.

 

 

Source: http://ucanalytics.com/blogs/intuitive-machine-learning-gradient-descent-simplified/
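To make the hiker analogy concrete, here is a minimal gradient descent loop for simple linear regression (the toy data, learning rate, and epoch count are hypothetical, chosen for illustration):

```python
import numpy as np

# Toy data: y = 4x + 1.5 plus noise
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 4.0 * x + 1.5 + rng.normal(scale=0.1, size=200)

m, c = 0.0, 0.0  # start the "hiker" at an arbitrary point
lr = 0.1         # learning rate: the size of each step down the slope
for _ in range(500):  # each pass over the data is one epoch
    error = (m * x + c) - y
    grad_m = (error * x).mean()  # partial derivative of the mean squared error w.r.t. m
    grad_c = error.mean()        # partial derivative w.r.t. c
    m -= lr * grad_m  # step in the opposite direction of the gradient
    c -= lr * grad_c
print(m, c)  # close to 4.0 and 1.5
```

Each iteration moves the slope and intercept a small step downhill on the cost surface; after enough epochs they settle near the values that minimise the error.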

Use of Gradient descent in multi-layer perceptrons

You saw in the previous section how the Gradient descent algorithm can be used to solve a linear equation. More typically, you are likely to encounter gradient descent when solving equations for multilayer perceptrons, i.e. deep neural networks (where we have a set of non-linear equations). While the same principle applies, the context is different, as we explain below.

 

In a neural network, as with the linear equation, we also have a loss function. Here, the loss function represents a performance metric which reflects how well the neural network generates values that are close to the desired values. Intuitively, the loss function is the difference between the desired output and the actual output. The goal of the neural network is to minimise the loss function for the whole network of neurons. Hence, the problem of solving the equations represented by the neural network also becomes a problem of minimising the loss function. A combination of the Gradient descent and backpropagation algorithms is used to train a neural network, i.e. to minimise the total loss function.

The overall steps are:

  • In the forward pass, the data flows through the network to produce the outputs
  • The loss function is used to calculate the total error
  • Then, we use the backpropagation algorithm to calculate the gradient of the loss function with respect to each weight and bias
  • Finally, we use Gradient descent to update the weights and biases at each layer
  • We repeat the above steps to minimise the total error of the neural network

Hence, in a single sentence: we propagate the total error backward through the connections in the network, layer by layer; calculate the contribution (gradient) of each weight and bias to the total error in every layer; then use the gradient descent algorithm to optimise the weights and biases, eventually minimising the total error of the neural network.
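The steps above can be sketched with the smallest possible network: a single sigmoid neuron trained by backpropagation and gradient descent (the input, target, and learning rate here are hypothetical, chosen only for illustration):

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

w, b = 0.5, 0.0        # weight and bias
x, target = 1.0, 0.8   # a single training example
lr = 0.5               # learning rate

for _ in range(2000):
    out = sigmoid(w * x + b)        # forward pass
    d_out = out - target            # dLoss/d(out) for loss = 0.5*(out - target)^2
    d_s = d_out * out * (1 - out)   # backpropagate through the sigmoid (chain rule)
    w -= lr * d_s * x               # gradient descent update for the weight
    b -= lr * d_s                   # and for the bias

print(sigmoid(w * x + b))  # close to the target 0.8
```

Real networks repeat exactly this forward/backward/update cycle, just across many neurons, layers and training examples.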

Explaining the forward pass and the backward pass

In a neural network, the forward pass is a set of operations which transform the network input into the output space. During the inference stage, the neural network relies solely on the forward pass. For the backward pass, in order to start calculating error gradients, we first have to calculate the error (i.e. the overall loss). We can view the whole neural network as a composite function (a function composed of other functions). Using the Chain Rule, we can find the derivative of a composite function; this gives us the individual gradients. In other words, we can use the Chain rule to apportion the total error to the various layers of the neural network. This is the gradient that will be minimised using Gradient Descent.

A recap of the Chain Rule and Partial Derivatives

We can thus see the process of training a neural network as a combination of Back propagation and Gradient descent. These two algorithms can be explained by understanding the Chain Rule and Partial Derivatives. 

The Chain Rule

The chain rule is a formula for calculating the derivatives of composite functions, i.e. functions composed of other functions. Given a composite function f(x) = h(g(x)), the chain rule gives its derivative as

f'(x) = h'(g(x)) · g'(x)

You can also extend this idea to more than two functions. For example, for a function f(x) composed of three functions A, B and C, we have

f(x) = A(B(C(x)))

The chain rule tells us that the derivative of this function equals:

f'(x) = A'(B(C(x))) · B'(C(x)) · C'(x)
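By the chain rule, this derivative works out to A'(B(C(x))) · B'(C(x)) · C'(x). As a quick numerical sanity check, take the hypothetical choices A = sin, B = squaring, and C(x) = x + 1 (picked only for illustration):

```python
import math

# f(x) = A(B(C(x))) = sin((x + 1)**2)
def f(x):
    return math.sin((x + 1) ** 2)

# Chain rule: A'(B(C(x))) * B'(C(x)) * C'(x) = cos((x+1)^2) * 2*(x+1) * 1
def f_prime(x):
    return math.cos((x + 1) ** 2) * 2 * (x + 1)

# Compare against a central-difference numerical derivative
x, h = 0.3, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(abs(numeric - f_prime(x)) < 1e-6)  # True
```

The analytic chain-rule derivative and the numerical estimate agree to well within the tolerance, which is exactly what backpropagation relies on at every layer.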

Gradient Descent and Partial Derivatives

As we have seen before, Gradient descent is an iterative optimization algorithm used to find a local or global minimum of a function. The algorithm works using the following steps:

  • We start from a point on the graph of the function
  • We find the direction, from that point, in which the function decreases fastest
  • We take a small step down along that direction to arrive at a new point, and repeat

The slope of a line at a specific point is represented by its derivative. However, since we are concerned with two or more variables (weights and biases), we need to consider partial derivatives. Hence, a gradient is a vector that stores the partial derivatives of a multivariable function. It helps us calculate the slope at a specific point on a curve for functions with multiple independent variables. We need to consider partial derivatives because, for complex (multivariable) functions, we need to determine the impact of each individual variable on the overall derivative. Consider a function of two variables x and z. If we change x, but hold all other variables constant, we get one partial derivative. If we change z, but hold x constant, we get another partial derivative. The combination represents the full derivative of the multivariable function.
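For instance, for a hypothetical two-variable function f(x, z) = x²·z + z³ (chosen only for illustration), the gradient collects both partial derivatives:

```python
def f(x, z):
    return x ** 2 * z + z ** 3

def grad_f(x, z):
    df_dx = 2 * x * z             # partial derivative w.r.t. x, holding z constant
    df_dz = x ** 2 + 3 * z ** 2   # partial derivative w.r.t. z, holding x constant
    return (df_dx, df_dz)

print(grad_f(1.0, 2.0))  # (4.0, 13.0)
```

In a neural network, this gradient vector is what backpropagation computes, with one entry per weight and bias.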

Conclusion

In this post, you saw how Gradient descent applies both to neural networks and to linear equations.

Please follow me on LinkedIn (Ajit Jaokar) if you wish to stay updated about the book.

 

References

Ken Chen on LinkedIn

Under the Hood of Neural Networks, Part 1: Fully Connected


Using Helm Charts Tools to Manage Kubernetes Deployments at Delivery Hero

MMS Founder
MMS RSS

Delivery Hero’s (DH) engineering team wrote about their use of Helm and related tools to simplify management of multiple Kubernetes environments, sensitive data, and configuration. The other tools are helmfile, helm-diff, and helm-secrets. InfoQ reached out to Max Williams, Principal System Engineer at Delivery Hero AG, to gain more insights.

Multiple teams at DH use Helm charts to package applications and cluster level tools. Helmfile maintains information about which clusters have which charts installed, analogous to Ansible playbooks. Since they are stored in Git, it is easy to track and push changes to non-production clusters first. DH’s cluster consists of 100s of nodes and 1000s of pods, according to Williams.

Helm – the Kubernetes package manager – is a CNCF-hosted project widely used to install applications (as “charts” in Helm parlance) on Kubernetes clusters. Most teams start out using YAML files to declare the state of their clusters. This can quickly become unmanageable as the number of files increases. Helm simplifies this by grouping together all the resources needed to install an application into a chart. helmfile in turn helps to track different versions of charts across different environments (staging, production) by maintaining helm values files, chart versions and Kubernetes cluster contexts together.

The third tool that DH uses is helm-diff, which shows a color-coded diff of changes between versions. Even if the diffs look ok, things can still go wrong once deployed. Williams notes that they have not faced such instances yet, except for “issues that don’t already exist in Helm itself”.

helm-secrets is a Helm plugin that can encrypt, decrypt and view secrets files, and uses the Mozilla sops tool as the underlying encrypted file editor. sops provides “a wrapper around a text editor that takes care of the encryption and decryption transparently”. This integrates with both GCP and AWS Key Management Systems (KMS). DH’s Kubernetes is hosted “mostly on EKS and GKE, and some teams are still using kops clusters on AWS but not for much longer,” says Williams.

DH also uses Terraform for infrastructure automation and Sysdig and Prometheus for monitoring.


Presentation: Creating a Trusted Narrative for Data-driven Technology

MMS Founder
MMS RSS

Transcript

Dr. Joshi: I’m the clinical lead for AI. What does that mean? Those of you who know what NHS England is put your hands up. This is the first conference I’ve been to that’s not a health conference where people actually know. So I won’t explain that, I won’t tell you how money flows, and I won’t tell you what we do because you all know. Those of you who are developing something in health as well put your hand up or are from a health background. Well done. Thank you, sir, we will refer to you for future questions and answers.

This is not my slide deck. I work with a fantastic girl called Jess Molly, who most of you can find on Twitter, but she is the one who really should be up here. She’s doing a Master’s in the ethics of AI and health and care, and this is her slide deck. I will put all credit to her, but I will try and tell you a little bit about what we’re doing, why we’re doing it, and what we want to do over the next couple of years in this space. Those of you who are familiar with health, so my background is as a doctor. I’m an A&E doctor. I work in hospitals around London, so if any of you ever fall sick in this room, we definitely know how to call 999 but more importantly, you’re in safe hands. You’re in good hands here.

In health, we’ve been doing ethics for a long time. We get taught at medical school, so I was taught medical ethics. I didn’t go to all the lectures. Is it important one might say now? Yes, it was, but at the time, it was quite boring, and nobody really understood the point. We have this beautiful lecture theatre, I went to UCL up the road, which went all the way up. In the medical ethics lectures you’d have this gap of about 500 rows where it was empty and then at the very back, we all sat because we just could not be bothered to listen to medical ethics. Now, as you grow up and you become a consultant, which is one of the senior levels in health and care, you realize, actually, you should have gone there. Because all the complaints come to you and you try and understand how do you actually deal with some of those complaints, but you’re not interested in my medical background. You want to know about some of the stuff we’re doing in health and care and AI.

What’s Happening in Health and Care and Ai?

A bunch of reports have been written out there. We wrote one ourselves, this one over here, on what’s happening in health and care and AI. Actually, we don’t need to tell you this, but it’s a bit of a Wild West. There’s loads of stuff being written. If you’re an early-stage company, if you want to get VC funding, whack in the term AI. You’re almost guaranteed to get funding. Actually, a lot of investors then come and talk to us: “Do you think this company is really doing anything?” – “Have you actually looked at what it’s developing and whether it’s developing stuff ethically and in a good way?” “Well, no. They told us they can definitely get rid of doctors in 10 years’ time.” I was like, “There you’re going wrong. Think about what you’re doing.”

You need to set some rules in this game, don’t you? I always feel a little bit nervous standing up here in front of professionals who actually know what they’re doing and saying, “How do we set the rules?” but as government, we all want to set some rules. We’ve set up the Office of AI who are there to do four things, they say. They’re looking at how to develop the skills, which is really important. They’re also looking at how to make the UK a good place to do this stuff, but also to set the rules around this game of how do we, as a society, when we’re talking about artificial intelligence, or machine learning or whatever you want to call it, how do we do this in a good and ethical way?

Gareth [Rushgrove] talked about we’ve got loads of data. One of the things we had, when we went out to start, we wrote a Code of Conduct. This is my punch line. I’ll give it to you up front. One of the things we did when we started thinking about this Code of Conduct was, “Why do we need it? We’ve already got medical ethics. We have something called HRA,” which is our health and research, that body, advisory group or something. I can’t remember what HRA stands for, which is terrible, isn’t it? But they’re there; you need ethical approval before you do medical research, and it’s very clear, the guidelines.

One of the things that came out was that it’s not always clear – if you’re doing product development, so a lot of you will be developing products and also if you got ethical research approval, but then you went on and you did something else that wasn’t exactly what you said you were doing when you started out, so we wanted to talk to lots of people. We spoke to these people. I don’t know. Have any of you heard of understanding patient data? Definitely, go and have a look at their site. They’re a spinout of the Wellcome Trust, which is a think tank, and they really say, “Patient data is a funny thing, isn’t it?”

How many of you here have an app or something on your smartphone or on your whatever it is, your Apple watch, that you share data with freely about your health? It’s nearly all of you. You guys are really a distinct audience, but how many of you are aware of whether that data is being used for you, your direct need, or whether it’s being collected and utilized for bigger things? Bear in mind, you’re a really clever audience. The average reading age of the UK population is between five and seven, so people who come and see me in my A&E they barely understand when I say, “Take one tablet four times a day, one tablet four times a day.” “Yes, doctor, definitely.” Now, they will download these apps, they will download loads of things, and they will share things. They don’t always know what they’re doing in that space.

We, therefore, have the responsibility, as the safeguarders of the system, to ensure that those people, the data they’re sharing, but also that they understand what they’re doing, is right and ethical. We’ve worked really closely with the UPD, the Understanding Patient Data guys, to understand what the problem was and what the need was. I won’t go into this, but we’ve had some issues in the past, where people haven’t always understood the rules. Therefore, how do we decide what the rules of the game is? You need to create an ecosystem, don’t you, when you do this stuff? People will always talk about, “Oh, yes, let’s set some rules. Boom, there you go. Let’s have it, and really go and adopt it. I don’t care how, you adopt it but go and adopt it. ” But actually, how do you create an ecosystem in health that’s using large sets of data, that understands that actually, this is slightly different because obviously, those of you familiar with GDPR know that the rules of the game are slightly different?

What we wanted to do was try and create a bit of the rules of the game, which we did. I won’t say we’ve done it, I’ll say we’re starting to do with the Code of Conduct, which I’ll come to, but also then how do we win the community. How do we bring people along with us? When I say people, I mean you in the room, but also people who are the workforce. We’ve got 1.2 million people who work for the NHS. That’s just under the banner of the NHS contract, but then you’ve got a huge community of people. You’ve got charities, you’ve got volunteers sectors, you’ve got the think tanks that all work in this thing we believe of free at the point of care, health and care. Then, you’ve got the people in the system, the people who are buying, the people who are commissioning, but also the people who are developing, like some of you in the room. And so we have to create an ecosystem across all three areas to say, “Here are the rules. We want to work with you, so you help us design the rules of the game, but we also want to make sure that those who are deploying or buying this stuff understand the rules of the game, so that when you’re doing something here, it’s not a barrier when these guys want to then adopt it.”

Distributed Responsibility

It’s been a bit of a jigsaw puzzle that we’ve been trying to develop, but that’s what we want to do. Here are all the players. I mean, I’ve named some of them, but what does this say? Data controllers. You need to understand that there are lots of players in this game and you all understand this, but they all have to work together. We can’t turn around and say, “These are the rules. End of,” or the regulators turn around and go, “We set out the regulation. End of.” Let’s work together in this space to actually fundamentally understand what we need to do.

Again, there’s sort of this vision of a regulatory loop, so when you develop something and I know this was talked about and Catherine [Flick] talked about it earlier, but you just don’t do it and it’s out there. You develop it, you improve it, you go around. There’s a loop, isn’t there? Make sure when you’re thinking about that loop that actually you work together in that space.

Finding the Balance

This is what, I think, is the most important thing – finding that balance between innovation and regulation and understanding that it’s okay to innovate, but when you do, make sure you do it in a way that fulfills some of those requirements there on the left. I’m not sure if anybody has talked about, but there are always political wins, aren’t there? There’s somebody somewhere that wants a thing, and you have to develop that thing, and whether that thing is the right thing or not requires a body of people to stand up and say, “No, I don’t believe this is right or wrong.”

I’ll give you an example of that. I used to work for an online medication provider, and we would provide medications. They were all done in a safe, ethical way and we followed the GMC guidance of remote prescribing, but sometimes, we would provide medication to young boys. When I say young boys, I mean boys who are 19 or 20. We followed the rules, we made sure it was all okay, but you have to think, “Was that the right thing to do, to do it in a remote way? So I never saw you, but I understood that you had a need, because you declared you had a need.” Then, one of the things we pushed back, we said, “We, as the system who are providing these medications, we need to think a bit more. This isn’t just about our profit balance sheet here. We need to think whether this is the right thing to do.”

We then put in some other safety measures to make sure we called these people up and explained what they were getting and whether they really needed that. This has nothing to do with technology, but it has to do with the fact that when there’s a remote interface, people can sometimes think they can get away with it because they don’t see you. When you sit in my A&E cubicle – I was going to say my office, but I definitely don’t. I have a little two by two area if I’m lucky. Otherwise, it’s in the corridor with a bit of a shield – you look at me. You look at me in the eye, and I have to make you believe what I’m saying is right and true. When you do it on an interface, when you’re doing it on a computer screen, or you’re doing it over the phone, it’s harder and understanding that balance between right and wrong – maybe I’m not explaining this correctly, but – sometimes you can find that you step away from it because you believe in the product and what you’re doing. However, at that individual level, it can be quite tricky. This is what we’re trying to say is there is a balance, but we can only really create that balance together.

10 Principles

Over the last year, we worked really closely with some academics. We worked closely with the developing agencies, so developers out there. We worked with commissioners as well, which is why we’ve got some funny principles in here. But most importantly, we try to work with the workforce and say, “What is it that you need from us as government or as people who are central commissioners, to say, ‘How do you create some rules around these games?'” Now, this is by no means an extensive, really great list like the AMC one, which maybe we should adopt some of their principles, but understand some things. Just be really simple – understand what your user needs are. In health, if you’re not creating to fulfill a need, but you’re only creating because there happens to be a data set available, which in the NHS, we do. We have some great data sets available, we’ve got HES data, we’ve got CPID data, we’ve got large data sets if you’re looking at screenings. If you’re looking at doing some really fun things, we’ve got them, you can access them, we have good ways to access those datasets. However, are you solving a problem?

One of the things I was just telling Gareth earlier is, I was at a conference a couple of weeks ago and a company in India (India, huge population) had created an algorithm which could screen diabetic retinopathy. They’d done it on a great big data set because obviously, there’s a lot of diabetics in India and we’re at higher risk of it from our genetic profile, but they found, “Great, we’ve screened this population. We now know that X percent are at risk of developing diabetic retinopathy, but we don’t know what to do now.” Say I screened all of you and you guys here I’m going to tell you, “Right, guys, in five years’ time, you’re all going to get diabetic retinopathy.”

For those of you don’t know what diabetic retinopathy is, it’s where your eyesight starts narrowing, you lose vision, and it’s because of your diabetes, which may or may not be poorly controlled. But I can’t do anything about it because I don’t have any doctors, I don’t have any healthcare staff and actually, I’ve just told you something, which is fundamentally going to change the way you live your life, but I can’t help you with that. There is my ethical dilemma. I’ve told you something. I’ve helped you, genuinely. I believe I’ve genuinely helped you because I’ve told you something you didn’t know, but now I’m standing there and I go, “Well done, but there’s nothing I can do to help you in that space.”

When you are thinking about solving a problem, which is why we understand your user need, and understand what are you going to contribute to that. It’s great to predict. Everybody says in AI, “We could do diagnostics, and we can really predict the future.” Great, let’s do it. I’m not against that, I think that’s a brilliant thing to do – to predict pathology. However, make sure we have the infrastructure in place to then catch the people who you’ve predicted. If you find that, you’re in a community that, for example, something really simple, doesn’t have wound care specialist nurses, or generally doesn’t have any diabetic specialist nurses, don’t go out and develop something that’s going to predict something when the people can’t catch up with that.

You can argue and say, “We need to do that because we don’t have any people.” Don’t ever forget there’s a human involved in all of this and in health and care, you are at your most vulnerable. How many of you have been sick or been in hospital? You’re vulnerable. When you’re sick, when you’re ill, you’re vulnerable. When you’re well, you try and maintain that wellness. People with chronic conditions will tell you very clearly they don’t want to be identified by their condition. They are still them. They happen to have something, and those of you in the room who have a chronic condition may agree with that statement. Always remember that when you’re developing something or when you’re doing something because you happen to have the data or you happen to have the technology, what is the problem you’re trying to solve and is that ethically okay to do?

I’m going to take a pause there considering I’ve been waffling on for about 20 or 30 minutes, but also do have a look at our Code of Conduct. It’s there on the .gov.uk site. We’re not by any means saying this is perfect. This is a start for health and care in a world where people are throwing money at it and people are developing at pace because there are good sets of data, although the data is not always brilliant. Some of it is a little bit dodgy, but the idea is think about it and do it in a good way, do it in an ethical way, and stick with the frameworks that are out there as well.

We do have, as I said, the HRA and we have a really good ethics framework. DCMS, the Digital Culture Media Sport guys, have got an ethical workbook that you can work through as well, so there are some good tools out there to help you. We’ve tried to base some of our ethics principles on theirs, and we’re working quite closely with the Turing Institutes as well to say, “Pick at it. Make sure that what this is, is actually helping innovation and helping people do things, but in a good, right way, especially when it comes to health and care because these people are vulnerable, but they also don’t always understand.” I’ve been in an A&E. I’ve been sick and admitted to ITU, and I didn’t understand, and this is my field. I know what was happening, but at that point in time, my brain shut down. I was just a girl in an A&E and I didn’t understand, and I assumed that the people looking after me did understand and were completely aware. If you are developing something for the clinicians or the workforce, make sure they understand as well, because the only question everybody always asks in health and care is why? They don’t really ask “How?” They only ever ask “Why?”

Questions and Answers

Participant 1: Do you think algorithms that affect psychology should be regulated?

Dr. Joshi: Currently, we have something called an Apps library in the NHS. It’s there, it went live a couple of weeks ago; it was in beta for about a-year-and-a-half. We won’t talk about that too much, but there’s quite a few chatbots in mental health, but there are also quite a few developers out there who are developing algorithms to a) help you understand the problem in front of you. There’s two ways in mental health. You’re a therapist, so you belong to an allied health professional, and then you’re a clinical psychologist or a psychiatrist, so you give drugs.

Those people who are therapists obviously do a lot of talking, a lot of verbal communication, and they do use algorithms to help them understand. One of the things Catherine [Flick] talked about earlier is understanding the diversity of your algorithms. You don’t need me to explain it, but they are being used a lot, and one of the things we’ve pushed is: don’t just base your whole product on your 12,000-user base. Test it outside as well, and make sure it’s applicable, especially when it comes to children. But this is a fine line.

Participant 2: Even consumer algorithms fall into that category, whether it’s what shows up on your news feed or Facebook feed or Instagram, etc. Do you think those should be regulated as well?

Dr. Joshi: It’s a fine line between regulation and innovation, isn’t it? One of the things I say is you need to create intelligent customers, and more of us need to go out and spread that message. You can’t regulate the world. Let’s be honest, you can’t regulate everything, and actually, those of you familiar with regulation will know we have this problem, and it’s nothing to do with an algorithm: in the UK, we can regulate within England, but we have separate regulatory bodies for Wales and Scotland, and you might live on the border. As a human being, if you’re living on the border, which regulation applies to you? You could say, “Here, I’m going to go to a clinic in England versus a clinic in Wales.” Two sets of regulation apply. They’re regulations, but they might not be exactly what you need. So whilst, yes, regulation is good and it can help, we, as a society, need to raise the bar and make sure we understand.

Participant 3: There’s been a fairly large amount of ruckus in the United States about services that do genetic testing like 23andMe, in particular, they were acquired by a pharmaceutical company. What is your viewpoint on whether people are entitled to not just the results of their data being used for research, but also potentially compensation, if people are profiting from it?

Dr. Joshi: That’s a really hard question. Thank you for that. It’s difficult. Those of you who have done your own personal genetic testing will know that, as a consumer, it’s exciting, isn’t it? It’s exciting to know, and it’s interesting to know whether or not that’s something that will help you or not help you. Here in the UK, we have quite clear laws about what you are and aren’t able to share, for example when it’s requested from an insurance perspective, and obviously, as the NHS, we have a slightly different health and care system.

This is my personal opinion, so it’s not a government opinion or anything, but I’d say it really depends. For me, genetic testing is something that’s quite dangerous. There was a recent case in the papers, I don’t know if you read it here in the UK, where a gentleman was diagnosed with a genetic condition and the hospital didn’t tell his daughter, who was pregnant at the time. She then sued because she felt that if they had told her he had this condition, she wouldn’t have gone on to have a baby who could be at risk of it. Take out the pharmaceutical industry for a moment: you’ve got a family here who are having a potential crisis. I don’t know what the outcome was; I don’t know whether she went on to have the child, or whether the child had the condition or not. But just think: if your dad or your mom hadn’t told you about something and you then found out that they knew, and then you don’t tell your children – even within the family unit, that causes ethical dilemmas, doesn’t it?

I think then for us to turn around and go, “Oh, yes, we definitely need that information to help us understand whether or not we should be giving you something” – it’s already a big messy story, so let’s try and get the family unit right before we start trying to get the rest right. I’m not a genetic therapist, I’m an A&E doctor. I find it really difficult to have that conversation with people, even on a one-to-one level. Nobody trained me to say, “This is how you talk about it.” I can tell you, “You’re going to get a life-threatening condition and die” – I’m trained to say that. I’m not trained to talk about genetic risk, not me anyway, but there may be a new generation who are trained to tell people about these kinds of conditions. This is what I mean: we have to bring people along with us on this journey and not just say, “You’re great. Well done, pharma, for doing that.”

Participant 4: You talked about the ethics classes at medical school and that it was there on the syllabus, but actually, lots of people didn’t necessarily pay as much attention as they might have done. There have been a bunch of conversations recently about ethics education in computer science and technology disciplines in general. What can we on the technology side learn from what’s worked and what hasn’t in ethics education on the medical side? If we’re just adding classes, are computer scientists any more likely to turn up than medical students were?

Dr. Joshi: I don’t know.

Participant 4: What can we learn in hindsight?

Dr. Joshi: What can we learn? I think it’s case studies. How many of you remembered 100% of my talk? Probably one person in the room, and even then you were probably writing it down or recording it. But you remember stories, don’t you? We all remember stories, and that’s how we, as a human species, have communicated for generations. One of the things that we all learnt and remembered was when the lecturer stood up and gave us their worst ethical dilemma, or explained why it was important to understand. I don’t really have any computer science stories – maybe you do – but what are the stories that made you learn? What were the real case examples?

One of the things I do is I belong to a network called One HealthTech. It’s about creating diversity and championing women, but also people from diverse backgrounds. Yesterday, we had an event for International Women’s Day where we tried to get people to talk about their failures, and actually, it was really successful, because getting people to talk about their failures and the dilemmas they have in life makes you remember. Now I remember all four people who spoke. I don’t remember what they did, but I remember what their story was, so that would be my take-home. And don’t be afraid to be honest. People always think they’ve got a lot to prove in this world. I try.

Participant 5: As a follow up to that, technologists are brilliant at trying to create solutions with a lot of certainties. Stories are a lot to do with nuance, and they’re a lot more fuzzy around the edges, and technologists aren’t very good with fuzzy around the edges. If you were to say, “Yes, here’s all this data and here’s all this stuff,” what would be the kinds of things that you would want technologists to be building, if you want us to help with stories? Does that make sense, the question?

Dr. Joshi: Sort of. I’ll be honest, not entirely.

Participant 5: Let me slightly rephrase the question then. We’re very good at trying to get perfect answers. Answers within the medical profession are very rarely perfect. I’m the son of a consultant pediatrician, so I know that they are never perfect. What are the right kinds of things for us to build? Because we shouldn’t ever try and build perfect, as far as I’m concerned.

Dr. Joshi: A roundabout way of answering that is the extremes of the world. Children, you’re familiar with. Grown-up people, as in old people over 85, they are never perfect. You know, even if you told me now, “My child had a little bit of a fever. Should I bring them into A&E?” without a doubt, I’d probably say, “Yes,” because nobody wants to risk that. It’s the same when my old lady comes into my A&E, she’s 94, she’s fallen over, she’s fine, but it’s 2:00 in the morning and then she says, “I want to go home,” I’m just, “No. You know what? You stay here. Unless you have a support system, you’re not going home on your own,” so there’s extremes.

In the middle, yes, we care, but we’re a bit more robust. I would say from maybe 18 to 50 or 60, you’re a bit more robust, and there are quite often clear-cut answers – there are conditions that have good clear-cut answers. Going back to the user need: if you’re fulfilling a need, so if you look at London’s data set and you say, “Actually, is there a user need for people who call up, I don’t know, 111?” – so we’re not talking about emergencies, but 111, which is for acute conditions. And I’m just looking at, say, 20 to 45-year-olds who call up with a query about vaccines – I just made that up – nice user need, nice simple problem, clear-cut answer, that’s fine. But don’t try and answer that question for 2 to 10-year-olds, because it won’t be a clear-cut answer. Just think about the population you’re doing it in, and how you would feel if that was your child, grandma, or mum.

Participant 6: What is your viewpoint on the Royal Free NHS Trust selling the A&E data to Google to track potential renal failure? Is that something we can expect more of? If it is, should there be an opt-out possibility, or can there be an opt-out possibility, even though the data was anonymized?

Dr. Joshi: I think you had three questions in there. The first question is: in this world, we’re learning. That was a first case; it got a lot of publicity because of the companies involved. However, they weren’t the only ones that had made that mistake. The lines between data for direct care, data for research, data consent and confidentiality are quite blurred. And these aren’t my words; these are the words of people who are experts in the field. We have something called the National Data Guardian, who has also said these are quite blurred lines, so we, as a system, have a responsibility to clarify those lines, which we are doing.

We recently published something on information governance and how you actually exchange data; that was released, I think, a couple of months ago by our IG experts. But the question is, can you opt out? Yes, we also have a national opt-out system, which is there. It’s run by our delivery partner, NHS Digital. Can we expect more of that to happen? This goes back to my original point, which I keep making – we need to work collaboratively with people. Let’s not just set the rules and say, “You have to follow the rules.” Let’s bring those people along with us to understand the rules and say, “Okay, if you are the IG expert in your area” – it might be a CCG, it might be a trust, or it might be what we now call ICSs, integrated care systems – “understand the rules clearly, but work with us to develop those rules so that, in the future, we don’t make these mistakes again.”

Participant 7: Thanks very much, the talk was really interesting. I’m interested in the GDPR and how that’s had an impact on what you do because you were talking about informed consent, basically, in terms of apps, in particular, and that you have problems with vulnerable users of apps. I’m wondering if the GDPR has helped you at all with that or if it has been a problem.

Dr. Joshi: Somebody told me I wasn’t allowed to say GDPR – it’s the Data Protection Act 2018. I got told off, but maybe it’s a government thing.

Participant 7: Sorry, I don’t know a lot of European stuff.

Dr. Joshi: It’s the same with any new regulation. We’re also changing medical device regulations, and I’m not allowed to say the B word, but if the B thing happens, then that’s going to cause a lot of trouble. Whenever any new regulation comes in, or regulation changes, it’s about bringing the community with you and understanding. A lot of the colleges have sent out lots of comms and engagement to say, “This is what it means.” From our perspective, it’s been great. I think it’s been really helpful. In health, those lines are a bit blurred, and there’s also the right to an explanation – what does that mean, and at what point does it apply? It’s huge; maybe I need to clarify it. We didn’t have a problem with vulnerable people; we just need to be aware of the vulnerability, rather than treating it as a problem, and I think that’s helped. It has certainly helped, for sure.

Participant 8: I have a question about AI and its limits, and how it can potentially impact face-to-face contact. Hypothetically, say you had AI – which, I assume, would be of the narrow variety – within healthcare, and there are many people with respiratory problems, and it’s able to detect this and optimize logistics, whatever, in order to solve it. What if it turns out that we have this epidemic of respiratory issues because of a lack of enforcement of building regulations – people have been getting these problems because of mold or pollution? I’m guessing the AI would not be able to detect this, and I assume that we wouldn’t want to just treat the symptoms of any issue; we want to find the root cause, because that would be more efficient in the end. How would you see AI intersecting with all these other parts of social infrastructure?

Dr. Joshi: I get really nervous about saying the term AI. As a doctor, I’m not really an AI expert by any means. For me, my understanding is that it’s just a form of stats and a form of math, and we don’t really ask that question about other statistical models. Maybe the question that needs answering is: how do we intersect with other parts of the ecosystem and other data sets? You’ve talked about respiratory, but a really good example would be your consumer data, and how we link your consumer data into understanding whether or not, for example, you’re elderly or you’ve got dementia or frailty. Does your shopping data match your health data? That’s a really clear, high-level example, which we can actually do something about. We just need to create the right APIs and the right standards to do that.

Long question, short answer: yes. We need to work on getting those two datasets to match, regardless of whether it’s respiratory or social determinants or whatever it is. Something as simple as: do your Waitrose points match the fact that you didn’t buy any toilet roll, or you bought loads of toilet roll, and then walked into A&E five days later with D&V [diarrhoea and vomiting]? You know, that would really help me, by the way, but we’ve got to work at it; it’s a much bigger system. Those of you who are familiar with data standards in health and care – I always get this number wrong, so I’ll just make it up – there are 120 different ways to measure your blood pressure or record your blood pressure. Simple. For me, it’s three numbers over two numbers. Sometimes, it’s three over three. Sometimes it’s two over two – get worried – but generally, three numbers over two numbers. That’s simple, isn’t it? It should be a simple way to record three numbers over two numbers, but when you record that data, there are 100 different ways of doing it. So we need to get our data standards right, and get the APIs right to then link into other data standards, to enable us to answer those questions – which will be in a narrow form at first.

Participant 9: My question is definitely not coming from my tech-journalist background, but as a frequent flyer of the Evelina and an SE1 mother. You’re talking about all this high-level data, but I see huge problems at the lower level. Eight GPs in my neighborhood just got failure ratings from the NHS because they weren’t checking results coming from the hospital. My experience with the Evelina and St Thomas’ has been glorious, but then there’s the GP experience, and the fact that these two sides can’t share data. I’m a huge tech-ethics person, but I look at that as a more pressing issue. Is there a way to combine the systems? Because the GP system is completely separate, in my understanding. Even if you get a service like pediatric bloods at the hospital, only the GP can read them, not the emergency department, things like that. So I’m more curious about the bare-level tech.

Dr. Joshi: Yes, now you’re talking about real-world problems – I thought we were talking about ethics. Yes, there is work being done. We have a program called LHCRE, Local Health and Care Record Exemplars; we have five areas which are doing basically that. It’s quite difficult. I’m no technology expert, but getting one system to talk to another, archaic system is very difficult, in my understanding and my knowledge, which is why I was talking about basic data standards. Let’s get basic data standards right. We can then start connecting.

There are ways around this, I understand, you can have ways around it. In London, we have a system called One London where currently they are working on an architecture where you can read, so you might not be able to write, but you can definitely read and that is happening. It may be that your GP practices aren’t quite up there, but from a hospital perspective, that is definitely happening, and they are reading into certain GP systems. Without going into the whole politics of it all, we can talk about this all day, that is happening, yes. And Matt Hancock’s tech vision is all about that – I don’t know if you’ve read it.

Moderator: If you want to work on that problem, I don’t know if anyone is here, but the folks at NHS Digital are doing a lot of really good work, in the last few years in particular.

Participant 10: Do you face a lot of resistance towards AI in this sector? If so, do you need to raise awareness or convince people or how do you do this?

Dr. Joshi: We use the term AI, but in healthcare, like in most other industries, we have a lot of resistance to change. I remember working in Southampton, and one of these consulting companies came along and said, “You can really create efficiencies in your A&E by moving your red paper from there to here.” I went, “Okay, thanks. Great use of money there.” We moved our printer, and they said, “Put red paper in it so people know it’s an emergency,” so we put red paper in it, and then we moved the printer closer to the pod where you send bloods up, and we did this for about a week.

We were all really keen, and then one of the secretarial staff was sick, so nobody ordered in red paper. Simple. Then we were all, “Well, where’s the white paper? We just need paper, because we don’t have time to fluff around with where the paper is.” Then what we realized was the paper was still stored over here, so we just moved the printer back the way it was. So we paid all this money for somebody to tell us how to create efficiencies in our A&E, but actually, none of us were brought along with that change. We were just, “Great, thanks for that. We’ll just leave the printer where it was, using the paper that we always knew,” and bloods still took a little bit longer to get results for.

The moral of that story is people are resistant to change, especially in an industry like ours in health, where we are so indoctrinated in this paternalistic view of health and care. One of the things we try and champion is turning the model upside down: you, the individual, should be in charge of your health and care. You should only come to me when you need me, not because you can’t find that information or you don’t have the tools to look after yourself. Part of my day job, actually, is one of NHS England’s three pillars, called Empower the Person, where we’re trying to develop tools and technologies to help you. But the long answer is yes, there’s resistance – so bring people with you. The moral of the story: bring people with you. Don’t install an algorithm or something and say, “This is definitely going to work. Use it.” Understand their needs, and then solve the problem. User-centered design.

Participant 11: I’ve been massively blown away and inspired by this and I’m especially inspired that technology is like your secondary or tertiary skill, and yet you talk about it with such elegance. As a technologist, I don’t know how to start conversing in your world, the medical world. How do I go about learning a bit more about that, without getting a medical book that’s really not particularly consumable?

Dr. Joshi: Be yourself. The great thing about health, and the great thing about medicine, is we’re all experts. You are an expert in you, as a human being. How you work – that’s a detail. I mean, I don’t even remember the Krebs cycle. It’s not important: oxygen, carbon dioxide, something, something. There are some basic things, some basic pathologies, that you need to figure out, but the most important thing people care about in medicine is the “why”, not necessarily the “how”, and you are probably an expert in the “why” already. You just need the confidence and the self-belief that you do know. Jennifer, are you a medical expert?

Participant 12: No.

Dr. Joshi: No, but do you feel like one?

Participant 12: Yes, for my son, sometimes.

Dr. Joshi: Yes, see, and we have these. We have what we call patients or experts by experience. You don’t need to know anything about medicine per se. You need to understand health and what makes a person tick and what makes the system tick. But that hasn’t answered your question of how the NHS works, which is a very long conversation, and that I would definitely do some background reading on.
