GitHub Introduces go-gh to Simplify the Creation of GitHub CLI Extensions

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Introduced in GitHub CLI 2.0, extensions allow developers to add new features by writing small Go programs. To make it easier to create extensions, GitHub is now releasing a new library, go-gh. Additionally, the latest version of GitHub CLI introduces two new commands to search and browse the catalog of available extensions.

Written in Go, the go-gh library contains portions of the code used in GitHub CLI itself, which GitHub hopes will help developers write high-quality extensions in less time.

To create a new extension, you can use the gh ext create command. This will initialize a Go project with some boilerplate code in main.go and set it up to link against go-gh. The library provides a number of functions for common tasks, such as accessing the repository the current directory is tracking, or creating a REST, HTTP, or GraphQL client to send requests directly to GitHub APIs.

One advantage of using the go-gh-provided REST, HTTP, or GraphQL clients is that they transparently use the current GitHub CLI environment configuration, relieving the programmer from dealing with hostnames, auth tokens, default headers, and other requirements of those APIs. If you need a custom authentication workflow, i.e., one unrelated to how the user has authenticated with GitHub CLI using gh auth login, you can do that, too.
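
As an illustration, this is roughly what a call through the go-gh REST client looks like. The sketch assumes the v1 module path github.com/cli/go-gh and a user who has already run gh auth login, so check the library's documentation for the exact API of the release you use:

    package main

    import (
        "fmt"
        "log"

        "github.com/cli/go-gh"
    )

    func main() {
        // The client picks up hostname, auth token, and default headers
        // from the existing GitHub CLI configuration.
        client, err := gh.RESTClient(nil)
        if err != nil {
            log.Fatal(err)
        }
        response := struct{ Login string }{}
        // Fetch the currently authenticated user via the REST API.
        if err := client.Get("user", &response); err != nil {
            log.Fatal(err)
        }
        fmt.Println("running as", response.Login)
    }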

To increase developer productivity when creating extensions, go-gh also aims to integrate nicely with the rest of the features provided by the gh command, including the ability to execute existing commands. For example, this is how you would execute gh issue list -R cli/cli and capture its output in stdOut:

    args := []string{"issue", "list", "-R", "cli/cli"}
    stdOut, stdErr, err := gh.Exec(args...)

The gh.Exec function uses os/exec and GitHub’s own safeexec package behind the scenes to provide the expected result.

To help developers get started with go-gh, GitHub provides a step-by-step tutorial describing how you can select a repository, accept an argument, talk to the API, format the output, and so on. Additionally, the official documentation provides extensive examples for each of go-gh’s functions.

On a related note, GitHub CLI 2.20 introduces two new commands to make it easier for developers to discover extensions. Executing gh ext browse will display a terminal user interface (TUI) to list and filter extensions as well as to install or uninstall them. gh ext search is the counterpart to gh ext browse meant for automation and scripting. This is a traditional shell command that outputs a specified number of results sorted by a given criterion. For example, this is how you can list the ten most recently updated extensions whose owner is xyz:

gh ext search --limit 10 --sort updated --owner xyz

The command can return its results as a textual table or in JSON format.

go-gh is available on GitHub.



Article: Data Protection Methods for Federal Organizations and Beyond

MMS Founder
MMS Alex Tray

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • Some of the most substantial data breaches occur due to human error: theft of hard drives or physical devices, misconfigured databases, and mistakes that lead to important data loss.
  • The Federal Data Strategy is a framework describing the US government’s 10-year plan to “accelerate the use of data to deliver on mission, serve the public, and steward resources while protecting security, privacy, and confidentiality.”
  • Safe disposal of sensitive data on employees’ PCs and smartphones helps optimize the devices’ performance and data usage.
  • The only reliable way to regain control of your data after a ransomware attack is a valid backup.
  • A well-thought-out backup strategy should include creating backup copies on a daily, weekly, and monthly basis. If malware sneaks unnoticed into daily or weekly backups, you can recover most of your data from a monthly copy.

Data loss or theft is a highly probable and ugly eventuality, which is a problem for those in charge of preventing data breaches. Some of the most substantial data breaches occur due to human error: theft of hard drives or physical devices, misconfigured databases, and mistakes that lead to important data loss involving social security or driver’s license numbers, bank accounts of citizens, voting affiliations, and other sensitive data. However, you can prepare for and mitigate data loss in the case of a data leak by backing up all critical data. The goal of a backup is to keep a copy of the data that can be restored.

In this post, we define the Federal Data Strategy framework and explain the responsibilities of a chief data officer (CDO) and the methods to enhance data protection both offline and online for federal organizations. We will take a look at:

  • What the Federal Data Strategy is and why it is important
  • Who should be in charge of data-related workflows in an organization
  • Which data protection methods can you leverage for a federal organization
  • How you can utilize these protection measures effectively to protect sensitive data

So let’s get started.

Federal Data Strategy: Definition 

According to OMB (Office of Management and Budget), the Federal Data Strategy is a framework describing the US government’s 10-year plan to “accelerate the use of data to deliver on mission, serve the public, and steward resources while protecting security, privacy, and confidentiality.”

This framework includes:

  • A mission statement
  • 10 operating principles that remain relevant throughout the strategy’s lifetime
  • 40 best practices that are aspirational goals for 5-10 years
  • Steps to implement the practices

The Federal Data Strategy involves using those points for federal data management guidance and leveraging the value of federal (and federally-sponsored) data. The original action plan was published in 2020, when the COVID-19 pandemic made working from home the “new normal” and set new challenges for data confidentiality and security. The latest Federal Data Strategy Plan update was released in October 2021, wrapping up the successes and lessons learned since the first iteration of the framework.

Chief Data Officer: Responsibility Area

A chief data officer (CDO) is a senior leadership representative who is responsible for the governance and use of data across an organization. What does a chief data officer do? As part of their functions, chief data officers organize and oversee data-related activities, such as:

  • Data governance: monitoring, governing, and consulting on enterprise data
  • Data operations: providing data efficiency, availability, and usability
  • Data innovation: digital transformation initiatives, revenue generation, and data cost reduction
  • Data analytics: providing and supporting data analytics efforts, as well as reports on markets, customers, operations, and products

In many organizations, the responsibilities of CDOs can overlap with those of chief analytics officers (CAOs) and chief digital officers (CDOs). This makes a clear-cut definition difficult to provide. Also, chief data officers can closely interact with the marketing department, particularly with the chief marketing officer (CMO), to ensure efficient data usage for improving sales and relations with customers. 

Data Protection Methods for CDOs to Apply

The principles and methods of data protection remain the same, regardless of whether the organization is local or federal. The main principle that chief data officers and leaders of federal organizations should consider when thinking of data protection methods is thoroughness.

Check and apply the recommendations for offline and online methods below. These measures, when applied correctly, increase the resilience of your organization’s infrastructure and help implement the elements of the Federal Data Strategy.

Offline Data Protection Methods

Although digital transformation and technological achievements make organizations concentrate on the latest online security principles and standards, offline protection methods remain viable. Implement the following best practices to optimize offline data protection and reduce the probability of a critical data breach.

Safely Dispose of Sensitive Data on Smartphones and PCs 

Safe disposal of sensitive data on employees’ PCs and smartphones helps organizations avoid sensitive data leaks by irreversibly wiping out data from drives. For example, consider using a specially designed application to overwrite hard drives before removing old corporate computers. The same should also be done for smartphones. Otherwise, an organization risks causing data leakage incidents if some dedicated and smart hacker finds those devices and accesses their hard drives.

Shred Redundant Documents

You must also shred all physical documents containing sensitive information. For instance, receipts, credit offers, insurance forms, and bank statements should be made unintelligible before disposal. Otherwise, thieves can glean the sensitive data they contain. Then, criminals can abuse that data to steal identities.

Lock Physical Rooms

Lock up all rooms containing physical gadgets storing critical data. Don’t forget to lock away PCs and other offline materials containing sensitive data.

Don’t Write Down Passwords

You need to train all your employees never to write down their passwords. For instance, writing down the passwords for shared company computers can accidentally cause those passwords to leak into the wrong hands.

Use Cameras

A good idea is to use CCTV cameras in your office for monitoring. This way, you can easily monitor for any illegal activity or attempts to access safes or rooms with restricted information.

Protect Keyboards

Training your staff to shield their keyboards when typing in their passwords is also essential. This makes it easier to prevent password theft.

Encrypt Hard Drives

Lastly, encode all laptop and desktop hard drives. This encryption will ensure that nobody can access critical information if a laptop is lost or stolen.

Online Data Protection Methods

Online methods also require care and thoroughness to reap the most benefits from them. Follow the recommendations below to secure the organization’s networks and online resources from hacker infiltration.

Don’t Overshare on Social Media

These days, social media is the “in thing.” Therefore, many enterprises and other federal organizations want to prove to the world that they are “also social.” However, the only prudent choice is to share your data on these platforms sparingly and carefully. Otherwise, identity thieves can steal that data and cause trouble for your organization.

Mind Your Passwords

Did you know that a seven-character complex password can be cracked in 31 seconds, while shorter or less complex passwords can be cracked almost instantly? Having a weak password is one of the easiest routes hackers can use to break into your systems. Moreover, avoid synchronizing work email accounts with personal ones, because this compromises login credentials, passwords, and access codes.

A strong password consists of at least 8 characters, including uppercase and lowercase letters, numbers, and special characters. Keep in mind that strong passwords should not have any meaning. Don’t use, for example, your pet’s name or your child’s birth date as a password unless you want hackers to easily access your account.
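
As a trivial illustration of these composition rules, the following Python sketch checks a candidate password against them; a real policy should also rely on length, entropy, and breach-list checks rather than composition alone, and the example passwords are made up.

import string

def meets_basic_policy(password: str) -> bool:
    """Check the composition rules above: length, upper, lower, digit, special character."""
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(meets_basic_policy("Tr0ub4dor&3"))   # True: meets every composition rule
print(meets_basic_policy("fluffy2019"))    # False: no uppercase or special character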

Use a VPN

A VPN isolates your traffic to protect you against hackers and spies who aim to steal your details while transacting online.

Implement 2FA

You should also use two-factor authentication as an extra layer to boost your password protection.

Ensure Cloud Storage Encryption

Fortify your online protection by using encryption from a reputable cloud service provider. Check that your vendor offers local encryption and decryption for your vital information. This way, the service provider handles encrypting and decrypting the data from your computers for secure cloud storage, and nobody can access your data without permission.

An industry-accepted encryption specification that governments and security organizations use is the Advanced Encryption Standard (AES). AES is a block cipher, meaning it encrypts data in fixed-size blocks rather than bit by bit. The strongest commonly used variant is AES-256, which uses a 256-bit encryption key.
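
To make this concrete, here is a minimal Python sketch of AES-256 encryption using the widely used cryptography package (AES in GCM mode); the key handling and sample data are placeholders, not a production key-management scheme.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a random 256-bit key; in practice this would live in a key vault or HSM.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must be unique per message encrypted with the same key
ciphertext = aesgcm.encrypt(nonce, b"sensitive citizen record", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"sensitive citizen record"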

Another widely used standard to encrypt data is RSA (Rivest-Shamir-Adleman). RSA encryption relies on a public encryption key and a private decryption key on the side of a data recipient. This is a reliable way, for example, to keep personal data private when sending that data via online tools. On the other hand, unlike AES, RSA is not appropriate for encrypting considerable amounts of data.

Double-Check Everyone

An effective strategy is to screen every person you or your employee “meets” online. For example, if they reach out to you claiming to represent a popular organization that you know, avoid providing any sensitive information to them until their association with that organization is proven.

Set and Run Backup Workflows

Remaining one step ahead of security solutions, hackers can bypass any protection system sooner or later. Therefore, a valid backup is the only reliable way to regain control of your data after, for example, a ransomware attack. Organize a process to copy and store your organization’s sensitive data in different protected repositories. You can keep the data online for quick access and recovery and offline for long-term retention and increased resilience. 

A copy kept offline can remain safe and usable when your main production site and online backup storage fall victim to ransomware. A well-thought-out strategy also includes backup tiering: creating backup copies on a daily, weekly, and monthly basis. Thus, in case security monitoring software fails and malware sneaks unnoticed into more relevant daily or weekly backups, you can recover most of your data from a monthly copy.
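
As a simple illustration of that tiering idea, the following Python sketch decides which hypothetical backup copies to retain under a daily/weekly/monthly scheme; real products express this as retention policies rather than hand-written code.

from datetime import date, timedelta

def keep(backup_date: date, today: date) -> bool:
    """Keep dailies for a week, weekly (Sunday) copies for a month, monthly (1st) copies for a year."""
    age = (today - backup_date).days
    if age <= 7:
        return True                               # daily tier
    if age <= 31 and backup_date.weekday() == 6:  # Sunday
        return True                               # weekly tier
    if age <= 365 and backup_date.day == 1:
        return True                               # monthly tier
    return False

today = date(2023, 1, 16)
backups = [today - timedelta(days=n) for n in range(400)]
retained = [d for d in backups if keep(d, today)]
print(f"{len(retained)} of {len(backups)} copies retained")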

Additionally, modern data protection solutions enable you to make backups immutable. Immutability protects the data in a repository from changes or deletions throughout a set period. Therefore, immutable backups can be used for recovery even if ransomware reaches backup storage.

Find a solution that would help you schedule and automate backup and recovery workflows, enable backup tiering, configure retention policies, and enable backup immutability. That is the only way to ensure timely backup updating and data restoration even in the aftermath of a data loss disaster.

Conclusion

The importance of data protection for federal organizations is impossible to overestimate. Aware of modern tendencies and challenges in the IT field, the US government introduced the Federal Data Strategy in 2020. This framework combines principles and practices for improving sensitive data protection at the federal level over the upcoming decade. This guide offers specific methods and practices that a chief data officer can apply to improve data resilience.

Although the guide focuses on federal organizations, the provided data protection methods would work for any organization regardless of size and industry field. Check and apply the recommendations to improve the resilience of your organization’s data and infrastructure.



eBay New Recommendations Model with Three Billion Item Titles

MMS Founder
MMS Claudio Masolo

Article originally posted on InfoQ. Visit InfoQ

eBay has developed a new recommendations model based on Natural Language Processing (NLP) techniques, in particular the BERT model. The new model, called ‘ranker’, uses the distance score between embeddings as a feature, so that the information in product titles is analyzed from a semantic point of view. The ranker allowed eBay to increase purchases, clicks, and ad revenue by 3.76%, 2.74%, and 4.06% respectively, compared with the previous model in production on the native apps (Android and iOS) and the web platform.

The eBay Promoted Listings Similar Recommendation Model (PLSIM) is composed of three stages: retrieving the most relevant Promoted Listings Similar items (the ‘recall set’); applying the ranker, trained on offline historical data, to rank the recall set according to the likelihood of purchase; and re-ranking the listings by incorporating the seller ad rate. The features of the model include recommended-item historical data, recommended-item-to-seed-item similarity, product category, country, and user personalization features. The model is continuously trained using a gradient-boosted tree to rank items according to their relative purchase probability; incorporating deep-learning-based features for similarity detection significantly increases performance.

The previous version of the recommendation ranking model evaluated product titles using Term Frequency-Inverse Document Frequency (TF-IDF) as well as Jaccard similarity. This token-based approach has basic limitations and doesn’t consider the context of the sentences or synonyms. BERT, a deep-learning approach, instead has excellent performance on language understanding. Since eBay’s corpora are different from books and Wikipedia, eBay engineers introduced eBERT, a BERT variant pre-trained on eBay item titles. It was trained with 250 million sentences from Wikipedia and 3 billion from eBay titles in several languages. In offline evaluations, the eBERT model significantly outperforms out-of-the-box BERT models on a collection of eBay-specific tagging tasks, with an F1 score of 88.9.

The eBERT architecture is too heavy for high-throughput inference, in which case recommendations can’t be delivered on time. To address this issue, eBay developed microBERT, a smaller version of BERT optimized for CPU inference. microBERT uses eBERT as a teacher during the training phase through a knowledge distillation process. In this way, microBERT retains 95%-98% of eBERT’s quality while reducing inference time roughly threefold.

Finally, microBERT is fine-tuned using a contrastive loss function called InfoNCE. Item titles are encoded as embedding vectors, and the model is trained to increase the cosine similarity between the embeddings of titles that are known to be related to each other, while decreasing the cosine similarity of all other pairings of item titles in a mini-batch.
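
The following PyTorch sketch shows the general shape of such an in-batch InfoNCE objective over normalized title embeddings; it illustrates the technique only and is not eBay’s actual training code (the batch size, dimensionality, and temperature are placeholders).

import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor, temperature: float = 0.07):
    """anchor/positive: (batch, dim) embeddings of item-title pairs known to be related."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    # Cosine similarity of every anchor against every positive in the mini-batch.
    logits = a @ p.T / temperature
    # The matching pair sits on the diagonal; all other pairings act as in-batch negatives.
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)

# Dummy 32-pair batch of 768-dimensional title embeddings.
loss = info_nce(torch.randn(32, 768), torch.randn(32, 768))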

This new ranking model achieves a 3.5% improvement in purchase rank (the average rank of the sold item), but its complexity makes it hard to run the recommendations in real time. This is why the title embeddings are generated by a daily batch job and stored in NuKV (eBay’s cloud-native key-value store) with item titles as keys and embeddings as values. With this approach, eBay is able to meet its latency requirements.
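
Conceptually, the offline/online split looks like the short sketch below, with a plain dictionary standing in for NuKV and a placeholder encoder standing in for microBERT.

from typing import Dict, List

def encode(title: str) -> List[float]:
    # Placeholder for microBERT inference; returns a dummy embedding here.
    return [float(len(title))]

all_item_titles = ["vintage film camera", "usb-c cable 2m"]

# Daily batch job (offline): precompute an embedding for every item title.
embedding_store: Dict[str, List[float]] = {t: encode(t) for t in all_item_titles}  # stands in for NuKV

# Online ranking path (low latency): a key lookup instead of model inference.
def title_embedding(title: str):
    return embedding_store.get(title)

print(title_embedding("vintage film camera"))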



Making IntelliJ Work for the Dev: The Insights Exposed by the New Book Written by Gee and Scott

MMS Founder
MMS Olimpiu Pop

Article originally posted on InfoQ. Visit InfoQ

Working smarter, not harder is an idiom we hear more and more often, together with being “in the flow.” As professional developers spend most of their productive time writing code in an IDE, knowing the tool better will probably amplify their productivity. Helen Scott, lead developer advocate at JetBrains, and Trisha Gee, engineer, author, keynote speaker and former developer advocate at JetBrains, have distilled their impressive careers into a new book, Getting to Know IntelliJ IDEA, to help developers improve their productivity with IntelliJ IDEA. In order to extract its gist, InfoQ reached out to them with a couple of questions.

InfoQ: Congratulations on publishing the book and thank you for responding to the questions for InfoQ’s readers. Can you please introduce yourselves?

Helen Scott:

After a couple of years of writing code, I decided that it isn’t for me, so I pivoted my career to Technical Writing following my passion for communication. Now I come full circle and use my drive for writing, speaking and learning coupled with my first love of Java in my Java Developer Advocate role, helping others excel in their chosen careers. IDEs weren’t around when I was a developer, but Trisha’s knowledge blew me away.

Trisha Gee:

In my 20-plus-year career, as a developer at LMAX or as a developer advocate at JetBrains, I learned how important knowing your IDE is. Through the years I produced a lot of content supporting others with tips and techniques to use IntelliJ IDEA at its best for creating Java applications. At one point I realized that I needed to find a coherent story about getting started with IntelliJ and how to use it further for day-to-day development. The book tells that story.

InfoQ: Who does your book target? The seasoned professional or the novice?

Scott:

This was one of the concerns we had while writing. We focused on three major themes:

  • Keyboard First
  • Always Green
  • Staying in the Flow

All these focused on two personas:

  • Trisha – tips for the advanced user
  • Helen – tips for the beginner

Gee:

Great question! We set ourselves the VERY difficult task to target both. The first category will probably want to read the book from cover to cover, being able to learn more. The second one will skip some parts focusing on more advanced subjects.

InfoQ: How do you recommend reading the book? Is it a one-time read, or a reference to visit when needed?

Gee:

The higher purpose of the book is to make the reader understand how IntelliJ IDEA “thinks” and in this way allow her to become more productive by making the IDE work for her. By understanding how IDEA was designed to help, when treading new waters you would know where to look, or the right combination of shortcuts to use.

The book is split into four distinct parts:

  • How IDEA was developed to make the developer’s life easier
  • Two parts containing step-by-step tutorials and key features
  • The last part provides a deeper dive into how specific functionality works in more detail

InfoQ: What are the key takeaways of the book?

Scott:

Learn how to get IntelliJ IDEA to do all the heavy lifting so you can focus on what you do best – solving problems. We believe that being able to use your tools efficiently boosts your effectiveness and perhaps even your happiness!

Gee:

IntelliJ IDEA is powerful and useful. You don’t need to know all of its functionality, but you should know the 20% of the features you’ll use 80% of the time. We’d like you to see when you’d use those features and how they help you. Using your IDE effectively will make you more productive, yes, but in my experience, it also makes development more satisfying. Even, dare I say it, more fun. Besides, the concepts apply to all IDEs from the Intellij family like PyCharm or Webstorm.

Even though writing the book was a tremendous effort, for Helen and Trisha it was just the beginning. Starting from the available content, they will create workshops and be present wherever the audience is curious about how the IDE can make their work more productive and fun. So, even if at points developers think they cannot extract more productivity from a workday, mastering the tools they use could boost it even further.



Creating Great Psychologically Safe Teams

MMS Founder
MMS Rafiq Gemmail

Article originally posted on InfoQ. Visit InfoQ

Sandy Mamoli, Agile coach and author of Creating Great Teams: How Self-Selection Lets People Excel, recently appeared on the No Nonsense Agile Podcast to discuss her experiences in creating and sustaining high-performing teams with a common purpose and underlying safety. Mamoli also shared approaches for dealing with typical toxic team behaviours, addressing questions relating to team members who are highly skilled but lack team cohesion, weakly skilled yet highly cohesive, “charming slackers” and those who avoid conflict to create “artificial harmony.” Keith Ferrazzi, author of Competing in the New World of Work and organisational coach, also recently wrote about his success in using social contracts within a large Agile transformation to cultivate empowered teams with candour. Mamoli and Ferrazzi both described having the safety to engage in candid feedback as a critical ingredient for high-performing teams.

Referring to Daniel Pink’s 3 elements of intrinsic motivation, Mamoli emphasised the importance of autonomy and purpose to high-performing teams, saying that many organisations successfully provide “mastery” to teams, while autonomy and purpose are more often neglected. She spoke about how most teams are usually formed by manager selection of individuals, “without giving them a purpose.” Mamoli, who has been enabling self-selection since her experience with TradeMe in 2014, said of team self-selection that “instead of managers deciding who is going where, we trust people to come up with a good solution for the entire organisation.” Mamoli explained:

People do their best work if they can choose who they work with and what they work on. If you choose a team that has a particular purpose you choose into that purpose. You can take responsibility to ensure you have the skills that are necessary and you want to work together. I find that people (team members) get this right, because they are the ones who have the information about what they want to do and who they want to work with.

Mamoli, who started her career as an Olympic athlete, also spoke about teams choosing their own managers, much as a sports person would choose a specific coach to learn from. While acknowledging that this “is not always possible in all organisations,” she said that this was particularly valuable where the manager is “someone who takes care of you, provides pastoral care, helps you in making career decisions, who coaches you or who you bounce ideas off.”

Discussing behaviours which hinder achieving high-performance, Mamoli spoke of the importance of understanding individual context when faced with staff or teams which avoid conflict to create an “artificial harmony.” As an example, she shared her experience of creating safe environments for teams within countries where “admittance of a fault may result in dismissal.” Mamoli said:

I have learned to have a lot of empathy with people not admitting mistakes if they are in a different context. It took a really long time and repeated experience of no-punishments-for-mistakes to build a team out of people where safety is just not there.

Ferrazzi wrote about the importance of candour to team health, stating that teams should be seen as peers to leaders. He describes enabling safety using “candour breaks”: sections of a meeting where teams are encouraged to point out “what is not being said?” He also uses “red-flag replays” as spontaneous interruptions for historical fact-checking. Ferrazzi also instilled unusual “safe words,” after which a team agrees to listen and intentionally avoid interrupting, e.g. “Yoda in the room.” Highlighting the dangers of conflict avoidance, Ferrazzi wrote of its risks and the importance of addressing it:

Conflict avoidance can be corrosive, even deadly, causing teams to miss opportunities and needlessly exposing them to risk. Members might recognize hazards but decline to bring them up, perhaps for fear of being seen as throwing a colleague under the bus… No matter how sensitive the issue or how serious the criticism, members must feel free to voice their thoughts openly—though always constructively—and respond to critical input with curiosity, recognizing that it is a crucial step toward a better solution.

Mamoli pointed out that “there is a lot of misunderstanding around psychological safety,” saying that “it doesn’t mean we’re super nice to each other and feel comfortable all the time.” She explained that the resulting behaviour should be that teams “hold each other accountable” and can safely provide direct feedback saying “this is what I need from you. Or you are not doing this.” She said that “this is what we need to remember psychological safety really means.”

Pointing to the importance of understanding your starting context, Ferrazzi wrote about some of the tensions often seen at the start of a less healthy team-centric transformation:

Before you can change the ways in which your team members interact and operate, you need a clear view of how they are functioning right now. Too often members have an unspoken agreement to avoid conflict, stick to their individual areas of responsibility, and refrain from criticism in front of the boss. And they may be willing to take advice only from higher-ups, not recognizing the vital role of peer-to-peer feedback. All that needs to change.

When asked about disproportionate salaries and conflict avoidance by women and others on work visas, Mamoli shared the importance of understanding that there is often a fear of losing their jobs, which would also “uproot their entire family.” Her approach is to get “people out of their shell” not only because it’s good for the team, but to enable them to acknowledge and understand that “just doing as you’re told to do is not the road to success.”

Mamoli was asked if a high-performing team could be achieved with only novice team members, lacking in skills but possessing the right behaviours. Mamoli said that without skill “there is no team spirit that can completely eliminate that problem” with immediacy. She said that if you have the right behaviour, in time “it will be possible to succeed.” Mamoli shared her experience with a Wellington-based company, with a very young team where no one knew very much about service design. The team was given five weeks to learn about service design. Mamoli shared that “this was a team with the behaviours being sent out to acquire the skills.” She said that “over time”, they did become a “great team.”

In contrast, Mamoli was asked about whether a team of highly skilled experts can become high-performing, when they did not have appropriate team behaviours. She cited Robert I. Sutton’s book on Building a Civilised Workplace and Surviving One, whilst identifying this as a behaviour problem, rather than a personality problem. Mamoli explained that “behaviour can be learned,” however, if this fails she also pointed out that one may need to remove the “toxic personality for the good of the team.” Mamoli spoke of her approach to coaching such team members:

Differentiate between the behaviour and the person. Give feedback on the behaviour and tell them what would be useful instead. These are the things you do. This is the impact. How about you do this other thing instead? I don’t believe in wrapping feedback… Pick the right time and the right person. Ask if you can give feedback, then be honest and upfront.

Mamoli was also asked about challenges with the “charming slacker” who is broadly liked but does not perform. She described the persona as “hard to deal with” because “other people like them and don’t see the problem.” She said that “if it has an impact on the team” she would have a conversation without making team members “gang up.” Mamoli also points out that “sometimes the lazy slacker has other qualities which are immensely important for the team” and can make a “huge difference for team cohesion.” She said that one must assess if the damage of removing the individual would “be greater than the hit on team performance.”

As part of a team’s social contract, Ferrazzi listed a range of intentional lenses including measuring conflict-avoidance, the presence of silos, excessive hierarchy, shared success, growth mindedness, maintaining a team “energy-level”, supportiveness and achieving innovation. He advocates workshopping and prioritising weaknesses; he recommends “start with a limited number and expand once practices have begun to shift.”

Mamoli shared that ultimately a team with psychological safety would address issues arising within the team. She said that such a high-performing team would have a “conversation with each other” rather than “escalating to my manager.” Describing her experience of high-performing teams, Mamoli listed the behaviours she’d expect to see in a team which has reached such a stage:

They enjoy working with each other. They bounce off each other. They respect each other. They want to work together. There is banter. There is constructive criticism. There is constructive feedback, and there is no sugar coating… It’s people who are honest, frank, direct with each other, and they have clarity and also a desire to reach their goal together, so they can overcome any differences they might have because simply their shared goal is more important.



BigCode Project Releases Permissively Licensed Code Generation AI Model and Dataset

MMS Founder
MMS Anthony Alford

Article originally posted on InfoQ. Visit InfoQ

The BigCode Project recently released The Stack, a 6.4TB dataset containing de-duplicated source code from permissively licensed GitHub repositories which can be used to train code generation AI models. BigCode also released SantaCoder, a 1.1B parameter code generation model trained on The Stack. SantaCoder outperforms similar open-source code generation models.

BigCode is a collaborative organization sponsored by HuggingFace and ServiceNow Research, with the mission of developing responsible and open-source language models. In response to recent criticism of some code generation AI models for using copyrighted code in their training data, BigCode began investigating the performance of models trained only on source code with permissive licenses, such as Apache or MIT. BigCode also created web-based tools for developers to determine if their code is contained in The Stack and to request it be excluded. To test the performance of models trained on The Stack, BigCode trained SantaCoder, which outperforms previous open-source code generation models on the MultiPL-E benchmark. According to BigCode:

We release all permissively licensed files for 30 common programming languages, along with a near-deduplicated version. In future work, we would like to further improve the released dataset. We are open to releasing data of other programming languages, plan to work on methods for removing PII and malicious code, and start experimenting with giving developers the possibility to have their data removed from the dataset. We hope The Stack will be a useful resource for open and responsible research on Code LLMs.

AI models for generating code are currently an active research area. In 2021, InfoQ covered OpenAI’s Codex and GitHub’s CoPilot, which are based on GPT-3 language models that are fine-tuned on code stored in public GitHub repositories. Although these models perform quite well at generating code, they have been criticized for copyright violations. In late 2022, InfoQ covered a lawsuit against Microsoft and OpenAI that alleges copyright violations, including lack of attribution required by the licenses of the included source code.

One goal of The Stack is to avoid these violations by only including source code with permissive licenses; that is “those with minimal restrictions on how the software can be copied, modified, and redistributed.” This includes MIT and Apache 2.0, but excludes “copyleft” licenses such as GPL, in part because copyleft advocates point out that models trained on GPL code could be considered “derivative works” which must themselves adopt the copyleft license.

Because excluding these repositories reduces the amount of training data, the BigCode team investigated whether this would reduce the performance of models trained on the dataset. They found that by near-deduplicating the dataset—that is, by removing both exact duplicates and files that are very similar—model quality was competitive with Codex. When training the 1.1B parameter SantaCoder model, however, the team discovered that filtering the dataset to only include 5-star repositories reduces model quality “significantly.”
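
Near-deduplication of this kind is commonly implemented with MinHash-based similarity search. The sketch below, using the datasketch library, illustrates the general idea of bucketing files whose token sets are highly similar; it is not BigCode’s actual pipeline, and the file contents are made up.

from datasketch import MinHash, MinHashLSH

def minhash(tokens, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for tok in set(tokens):
        m.update(tok.encode("utf8"))
    return m

files = {
    "a.py": "def add ( a , b ) : return a + b".split(),
    "a_copy.py": "def add ( a , b ) : return a + b".split(),  # near-identical duplicate
    "c.py": "print ( 'hello world' )".split(),
}

lsh = MinHashLSH(threshold=0.7, num_perm=128)  # approximate Jaccard similarity threshold
hashes = {name: minhash(tokens) for name, tokens in files.items()}
for name, mh in hashes.items():
    lsh.insert(name, mh)

# Files whose token sets are highly similar land in the same bucket.
print(lsh.query(hashes["a.py"]))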

Thomas Wolf, co-founder of HuggingFace, joined a Twitter discussion about SantaCoder. In response to a user’s complaint about the quality of code generated by the model, Wolf replied:

It’s a completion model, not (yet) an instruct fine tuned model so you should formulate your task as a completion task. For instance by writing your prompt as a code comment or docstring.
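
As a rough illustration of Wolf’s advice to phrase the task as completion, a prompt written as a function signature plus docstring can be fed to the model through the transformers library; the snippet below is a sketch that assumes the bigcode/santacoder checkpoint on the HuggingFace Hub and enough local resources to load a 1.1B parameter model.

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/santacoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# The SantaCoder checkpoint ships custom modeling code, hence trust_remote_code.
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)

# Phrase the task as a completion: a signature plus a docstring, not an instruction.
prompt = 'def fibonacci(n: int) -> int:\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0]))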

Both The Stack dataset and the SantaCoder model are available on HuggingFace.



Article: Why Team-Level Metrics Matter in Software Engineering

MMS Founder
MMS Ian Phillipchuk

Article originally posted on InfoQ. Visit InfoQ

Key Takeaways

  • If you care about things like impact, speed and quality then you need some form of metrics to work with so that you can begin to track your progress
  • The DORA Metrics of Deployment Frequency (DF), Lead Time for Changes (LTC), Mean Time to Recovery (MTTR), and Change Failure Rate (CFR) are a useful starting point
  • Tracking metrics at the team level allows the team to gain insight into their performance and experiment based on real feedback
  • Metrics should also be consolidated across the whole delivery organization
  • Metrics alone don’t tell the whole story – they need to be interpreted with insight and care

In a world where everything can have perspective, context and data, it doesn’t make sense to limit that to just part of your software development process. Your department’s work doesn’t stop when it’s submitted to git, and it doesn’t start when you get assigned your ticket. From the time that a work item first comes into focus to when it slides into your production code like a tumbler into your existing solution, there are many areas where something can go right and just as many where something might go wrong. Measuring those areas like any others in your pipeline is crucial to making improvements. We’re going to spend a little bit of time reviewing our terms and concepts, and then we’re going to dive into Jobber’s development process and discover how we:

  • Made the process of QA much easier by integrating just-in-time QA builds for our development branches
  • Streamlined our PR process to get work through our approvals and testing quicker
  • Integrated new services for handling failures and outages
  • And discovered why we weren’t giving our engineers enough time to put their heads down and just work (spoiler: it was meetings!), and why talking to your developers is as important as employing engineering metrics

The most commonly accepted industry standard for these measurements is the Four Keys set out by the DORA team at Google: Deployment Frequency (DF), Lead Time for Changes (LTC), Mean Time to Recovery (MTTR), and Change Failure Rate (CFR). At their heart, these four metrics measure how frequently you deploy your code to production (DF), the time between the work being finished and being deployed (LTC), how long it takes to recover from a serious production issue (MTTR), and how often your newly deployed code causes issues in production (CFR). In the abstract, they are key metrics across the general categories of the impact you are having on your customers, the speed at which you are doing so, and the consistency or quality of the services you are delivering. If you care about things like impact, speed and quality then you need some form of metrics to work with so that you can begin to track your progress.
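
To make the four measurements concrete, here is a minimal Python sketch that derives them from a hypothetical deployment log; it is purely illustrative rather than Jobber’s or DORA’s tooling, and the timestamps are invented.

from datetime import datetime
from statistics import mean

# Hypothetical deployment log: (work_finished, deployed, caused_failure, restored)
deployments = [
    (datetime(2023, 1, 2, 9, 0), datetime(2023, 1, 2, 15, 0), False, None),
    (datetime(2023, 1, 3, 10, 0), datetime(2023, 1, 4, 11, 0), True, datetime(2023, 1, 4, 13, 0)),
    (datetime(2023, 1, 5, 8, 0), datetime(2023, 1, 5, 9, 30), False, None),
]

period_days = 7

def hours(delta):
    return delta.total_seconds() / 3600

df = len(deployments) / period_days                               # Deployment Frequency
ltc = mean(hours(dep - done) for done, dep, _, _ in deployments)  # Lead Time for Changes
failed = [(dep, rest) for _, dep, bad, rest in deployments if bad]
cfr = len(failed) / len(deployments)                              # Change Failure Rate
mttr = mean(hours(rest - dep) for dep, rest in failed)            # Mean Time to Recovery

print(f"DF={df:.2f}/day  LTC={ltc:.1f}h  CFR={cfr:.0%}  MTTR={mttr:.1f}h")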

At Jobber, a provider of home service operations management software, we track these metrics and more as a Product Development department to ensure that we’re tracking changes to our progress in a measurable way. They help our teams be agile as they make changes, and be data-driven in their execution. If a team wants to try a new method of triaging bugs, or a new PR process, we can track that in real time against not only their previous performance, but also chart that same metric across the department as a whole, eliminating the risk of larger departmental noise ruining our data. Then those metrics on the individual or team tactical level roll up into groups, departments and eventually our entire organization. This gives us the fidelity to drill down into any layer we’d like, from the individual all the way to Jobber as a whole, and how we stack up against other organizations.

The Four Keys metrics are not the only thing we track as well but it’s always important to put a caveat on the data that you collect. Some of that noise at an individual, group or department level is very human in nature, and that context is invaluable. While we collect data on as many different facets as we can, we also recognize the human side of those metrics, and how a difficult project, switch in mission, or personal circumstance might affect one or more key metrics for an individual, team or even department.

That being said, Metrics (DORA/Four Keys and otherwise) have helped Jobber make a number of changes to our development process; including investing in build-on-demand CI/CD pipelines not only for our production environments but our developer environments as well, drastically impacting our LTC by getting test builds out to internal stakeholders and testers minutes after an engineer has proposed a fix. We’ve also streamlined our PR review process by cutting down on the steps required for us to push out hotfixes, new work and major releases, cutting down significantly on our DF. After reviewing some of our failure metrics, we integrated new services for dealing with outages and tech incidents, really improving our MTTR. Let’s dig into each of those examples a little deeper.

When we started examining our metrics more closely, we realized that there were a number of improvements we could make to our development process. Specifically, in investigating our Lead Time for Changes and Deployment Frequency, we realized that a key step where we were behind our competition was our ability to deliver code to internal parties quickly and efficiently. We discovered that the time between a change first being ready for review and actually being reviewed was much slower than at companies of a similar size. This allowed us to look deeper, and we found that getting our builds to our product owners, stakeholders, and others responsible for QA quicker would mean that we could tighten those loops and deploy much faster. We rolled out on-demand Bitrise mobile builds for all of our new PRs, which meant that it now only took 30 minutes to deliver builds with the contents of a change or a revision to all interested parties. This not only accelerated our feature development, but also had a meaningful impact on our MTTR and CFR metrics by simply getting code through our review process much faster.

When we looked at our Mean Time to Recovery and Change Failure Rate metrics we also discovered that we weren’t as efficient there as we would like. We were responsive in our responses to failure, but there was room to improve, especially in our communication and organization around incidents. We integrated Allma as an issue collaboration layer within our Slack channels, organizing and focusing communication around an incident. Before it was difficult for people to “hop-in” to help out with an issue, since it was typically scattered in a number of different places. Allma flows helped us to sort out those misconceptions and confusions by centralizing discussion of an active issue in one place, allowing many parties to jump in, monitor, or contribute to the resolution of an issue. In this as in the previous case, monitoring our metrics allowed us to find process and tooling changes in addition to specific technical or framework changes.

I want to zoom in on a specific problem that crept up in a really interesting way. When we looked through Jellyfish, our engineering measurement tool, we noticed a fundamental problem: our ICs (individual contributors) weren’t coding enough! We measure how many PRs our engineers push out as well as “Coding Days,” a rough approximation of how much of a day an engineer spends working on code versus the other parts of their job. We saw that over the year, our ICs were spending less and less time working on code and more and more time on the other demands on their time. A simple and obvious solution of “just code more” jumps to the untrained mind, but like any problem with metrics or data, sometimes a signal is telling you something and measuring it can often result in a lot of noise. The best way to work through that noise is to zoom in from the zoomed-out view in which we often see our metrics to the personal experiences of your ICs, and that sometimes takes tough conversations.

Underpinning those conversations has to be an element of trust, or else you won’t get any useful context out of it. At Jobber, while we examine as much data as we can to help inform our decisions we recognize that it’s at most only half of the equation, with the lived experiences of those being measured being the second half. A Jobberino is never measured by their data; it’s only used to contextualize what we’re seeing on our teams and groups and never used to define them or their work. This means that together we can look at metrics not as a measurement of someone’s worth to the company, but rather as the signals that might hint at a true problem. So when we sat down to make sense of the drop in our PRs, the first place we went was right to the source and started exploring the issue directly with our engineers, seeing what was keeping them from their important work building features and crushing bugs.

In what I’m sure is a familiar refrain to many who read this, when we dug into the data and the context around it, meetings were unsurprisingly the culprit. Specifically, meetings placed at inopportune times that would oftentimes break that all-important technical flow. As you may or may not know, a significant number of the problems that engineers deal with require at least 4 hours of uninterrupted time to solve. Therefore, meetings that break up that all-important flow often meaningfully set back the resolution of those problems. You also pay an opportunity cost, as those problems are often the hardest to solve and the most significant ones are set back the most. In this case, a relatively normal metric (the amount of time that our ICs were spending coding) was burying a mountain of useful changes and information, and we’re now actively monitoring the large chunks of time our engineers have available, in addition to just roughly measuring their productive time. We would never have found that information if we had not first measured our engineering efforts, and we definitely would not have had those critical contextual conversations had we not had an environment of trust that allowed us to tackle the signal, not the noise.

Beyond the Four Keys lies all sorts of interesting metrics, as well. We also measure not only the amount of defects resolved, but also how many we’re closing per week. Or the amount of time that a PR takes before it’s closed (as well as the number of PRs reviewed and comments on those PRs!). We even measure how many times teams and departments update our internal documentation and wiki resources, how many times we reinvest back into other developers or documentation each week. It ultimately comes down to this: if you don’t track it, you can’t measure it. And if you don’t measure it, then you can’t improve it. Especially at the engineering manager level and beyond, as you tweak, modify and adjust your policies, processes and tools, you want to have that visibility into the success or failure of any particular change. We have no worries slapping OKRs onto our features, and we should have that same vim and vigor on tracking our contributions to the business of development. Just make sure to always get the boots on the ground approach before you see some magical trend-line in the data.



Sigstore Releases Python Client

MMS Founder
MMS Matt Campbell

Article originally posted on InfoQ. Visit InfoQ

Sigstore has announced the 1.0 stable release of sigstore-python, a Python-based Sigstore-compatible client. The client provides a CLI as well as an importable Python API. It is able to sign and verify with any Sigstore-supported identity and has ambient identity detection for supported environments.

Sigstore provides a standard for signing, verifying, and protecting open-source software. It supports a process known as keyless signing where Sigstore generates ephemeral signing certificates without needing to maintain a private key. When generating a certificate, Sigstore encodes information from the OIDC token. This includes the path to the repository, the specific commit of the build, and a link to the file that contains the build instructions.

According to William Woodruff, the project had two main goals: usability and reference quality. As Woodruff explains, “sigstore-python should provide an extremely intuitive CLI and API, with 100 percent documentation coverage and practical examples for both.” There are other Sigstore clients being developed, such as for Go, Ruby, Java, and Rust, but the team would like sigstore-python to be among the “most authoritative in terms of succinctly and correctly implementing the intricacies of Sigstore’s security model.”

To achieve the usability goal, the client abstracts away many of the complicated bits of cryptography and presents two main primitives: signing and verifying. For example, signing can be accomplished via the CLI using sigstore sign:

$ echo "hello, i'm signing this!" > hello.txt
$ sigstore sign hello.txt

On desktops, this will prompt an OAuth2 workflow to provide authentication. On supported CI platforms, the client will automatically select an OpenID Connect identity. Currently, GitHub Actions, Google Compute Engine (GCE), and Google Cloud Build (GCB) are supported. There are plans to add support for GitLab CI and CircleCI.

With the importable Python API, it is possible to accomplish the same tasks as the CLI but within Python. For example, the above signing example but using the Python API looks like this:

import io

from sigstore.sign import Signer
from sigstore.oidc import Issuer

contents = io.BytesIO(b"hello, i'm signing this!")

# NOTE: identity_token() performs an interactive OAuth2 flow;
# see other members of `sigstore.oidc` for other credential
# mechanisms.
issuer = Issuer.production()
token = issuer.identity_token()

signer = Signer.production()
result = signer.sign(input_=contents, identity_token=token)
print(result)

The GitHub Action can be enabled by adding sigstore/gh-action-sigstore-python to the desired workflow. Note that the workflow must have permission to request the OIDC token to authenticate with. This is done by setting id-token: write on the job or workflow:

jobs:
  selftest:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - uses: actions/checkout@v3
      - name: install
        run: python -m pip install .
      - uses: sigstore/gh-action-sigstore-python@v1.0.0
        with:
          inputs: file.txt

Woodruff notes that the project is committing to semantic versioning for both the Python API and the CLI. They indicate that breaking changes will not be made without a corresponding major version bump. In future releases, Woodruff indicates there will be further integration into PyPI and the client-side packaging toolchain. They also hope to stabilize their GitHub Action.

sigstore-python is open-source and available under the Apache 2.0 license. Additional details can be found in the API documentation or in the #python channel of the Sigstore Slack.



Java News Roundup: MicroProfile 6.0, Kotlin 1.8, Spring Framework Updates

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

This week’s Java roundup for January 9th, 2023, features news from JDK 20, JDK 21, Spring Framework 6.0.4 and 5.3.25, Spring Data 2022.0.1 and 2021.2.7, Spring Shell 2.1.5 and 3.0.0-RC1, MicroProfile 6.0, Quarkus 2.15.3, Micronaut 3.8.1, Micrometer Metrics 1.10.3, Micrometer Tracing 1.0.1, Project Reactor 2022.0.2, Piranha 23.1.0, Apache Tomcat 9.0.71, JHipster Lite 0.26.0 and Kotlin 1.8.0.

JDK 20

Build 31 of the JDK 20 early-access builds was made available this past week, featuring updates from Build 30 that include fixes to various issues. More details on this build may be found in the release notes.

JDK 21

Build 5 of the JDK 21 early-access builds was also made available this past week featuring updates from Build 3 that include fixes to various issues.

For JDK 20 and JDK 21, developers are encouraged to report bugs via the Java Bug Database.

Spring Framework

The release of Spring Framework 6.0.4 delivers new features such as: Kotlin DSL support for the MockMvc class and the andExpectAll() method defined in the ResultActions interface; a new ExecutingResponseCreator class to delegate request and response; compatibility with Hibernate ORM 6.2; and native support for the @Convert annotation on JPA entities. This version will be included in the upcoming release of Spring Boot 3.0.2. More details on this release may be found in the release notes.

The release of Spring Framework 5.3.25 ships with new features such as: optimize object creation in the handleNoMatch() method defined in the RequestMappingHandlerMapping class; and add a title to factory methods of the SockJSFrame class for accessibility compliance. This version will be included in the upcoming release of Spring Boot 2.7.8. More details on this release may be found in the release notes.

Spring Data 2022.0.1 and 2021.2.7 have been released featuring mostly bug fixes and dependency upgrades to sub-projects such as: Spring Data MongoDB versions 4.0.1 and 3.4.7; Spring Data Neo4j versions 7.0.1 and 6.3.7; and Spring Data Elasticsearch 5.0.1 and 4.4.7. These releases will be consumed by upcoming releases of Spring Boot.

Versions 2.1.5 and 3.0.0-RC1 of Spring Shell have been released. Version 2.1.5 features an upgrade to Spring Boot 2.7.7 and a backport of some recent bug fixes. Version 3.0.0-RC1 features: an upgrade to Spring Boot 3.0.1; a better model of defining error handling with annotations; the CommandParser interface now reports errors for unrecognized options; and the CommandRegistration.Builder interface now has a shared configurable instance. More details on these releases may be found in release notes for version 2.1.5 and version 3.0.0-RC1.

MicroProfile

The MicroProfile Working Group has released MicroProfile 6.0 featuring alignment with Jakarta EE 10 and a new specification, Telemetry 1.0, that replaces the original Open Tracing specification. Updated specifications provided in this version are: Metrics 5.0, JWT Authentication 2.1, Open API 3.1, Reactive Messaging 3.0 and Reactive Streams Operators 3.0. The Open Tracing 3.0 specification, having been placed in the set of standalone specifications, is still available to developers. The Jakarta EE Core Profile, new for Jakarta EE 10 and now included in MicroProfile, contains the historical JSR- and Jakarta EE-based specifications, namely CDI, JAX-RS, JSON-P and JSON-B. More details on this release may be found in the release notes and InfoQ will follow up with a more detailed news story.

Quarkus

Red Hat has released Quarkus 2.15.3.Final that delivers bug fixes and enhancements such as: ensure proper operation with the Kotlin implementation of the QuarkusApplication interface; introduce a JSON Stream parser for the Reactive REST Client; the ability to automatically enable/disable GraphQL Federation; and throw an IllegalStateException with basic information about the provider that failed to provide a resource. More details on this release may be found in the changelog.
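
For context, this is a minimal Java sketch of the QuarkusApplication interface that the fix concerns; the Quarkus 2.15.3.Final change itself targets Kotlin implementations of this interface, and the class name below is hypothetical:

    import io.quarkus.runtime.Quarkus;
    import io.quarkus.runtime.QuarkusApplication;
    import io.quarkus.runtime.annotations.QuarkusMain;

    // Hypothetical main class; Quarkus invokes run() after the runtime starts.
    @QuarkusMain
    public class GreetingApp implements QuarkusApplication {

        @Override
        public int run(String... args) throws Exception {
            System.out.println("Started with " + args.length + " argument(s)");
            // Block until the application is shut down externally.
            Quarkus.waitForExit();
            return 0;
        }
    }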

Micronaut

The Micronaut Foundation has released Micronaut 3.8.1 featuring bug fixes, updates in testing and dependency upgrades to modules: Micronaut Servlet 3.3.3, Micronaut Data 3.9.4 and Micronaut AWS 3.10.5. More details on this release may be found in the release notes.

Micrometer

The release of Micrometer Metrics 1.10.3 delivers bug fixes and a number of dependency upgrades such as: Dropwizard Metrics 4.1.35, Gradle Enterprise Gradle Plugin 3.12, Reactor 2020.0.26, Reactor Netty 1.0.26 and AWS CloudWatch SDK 2.18.41.

Similarly, the release of Micrometer Tracing 1.0.1 ships with bug fixes and a number of dependency upgrades such as: Gradle Wrapper 7.6, Testcontainers 1.17.6, Mockito 4.11.0 and Micrometer BOM 1.10.3.

Project Reactor

Project Reactor 2022.0.2, the second maintenance release, provides dependency upgrades to reactor-core 3.5.2 and reactor-netty 1.1.2.

Piranha

Piranha 23.1.0 has been released. Along with many bug fixes, this latest release delivers new features such as: integrate Eclipse Exousia 1.0.0, the compatible implementation of Jakarta Authorization, and MicroProfile Config; split the Jakarta Security module; add support for login configuration to the SecurityManager API; and mark FileAuthenticationFilter as asynchronous. More details on this release may be found in their documentation and issue tracker.

Apache Software Foundation

Apache Tomcat 9.0.71 has been released with notable changes such as: correct a regression in the refactoring that replaced the use of the URL constructors; use the HTTP/2 error code, NO_ERROR, so that the client does not discard the response upon resetting an HTTP/2 stream; and change the default of the system property, GET_CLASSLOADER_USE_PRIVILEGED, to true unless the Expression Language library is running on Tomcat. More details on this release may be found in the changelog.

JHipster

JHipster Lite 0.26.0 has been released featuring a number of bug fixes and enhancements such as: a new annotation, @ExcludeFromGeneratedCodeCoverage, to replace the existing @Generated annotation in places where it was explicitly added to skip a code coverage check; a refactored generate.sh script for Spring Boot; and add git information for generated Spring Boot applications.
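
For illustration, this is a minimal sketch of how the new annotation might be applied; in a JHipster Lite-generated application the annotation is generated into the application's own base package, so the import below is an assumption, as are the class and method names:

    // The annotation's package is assumed for this sketch; in a generated
    // application it lives under the application's own base package.
    import com.mycompany.myapp.ExcludeFromGeneratedCodeCoverage;

    public class PlatformErrorHandler {

        // A defensive branch that is impractical to exercise in tests, so it is
        // excluded from the coverage check instead of being marked @Generated.
        @ExcludeFromGeneratedCodeCoverage
        void handleUnreachablePlatformError() {
            throw new IllegalStateException("unreachable on supported platforms");
        }
    }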

Kotlin

JetBrains has released Kotlin 1.8.0 featuring: new experimental functions for JVM to recursively copy or delete directory content; improved performance in the kotlin-reflect artifact; compatibility with Gradle 7.3; and a new -Xdebug compiler option for a better debugging experience. More details on this release may be found in the what’s new page.


.NET 7 Brings Networking Improvements

MMS Founder
MMS Edin Kapic

Article originally posted on InfoQ. Visit InfoQ

The .NET 7 launch has brought many improvements across the whole API surface of the .NET platform. In networking operations, .NET 7 improves the capabilities and performance of the existing HTTP and WebSockets protocols, exposes new APIs for the QUIC protocol and delivers many performance improvements compared to .NET 6.

The performance improvements around networking capabilities in .NET 7 were made possible by removing unneeded memory allocations (in the SslStream class) or by replacing old implementations with new ones. Specifically, opening a connection is now faster on sockets for all platforms. Some operations were made faster, such as the IndexOf method on the response body and the HtmlDecode utility method. For secure connections, the security protocols allow for optional performance-improving features such as TLS session resumption or OCSP stapling of certificate non-revocation proofs, and .NET 7 now implements those optimisations.

The existing protocols such as HTTP and WebSockets have been updated with small improvements across the board. HTTP connection pooling changes introduced in .NET 6 brought problems for some users, and .NET 7 fixes them. Accessing HTTP headers from code is now thread-safe and faster. WebSockets are now supported over HTTP/2, their handshake response information is exposed through the CollectHttpResponseDetails setting of the ClientWebSocketOptions class, and WebSocket upgrade requests allow passing a custom HttpClient instead of the one encapsulated in the request.

Among the networking security improvements, the big change is the abstraction of the underlying security challenge negotiation protocol. Higher-level clients such as HttpClient, SmtpClient and the SQL Server client allow for NTLM or Kerberos authentication, but there was no generic support for these protocols below them. .NET 7 introduces a new low-level API embodied in the NegotiateAuthentication class, which maps to the Windows SSPI authentication library or to the GSSAPI system library on Linux. The new API is used by first constructing a NegotiateAuthentication instance with NegotiateAuthenticationClientOptions (or its NegotiateAuthenticationServerOptions counterpart) and then invoking the GetOutgoingBlob method to produce the authentication tokens.

X.509 certificate chain validation is a standard step when using certificate-based authentication for servers. During validation, additional server certificates may be downloaded, and before .NET 7 there was no way to influence this behaviour. .NET 7 introduces a new property on the SslClientAuthenticationOptions class, CertificateChainPolicy, that allows changing the default chain validation: downloads can be disallowed, custom timeouts can be set, or a custom certificate store can be used.

QUIC is a transport-layer protocol originally developed by Google in 2012 and supported by the Chrome, Firefox and Edge browsers. It uses UDP as the underlying packet protocol, multiplexes connections and encrypts traffic. QUIC was designed as a modern TCP replacement for server connections, allowing faster connection-oriented communication. .NET has shipped a QUIC implementation since .NET 5, but it was only used internally for the HTTP/3 protocol, which is essentially HTTP over QUIC. In .NET 7, the System.Net.Quic namespace exposes the QuicListener, QuicConnection and QuicStream classes that can be used to establish and consume QUIC connections directly. Developers should be aware not to rely on QUIC as the only communication protocol, because some Internet routers block UDP traffic or support only TCP.
